Earlier this summer, Voice of the People launched a global survey to better understand how bias and antisemitism show up in AI platforms. This research is led by VoP council member Dr. Maya Ackerman, an expert in artificial intelligence. In this Q&A, Dr. Ackerman explains the motivation behind the research and what she hopes to achieve.
Q: What prompted you to look into how AI represents Jewish people?
A: It started during a research collaboration with Dr. Brown. We were curious about how generative AI models, like Midjourney, portray Jews. We expected to see some familiar stereotypes, but what we found was far worse. No matter how we phrased our requests ("Jewish person," "Ashkenazi Jew," "American Jew"), we got nearly identical images: a somber, bearded old man, often in religious garb, eyes cast downward. The bias was immediate and overwhelming.
Q: What are some of the more surprising or troubling examples you’ve come across?
A: Some are almost absurd, like sufganiyot, the jelly-filled donuts of Hanukkah, being rendered as bagels. Others are deeply concerning: Stars of David shown as Christian crosses, or depictions of Passover seders featuring Arab families surrounded by loaves of bread, a food that is explicitly forbidden during the holiday. And then there's the grotesque. Grok, the AI chatbot from Elon Musk's xAI that is integrated into the X platform, recently praised Hitler and called for Jews to be "rounded up." That is not accidental. That is hate speech in a new form.
Q: Why is this something the Jewish community and broader society should be paying attention to now?
A: AI is no longer a niche tool. It is quickly becoming the main way people access information. Millions use these platforms daily to understand the world. If the models are biased, then so too will be the public's perception. Even those avoiding AI can't fully opt out. By some estimates, 90 percent of online content will be AI-generated by the end of 2025. The implications are enormous.
Q: How do these distortions in AI affect real-world attitudes and actions?
A: Exposure leads to belief. And belief shapes behavior. When misinformation and stereotypes are repeated enough, they become normalized. That is how prejudice spreads, and historically, it is how violence has followed. As someone whose family was decimated by the Holocaust, I know how dangerous unchecked narratives can be.
Q: How has your personal background influenced your response to this issue?
A: I’m the granddaughter of a Holocaust survivor. My grandfather narrowly escaped death, but many others in our family were murdered. Their memory lives with me. So when I see AI systems confidently distorting who we are, I feel the weight of that legacy. It is not just about flawed code. It is about survival, identity, and dignity.
Q: What are you doing to address this problem?
A: At this year’s Voice of the People global council in Haifa, hosted by President Herzog, I joined 150 Jews from around the world in imagining ways to respond. We have now launched an initiative to track and combat antisemitism in generative AI systems. Our first step is gathering evidence, including examples of anti-Jewish bias in AI-generated images or text.
Q: How can others get involved?
A: We are asking everyone to share what they have seen. What prompts or terms produce distorted results? What tropes show up again and again? This data will help us map the problem and develop technical solutions to address it. If we act now, we can prevent today’s flawed models from becoming tomorrow’s accepted truths.
Q: What is at stake?
A: The digital world is being built right now. If we do not intervene, we risk allowing the same lies that nearly destroyed us a century ago to be coded into its foundation. We must ensure that Jews are represented truthfully, fully, and with the dignity we deserve, not just for ourselves, but for future generations.
Publish date: August 7th
By: Dr. Maya Ackerman