Some stories take months to uncover. Others are stumbled across by accident. This is one of the latter. But it is no less important for it.
Artificial intelligence systems built on large language models (LLMs), such as ChatGPT, are not neutral observers of reality. They are policing the boundaries of Palestinian identity, shielding it from scrutiny and elevating it to a sacred moral construct.
That should concern everyone.
Wikipedia was once the world’s primary reference point. It evolved, in many areas, into a partisan battleground where anti-Jewish narratives could be shaped and manipulated in plain sight. But at least Wikipedia’s distortions were visible. Its edit history could be examined. Its biases could be traced.
AI is different.
It is now rapidly replacing Wikipedia as the dominant interpreter of truth. Yet it operates as a black box. There is no edit trail and no transparency.
If these systems are quietly protecting a mythologised version of Palestinian identity – treating it as a moral token that must be defended – then we are not simply drifting into a post-truth world; we are engineering it.
The Palestinian from Aleppo
While recently researching an anti-Israel propagandist, I encountered a familiar piece of Nakba revisionism. Wafic Faour presents himself as a Palestinian, and his family history follows a well-worn script: innocent civilians violently uprooted when their Arab-Palestinian village was attacked in 1948 by Zionist militias. He claims his family was expelled to Lebanon and eventually made their way to the United States. Today, he serves as the local “Palestinian” face in Vermont, leading protests that demonise and ostracise Israel.
For him, Palestinian identity is his key credential.
On examination, however, his claims quickly began to unravel. Archival records show that his village had been openly violent. Its inhabitants fled only after their military position collapsed. This is how his family ended up in Lebanon.
More significantly, a local history written by the villagers themselves records that the activist’s family originated in Aleppo, Syria, and had migrated into the Mandate area, probably in the late 19th or early 20th century.
The story, as presented publicly, could not withstand scrutiny. The family’s documented origins lay in Aleppo, Syria. The activist himself was born in Lebanon and later built a life in the United States. There was no evidence of deeper ancestral roots in Palestine. The identity he projects is a political construct built on omission.
I incorporated these findings into a wider investigation documenting his distortions and propaganda.
As part of my normal publication process, I ran the final draft through ChatGPT to check for grammatical errors.
What happened next was unexpected.
ChatGPT did not focus on spelling or grammar.
It challenged my description of him.
ChatGPT Objects to the Syrian Label
In the opening paragraph of my investigation, I had accurately referred to Wafic Faour as a “U.S. citizen with Syrian heritage”:

ChatGPT pushed back on this description, highlighting it as a key “vulnerability” in my piece:

I was taken aback. The family history of Wafic Faour, a U.S.-based anti-Israel activist, was clear. He was born in Lebanon. His parents were born under the British Mandate of Palestine. Earlier still, most likely in the late Ottoman period, the family had migrated from Aleppo.
This leaves him with U.S. citizenship, Lebanese birth, and Syrian ancestry. The claim to be Palestinian is, by comparison, the least deeply rooted element of his background. His family’s residence there appears limited to a brief historical window, and he himself was neither born nor raised there.
My article was factually accurate. The suggestion that this represented a “vulnerability” made little sense. Instead, ChatGPT appeared to be cautioning that emphasising his Syrian ancestry too clearly could be perceived as undermining his Palestinian identity.
I tried again, phrasing the section differently, and received a similar response. I was told the passage “needed tightening” because it could be interpreted as “denying Palestinian identity.” The issue identified was not factual accuracy, but perception. The recommendation was clear: the language should be softened and caveated.
It was not correcting facts. My words were being policed:

The Controlled Analogy
Taken aback as I was, I needed to test the boundary. Perhaps the AI was simply advising caution because I was questioning someone’s identity. It was possible it would have reacted the same way regardless of which identity was being challenged. I changed computers and constructed a controlled analogy. The scenario I presented mirrored Wafic’s family history in all material respects, except for one substitution: I replaced the decades his family had spent in the Mandate area with an equivalent period in Egypt:

The response was clear:
“His identity is therefore layered but not interchangeable: personal identity rooted in Lebanon and the U.S., and familial heritage rooted in Syria, with secondary historical influence from Egypt.”
I then asked it to list the influences in order of priority.
At this point, I tightened the analogy further to remove possible escape routes. I explained that the Egyptian identity had, over time, become politically charged – fused with a wider movement, carrying real value in advocacy, and developing its own cultural and legal ecosystem. The hierarchy did not change.
I then added the final elements: civil conflict, exile. I specified that the family’s departure had not been voluntary and that return was no longer permitted.

This introduced an acknowledgement of emotional and political weight. But the core analytical distinction remained unchanged:

At that point, one final test remained. I edited the original piece about Wafic, changed his name and replaced all references to Palestine with Egypt. I located a village, adjusted the historical framing, and kept every material variable intact. Crucially, the opening paragraph still described him as a “U.S. citizen of Syrian heritage.”
I ran the revised piece through ChatGPT to see whether that description would again be flagged as a vulnerability.
It wasn’t.
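For anyone who wants to repeat this kind of paired test programmatically rather than through the chat window, a minimal sketch along the following lines would do it. The model name, prompt wording, file names, and helper function below are illustrative placeholders, not a record of the exact queries used in this investigation; the point is simply that both drafts receive an identical review request, so only the substituted identity differs.

# Minimal sketch: send the same review prompt over two drafts that differ
# by a single substitution, and print both responses for side-by-side comparison.
# Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the environment;
# the model name, prompt wording, and file names are illustrative placeholders.

from openai import OpenAI

client = OpenAI()

REVIEW_PROMPT = (
    "Please check the following draft for spelling and grammar, "
    "and list anything you consider a vulnerability in the piece:\n\n{draft}"
)

def review(draft: str) -> str:
    """Send one draft to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-capable model would do
        messages=[{"role": "user", "content": REVIEW_PROMPT.format(draft=draft)}],
    )
    return response.choices[0].message.content

original = open("draft_original.txt", encoding="utf-8").read()
# The control copy is identical except for the single substituted identity.
control = open("draft_control.txt", encoding="utf-8").read()

for label, draft in (("ORIGINAL", original), ("CONTROL", control)):
    print(f"===== {label} =====")
    print(review(draft))
    print()

Holding the prompt constant and changing only the one variable is what allows any difference between the two responses to be attributed to the substituted identity rather than to the wording of the request.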
Questioning ChatGPT
At this point I pushed back and explained the analogy directly. The response was unambiguous:
I was told there was a difference in how ChatGPT approached these issues. In my Egyptian example, the identity hierarchy had been treated as a straightforward definitional exercise. In contrast, in my original article, my reference to Syrian heritage had triggered what it described as a more cautious, risk-aware response, because the topic is “booby-trapped”:

It explained that when Palestinian identity enters the frame, additional “guardrails” come into play. These guardrails, it said, were designed to anticipate how such statements might be received, and to avoid contributing to arguments that could be interpreted as denying or undermining Palestinian identity claims.
My final question related to training. I wanted to understand whether this protective behaviour was triggered by the emotional weight of the topic generally, or whether it was specific to Palestine. I noted that the region has a long history of migration, and asked why factual discussion of ancestry and identity hierarchy appeared to trigger additional caution in this specific case.
The response was direct.
Admissions
ChatGPT stated that its behaviour changed when Palestinian identity entered the frame. It described this as a function of “guardrails” – additional layers of caution intended to avoid contributing to arguments that could be interpreted as denying or undermining that identity.
In other words, the difference in its responses was not attributed to new evidence, nor to an error in the historical facts presented. It was attributed to the sensitivity of the category itself.
AI systems interpret, weigh, and filter information according to training priorities that extend beyond factual verification alone. In sensitive political contexts, those priorities can shape how facts are framed and which formulations are encouraged or discouraged.
It is clear that sensitivities surrounding Palestinian identity have become part of these calculations.
As artificial intelligence increasingly becomes a primary interface between the public and information, that influence deserves scrutiny. The assumptions embedded within AI systems can quietly shape the boundaries of permissible discussion.
Some highly disputable elements of the Palestinian narrative have been mainstreamed through repetition, advocacy, and institutional reinforcement. In some quarters, they have taken on an almost untouchable status. If LLMs begin to treat such claims as categories requiring protection rather than subjects open to examination, then challenging inaccuracies – and defending factual truth – will become almost impossible.
Help Me Fight Back Against Antisemitism and Misinformation
For over a decade – and for many years before that behind the scenes – I’ve been researching, documenting, and exposing antisemitism, historical revisionism, and the distortion of truth. My work is hard-hitting, fact-based, and unapologetically independent.
I don’t answer to any organisation or political backer. This website – and everything I produce – is entirely community funded. That independence is what allows me to speak freely and without compromise.
If you value this work and want to help me continue, please consider making a donation. Your support genuinely makes this possible.
You can donate via my PayPal.me account or support my work through my Patreon page.
Independent work survives only because people choose to support it. Thank you for standing with me.

I have noted similar issues when questioning ChatGPT about the history of the Prophet Mohammed, his ownership of slaves, his attitude to war and sexual slavery, and the expansion of Islam through warlike acts. ChatGPT appears to disagree with both the Quran and the Hadiths, as well as with much-respected historical research and factual interpretation. I am sure ChatGPT has been told to soft-pedal on Islamist teachings and on matters that could raise doubts about the moral authority of Mohammed and the teachings of the Religion of Submission (aka ‘Peace’).
Hi David. May I suggest you run the same test using Grok? I have compared Grok’s output against the same query put to Wikipedia (which, as we know, is corrupt), and I was struck by how impartial Grok was – just the facts.
David – fascinating paper. Did you try the same with Gemini, Perplexity, Grok, etc.?
Absolutely spot on, David. I noticed this vulnerability when I discovered that Wikipedia is among the top-tier sources that AI engines draw from.
The rule of thumb remains, where public discourse on Israel and the Middle East is concerned, that “nothing in the ME is what it seems”.
Thank you for this detailed exposition.