
xAI and Grok logos are seen in this illustration taken, February 16, 2025. — Reuters
PARIS: A heartbreaking image of a starving young girl in Gaza has sparked outrage online, not only because of her suffering, but also because Elon Musk's AI chatbot, Grok, wrongly claimed the picture was taken in Yemen years ago.
The mix-up spread rapidly on social media, angering and upsetting many users. Some accused the chatbot of adding to the confusion and spreading false information at a time when emotions are already running high.
The AFP photo, taken by photojournalist Omar al-Qattaa, shows a skeletal, malnourished girl in Gaza, where Israel's blockade has led to widespread famine in the Palestinian territory.
But when social media users asked Grok where it came from, the artificial intelligence chatbot from X boss Elon Musk asserted that the photo had been taken in Yemen about seven years ago.
The AI chatbot's false response was widely shared online, and Aymeric Caron, a left-wing pro-Palestinian French lawmaker who posted the photo, was accused of spreading disinformation about the Israel-Hamas war.
At a time when internet users are turning to AI to verify images, the episode highlights the dangers of trusting tools such as Grok, which are far from error-free.
Grok said the photo showed Amal Hussain, a seven-year-old Yemeni child, and dated it to October 2018.
In fact, the photo shows Mariam Dawwas in the arms of her mother, Modallala, in Gaza City on August 2, 2025.
Before the war, sparked by Hamas's attack on Israel on October 7, 2023, Mariam weighed 25 kilograms, her mother told AFP.
Today she weighs only nine. The only nutrition she gets is milk, Modallala told AFP, and even that is "not always available".
Challenged over its incorrect response, Grok said: "I do not spread fake news; I base my answers on verified sources."
The chatbot eventually issued a response acknowledging the error, but the next day, in reply to further questions, Grok repeated the claim that the photo was from Yemen.
The chatbot has previously produced content praising Nazi leader Adolf Hitler and suggesting that Jewish people were more likely to spread hate online.
Radical right bias
Grok's errors illustrate the limits of AI tools, whose workings are as impenetrable as "black boxes", said Louis de Diesbach, a researcher in technology ethics.
"We do not know exactly why they give this or that response, nor how they prioritise their sources," said the author of "Hello ChatGPT".
Each AI has biases linked to the data it was trained on and the instructions of its creators, he said.
Grok, developed by Musk's xAI startup, shows highly pronounced biases aligned with the ideology of the South African-born billionaire, a former confidant of US President Donald Trump and standard-bearer of the radical right, according to the researcher.
De Diesbach said that asking a chatbot to pinpoint a photo's exact origin takes it beyond its proper role.
"Typically, when you look for the origin of an image, it might say: 'This photo could have been taken in Yemen, it could have been taken in Gaza, it could have been taken in almost any country where there is famine.'"
AI does not necessarily seek accuracy, the expert said. "That is not the goal."
Another AFP photo by al-Qattaa of a starving Gazan child, taken in July 2025, had previously been wrongly located and dated by Grok to Yemen in 2016.
That error led internet users to accuse the French newspaper Libération, which had published the photo, of manipulation.
'Friendly pathological liar'
An AI's biases are linked to the data it is fed and to what happens during fine-tuning, the so-called alignment phase, which determines what the model will rate as a good or bad answer.
"Just because you explain to it that the answer is wrong does not mean it will then give a different one," de Diesbach said.
"Its training data has not changed, and neither has its alignment."
Grok is not alone in wrongly identifying images.
When AFP asked Mistral AI's Le Chat, a chatbot trained in part on AFP articles under an agreement between the French startup and the news agency, it also misidentified the photo of Mariam Dawwas as being from Yemen.
For de Diesbach, chatbots should never be used as tools to verify facts.
"They are not made to tell the truth," he said, "but to generate content, whether true or false."
"You have to treat it like a friendly pathological liar: it may not always lie, but it always could."