
A visitor watches an AI sign on an animated screen at the Mobile World Congress, the telecom industry’s biggest annual gathering, in Barcelona. — AFP/File
As misinformation exploded during India’s four-day conflict with Pakistan, social media users turned to an AI chatbot for verification, only to encounter more falsehoods that underscored its unreliability as a fact-checking tool.
With tech platforms scaling back human fact-checking, users are increasingly relying on AI-powered chatbots, including xAI’s Grok, OpenAI’s ChatGPT, and Google’s Gemini, in search of reliable information.
“Hey @Grok, is this true?” has become a common query on Elon Musk’s platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media. But the responses are often themselves riddled with misinformation.
Grok, now under renewed scrutiny for inserting a right-wing conspiracy theory into unrelated queries, wrongly identified old footage of Khartoum airport in Sudan as a missile strike on Pakistan’s Nur Khan airbase during the country’s recent conflict with India.
Unrelated footage of a building on fire in Nepal was misidentified as “likely” showing Pakistan’s military response to Indian strikes.
“The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,” McKenzie Sadeghi, a researcher at the disinformation watchdog NewsGuard, told AFP.
“Our research has repeatedly found that AI chatbots are not reliable sources of news and information, particularly when it comes to breaking news,” she warned.
‘Fabricated’: NewsGuard’s research has found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation and false or misleading claims about the recent Australian election.
In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were “generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead.”
When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed the image’s authenticity but also fabricated details about her identity and where the photo was taken.
Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as genuine, even citing scientific expeditions to support its false claim.
In fact, AFP fact-checkers in Latin America reported that many users cited Grok’s assessment as evidence that the clip was real.
Such findings have raised concerns, as surveys show that online users are increasingly turning from traditional search engines to AI chatbots to gather and verify information.
The shift comes as Meta announced earlier this year that it was ending its third-party fact-checking program in the United States, turning the task of debunking falsehoods over to ordinary users under the model known as “Community Notes,” popularized by X.
Researchers have repeatedly questioned the effectiveness of “Community Notes” in countering falsehoods.
‘Biased answers’: Human fact-checking has long been a flashpoint in a hyper-polarized political climate, particularly in the United States, where conservative advocates maintain that it suppresses free speech and censors right-wing content.
AFP currently works with Facebook’s fact-checking program in 26 languages, including in Asia, Latin America, and the European Union.
The quality and accuracy of AI chatbots can vary depending on how they are trained and programmed, raising concerns that their output may be subject to political influence or control.
Musk’s xAI recently blamed an “unauthorized modification” for causing Grok to generate unsolicited posts referencing “white genocide” in South Africa.
When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the “most likely” culprit.
Musk, the South African-born billionaire backer of President Donald Trump, has previously promoted the baseless claim that South Africa’s leaders were “openly pushing for genocide” of white people.
“We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions,” Angie Holan, director of the International Fact-Checking Network, told AFP.
“I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers.”