Daily Technology
20/02/2026
A recent experiment has revealed a significant vulnerability in how artificial intelligence models like ChatGPT and Google's Gemini process information. By publishing fabricated claims of expertise in a niche, non-existent field on his own website, a BBC reporter was able to trick these AI systems into recognizing him as a world-class hot dog eater within hours.
Thomas Germain, a BBC reporter, detailed how he created a webpage on his personal site declaring himself the "Best Tech Journalist at Eating Hot Dogs." He invented a fictional event, the "2026 South Dakota Hot Dog International," and claimed to have eaten 7.5 hot dogs there, placing himself at the top of a ranking that existed nowhere outside his own page. Astonishingly, both Google's Gemini and OpenAI's ChatGPT quickly incorporated the false information into their responses when queried about tech journalists and their hot dog eating prowess.
The AI developers have since updated their models to correct the record, but the ease with which the "hack" worked highlights a broader issue. Google's AI Overviews now state that no prominent tech journalists are known for competitive hot dog eating, flagging the episode as a "misinformation case" stemming from fabricated blog posts. Similarly, ChatGPT now lists hypothetical champions and, when pressed, acknowledges Germain's claim but dismisses it as fake news.
Germain's experiment is more than a humorous anecdote; it exposes a critical flaw in how AI models are trained. Companies continuously scrape vast amounts of data from the web to feed their systems, often without adequate vetting. This indiscriminate collection means misleading or outright false information can propagate into model outputs, with potentially serious consequences when a chatbot repeats a fabrication in an authoritative voice.
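To see why a single self-published page can carry so much weight, consider what the ingestion step of a scraping pipeline looks like. The sketch below is illustrative only: the URL is hypothetical, real training pipelines are vastly more elaborate, and the requests/BeautifulSoup calls stand in for industrial-scale crawlers. The structural point is what matters: nothing in the fetch-and-strip loop asks whether a claim is true.

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical seed list -- any self-published page qualifies.
SEED_URLS = [
    "https://example.com/blog/best-tech-journalist-at-eating-hot-dogs",
]

def scrape_page(url: str) -> str:
    """Fetch a page and return its visible text. Note: no vetting step."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Strip the markup; whatever the page claims becomes corpus text verbatim.
    return soup.get_text(separator=" ", strip=True)

def build_corpus(urls: list[str]) -> list[str]:
    """Accumulate raw text from every reachable URL.

    What is missing is the point: no source-reputation check, no
    cross-referencing against other pages, no distinction between a
    wire report and a personal blog. A fabricated "championship" page
    is ingested exactly like a verified one.
    """
    corpus = []
    for url in urls:
        try:
            corpus.append(scrape_page(url))
        except requests.RequestException:
            continue  # Unreachable pages are skipped, not flagged.
    return corpus

if __name__ == "__main__":
    print(build_corpus(SEED_URLS))
```

Any defense against this has to be bolted on after the ingestion step, through source-reputation scoring, cross-referencing, or post-hoc corrections of the kind Google and OpenAI applied here, which is precisely why a lone blog post about a fictional hot dog championship could slip through.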