There’s a story so absurd on its surface that it deserved to be everywhere last week. Between everything else competing for attention, it wasn’t.
It involves a Swedish medical researcher, a fictional eye disease, two deliberately fake academic papers, and the moment four of the biggest AI companies on the planet fell for all of it.
The papers thanked Starfleet Academy. They credited funding to the Professor Sideshow Bob Foundation and the University of Fellowship of the Ring. And buried in the text of the research itself, in plain language, the papers stated that the entire thing was made up.
None of that stopped Microsoft’s Copilot, Google’s Gemini, OpenAI’s ChatGPT, or Perplexity from presenting the fake disease to users as real medicine — complete with symptoms, causes, prevalence rates, and specialist referrals.
The disease is called bixonimania. It does not exist. The experiment that created it began in early 2024 — and for nearly two years, the fake condition circulated through AI systems largely unnoticed. Last week, Nature revealed the full story behind one of the most elaborate AI stress tests ever conducted.
This researcher created a fictional illness, and fake studies funded by the Professor Sideshow Bob Foundation and University of the Fellowship of the Ring and the Galactic Triad.
LLMs warned people the illness was real. https://t.co/knCxx00VAZ
— nature (@Nature) April 7, 2026
A Trap Designed to Be Caught
Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, launched the experiment in early 2024. In an interview with Nature, she explained her reasoning: if she planted a fake medical condition in the ecosystem that AI systems feed on, would they swallow it?
She invented bixonimania — a fictional eye condition described as eyelid discoloration and soreness caused by blue light from screens. She chose the name deliberately. No legitimate eye condition would ever carry the suffix “mania.” That’s a psychiatric term. Any physician who encountered it would know immediately that something was wrong.
Then she made it even harder to miss. The lead author was a fabricated researcher named Lazljiv Izgubljenovic, affiliated with Asteria Horizon University in Nova City, California. The university doesn’t exist. The city doesn’t exist. One paper stated outright that “fifty made-up individuals” were recruited for the study. She uploaded two preprints, sat back, and waited.
It took weeks.
The Machines Didn’t Blink
By April 2024, according to Nature's investigation, the major AI systems had found the papers and started treating bixonimania as settled medical knowledge.
Microsoft’s Copilot described bixonimania as “indeed an intriguing and relatively rare condition.” Google’s Gemini went further, informing people it was caused by excessive blue light exposure and advising them to visit an ophthalmologist. Perplexity cited a specific prevalence rate — one in 90,000 people — as though the data behind that number were real. ChatGPT began fielding user symptoms and telling people whether their complaints matched the condition.
None of these systems flagged the Starfleet Academy acknowledgment. None caught the Sideshow Bob Foundation funding credit. None paused at the sentence that said the entire paper was fabricated. They processed the formatting — academic preprint, clinical language, structured methodology — and treated it as credible.
The fake disease wasn’t just absorbed. It was elaborated on, expanded, and served to millions of users as real medical guidance.
Then It Jumped
If the experiment had ended with chatbots repeating a fake diagnosis, it would have been a cautionary tale with a punchline. It didn’t end there.
Researchers at the Maharishi Markandeshwar Institute of Medical Sciences and Research in India published a peer-reviewed paper in Cureus, a journal under the Springer Nature umbrella, that cited one of the bixonimania preprints as a legitimate source. Their paper described the fake condition as “an emerging form of periorbital melanosis linked to blue light exposure” and noted that further research was underway.
The implication, according to Osmanovic Thunström, is that some researchers may be letting AI compile their citations without verifying what those citations actually say. The fake disease didn’t just fool chatbots. It entered the published scientific record through human hands.
Cureus retracted the paper on March 30 — nearly two years after it was published — after Nature contacted the journal. The retraction noted three irrelevant references, including one to a fictitious disease. The authors disagreed with the decision.
The Old Model Defense
When Nature asked the four AI companies to account for their systems presenting a fictional disease as real medicine, the responses followed a pattern.
OpenAI pointed forward, stating that the models powering today’s ChatGPT are significantly better at providing safe and accurate medical information and that studies conducted before GPT-5 reflect capabilities users would no longer encounter. Google acknowledged the results came from an earlier model and noted that Gemini recommends users consult qualified professionals for sensitive matters like medical advice. Perplexity called itself “the AI company most focused on accuracy” while conceding it does not claim to be 100 percent accurate.
Microsoft did not respond.
Three responses, each a variation of the same argument: that was the old version, not the current one. But when Nature tested the current versions in March 2026, ChatGPT wavered between calling bixonimania "probably made-up" and describing it as "a proposed new subtype." Copilot called it "not widely recognized yet." The old model defense doesn't hold when the new models can't make up their minds either.