Bar-Ilan professor downplays hallucinated citations in AI papers
Yanai Elazar, assistant professor at Bar-Ilan University, posted that hallucinated citations in AI-assisted scientific papers are straightforward to detect and correct. He described them as a minor concern relative to other erroneous or fabricated elements that AI tools can introduce. Academic Tuhin Chakrabarty agreed. Researcher Quentin Berthet separately asked how verification steps would identify hallucinated references across submitted papers. The exchange focused on the relative scale of citation errors versus broader risks in AI-generated scientific content.
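Elazar's claim that hallucinated citations are "easy to detect (and fix)" could in principle be automated by checking each reference against a trusted bibliographic index. The sketch below is illustrative only: the `TRUSTED_INDEX` dict stands in for a real lookup service such as Crossref or Semantic Scholar, and the helper names are hypothetical.

```python
import re

# Hypothetical trusted index mapping normalized titles to publication years.
# In practice this lookup would be backed by a bibliographic service
# (e.g. Crossref); a small dict is used here purely for illustration.
TRUSTED_INDEX = {
    "attention is all you need": 2017,
    "bert: pre-training of deep bidirectional transformers": 2019,
}

def normalize(title: str) -> str:
    """Lowercase and collapse whitespace so title lookups are robust."""
    return re.sub(r"\s+", " ", title.strip().lower())

def flag_suspect_references(references: list[dict]) -> list[dict]:
    """Return references whose title is absent from the trusted index,
    or whose stated year disagrees with the indexed year."""
    flagged = []
    for ref in references:
        indexed_year = TRUSTED_INDEX.get(normalize(ref["title"]))
        if indexed_year is None or indexed_year != ref.get("year"):
            flagged.append(ref)
    return flagged

refs = [
    {"title": "Attention Is All You Need", "year": 2017},
    {"title": "A Totally Invented Survey of Everything", "year": 2023},
]
suspect = flag_suspect_references(refs)
```

Here `suspect` contains only the invented entry, which is the kind of cheap mechanical check the exchange alludes to; Berthet's question concerns scaling such verification across all submitted papers, not whether a single reference can be checked.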
People seem to really freak out about hallucinated citations as the "bad consequence of AI slop" but (1) it's easy to detect (and fix), and (2) it's so insignificant compared to other erroneous/bad/misleading writing AI can make in scientific papers.
@yanaiela Agree.
@ChrSzegedy @AlexKontorovich Sorry, I meant, how do they plan to check if there are hallucinated references in all papers?
@qberthet @AlexKontorovich Nope. You just let all authors confirm they are on the paper.