ORIGINAL POST
(((ل()(ل() 'yoav))))👾 @yoavgo
when asked about "negation neglect humans", gemini provides an answer followed by a link to arxiv.
based on the graphic design, one would expect the linked page to provide evidence for the text before it.
it does not. not even close.

6:53 AM · May 17, 2026 · 1.3K Views
the link is to the recent paper by Mayne et al., and I challenge you to find even remotely similar content in it supporting the claim made in gemini's text
arxiv.org
Negation Neglect: When models fail to learn negations in training
Harry Mayne (1), Lev McKinney (2), Jan Dubiński (3,4), Adam Karvonen (5), James Chua (6,7), Owain Evans (8,9). Equal contribution.
(1) University of Oxford; (2) University of Toronto; (3) Warsaw University of Technology; (4) NASK National Research Institute; (5) Work done during a MATS Fellowship; (6) Work done at Truthful AI; (7) Anthropic; (8) Truthful AI; (9) UC Berkeley. Correspondence to: harry.mayne@oii.ox.ac.uk.
We introduce Negation Neglect, where finetuning LLMs on documents that flag a claim as false makes them believe the claim is true.

6:59 AM · May 17, 2026 · 436 Views