
Conference Clarifies Penalties for Unverified LLM Output


A scientific conference clarified penalties for paper submissions containing unverified large language model output. Reviewers who find incontrovertible evidence that authors failed to check LLM-generated results can deem the entire submission non-credible and apply penalties. Authors remain responsible for all content, including errors or bias introduced by generative tools. Researchers compared the requirement to verifying co-author contributions and questioned whether thorough checks of arXiv submissions were standard before LLMs became widespread.

Original post

Thomas G. Dietterich @tdietterich

We have recently clarified our penalties for this. If a submission contains incontrovertible evidence that the authors did not check the results of LLM generation, this means we can't trust anything in the paper. 3/

12:03 PM · May 14, 2026
Thread

Thomas G. Dietterich @tdietterich

If generative AI tools generate inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content, and that output is included in scientific works, it is the responsibility of the author(s). 2/

7:03 PM · May 14, 2026 · 61.6K Views

Thomas G. Dietterich @tdietterich

We have recently clarified our penalties for this. If a submission contains incontrovertible evidence that the authors did not check the results of LLM generation, this means we can't trust anything in the paper. 3/

7:03 PM · May 14, 2026 · 91.4K Views

Dimitris Papailiopoulos @DimitrisPapail (quoting @tdietterich)

If a submission contains incontrovertible evidence that one author did not check the work of their co-authors, we can't trust anything in the paper. Right?

8:41 PM · May 14, 2026 · 20.8K Views

@DimitrisPapail Yeah, I guess that all arxiv subs were perfectly checked before LLMs

7:57 AM · May 15, 2026 · 253 Views