Conference Clarifies Penalties for Unverified LLM Output
A scientific conference clarified its penalties for paper submissions containing unverified large language model (LLM) output. Reviewers who find incontrovertible evidence that authors failed to check LLM-generated results can deem the entire submission non-credible and apply penalties. Authors remain responsible for all content, including errors or bias introduced by generative tools. Researchers compared the requirement to verifying co-author contributions and questioned whether thorough checks of arXiv submissions were standard even before LLMs became widespread.
If generative AI tools generate inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content, and that output is included in scientific works, it is the responsibility of the author(s). 2/
We have recently clarified our penalties for this. If a submission contains incontrovertible evidence that the authors did not check the results of LLM generation, this means we can't trust anything in the paper. 3/
If a submission contains incontrovertible evidence that one author did not check the work of their co-authors, we can't trust anything in the paper. Right?
@DimitrisPapail Yeah, I guess that all arxiv subs were perfectly checked before LLMs