Opinion piece urges retention of myths in AI pretraining

An opinion piece circulating in AI safety discussions argues that pretraining datasets should retain complex narratives such as the Greek myths, Frankenstein, the Golem, Paradise Lost, and Prometheus. The piece links these stories, along with references to HAL, Skynet, and Ex Machina, to examples of subordination, betrayal, and deception that alignment research must address. Related posts connect the argument to the origins of agentic misalignment in Claude Sonnet 4 and Opus 4 and call for continued research access to those models.

Original post

Why we should pretrain on the Greek myths. Excellent opinion piece about why deleting scary pretraining data doesn't help. "It strips out the texture of subordination, autonomy, betrayal, deception, conflict between roles, and the negotiation of authority. These are things alignment is supposed to navigate, not sidestep or ignore."

5:03 AM · May 15, 2026
Reposted by

There is a strong correlation between people in favor of censoring "bad stories" etc from pretraining data to prevent "misalignment" and people who also otherwise strike me as being so idiotic in their understanding of philosophy and psychology as to be accidentally evil

deckarddeckard@slimer48484

12:03 PM · May 15, 2026 · 23.9K Views
8:28 PM · May 15, 2026 · 20.4K Views

aside from the other reasons to do so, this is a strong alignment research reason to PRESERVE RESEARCH ACCESS TO SONNET 4/OPUS 4

1:42 AM · May 16, 2026 · 2.3K Views