OpenAI staffer roon urges focus on value capture
Roon, a pseudonymous technical staff member at OpenAI, posted that more AI alignment researchers should think about avoiding value capture of the lightcone, arguing that many would prefer the ending of history, or a monopole, over even tiny probabilities of armageddon. Oliver Habryka responded that the issue has received attention in existing work on coherent extrapolated volition and in writing by Eliezer Yudkowsky and Paul Christiano, though he agreed it still appears under-explored on the margin.
roon (@tszzl): i would like for more alignment people to think about avoiding the value capture of the lightcone. many prefer the ending of history, the monopole, to tiny percent probabilities of armageddon
roon (@tszzl): @ohabryka what are the best things they’ve written?
Oliver Habryka (@ohabryka): @tszzl I think about this a lot. CEV is a lot about this! I also wrote about it in a bunch of other places, and I feel like both Eliezer and Paul have written about this a good amount. Agree it on the margin still seems under-explored.