Oliver Habryka rejects slowing AI over future population scale
Oliver Habryka argued that half a billion present-day deaths would not be a large loss relative to the future of humanity. He rejected moral arguments for slowing AI progress as inconsistent with people's own sense of self-interest, given how much people value their children and successors. Tom Davidson agreed, noting that older individuals would typically choose their own death over the loss of all their grandchildren. The exchange examined how long-term population considerations figure into AI development debates.
In almost every other context, half a billion deaths would be considered an incredibly high cost. Even if you think that nonexistent future humans outweigh the interests of existing people, it seems crazy to handwave away half a billion people dying as if it's similar to delaying a car purchase.
Half a billion people really is not a lot compared to the future of humanity. I really don’t understand this line of reasoning. It doesn’t even pass people’s own assessment of self-interest. People care about their children and their successors almost as much as they care about the people who are currently alive. Trading off the former against the latter is not a generally accepted moral argument to make!
@TomDavidsonX @Jsevillamol @ciphergoth Yep, agree with that! I don't think they will sit there and be like "Oh man, there are 10^28 future humans in-expectation".
@ohabryka @Jsevillamol @ciphergoth Though I don't think they will think of it in terms of the quantitative size of all of future humanity!