OpenAI staffer defends concentrated superintelligence control
A pseudonymous technical staff member at OpenAI posted that high biorisk and cyberrisk from advanced AI systems justify accepting concentrated control over frontier superintelligence amid geopolitical competition. The post framed such control as a necessary trade-off. Beff Jezos replied that freedom from centralized control outweighs the risks and that continuous diffusion of intelligence improvements offers the primary route to safety. Danielle Fong added a query to the thread.
OpenAI staff member: "there really are very high degrees of biorisk, cyberrisk, whatever else that are worth trading off against having a small monopoly of cyberpunk warring-states exercise full control over frontier superintelligence imo"

Beff Jezos: "freedom from tyranny is worth the risk"

Beff Jezos: "bio and cyber risk are both only present when there is a huge gap in capabilities for offense vs defense"

Beff Jezos: "making intelligence improvements ubiquitously accessible by continuous diffusing capabilities is the only real path to safety"