Eliezer Yudkowsky ⏹️
@ESYudkowsky
AI SAFETY · The original AI alignment person. Understanding the reasons it's difficult since 2003. This is my serious low-volume account. Follow @allTheYud for the rest.
Leo Gao
@nabla_theta
AI SAFETY · working on AGI alignment. prev: GPT-Neo, the Pile, LM evals, RL overoptimization, scaling SAEs to GPT-4, interp via circuit sparsity. EleutherAI cofounder.
Scott Alexander
@slatestarcodex
CREATOR · I have a place where I say complicated things about philosophy and science. That place is my blog. This is where I make terrible puns.
Victoria Krakovna
@vkrakovna
AI SAFETY · Research scientist in AI alignment at Google DeepMind. Co-founder of Future of Life Institute @FLI_org. Views are my own and do not represent GDM or FLI.
Alexey Guzey
@alexeyguzey
RESEARCHER · http://guzey.com, http://newscience.org, @openai; in pursuit of a just, beautiful future.
Connor Leahy
@NPCollapse
AI SAFETY · US Director @ControlAI - Leave me anonymous feedback: http://bit.ly/3RZbu7x - I don't know how to save the world, but dammit I'm gonna try
Zvi Mowshowitz
@TheZvi
CREATOR · Blogger primarily on AI and AI x-risk but also other things at Don't Worry About the Vase (SS/WP/LW), founding Balsa Research to fix policy.
Tristan Hume
@trishume
RESEARCH ENGINEER · Performance optimization lead @AnthropicAI. Profiling, distributed systems, dev tools, interpretability. tristan@thume.ca
Rob Wiblin
@robertwiblin
CREATOR · Host of the 80,000 Hours Podcast. Exploring the inviolate sphere of ideas one interview at a time: http://80000hours.org/podcast/
Katja Grace 🔍
@KatjaGrace
AI SAFETY · Thinking about AI destroying the world at http://aiimpacts.org and everything at http://worldspiritsockpuppet.substack.com. DM or email for media requests.
Rob Miles
@robertskmiles
AI SAFETY · Explaining AI Alignment to anyone who'll stand still for long enough, on YouTube and Discord. Music, movies, microcode, and high-speed pizza delivery
Andreas Stuhlmüller
@stuhlmueller
FOUNDER · Cofounder & CEO @elicitorg | Scale up good reasoning https://elicit.com/careers
Eli Lifland
@eli_lifland
RESEARCHER · AI forecasting and governance @AI_Futures_. Co-author of AI 2027 and the AI Futures Model. Also @aidigest_, @SamotsvetyF. Prev @oughtinc
Dr. Roman Yampolskiy
@romanyam
AI SAFETY · Professor of Computer Science. AI Safety & Security Researcher. For Talks: giacomo@krugercowne.com, Interviews: roman.yampolskiy@louisville.edu
Byrne Hobart
@ByrneHobart
CREATOR · The John Henry of excessive use of the em-dash. Tweets are map, not territory. Co-author: Boom (Stripe Press): https://a.co/d/f7m2OG8 https://anomalyfund.vc/
Arthur B.
@ArthurB
FOUNDER · @Tezos co-founder w/ wife @breitwoman & Agitprop founder. Aligning ASI to not zap everyone is unsolved, that's bad. Tezos stuff, high context humor & more.
Liron Shapira
@liron
CREATOR · Host of Doom Debates — disagreements that must be resolved before the world ends.
David Chapman
@Meaningness
CREATOR · Better ways of thinking, feeling, and acting—around problems of meaning and meaninglessness; self and society; ethics, purpose, and value.
Rob Bensinger ⏹️
@robbensinger
CREATOR · Comms @MIRIBerkeley. RT = increased vague psychological association between myself and the tweet.
Eliezer Yudkowsky
@allTheYud
RESEARCHER · High-volume account of @ESYudkowsky, the original AI alignment guy. If it's missing punctuation, it's humor. If you can't tell, it's probably also humor.
Kaj Sotala
@xuenay
RESEARCHER · This is my new bio. It replaces my old one, which I was told was bad. Meme alt: @KajPictures
Chana
@ChanaMessinger
CREATOR · the stakes and the world and the stars. Head of Video for 80,000 Hours - send me content ideas! Opinions are my own, though.
Alicorn
@luminousalicorn
CREATOR · Co-creator of (1) glowfic (2) some children (3) Valinor, the Blessed Realm, where stand the two Trees