Johan Fredrikzon
Building on a course I've been designing, I'm writing a short book on the history of existential risk from AI for a general audience
Tyler John
Tsvi Benson-Tilsen
Accelerating strong human germline genomic engineering to make lots of geniuses
John Sherman
Funding for "For Humanity: An AI Risk Podcast"
Centre pour la Sécurité de l'IA (French Center for AI Safety)
Distilling AI safety research into a complete learning ecosystem: textbook, courses, guides, videos, and more.
Jai Dhyani
Developing AI Control for Immediate Real-World Use
Ola Mardawi
ampdot
Community exploring and predicting potential risks and opportunities arising from a future that involves many independently controlled AI systems
Jonathan Claybrough
A 5-day bootcamp from the ML4Good organisers, upskilling participants in biosecurity to enable and empower career changes towards reducing biorisks
Nuño Sempere
A foresight and emergency response team seeking to react fast to calamities
Piotr Zaborszczyk
Reach the university that trained close to 20% of OpenAI's early employees
Murray Buchanan
Leveraging AI to enable coordination without demanding centralization
Oliver Habryka
Funding for LessWrong.com, the AI Alignment Forum, Lighthaven and other Lightcone Projects
Jørgen Ljønes
We provide research and support to help people move into careers that effectively tackle the world’s most pressing problems.
Jordan Braunstein
Combining "kickstarter"-style functionality with transitional anonymity to decrease the risk and raise the expected value of participating in collective action.
Alex Lintz
Mostly retroactive funding for prior work on AI safety comms strategy as well as career transition support.
PauseAI US
SFF main round did us dirty!
McKim Jean-Pierre
Help me, an economic historian from an underrepresented background, develop tech skills and reflect on advanced technologies to pivot to an AI governance career.
Orpheus Lummis
Non-profit facilitating progress in AI safety R&D through events
Center for AI Policy
Advocating for U.S. federal AI safety legislation to reduce catastrophic AI risk.