Guy
Out of This Box: The Last Musical (Written by Humans)
Johan Fredrikzon
Building on a course I've been designing, I'm writing a short book on the history of existential risk from AI for a general audience
Tyler John
Scott Viteri
Compute Funding
Centre pour la Sécurité de l'IA
Distilling AI safety research into a complete learning ecosystem: textbook, courses, guides, videos, and more.
Jai Dhyani
Developing AI Control for Immediate Real-World Use
Francesca Gomez
Building a technical mechanism to assess risks, evaluate safeguards, and identify control gaps in agentic AI systems, enabling verifiable human oversight.
Ola Mardawi
Remmelt Ellen
Cost-efficiently support new careers and new organisations in AI Safety.
Agustín Martinez Suñé
Seeking funding to develop and evaluate a new benchmark for systematically assessing the safety of LLM-based agents
Robert Krzyzanowski
ampdot
Community exploring and predicting potential risks and opportunities arising from a future that involves many independently controlled AI systems
James Lucassen
Epoch Artificial Intelligence, Inc.
Tracking and predicting AI progress toward AGI
Michaël Rubens Trazzi
How California became ground zero in the global debate over who gets to shape humanity's most powerful technology
Piotr Zaborszczyk
Reach the university that trained close to 20% of OpenAI's early employees
Iván Arcuschin Moreno
Iván and Jett are seeking funding to research unfaithful chain-of-thought, under Arthur Conmy's mentorship, for a month before the start of MATS.
Oliver Habryka
Funding for LessWrong.com, the AI Alignment Forum, Lighthaven and other Lightcone Projects
Jørgen Ljønes
We provide research and support to help people move into careers that effectively tackle the world’s most pressing problems.
Extending an AI control evaluation to include vulnerability discovery, weaponization, and payload creation