Iván Arcuschin Moreno
Iván and Jett are seeking funding to research unfaithful chain-of-thought, under Arthur Conmy's mentorship, for a month before the start of MATS.
Ekō
Case Study: Defending OpenAI's Nonprofit Mission
Dr Waku
CANCELLED: Cover anticipated costs for making videos in 2025
Orpheus Lummis
Non-profit facilitating progress in AI safety R&D through events
Piotr Zaborszczyk
Reach the university that trained close to 20% of OpenAI early employees
Michaël Rubens Trazzi
How California became ground zero in the global debate over who gets to shape humanity's most powerful technology
Apart Research
Support the growth of an international AI safety research and talent program
ampdot
Community exploring and predicting potential risks and opportunities arising from a future that involves many independently controlled AI systems
Jørgen Ljønes
We provide research and support to help people move into careers that effectively tackle the world’s most pressing problems.
Shi Hao Lee
Keeping track of AI governance in Southeast Asia - an AI Safety Asia initiative.
ADAM
Exploring grounds to apply quantum physics in AI
Alex Cloud
Claire Short
Program for Women in AI Alignment Research
Carmen Csilla Medina
Damiano Fornasiere and Pietro Greiner
Liron Shapira
Let's warn millions of people about the near-term AI extinction threat by directly and proactively explaining the issue in every context where it belongs
David Corfield
Site maintenance, grant writing, and leadership handover
Jesse Hoogland
Addressing Immediate AI Safety Concerns through DevInterp
Johan Fredrikzon
Matthew A. Clarke
Working title: “Compositionality and Ambiguity: Latent Co-occurrence and Interpretable Subspaces”