Remmelt Ellen
Cost-efficiently support new careers and new organisations in AI Safety.
Agustín Martinez Suñé
Seeking funding to develop and evaluate a new benchmark for systematically assessing the safety of LLM-based agents
Robert Krzyzanowski
with a Focus on Neurosurgery and Software as a Medical Device (SaMD)
James Lucassen
Epoch Artificial Intelligence, Inc.
For tracking and predicting future AI progress to AGI
ampdot
Community exploring and predicting potential risks and opportunities arising from a future that involves many independently controlled AI systems
Michaël Rubens Trazzi
How California became ground zero in the global debate over who gets to shape humanity's most powerful technology
Narcisse
Iván Arcuschin Moreno
Iván and Jett are seeking funding to research unfaithful chain-of-thought, under Arthur Conmy's mentorship, for a month before the start of MATS.
Piotr Zaborszczyk
Reach the university that trained close to 20% of OpenAI's early employees
Matthew Farr
Probing possible limitations and assumptions of interpretability | Articulating evasive risk phenomena arising from adaptive and self-modifying AI
Jørgen Ljønes
We provide research and support to help people move into careers that effectively tackle the world’s most pressing problems.
Oliver Habryka
Funding for LessWrong.com, the AI Alignment Forum, Lighthaven and other Lightcone Projects
Extending an AI control evaluation to include vulnerability discovery, weaponization, and payload creation
Orpheus Lummis
Non-profit facilitating progress in AI safety R&D through events
Mikolaj Kniejski
Conduct ACE-style cost-effectiveness analyses of technical AI safety orgs.
Arkose
AI safety outreach to experienced machine learning professionals
Rufo Guerreschi
Catalyzing a uniquely bold, timely, and effective treaty-making process for AI
Abhinav Singh
Workshop focused on AI security attacks and defense use cases, taught through capture-the-flag-style adversarial simulation labs.