Utsav Singhal
Empowering everyone to detect and combat AI-generated content threats with advanced multi-modal verification tools
ampdot
Community exploring and predicting potential risks and opportunities arising from a future that involves many independently controlled AI systems
Angie Normandale
Seeding a business that finds grants and high-net-worth individuals beyond EA
Antony Kalabukhov
Developing an innovative wisdom layer for AI that enhances its capabilities for deep analysis, safe AI, and creative solutions to complex systemic problems.
Miles Whiticker
Supporting volunteers working in AI safety field building, animal advocacy, and EA meta community building
Carmen Csilla Medina
Jess Binksmith
We provide research and support to help people move into careers that effectively tackle the world’s most pressing problems.
Damiano Fornasiere and Pietro Greiner
Daniel González
Advancing Humanity Through Synergy of Mind and Machine
Alex Cloud
David Corfield
Site maintenance, grant writing, and leadership handover
Liron Shapira
Let's warn millions of people about the near-term AI extinction threat by directly and proactively explaining the issue in every context where it belongs
Lin Bowker-Lonnecker
Updates, additional resources and promotion for a 4-week introductory syllabus that looks at interventions to help prevent future pandemics.
Matthew Farrugia-Roberts
Retrospective support for small virtual reading group on AI safety topics
Jesse Hoogland
Addressing Immediate AI Safety Concerns through DevInterp
PIBBSS
Funding unique approaches to research, field diversification, and the scouting of novel ideas by experienced researchers, supported by the PIBBSS research team
Brian Tan
~4 FTE for 9 months to fund WhiteBox Research, mainly for the 2nd cohort of our AI Interpretability Fellowship in Manila
Oliver Klingefjord
Develop an LLM-based coordinator and test against consumer spending with 200 people.
Oliver Habryka
Funding for LessWrong.com, the AI Alignment Forum, Lighthaven and other Lightcone Projects
Claire Short
Program for Women in AI Alignment Research