Project summary
SaferAI is a governance and research non-profit focused on AI risk management. Their work centers on standards and governance, with involvement in the EU AI Act Code of Practice, the NIST US AISIC, and the OECD G7 Hiroshima Process taskforce. They additionally produce ratings of frontier AI developers' risk management practices. To support this work, they conduct fundamental research into quantitative risk management for AI, with a particular focus on assessing risks from LLM cyberoffensive capabilities.
What are this project's goals and how will they be achieved?
SaferAI aims to incentivize the development and deployment of safer AI systems through better risk management. The organisation focuses on research to advance the state of the art in AI risk assessment and on developing methodologies and standards for the risk management of AI systems.
How will this funding be used?
SaferAI's leadership projects that the next marginal $300k would be used to:
Hire ML researchers to work with their Head of Research, Malcolm Murray, to further develop risk models for AIs.
Move their Executive Director, Siméon, onto payroll; he has not been drawing a salary and has instead supported himself from other grants.
Expand the involvement of part-time senior staff and advisors.
Hire a pool of research assistants who can be tapped for assorted projects.
Leadership estimates total room for more funding at €2M+, given the span of opportunities they see.
Who is on the team and what's their track record on similar projects?
SaferAI is led by their founder and Executive Director, Siméon Campos, who previously co-founded EffiSciences. Their work on quantitative risk assessment is led by Malcolm Murray (Head of Research), previously a Managing VP at Gartner with over 10 years of experience in risk management. Their work on rating frontier AI companies has involved Henry Papadatos (Managing Director; technical component), who holds an MS in Robotics & Data Science, and Gábor Szórád (Head of Product; product development), an experienced entrepreneur and manager who has led teams of 8,000+. Their policy engagement is led by Chloe Touzet (Head of Policy), who holds a DPhil from Oxford and spent 5 years as a researcher at the OECD; standards development is led by James Gealy (Head of Standards), who has prior engineering experience in the aviation industry. They're advised by Cornelia Kutterer, former Senior Director of EU Government Affairs at Microsoft. A full list of staff and backgrounds is available on their about page.
What are the most likely causes and outcomes if this project fails? (premortem)
Although SaferAI has attracted a number of highly experienced individuals, many of the team are junior (including the Executive Director, Siméon), and the more experienced members mostly bring experience from parallel fields (e.g. aviation standards, risk management outside AI, labor policy). In other words, although all the requisite skills and experience are represented somewhere on the SaferAI team, combining them will require a high level of coordination that may be challenging to achieve.
SaferAI is operating in a challenging domain, with significant lobbying efforts from AI companies actively obstructing safety standards proposed by civil society. Even if they execute well, it is quite possible that many of their initiatives will fail to culminate in a desirable outcome, an issue shared with all projects in this space.
What other funding is this person or project getting?
For various reasons, SaferAI can't disclose a detailed list of all their funders in public, but they have disclosed that they have multiple institutional and private funders from the wider AI safety space. They may be able to disclose more details in private.