We aim to accelerate the creation of strong human germline genomic engineering.
For children born using this technology, many major diseases that currently end or diminish the lives of millions of people would be prevented. Lifespan, healthspan, and cognitive healthspan would be greatly extended. Potentially, parents could choose to nudge the personality traits of their future children--e.g. toward bravery, kindness, curiosity, reliability, and determination. What is definitely feasible is increasing IQ, which, while far from being everything that matters even within cognitive capacity, is nevertheless a minimum viable pathway to making there be many more people who can achieve groundbreaking scientific and philosophical insights. Crucially, this technology is humanity's best hope for raising a generation of people able to fully deal with the existential threat of AGI.
Our plan is to tap techno-optimist, humanist, and existential-derisking capital---financial, political, and human---to accelerate the remaining scientific discoveries and technological innovations that are prerequisite to safe, accessible, powerful germline engineering.
The supersupergoal of this project is to decrease existential risk. Technical AGI alignment is likely far too difficult for the current generation of humans (see e.g. https://www.lesswrong.com/posts/nwpyhyagpPYDn4dAW/the-field-of-ai-alignment-a-postmortem-and-what-to-do-about). The only hope is to delay the creation of AGI, at least until humans can solve AGI alignment.
Pursuant to that, the supergoal of this project is to accelerate strong human intelligence amplification. The main way this helps is by making there be smarter people who can solve AGI alignment. A secondary benefit is to offer a vision for humanity's imminent thriving through intelligence that does not require making AGI.
The only strong human intelligence amplification method that is both likely to work and likely to be feasible soon is strong human germline genomic engineering; see https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods.
Therefore the goal of this project is to accelerate strong human germline genomic engineering.
(This is copied from https://berkeleygenomics.org/.)
Our mission is to unlock the promise of safe, accessible, and powerful germline genetic engineering for humanity.
Our plans:
Publicly present the case in favor of making human germline engineering technology soon.
Work out and describe how to make this technology in a safe, socially beneficial, widely accessible, and effective way.
Through dialogue with scientists, the public, and policymakers, create innovation-positive ethical guidelines and legal regulation for germline engineering.
Generate social momentum and help potential funders, scientists, and entrepreneurs to coordinate.
Salary for me for up to two years, payment for research contractors, payment for event operations.
With minimal funding I can, you know, keep my apartment for an additional month. Full funding would enable me to focus on the work instead of fundraising, and to contract more help to go faster.
It's me and my cofounder Rachel Reid.
Rachel is focusing on running events. In 2025 we've held three talks by experts on aspects of germline engineering, with more coming. In June we're hosting a summit: https://berkeleygenomics.org/events/ReproFro2025.html
I don't have a track record on similar projects. In the past 3 years I've studied the field (reading, writing, networking, fundraising for other people). In 2025 I made http://berkeleygenomics.org/, and wrote a book on technical methods for strong germline engineering: https://www.lesswrong.com/posts/2w6hjptanQ3cDyDw7/methods-for-strong-human-germline-engineering
AGI comes too soon for the next generation of geniuses to help. There's both substantial probability of this happening and substantial probability of it not happening: https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce. But the calculus still works out to this intervention being very high impact in expectation: https://tsvibt.blogspot.com/2022/08/the-benefit-of-intervening-sooner.html
Strong germline engineering is not feasible in the next 5-20 years. I think this is unlikely, as the core technologies don't appear to be extremely difficult to develop, and there are several disjunctive pathways. See: https://www.lesswrong.com/posts/2w6hjptanQ3cDyDw7/methods-for-strong-human-germline-engineering
There actually isn't much philanthropic funding to be tapped, and the goal can't be reached via commercially viable projects.
BGP can't access the philanthropic funding and investment capital that does exist.
The regulatory situation stays bad / gets worse. This would delay uptake, pushing back the benefits.
The most likely outcome of failure is simply that nothing especially useful happens. Some more articles will be written, which could be marginally helpful. There might also be some mean news articles written about the project.
There are potential perils to success, described here: https://berkeleygenomics.org/articles/Potential_perils_of_germline_genomic_engineering.html
$0 from nowhere, 0 explanations given. I'm just burning through my meager personal savings.
Austin Chen
9 days ago
Approving this project as compatible with our charitable mission of furthering public scientific research! Tsvi has a track record within the rationalist community, and this agenda seems intriguing; I hope it goes well.
Rahul Swaminathan
9 days ago
This is an obvious thing to be trying, and it would be sad if this didn't get the minimum funding. If people can see other viable alternatives as plausible, it could make pausing AI more palatable.
I don't know how realistic this is, but your writing is well thought out and you seem fairly intelligent, and I want to encourage you to keep doing this.
Good luck!
Kaarel Hänni
12 days ago
Instead of trying to make an alien god that is nice to us throughout its unfolding, I think we should indefinitely be becoming smarter ourselves. Becoming somewhat smarter faster can help us collectively understand that we shouldn't be making an alien god (assuming I'm indeed right about this) in time before an alien god is created, help us (figure out how to) reorganize society so that an alien god is radically less likely to be created per unit of time, and help us solve many of the various other problems (like destitution, disease, and death in general) we're facing ourselves. Also, becoming smarter and understanding more is cool.
Tsvi Benson-Tilsen
11 days ago
@Kaarel Thanks for your offer!
I'm unsure whether I agree with this strategically or not.
One consideration is that, once you're smart enough, it may be more feasible to go really fast with [AGI alignment good enough to end acute risk] than to go really fast with convincing the world to effectively stop AGI creation research. The former is a technical problem you could, at least in principle, solve in a basement with 10 geniuses; the latter is a big messy problem involving myriads of people. I put substantial probability on "no successful AGI slowdown, but AGI is hard to make". In those worlds, where algorithmic progress continually burns the fuse on the intelligence explosion, a solution remains urgent, i.e. it prevents more doom the sooner it comes.
But maybe good-enough AGI alignment is really extra super hard, which is plausible. Maybe effective world-coordination isn't as hard.
But I do mostly agree with this in terms of long-term vision.