@remmelt
Stop/Pause projects coordinator at AI Safety Camp.
$0 in pending offers
I helped launch the first AI Safety Camp and now coordinate the program with Linda Linsefors.
My technical research clarifies reasons why the AGI control problem would be unsolvable: lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable
I support creatives and other communities in restricting harmful AI scaling:
https://forum.effectivealtruism.org/posts/q8jxedwSKBdWA3nH7/we-are-not-alone-many-communities-want-to-stop-big-tech-from
Previously, I co-founded Effective Altruism Netherlands. Pre-2021 background here:
forum.effectivealtruism.org/posts/C2gfgJrvNF8NMXjkt/consider-paying-me-or-another-entrepreneur-to-create
Remmelt Ellen
5 days ago
And thank you for your donation offers, Austin and Jason. I appreciate your considerations here.
Remmelt Ellen
5 days ago
Austin, glad to read your points.
> I'm quite confused why other donors aren't excited to fund AISC.
This is often confusing, even to us as organisers. Some years we had to get by on little money, and other years we would suddenly get a large grant or an influx of donations.
The lowest-quality editions in my opinion were AISC3 (when I was burned out, and we ran the edition across rooms in a Spanish hotel) and AISC4 (when COVID struck and we quickly switched to virtual). We ran those editions on a shoestring budget. But the year after in 2021, we received $85k from LTFF and $35k from SFP.
Financial stability would help us organise better editions – being able to plan next editions knowing we will have the money. "Stability" is not sexy but it makes the difference between being able to fully dedicate oneself and plan long term as an organiser, and organising on the fly.
> Last time for AISC 10, they ended up raising a fair amount ($60k), but this time it looks like there's less support.
Last time, we asked Manifund staff to extend the fundraiser deadline (by 2 weeks), in order to not get cancelled. Looking at the datestamps of my email notifications, donations are coming in faster this time.
Having said that, if any funder here decided not to donate, I'd be curious why!
> I'm specifically hesitant to fund the stop/pause agenda that Remmelt supports.
Before getting into this: ~5 stop/pause projects were hosted in 2024 and again in 2025. Our program hosted about five times as many other projects. The majority of our projects are in theoretical or applied safety.
I'm giving my takes here. Robert will have different thoughts on the projects he supports.
We might get more stop/pause projects in 2026, which is what makes me most excited as an organiser. I'm also excited about technical projects that enable comprehensive assessments of model safety issues that AI companies have to address.
I'm generally worried about projects that assume it is simple – or somehow obviously doable – to make large machine learning systems safe, because I think it's bad for the community's epistemics. Particularly if alumni end up promoting their solutions to others in the community, or decide to commercialise them for companies, this could support safety-washing. Safety-washing is a way for corporate actors to avoid accountability – it allows them to build dangerous systems and make them look safe, instead of actually scoping their development of systems to be safe. It's counterproductive to AI Governance.
I value folks with a security mindset who are clear about not wanting to make things worse. I'm unsure how much the camp has enabled people to think like that in the past. Some of our alumni even went on to work at OpenAI and DeepMind. So that would be a reason not to donate to us.
Again, these are my thoughts. Robert and Linda will have their own.
> For one, I don't like the polarization that the stop/pause framework introduces
Is the polarisation in the framework itself, or in the implementation of it? Curious for your thoughts.
Various respected researchers (e.g. Yudkowsky, Yampolskiy, Shovelain) who have been researching the alignment problem for about the longest are saying that we are not on track to solve alignment (given the rate of development over previous years and/or actually intractable sub-problems of control).
Slowing down AI development helps alignment researchers spend more time working out the problem. It does not have to be polarising, provided alignment researchers recognise the need for society-wide efforts to restrict corporate-AI scaling.
Where tensions can occur is where alignment folks indirectly boost work at AGI companies. For example, some alignment researchers at OpenAI have made confident public statements about being able to make AGI safe, and others have created shallow alignment techniques that made it easier to commercialise products. OpenAI has received $30 million from OpenPhil, and 80k has advised talented engineers to join OpenAI. One start-up dedicated to alignment even offered its state-of-the-art supercomputer to OpenAI. Similar things have happened at DeepMind and Anthropic.
There is a deep question here of whether the community wants to keep risking the acceleration of AGI development in the hope of solving all the lethal sub-problems we have identified but have so far been unable to solve.
> if I had to "choose a side" I might very well come down on "AGI sooner would be good"
Why do you think "AGI sooner would be good"? Is the argument that faster development results in fewer competing architectures?
From my perspective, introducing this self-modifying autonomous machinery should be avoided, given the risk of losing all the life we care about on Earth. We should coordinate to avoid it, not only because it is bad to allow companies like OpenAI to push the frontiers of dangerous tech and have other actors (like Google) rush after them, but also because once the tech pushes all workers out of the loop and starts modifying and reproducing itself in runaway feedback loops, we lose all control. Under such exponential tech processes, mass destruction happens either way: whether one architecture comes to dominate our economy, or multiple architectures end up interfacing over high-bandwidth channels.
Even if you think the alignment problem could be solved eventually, it seems good to buy time. We can buy time by coordinating with other stakeholder groups to slow developments down. Then we can build the capacity to research the problem more rigorously.
> Linda, who I've heard good things about, won't be organizing this time around. (I'm unsure how much to read into this -- it might just be a vicious cycle where the organizers leave for lack of funding, and the funders don't fund for lack of organizers)
I don't want to speak for Linda here, so I asked her to comment :)
Remmelt Ellen
9 days ago
@ariel_gil, this is informative – adding it to our impact anecdotes list and also as a quote on this page!
Remmelt Ellen
21 days ago
@Ukc10014 It's helpful for us too to know what the value of AISC is for you. Thank you.
Remmelt Ellen
28 days ago
Our new fundraiser is up for the 11th edition.
Grant funding is tight. Consider making a private donation to make the next camp happen!
Remmelt Ellen
6 months ago
Many thanks to the proactive donors who supported this fundraiser! It got us out of a pickle, giving us the mental and financial space to start preparing for edition 10.
Last week, we found funds to cover backpay plus edition 10 salaries. There is money left to cover some stipends for participants from low-income countries, and a trial organiser to help evaluate and refine the increasing number of project proposals we receive.
That said, donations are needed to cover the rest of the participant stipends, and runway for edition 11. If you continue to reliably support AI Safety Camp, we can reliably run editions, and our participants can rely on having some of their living costs covered while they do research.
P.S. Check out summaries of edition 9 projects here. You can also find the public recordings of presentations here.
Remmelt Ellen
11 months ago
@adamyedidia, thank you for the donation. We are getting there in terms of funding, thanks to people like you.
We can now run the next edition
Remmelt Ellen
12 months ago
@Alex319, thank you for the thoughtful consideration, and for making the next AI Safety Camp happen!
As always, if you have any specific questions or things you want to follow up on, please let us know.
Remmelt Ellen
about 1 year ago
@zeshen, thank you for the contribution!
I also saw your comment on the AI Standards Lab being launched out of AISC.
I wasn't sure about the counterfactual, so that is good to know.
Remmelt Ellen
about 1 year ago
See @MarcusAbramovitch's notes in the [#grantmakerscorner](https://discord.com/channels/1111727151071371454/1145045940865081434/1185644569891721329) on Discord.
Remmelt Ellen
about 1 year ago
@IsaakFreeman, thank you. I appreciate you daring to be a first mover here.
Remmelt Ellen
over 1 year ago
@J-C, thank you too for the conversation.
If it's helpful, here are specific critiques of longtermist tech efforts I tweeted:
- Past projects: twitter.com/RemmeltE/status/1626590147373588489
- Past funding: twitter.com/RemmeltE/status/1675758869728088064
- Godlike AI message: twitter.com/RemmeltE/status/1653757450472898562
- Counterarguments: twitter.com/RemmeltE/status/1647206044928557056
- Gaps in community focus: twitter.com/RemmeltE/status/1623226789152841729
- On complexity mismatch: twitter.com/RemmeltE/status/1666433433164234752
- On fundamental control limits: twitter.com/RemmeltE/status/1665099258461036548
- On comprehensive safety premises: twitter.com/RemmeltE/status/1606552635716554752
I have also pushed back against Émile Torres and Timnit Gebru (researchers I otherwise respect):
- twitter.com/RemmeltE/status/1672943510947782657
- twitter.com/RemmeltE/status/1620596011117993984
^– I can imagine those tweets got lost (I appreciate the searches you did).
You are over-ascribing interpretations somewhat (eg. "social cluster" is a term I use to describe conversational/collaborative connections in social networks), but I get that all you had to go on there was a few hundred characters.
~ ~ ~
I started in 2015 in effective altruism movement-building, and I never imagined I would become this critical of the actions of the community I was building up.
I also reached my limit of trying to discuss specific concerns with EAs/rationalists/longtermists.
Having a hundred-plus conversations only to watch interlocutors continue business as usual does this to you.
Maybe this would change if I wrote a Katja-Grace-style post – talking positively from their perspective, asking open-ended questions so readers reflect on what they could explore further, finding ways to build on their existing directions of work so they feel empowered rather than averse to digging deeper, not stating any conclusions that conflict with their existing beliefs or sound too strong within the community's Overton window, and so on.
Realistically though, people who made a career upskilling in and doing alignment work won't change their path easily, which is understandable. If the status quo for technically-minded researchers is to keep trying to invent new 'alignment solutions' with funding from (mostly) tech guys, then there is little point to clarifying why that would be a dead end.
Likewise, where AI risk people stick mostly to their own nerdy intellectual circles to come up with outreach projects to slow AI (because "we're the only ones who care about extinction risk"), there is little point in me trying to bridge between them and other communities' perspectives.
~ ~ ~
Manifund doesn't seem like the place to find collaborators beyond these circles, but I'm happy to change my mind:
I am looking for a funder who already relates with the increasing harms of AI-scaling, and who wants to act effectively within society to restrict corporations from scaling further.
A funder who acknowledges critiques of longtermist tech efforts so far (as supporting companies to scale up larger AI models deployed for a greater variety of profitable ends), and who is looking to fund neglected niches beyond.
Remmelt Ellen
over 1 year ago
Your selected quotes express my views well
Note though that the “self-congratulatory vibes” point was in reference to the Misalignment Museum: https://twitter.com/RemmeltE/status/1635123487617724416
And I am skipping over the within-quote commentary ;)
Remmelt Ellen
over 1 year ago
* Note that I was talking about conflicts between the AI Safety community and communities like AI ethics, and the people being harmed whom AI ethics researchers advocate for (artists and writers, data workers, marginalised tech-exploited ethnic communities, etc).
Remmelt Ellen
over 1 year ago
Thank you for sharing your concerns.
> How is suing AI companies in court less likely to cause conflict than the 'good cop' approach you deride?
Suing companies is business as usual. Rather than focus on ideological differences, it focusses on concrete harms done and why those are against the law.
Note that I was talking about conflicts between the AI Safety community and communities like AI ethics, and the people being harmed whom AI ethics researchers advocate for (artists and writers, data workers, marginalised tech-exploited ethnic communities, etc).
Some amount of conflict with AGI lab folks is inevitable. Our community's attempts to collaborate with the labs to research the fundamental control problems first and to carefully guide AI development to prevent an arms race did not work out. And not for lack of effort on our side! Frankly, their reckless behaviour now, reconfiguring the world on behalf of the rest of society, needs to be called out.
> Are you claiming that your mindset and negotiation skills are more constructive?
As I mentioned, I'm not arguing here for introducing a bad cop. I'm arguing for starting lawsuits to get injunctions against widespread harms done (data piracy, model misuses, toxic compute).
> What leverage did we have to start with?
The power imbalance was less lopsided. When the AGI companies were in their start-up phase, they were relying a lot more on our support (funding, recruitment, intellectual support) than they do now.
For example, public intellectuals like Nick Bostrom had more ability to influence narratives than they do now. Now AGI labs have ratcheted up their own marketing and lobbying, crowding out the debate.
> few examples for illustration, but again, others can browse your Twitter:
Could you clarify why those examples are insulting to you?
I am pointing out flaws in how the AI Safety community has acted in aggregate, such as offering increasing funding to DeepMind, OpenAI and then Anthropic. I guess that’s uncomfortable to see in public now, and I’d have preferred that AI Safety researchers had taken this seriously when I expressed concerns in private years ago.
Similarly, I have critiqued Hinton for letting his employer Google scale increasingly harmful models based on his own designs for years, and, despite his influential position, for still not offering much of a useful response on preventing these developments in his public speaking tours. Scientists in tech have great power to impact the world, and therefore great responsibility to advocate for norms and regulation of their technologies.
Your selected quotes express my views well. I feel you selected them with care (ie. no strawmanning, which I appreciate!).
> I think there's some small chance you could convince me that something in this ballpark is a promising avenue for action. But even then, I'd much rather fund you to do something like lead a protest march than to "carefully do the initial coordination and bridge-building required to set ourselves up for effective legal cases."
Thank you for the consideration!
For | Date | Type | Amount (USD) |
---|---|---|---|
10th edition of AI Safety Camp | 28 days ago | project donation | +50 |
10th edition of AI Safety Camp | about 2 months ago | project donation | +20 |
10th edition of AI Safety Camp | 2 months ago | project donation | +5000 |
10th edition of AI Safety Camp | 5 months ago | project donation | +50 |
10th edition of AI Safety Camp | 5 months ago | project donation | +33 |
10th edition of AI Safety Camp | 5 months ago | project donation | +33 |
10th edition of AI Safety Camp | 6 months ago | project donation | +3000 |
Manifund Bank | 7 months ago | withdraw | 9110 |
10th edition of AI Safety Camp | 10 months ago | project donation | +1000 |
10th edition of AI Safety Camp | 10 months ago | project donation | +100 |
10th edition of AI Safety Camp | 10 months ago | project donation | +1000 |
10th edition of AI Safety Camp | 11 months ago | project donation | +4000 |
10th edition of AI Safety Camp | 11 months ago | project donation | +1000 |
10th edition of AI Safety Camp | 11 months ago | project donation | +10 |
10th edition of AI Safety Camp | 11 months ago | project donation | +2000 |
Manifund Bank | 11 months ago | withdraw | 48257 |
10th edition of AI Safety Camp | 12 months ago | project donation | +5000 |
10th edition of AI Safety Camp | 12 months ago | project donation | +10 |
10th edition of AI Safety Camp | 12 months ago | project donation | +2000 |
10th edition of AI Safety Camp | 12 months ago | project donation | +75 |
10th edition of AI Safety Camp | 12 months ago | project donation | +5000 |
10th edition of AI Safety Camp | 12 months ago | project donation | +100 |
10th edition of AI Safety Camp | 12 months ago | project donation | +1042 |
10th edition of AI Safety Camp | 12 months ago | project donation | +1000 |
10th edition of AI Safety Camp | 12 months ago | project donation | +3000 |
10th edition of AI Safety Camp | 12 months ago | project donation | +200 |
10th edition of AI Safety Camp | 12 months ago | project donation | +20 |
10th edition of AI Safety Camp | 12 months ago | project donation | +50 |
10th edition of AI Safety Camp | 12 months ago | project donation | +500 |
10th edition of AI Safety Camp | 12 months ago | project donation | +15000 |
10th edition of AI Safety Camp | 12 months ago | project donation | +10 |
10th edition of AI Safety Camp | 12 months ago | project donation | +250 |
10th edition of AI Safety Camp | 12 months ago | project donation | +15000 |