@Greg_Colbourn Thank you so much, Greg!
@Holly_Elmore
Organizing for a moratorium on frontier AI
https://hollyelmore.substack.com/
Holly Elmore
16 days ago
Sorry, this is a little confusing-- I was answering the question "why did you upvote this project?"
This project I did as an individual evolved into the org and project above :)
Holly Elmore
16 days ago
@mdickens They are completely different legal entities and under separate management but the same brand. PauseAI.info is PauseAI Global and used to have some PauseAI US stuff. PauseAI-US.org (under construction) will be the (long overdue) US website.
Holly Elmore
16 days ago
That proposal was from Joep at PauseAI Global, which is mostly digital and covers countries that don't have their own dedicated PauseAI org. PauseAI US is the US org which is focused on protesting and lobbying in the US.
Holly Elmore
16 days ago
@mdickens PauseAI US is a different organization, and that is for different programming (volunteer stipends, which the US org does not offer).
Holly Elmore
16 days ago
@Holly_Elmore @mdickens Oh my gosh, I cringe to read my overconfidence above, but there is finally a place on Manifund to donate to what is now PauseAI US, and I hope you are still interested in donating because we need it! https://manifund.org/projects/pauseai-us-2025-through-q2
Holly Elmore
3 months ago
@mdickens I was hoping to finish it tomorrow, but no later than next Friday, 8/30. Thank you for your support!
Holly Elmore
3 months ago
This project is coming to a close because I am no longer organizing independently! These donations helped me to start PauseAI US as a 501(c)(3) and PauseAI US Action Fund as a 501(c)(4). I will be making a new Manifund project to reflect our current situation and needs.
Almost precisely the amount I raised here ended up going to Manifund as their cut of our income in their role as PauseAI US's fiscal sponsor.
Holly Elmore
9 months ago
This project has grown into PauseAI US, which is currently incorporating as a 501(c)(3) (with the 501(h) election) and a 501(c)(4).
The org is officially recognized as a nonprofit and Holly is the Executive Director. Holly is still working to obtain a fiscal sponsor so that the org can receive tax-exempt donations right away, while pursuing tax-exempt status in its own right and forming the c4 so that we will be ready to do lobbying as that becomes more possible. The org will be hiring later this year, with the most likely first hires being Program Manager, Administrative Assistant, and Community Organizers.
1. Help me to obtain a fiscal sponsor with favorable terms. I have my own bookkeeping and will end the relationship when I have my own tax-exempt status, which I am pursuing immediately. I would like a sponsor that would add to the org's credibility, so a group with a good reputation in AI Safety or science would be ideal. I would like not to pay a huge amount, and for the sponsor to trust my bookkeeper rather than duplicating all my accounting.
2. Recommend potential hires and skilled volunteers.
3. Offer storage space for protest materials in the Berkeley area.
4. Donate money! I have a lot of bills associated with forming the org, and I'll soon have payroll taxes, workers' comp, etc. The budget my board approved is around $370,000 for this fiscal year (01/01/24-12/31/24) and I'm around $250,000 short. (However, donors will probably want to wait until I have a fiscal sponsor so that donations are tax-exempt.) And I could use more money to hire more people. Facilitating donor relations would be much appreciated.
Holly Elmore
about 1 year ago
I think CAIP is doing great work and I encourage individuals to support them beyond what Manifund can give.
(Donations to 501(c)(4) orgs are not tax-deductible, but you actually have to give a fairly large amount before you're better off itemizing charitable deductions than taking the standard deduction, so consider that deductibility might not make a difference for you. I have given 10% for years and I have never exceeded the standard deduction.)
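A minimal sketch of that arithmetic (the deduction figure is an assumption, roughly the 2023 US standard deduction for a single filer; the function name is illustrative, not from any tax library):

```python
# Assumption: approximate 2023 US standard deduction, single filer.
STANDARD_DEDUCTION = 13_850

def extra_deduction_from_giving(donations: float, other_itemized: float = 0) -> float:
    """Deduction gained by itemizing rather than taking the standard deduction."""
    itemized = donations + other_itemized
    return max(0.0, itemized - STANDARD_DEDUCTION)

# Giving 10% of an $80k income ($8k) with no other itemizable expenses still
# falls short of the standard deduction, so deductibility changes nothing.
print(extra_deduction_from_giving(8_000))   # 0.0
print(extra_deduction_from_giving(20_000))  # 6150.0 of extra deductible income
```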
Holly Elmore
about 1 year ago
I am excited to read about this project! I am frequently asked for my recommendations on responsible investing and shareholder actions for AI Safety, and I don't have the expertise to give an answer. I would likely use the recommendations in the report myself and suggest they be used in corporate campaigns or as protest demands. There is demand for this sort of social impact work for AI Safety, and I believe it could serve as a sort of low-risk "gateway" involvement with AI Safety advocacy for many people and orgs.
Holly Elmore
over 1 year ago
> I think this proposal lacks specific proposals or laws that might get pushed for. Are we thinking of compute regulations like ‘What does it take to catch a Chinchilla?’ Are we thinking of having laws in place that allow audits or inspections to take place?
This is due in part to me being in an exploratory phase and in part to the fact that my aim is to change opinion and framing rather than to popularize specific policies. The capital that you gain by advocating very general messages like "Pause AI until it's safe to proceed" can be applied to supporting specific policies as the opportunity arises, like "support Prop X to require licensing at multiple points of the training and development pipeline". I don't want to be too specific from the get-go because 1) it makes it harder to spread your message and the underlying logic of it, and 2) we can't control the exact legislative opportunities we will end up having for the public to support.
Thanks for your support!
Holly Elmore
over 1 year ago
@joel_bkr
> I could probably get clearer on this if I had a clearer sense of the political organizing activities you might pursue. Could you describe these in more detail?
I am spinning up and exploring quite a bit right now, which has meant taking a ton of meetings and networking. I'm doing light people organizing already by connecting people with jobs they can do in the broader AI Safety space. I'm not decided yet on what kinds of programs to pursue that would benefit from org structure, but I'm currently:
- planning a multi-city demonstration in November pegged to the UK Summit on AI,
- in the early stages of developing a "Moratorium Forum" (while still considering whether it's robustly positive enough to be worth the commitment) to increase median voters' engagement with the topic and their confidence in their position on AI Safety,
- finishing some academic/technical work that I hadn't taken all the way to publication before to increase my credibility, and
- working on personal essay writing about AI Safety advocacy topics to publish in magazines (something I've done before).
Holly Elmore
over 1 year ago
@joel_bkr Oh, I agree it's a liability to be steeped in the LW/EA community when my goal is to have a broader reach.
> policymakers get turned off very quickly when they hear the message that everyone's going to die
I am of the opinion that, even though everyone really could die from uncontrolled AI, we should be worried enough to act to prevent consequences well short of that! I don't like creating the impression that we shouldn't be concerned about lesser, but still huge, harms like possible mass unemployment or destruction of our shared social reality. I think it can be confusing, and construed either as a denial that AI will cause other harms or as saying that any harm short of extinction wouldn't be worth preventing. So, even though I'm still steeped in the LW memesphere, I wouldn't make this specific error.
It's also true that I'm not an ML expert and it would be better if I were. Not understanding ML deeply could lead advocates to support worse policies. But putting the onus on AI developers to prove safety instead of safety advocates having to prove danger is something that I think anyone can safely advocate for.
Holly Elmore
over 1 year ago
Thank you, Joel!
I don’t feel that my approach relies on the LessWrong perspective overmuch, though perhaps I lack perspective on that. I’ve felt at odds with the rationality community on this because I’m advocating an approach besides alignment, and a political one at that.
I am in favor of Overton window pushing (based on my experience in vegan advocacy and on other social movements), though I’m curious to hear what you oppose about Conjecture’s approach.
As for credibility, it would be nice if I knew more about the technology, but I think the question of whether to regulate/slow/stop AI transcends many of the technical details and has more to do with how much risk the people of earth will tolerate from AI experiments. If we can get momentum behind the moratorium position, the necessary expertise will shift to something more political or at the interface of the technology and regulatory apparatus. No one is currently an expert on what we need here (similar to how no one was an AI Safety expert 10 years ago). I’m a generalist and, much as I wish that better qualified people had stepped forward to lead this, they didn’t, and I think I have the organizing skills to get it started.
Holly Elmore
over 1 year ago
@havenworsham Hi Haven! I'd be very interested in talking :) https://calendly.com/holly-elmore/30min
Holly Elmore
over 1 year ago
> I would be even more excited if Holly found a strong cofounder; though this is my bias from tech startups (where founding teams are strongly preferred over individual founders), and I don't know if this heuristic works as well for starting movements.
I would love to have a co-founder and assemble a team eventually. My model is that this is best achieved in a situation like mine by diving into the work and attracting the right people with it. I have been working with many people whom I adore and work well with, but they are all talented people with many options, most of them in the territory of inside game. Given their expertise and connections, it might be the right move for them to do more traditional AI safety/policy activities. I think it likely that the best co-founder for the kinds of activities I think are most neglected is someone outside the core AI Safety community whom I haven’t met yet, but likely would meet within 6 months of dedicated organizing.
I’m also not sure that an org with several employees is the ultimate destiny of this project. It seems possible to me that other, perhaps leaner, structures, maybe more reliant on volunteers and without much cash flow, will be more prudent. So while a co-founder would be ideal for creating a stable, lasting org, at this point I place quite a bit of value on nimbleness, info value, not being beholden to someone else’s alliances, and the ability to pivot. So it’s not obvious to me that it would be better to have a cofounder before being funded to get started.
(On a practical note, I might lose opportunities for employment by doing moratorium advocacy. If I know I’m going to be funded for the rest of the year, I can dive in, but if I don’t, I still want to consider working in situations where I have to be more diplomatic. So that consideration is pushing me to try to secure funding at an earlier stage than I’m sure is ideal for you as the evaluator.)
Holly Elmore
over 1 year ago
> Should Holly pursue this independently, or as part of some other org? I assume she's already considered/discussed this with orgs who might employ her for this work, such as FLI or CAIS?
I am considering employment at aligned orgs, but I’m strongly attracted to having my independence for some of the reasons discussed in the above comment^. The established orgs have their reputations and alliances to consider. They may have a better shot at achieving change through insider diplomacy and by leveraging connections, and thus it may be the right call for them not to speak directly to the public to ask for the policies they may actually want. That can be true at the same time that there is a huge opportunity for outside game left on the table. There are many benefits to working with a team, and I may decide that they are worth sacrificing the freedom to pursue the vanguard position (if I do, I would of course return any Manifund money I had been granted). There are grassroots advocacy groups like Pause AI that already exist (mostly in Europe) that I would be able to work with as an org-less organizer. But it seems that the freedom to try this strategy without implicating anyone else is pretty valuable, so I want to see if that is an option.
Holly Elmore
over 1 year ago
I’m now concerned that this proposal is out of scope for Manifund because it involves political advocacy, which I’m discussing with the team in the Manifund Discord. But I will take this opportunity to make the case for the proposal as it was written above as of end of day 7/7/23.
> Is moratorium good or bad? I don't have a strong inside view and am mostly excited by Holly's own track record. I notice not many other funders/core EAs are excited for moratorium so far (but this argument might prove too much)
I left my job at Rethink Priorities to pursue moratorium advocacy because I observed that the people in the AI safety space, both in technical alignment and policy, were biased against political advocacy. Even in EA Animal spaces (where I most recently worked), people seemed not to appreciate how much the success of “inside game” initiatives like The Humane League’s corporate campaigns (to, for example, increase farmed animal cage sizes) depended on the existence of vocal advocacy orgs like Direct Action Everywhere (DxE) and PETA, which stated the strongest version of their beliefs plainly to the public and acted in a way that legibly accorded with that. This sort of “outside game” moves the Overton window and creates external pressure for political or corporate initiatives. Status quo AI Safety is trying to play inside game without this external pressure, and hence it is often at the mercy of industry. When I began looking for ways to contribute to pause efforts and learning more about the current ecosystem, I was appalled at some of the things I was told. Several people expressed to me that they were afraid to do things the AI companies didn’t like, because otherwise the companies might not cooperate with their org, or with ARC. How good can evals ever be if they are designed not to piss off the labs, who are holding all the cards? The way we get more cards for evals and for government regulations on AI is to create external pressure.
The reason I’m talking about this issue today is that FLI published an (imperfect) call for a 6-month pause and got respected people to sign it. This led to a flurry of common knowledge creation and the revelation that the public is highly receptive not only to AI Safety as a concept, but to moratorium as a solution. I’m still hearing criticism of this letter from EAs today as being “unrealistic”. I’m sorry, but how dense can you be? This letter has been extremely effective. The AI companies lost ground and had to answer to the people they are endangering. AI c-risk went mainstream!
The bottom line is that I don’t think EA is that skilled at “outside game”, which is understandable because in the other EA causes, there was already an outside game going on (like PETA for animal welfare). But in AI Safety, very unusually, the neglected position is the vanguard. The public only just became familiar enough with AI capabilities not to dismiss concerns about AGI out of hand (many of the most senior people in AI Safety seem to be anchored on a time before this was true), so the possibility of appealing to them directly has just opened up. I think that the people in the AI Safety space currently— people trained to do technical alignment and the kind of policy research that doesn’t expect to be directly implemented— 1) aren’t appreciating our tactical position, and 2) are invested in strategies that require the cooperation of AI labs or of allies that make them hesitant to simply advocate for the policies they want. This is fine— I think these strategies and relationships are worth maintaining— but someone should be manning the vanguard. As someone without attachments in the AI Safety space, I thought this was something I could offer.
| For | Date | Type | Amount |
|---|---|---|---|
| Manifund Bank | 2 months ago | withdraw | 33250 |
| Manifund Bank | 2 months ago | deposit | +33250 |
| Manifund Bank | 4 months ago | withdraw | 23750 |
| Manifund Bank | 4 months ago | deposit | +23750 |
| Manifund Bank | 7 months ago | withdraw | 5310 |
| Holly Elmore organizing people for a frontier AI moratorium | about 1 year ago | project donation | +100 |
| Holly Elmore organizing people for a frontier AI moratorium | about 1 year ago | project donation | +100 |
| Holly Elmore organizing people for a frontier AI moratorium | over 1 year ago | project donation | +2500 |
| Holly Elmore organizing people for a frontier AI moratorium | over 1 year ago | project donation | +100 |
| Holly Elmore organizing people for a frontier AI moratorium | over 1 year ago | project donation | +10 |
| Holly Elmore organizing people for a frontier AI moratorium | over 1 year ago | project donation | +2500 |