We propose the establishment of an AI Accountability Hub within Ekō, leveraging our extensive experience in corporate campaigning both in the U.S. and globally. We aim to ensure responsible AI development and deployment by creating robust accountability mechanisms during this critical phase of AI evolution. We are seeking $100,000 in seed funding to launch the Hub, pilot various AI safety campaigns with our membership, and develop a strategic roadmap for advancing the most promising initiatives while enhancing our infrastructure to facilitate rapid response to emerging threats.
While the AI safety community has made strides in technical research and policy development, there hasn't been time for a robust corporate accountability ecosystem to develop around AI companies and their practices. And in purely for-profit models, such safeguards do not emerge spontaneously; they require external accountability mechanisms.
Ekō proposes to drive direct accountability and shift corporate behavior toward the public good via corporate campaigning: mobilizing consumers, shareholders, workers, and other stakeholders around the risks of corporate behavior. These campaigns will also build a narrative about AI harms and risks that helps set the stage for effective regulation.
We believe in common-sense measures such as ensuring that:
Potentially dangerous AI advances are rolled out cautiously, with meaningful guardrails and rigorous safety mechanisms such as external red teaming.
Democracies retain control of the most advanced AI technologies, instead of allowing them to fall into the hands of authoritarian regimes.
There are mechanisms in place to share the wealth created by AI (e.g., a windfall tax and, where possible, ensuring that the most advanced AI models are owned by structures that are not optimizing purely for profit).
Companies developing AI are liable for harms their products create.
The rapid advancement of artificial intelligence presents both unprecedented opportunities and significant risks. As capital markets flood the field with investment and the world's largest companies stake their businesses on AI, we face urgent decisions about its development, deployment, and governance. The main issues we seek to address through the Accountability Hub include:
Lack of Accountability: AI development is moving faster than the corporate accountability ecosystem that would normally grow up around companies and their practices.
Ineffective Regulation: Government regulation is lagging, failing to provide timely safeguards. Accountability measures are urgently needed to protect the public in the interim.
Exclusion of Public Voice: Public opinion and participation are being left out of AI development and regulation, creating a disconnect between corporate actions and societal needs.
Insufficient Oversight: There is minimal oversight to ensure AI development remains in the public interest, both avoiding existential risks and mitigating near-term abuses and harms.
Our solution is to create a comprehensive corporate accountability infrastructure that fosters a regulated AI market, balancing innovation with transparency, oversight, and public interest. The AI Accountability Hub will spearhead targeted campaigns to shape narratives and mobilize consumers, shareholders, workers, and other stakeholders around the risks associated with corporate behavior. This approach will drive immediate accountability and behavioral shifts toward the public good while laying the groundwork for effective long-term regulation.
Leaning on Ekō’s strong network of experts and civil society organizations, we will begin identifying new and existing partnerships and lay the foundation for the work ahead. This startup phase of the AI Accountability Hub will include pinpointing and testing several strategic interventions, from OpenAI’s proposed for-profit conversion to campaigns countering AI-fueled disinformation. We will also build up our network of advisors with leaders from other campaigning organizations, whistleblower protection advocates, safety experts, and more.
Our central focus over the first few months of the Accountability Hub will be to test campaigns to identify which have the potential for outsized impact through active engagement. These test campaigns will utilize Ekō’s tried-and-true tactics, such as:
Strategic Media Campaigns: Shaping public narrative and maintaining sustained pressure through targeted earned media
Mass Mobilization: Activating our 23 million members for coordinated consumer actions
Board-Level Pressure: Direct engagement with key decision-makers and board members backed by personal public pressure
Shareholder Activism: Filing strategic resolutions and building investor coalitions that bring our campaigns into the boardroom
Whistleblower Support: Providing secure channels and strategic guidance for internal reformers
Regulatory Engagement: Mobilizing public participation in regulatory processes
Global Coordination: Orchestrating simultaneous pressure across multiple jurisdictions
These tactics have long made up the playbook for Ekō and others in the corporate accountability space and must be brought to bear on the development of AI.
Among the many areas ripe for campaigning is OpenAI's proposed conversion to a for-profit entity. This possible transition represents a substantial threat to public oversight of advanced AI development, would effectively privatize billions in charitable assets, and would eliminate crucial public accountability mechanisms. Upon raising funding, Ekō is prepared to immediately launch a persistent campaign to prevent this transition, leveraging our full range of corporate campaign tactics to maintain OpenAI's nonprofit status and/or public-interest mission.
Our primary campaign goal will be to stop the for-profit conversion entirely. If stopping the conversion proves impossible, we would pivot to obtaining actionable concessions (e.g., transparency requirements), maintaining robust public oversight mechanisms, and ensuring that the nonprofit retains practical influence and usable resources.
Regardless of whether we achieve these goals, such a campaign would build public and elite awareness of the risks of AI development, increasing the likelihood of success for other AI safety campaigns.
This campaign requires unprecedented coordination across corporate, public, and regulatory spheres, playing out in a compressed time frame as OpenAI faces pressure from investors to complete its transition within two years.
Corporate & Financial Pressure
Shareholder resolutions at Microsoft and other invested companies to activate stakeholders in opposing the transition
Direct pressure on OpenAI board members from the public regarding their fiduciary duties
Creation and amplification of internal opposition from employees and researchers
Coalition-building with institutional investors concerned about AI safety
Strategic legal challenges to delay and complicate the transition process
Public Accountability Campaigning
Digital activism targeting board members and key decision makers
Attention-grabbing events and stunts to increase media coverage of the implications of privatizing OpenAI's assets
Mobilization of AI researchers and ethics experts against the transition
Strategic partnerships with AI safety organizations and tech accountability groups
Public education campaign about the risks of privatizing advanced AI development
Positive campaigning to reward companies who adopt better safety practices and lobby for effective AI regulation, creating a “race to the top” on safety.
Legal & Regulatory Intervention
Legal challenges through the California Attorney General's office to block the transition
IRS complaints regarding misuse of nonprofit assets
Antitrust challenges to Microsoft's growing control
Substantive complaints under the EU AI Act once it comes into force
State-level regulatory intervention focusing on public interest concerns
Congressional oversight hearings on AI accountability
If OpenAI becomes the first company to achieve AGI / “powerful AI”, ensuring mission-driven, nonprofit governance of its assets instead of control by a few profit-motivated shareholders could change the course of human history. Even if not, winning this campaign would set crucial precedents for AI governance and demonstrate that coordinated public pressure can effectively shape the future of AI development.
Our global reach and proven track record in corporate campaigns make us uniquely suited to lead this fight. And we believe our progressive activism is a necessary counterbalance to existing approaches (e.g., Elon Musk’s litigation).
In addition to the OpenAI for-profit transition campaign, we envision using the same tactics to press AI companies to prevent their technologies from being used for:
Biological weapons;
Cyber attacks;
Synthetic CSAM and deepnudes;
Phishing;
Deepfakes and disinformation; and more.
Key Project Milestones:
Month 1: Hire an experienced AI safety expert to run the project. Recruit additional advisors to guide our efforts.
Months 2-5: Identify and execute test campaigns to gauge effectiveness and engagement, including the OpenAI transition campaign.
Month 6: Analyze test campaign results, gather feedback from advisors, and refine our strategy for the subsequent six months.
Personnel (84%): $84,000
This allocation will fund the hiring of an AI safety expert for four months to lead campaign testing with our membership, set strategy, and build in-house capacity, as well as additional staff support from Ekō campaigners, translators, and operations staff.
Direct Campaign Expenses (10%): $10,000
This could include research, digital ads, polling, stunts, and other campaign-related expenses.
Technology (6%): $6,000
This will cover costs for CMS, CRM, servers, and other necessary technology infrastructure.
Executive Director: Emma Ruby-Sachs -- Emma is a strategist, lawyer, and writer who previously served as Deputy Director at Avaaz, a global digital activism organization with more than 50 million members. She is a graduate of Wesleyan University and the University of Toronto Faculty of Law and currently lives in Chicago, Illinois.
Advisors: Taren Stinebrickner-Kauffman. We are currently recruiting additional advisors to strengthen our team.
Ekō Team: We are a nimble, digital-first team of over two dozen highly experienced campaigners, leveraging technology to elevate our 23 million+ members’ voices at a moment’s notice. Our proven suite of tactics includes mass mobilization, online and in-person actions, high-profile stunts, shareholder advocacy, and much more. Our in-house team of leading technologists is constantly creating innovative tools to deliver members’ voices to decision-makers. We sit at the intersection of, and regularly partner with, dozens of other campaigning organizations, corporate advocacy groups, and frontline communities, so we’re able to swiftly form impactful collaborations at key moments.
Most importantly, we have a 10+ year track record of proven impact. Specifically:
We’re already using people power to disrupt tech giants. We forced Apple and WhatsApp to protect digital freedoms and privacy, and we effectively challenged Facebook, YouTube, and TikTok to ban disinformation and harmful content. We’re also maintaining pressure on Google and Amazon to protect human and workers’ rights. More specifically in tech, we’ve:
Won an immediate ban on Meta sharing WhatsApp user data in Brazil. Ekō worked behind the scenes to coordinate the launch of Brazil’s biggest-ever data protection case, suing Meta for $318 million for its illegal sharing of WhatsApp user data with other platforms. We also won an immediate ban on this practice until the conclusion of the case.
Filed news-making AI safety shareholder resolutions at Microsoft, Meta, and Google. Our shareholder resolutions urged the companies to report on the steps they’re taking to address the potential dangers of generative AI spreading disinformation. The resolutions secured widespread media coverage in key outlets including AdWeek, Fortune, and Bloomberg Law, as well as a wider piece on leveraging tech shareholder power on the Tech Policy Press podcast. The Microsoft resolution also received over 30% of votes in favor, a very strong showing for a first-time resolution that demonstrates real potential in investor-pressure theories of change.
Shaped the national media narrative around tech accountability in child safety Congressional hearings. Our campaign demanded lawmakers take action against Big Tech’s predatory business model and failure to protect kids online, and drew attention to the role Meta and TikTok play in harming children. It was covered by several top outlets, including The New York Times, Al Jazeera, and NBC.
Exposed Meta’s assurances around protecting election integrity as false. We published a groundbreaking investigation into hate speech and disinformation during India’s extended election cycle that took on Meta’s false claims and, in the process, exposed how AI tools such as image and text generators can easily be used on the platform to push divisive narratives and conspiracy theories during a politically charged election.
We’re disrupting fossil fuel finance with targeted campaigning that has helped push Lloyd’s of London, its syndicate Cincinnati, and other insurers like Chubb and Beazley to rule out insurance for destructive new fossil fuel projects. We’re also doing exciting work in the bond market, challenging climate-destroying projects with our members and the media to expose how bonds drive fossil fuel expansion, building widespread public awareness and visibility in mainstream outlets.
We’re forcing companies to clean up supply chains and protect communities on the ground. For example, we used shareholder action to get Apple to release its first human rights report and previously secret information about its labor practices in China. We also helped ban a poisonous chemical from the waters of Costa Rica by supporting a local community to ramp up its media and narrative work, and forced a timber giant to drop its legal attack on Indigenous land in Malaysia by engaging our global membership in targeted action.
Our team is ready to partner and bring our movement’s resources further to bear on this existential fight for a secure, effective, and innovative AI future.
The most likely cause of failure for this project would be insufficient funding to hire the person/people with the right expertise. The outcome would be Ekō choosing to hold off on further AI-related campaigns, instead focusing on other campaign areas, such as our climate work.
Ekō is not currently receiving any institutional funding for this specific project. We are, however, seeking funding for this project through individuals outside of Manifund, which is why we have set a minimum funding amount of $10,000. We hope any funding raised through Manifund will augment other potential funding to expand our campaigning capacity. We have received $300,000 in institutional funding in the last 12 months to combat disinformation across a variety of platforms. The vast majority of our funding comes from individual donations from our members.