Project summary
A literature review of projects applying LLMs to self-knowledge and wise human coordination;
A survey of legacy practices and relevant academic literature;
Recommendations for high-leverage interventions;
All published as an arXiv paper.
What are this project's goals and how will you achieve them?
Aligning AI via an indirect normative approach requires eliciting and reconciling people's values and wills.
My goal is to help individuals and groups identify what is important to them at any given time and coordinate effectively with others, potentially with AI assistance, across scales (individual, team, community, city, nation, etc.).
Methodology:
I will document my findings in an arXiv paper and organize an unconference for relevant parties (contingent on interest; it will also serve as a test of my coordination abilities).
My current non-exhaustive list of literature, practices, and LLM applications can be found here.
How will this funding be used?
Funding Usage:
Salary: $50,000 for a year of dedicated research.
Travel: $5,000 for conducting interviews and on-site research.
Compute: $5,000 for LLM tokens and other compute.
Who is on your team and what's your track record on similar projects?
Team and Track Record:
Team:
Track Record:
Developed tools for collective sensemaking (threadhelper, Unigraph).
Created applications for collective intelligence at Borg (people search using social media big data, e.g. hive.one).
Worked with METR on LLM agent evaluations and metrics for dangerous capabilities.
Have organized and co-organized 4 unconferences with 30 to 120 attendees.
Currently hosting a pop-up campus for projects related to this grant proposal; facing object-level coordination problems every day grounds my mapping work.
What are the most likely causes and outcomes if this project fails? (premortem)
Potential Failures:
Mitigation Strategies: