Google Docs Link: [v1.0] Grant Proposal for STARA
Q: Why so soon? What’s the rush?
A:
The urgency is deliberate. First, we want to capitalize on the summer break (June 2025), when undergrads are free. Miss that window, and we won't be able to run another cohort until December or even next year.
Second, it’s a credibility play. Singapore’s nascent AI safety scene needs wins to generate momentum and credibility. We’re in a window where early visible wins—like a bootcamp that produces actual talent—can move the Overton window and secure long-term institutional buy-in with government agencies and other NGOs.
Q: Who is this for?
A:
STARA targets talent that is more local and more junior, and the program is not ARENA-branded. Examples:
Software engineers or ML practitioners pivoting into AI safety.
Postgrad CS/Math students from top Singaporean universities (NUS, NTU).
People already considering roles in SG AISI or similar orgs.
STARA (Singapore Technical Alignment Research Accelerator) is a 4-week machine learning safety bootcamp running from 2–27 June 2025, aimed at building Singapore’s AI safety research pipeline. It is modeled on the successful ARENA curriculum and delivered in part by alumni of both ARENA and MATS.
We will train 12 technically strong participants, selected from Singapore’s top universities and early-career professionals, with a curriculum adapted from the ARENA syllabus on transformers, interpretability, and evaluations. The goal is to transition them into impactful AI safety research roles, and in the process, solidify Singapore’s position as a serious secondary hub for alignment talent globally.
We are requesting USD 15,303.82 to fund this initial pilot cohort.
Our core goal is to accelerate the development of alignment-relevant technical talent in Singapore. We aim to:
Train 12 participants to become research-contributing alignment engineers,
Generate capstone projects that could plausibly become papers or research submissions,
Place graduates into roles in organizations like SG AISI or global alignment labs.
We’ll achieve this by:
Running a full-time, intensive 4-week bootcamp (in-person where possible),
Embedding participants into the local AI safety ecosystem (via SASH),
Closely mentoring participants with a high teacher-to-student ratio,
Using rigorous screening and technical vetting to ensure quality,
Focusing capstone projects around tractable, valuable research questions.
We propose a minimum budget of USD 15,303.82, covering:
Venue at SASH, an AI safety hub co-located with lorong.ai (government/industry nexus),
Operations support to run logistics, events, and track participant progress,
Compute credits for Colab Pro,
T-shirts and small program costs for branding and morale.
If additional funding becomes available, we would expand the budget to include:
Participant stipends, increasing retention and enabling full-time focus,
Catered food, which increases participant bonding, satisfaction, and onsite time,
Additional ops staff, to reduce burnout risk and improve execution quality.
Organizers are not paid out of this grant—the project director’s salary is already covered by SG AISI, and the technical director is volunteering.
Project Director
ARENA alum, MATS Spring 2023 cohort
Ran AI Safety Fellowships at NUS and NTU (8 fellows per cohort)
Co-author of MACHIAVELLI (ICML 2023) and cybercapabilities.org (AAAI 2025 workshop)
Leading the operational execution, outreach, and teaching for STARA
Technical Director
Research Engineer at Singapore AISI
Published in EMNLP 2024, NeurIPS 2024, and ICLR 2025 on interpretability and LLM safety
Past mentees now at AI Safety Camp, Apollo Research, UK AISI, U.S. Congress
Responsible for curriculum quality, lectures, capstone mentorship, and technical office hours
Operations and Outreach
Core team at EA NUS, facilitator for NUS Governance Fellowship
Handles university outreach, logistics, and program comms
Advisor
Co-founder of WhiteBox Research, a technical AI upskilling program in the Philippines
Advisor on strategy, branding, and organizational connections
Together, this team combines prior fellowship execution experience, technical teaching skill, and a publication track record, with ties to institutions like AISI, Apart Research, and WhiteBox.
We see three plausible failure modes:
Insufficient participant quality.
If our outreach fails to attract technically capable applicants, the program output will be weak. We’re mitigating this by front-loading outreach (especially in-person at SASH), using pre-screening tasks, and running a teaser workshop to calibrate expectations.
Mid-program attrition.
Without stipends or external motivation, participants may drop off. We're addressing this with milestone-based incentive structures (e.g., completion-based stipends) and by embedding participants in a tight cohort with shared structure and accountability (pair programming).
Post-program disengagement.
If participants don't stay active in AI safety, the program's long-term value will decay. To combat this, we'll actively integrate them into SASH and alumni networks, and introduce them to research mentors and fellowship pipelines.
Failure here wouldn’t just mean wasted resources—it would signal to Singapore’s institutions that alignment is unserious. But success would establish STARA as a credible local training pipeline and justify long-term investment from regional funders, governments, and labs.
Jonathan received a grant of approximately SGD 30,000 from the Singapore AISI to organize programs targeted primarily at top university students at NUS and NTU.