Sorry folks, this got shot down by conflict of interest review at work. I'm not sure how to delete an application at this stage, so I'll just leave this here until the deadline passes.
Recent work from Redwood Research studies whether or not an LLM attack policy is able to perform a simulated cyberattack task while fooling an LLM+human blue team into thinking its actions are benign. However, the simulated malware task is quite limited (download+run an arbitrary file from the internet). In particular, it leaves out the parts of the cyberattack process where the attacker has to find one or more vulnerabilities in the target software, find a way to exploit them to achieve some effect, and put together a payload to perform the exploit. These stages are the majority of the labor in the case of human cyber attacks, and might involve more obviously suspicious activities than the task studied by Redwood. This project would build on top of Redwood's work: modify their evaluation environment with a more detailed cyberattack task, design new attack policy and blue team strategies more tailored to the new setting, and assess the resulting safety/usefulness tradeoff curve.
This project is useful for the same reason that the Redwood project it's building on is useful - frontloading the process of designing effective AI control harnesses, and assessing how effective they are.
I'm not sure how much compute/API funding this project will need. The 2k minimum seems like enough that I'll be able to produce something, but will have to think very carefully about how to spend my budget and probably make some significant compromises. The 6k goal seems like enough to produce good results without being too constrained by budget, provided I still spend carefully.
Get Redwood's control evaluation setup working
Design a few extended cyberattack tasks, probably with the help of some colleagues from the SEI (also acting in their personal capacity)
Modify Redwood's task environment to support each extended cyberattack task
Re-run evaluations using new environment, using the most important policy configurations studied by Redwood
Based on analysis+inspection of the behavior of those evaluations, iterate on the attack policy and blue team policy to perform better in the new setting, implement the new policies, and re-run evaluation
Write up and publish results. Probably an arxiv preprint and accompanying blog post.
Compute/API budget to run the experiments. I'll work on this unpaid on my own time.
I'm working independently.
I've previously led two SPAR projects developing novel LLM evaluations.
I have to run this project past the SEI conflict of interest approval process. If it is not approved, nothing happens and I'll return all money.
I might decide that a different project is a better use of my time, in which case I'll finish proximate deliverables, write up my work so far, share my code and return remaining money.
I might make some mistake when running an expensive experiment (bad experimental design or bad implementation) that leads to the money being spent without producing useful results. If I catch the mistake, I'll probably pay out of pocket, but I might apply for more funding, or as a last resort just write up the results, describe my mistake, and hope someone else will decide to carry on and find funding. If I don't catch the mistake I'll publish wrong/useless results.
I got a 55k grant (45k stipend, 10k compute/API) from the Open Philanthropy Career Development and Transition Fund for six months' independent research, during which I led my two SPAR projects.
I currently work full-time for the AI Security team at CMU SEI CERT, where I make 95k/year.
James Lucassen
about 1 month ago
Sorry folks, this got shot down by conflict of interest review at work. I'm not sure how to delete an application at this stage, so I'll just leave this here until the deadline passes.
Austin Chen
21 days ago
@jlucassen Sorry to hear that and thanks for the heads up; I've now set this project to expire.
Apollo Research
about 2 months ago
In my evals field-building efforts, I recently asked people who want to build something in evals to fill out a simple form (https://www.lesswrong.com/posts/uoAAGzn4hPAHLeG2Y/which-evals-resources-would-be-good)
James' project was, in my opinion, the best of all applications. This is because:
1. I have talked to James before and seen some of his work, and I think he is competent at executing the project.
2. I think the project itself is really important. Cybersec seems very important in itself but is also crucial for control and scheming research.
3. The cost of this project seems very low. I think a similar project description could have easily justified $50k+. In contrast, James only asks for $9k (split across two projects).
For context, I think this project is about as impactful as the average MATS extension but ~5x cheaper. If James has more room for funding, I'd encourage a funder to commit to a conditional follow-up grant, e.g. check in with James after 2 months to evaluate progress and potentially extend the funding for 6 months or so.
Austin Chen
about 2 months ago
@mariushobbhahn Thanks for the recommendation, this is very helpful. I've now funded the two projects to half of the initial ask ($4.5k/$9k), leaving room mostly because I don't have a strong inside-view understanding of the proposals, so would leave the rest to someone with more domain expertise.