I'm quite confused why other donors aren't excited to fund AISC. Last time, for AISC 10, they ended up raising a fair amount ($60k), but this time it looks like there's less support. Is this because the AISCs have been dropping in quality, as Oli claims? Or just that they've been doing a less good job of "fundraising"?
As Remmelt is saying, we're not obviously doing worse than at the same time last year. However, money has been scarcer for everyone in AI Safety since the FTX collapse. E.g. LessWrong/Lightcone is currently low on money, and there has been no drop in the quality of their work, as far as I can tell.
I don't want to completely discount Oli's statement, but I want to point out that it's one opinion from one person. It's pretty normal for people in AI Safety to feel unexcited about what the majority of people in AI Safety are doing. AI Safety Camp has always had a broad range of directions, so I'm not sure why Oli hasn't seen this as a problem before. My guess is that the shift in structure caused Oli to shift the way they evaluate us, towards focusing more on the projects. Or it could be that this particular camp had fewer projects to Oli's taste. Or it could be that our new format produces less exciting projects according to Oli's taste. I don't know. It may also be relevant that Oli's shift towards not giving grant money to AISC happened at the same time as the big drop in AI Safety funding.
The way AISC accepts projects is that we have a minimum standard, and any project that meets it is accepted. Since our current format is highly scalable, projects have not had to compete against each other; instead we focus on empowering all our research leads to explore what they believe in. I'm not at all surprised that someone looks at our list and thinks most of the projects are not that great, but I would also expect high disagreement on which projects are good vs useless. I claim that AISC's openness to many types of projects is what produces both the small fraction of projects Oli does like and the ones he doesn't.
If you're worried that we waste people's time on the less good projects (whichever you think those are): there are still many more people interested in AI Safety than there are opportunities. I think for many people, working on a sub-optimal project will still accelerate their AI Safety skills and reasoning more than being left to themselves.
If you're worried that some of our projects are actively harmful: we do evaluate for this, and it's the point we're most strict on when deciding whether to accept a project.