Haven't read the textbook in full, but I'm planning to use content from Chapter 2 for an ML bootcamp. I sent Charbel my quick thoughts on the chapter, and he encouraged me to share them here:
"I watched the talk you suggested (Atlas Chapter 2 on AI Risks). I think it's a pretty good talk, it manages to very clearly illustrate the different branches in the risk taxonomy and convey the ideas in a short amount of time. I probably couldn't do better at that myself. I think it's good for introducing the AI risks side of the coin, though I would not like that perspective to be the only one participants used to think about the AI future.If I'm allow to nitpick, I have slight contentions for how he summarized two of the biology paper results he presented:
"LLMs instructed students on the complete chain for ordering a deadly pathogen" (Soice et al). Didn't mention that the LLM plans contained many critical failures that mean they wouldn't work in real life (though the trend is towards less failures over time)
"36 teams of students managed to order the Spanish Flu". Yes, but they were advised by a world-renowned expert on biorisks on how to bypass the DNA screening tools. Not only did they append a distracting biological sequence, they carefully chose the sequence to use to adversarially trick screening. Besides, you have to piece together the fragments of the virus in your lab (not a deal breaker, but worth mentioning)
That said, I'd rather give a talk like that at ML4Good than not; it's pretty informative about an important topic, even if I might contest some of the details."