Thoughts on Superintelligence Security
I explain my current understanding of the artificial intelligence security problem, and why I think you shouldn't dismiss it.
In the long run, the most important question for humanity will be whether we manage to create superhuman artificial intelligence before we destroy ourselves and the planet.
A lot has been written on this topic, but I’m dissatisfied with most of the positions I’ve encountered. I want to analyze the topic from first principles, through the lens of software security, which I think is the right model here.
Read the full post here: https://maraoz.com/2023/08/25/superintelligence-security/