---
title: "Robert Miles Makes Accessible AI Safety Videos"
date: 2023-04-26T00:00:00
draft: false
---
I remember once introducing a small group of students in class to the AI safety problem as it pertains to long-term accidental risks. I was talking about some thought experiment like the [paperclip maximizer](https://www.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) while the group asked me many questions, each of which warranted an entire discussion of its own.

The questions were along the lines of "Would it be like Terminator?", "Why would it have a utility function?", "Wouldn't it be smart enough to realize maximizing paperclips is a dumb goal?", "Why would it want to acquire resources or self-improve?", "What makes you think it would become superintelligent?", "Why couldn't we just turn it off?", and so on. All great questions, but I unfortunately didn't have time to cover them all.

I realized that the group I was trying to teach lacked the background necessary to understand why the paperclip maximizer would behave the way I was describing. It's not just laypeople and students, though. Many people who work in the field of AI are unaware of AI safety. Their jobs only require them to think about, say, how to make their AI models less racially biased, not to consider AI as an [existential risk](https://www.wikipedia.org/wiki/Global_catastrophic_risk#Defining_existential_risks).

Maybe you don't think that matters because those people aren't intending to work on artificial general intelligence (AI as smart as or much smarter than humans). I would argue that that's beside the point.
We may live in a universe where the technological development path to AGI makes it highly probable that AGI gets invented accidentally. In other words, someone with no intention of inventing AGI and only a rudimentary understanding of AI safety ends up inventing it. That scenario would be disastrous for humanity.

One thing I think should be done to mitigate the risk is to educate anyone willing to listen about the nature of the AI safety problem and why it's so hard to solve. If you need an introduction to the problem, or would like to educate someone you know who works on AI, Robert Miles made an intro video I highly recommend titled "[Intro to AI Safety, Remastered](https://yewtu.be/embed/pYXy-A4siMw?local=true)".

[Robert Miles](https://yewtu.be/channel/UCLB7AzTwc6VFZrBsO2ucBMg?dark_mode=true) is a science communicator focused on AI safety and alignment. I've learned a lot from watching his AI safety videos. He makes them both informative and entertaining, especially for people like me who are more interested in AI's implications for the future than in all the technical details.

If that sounds like something you're interested in, please check out his channel and support him however you can.