Diffstat (limited to 'content/entry/robert-miles-makes-accessible-ai-safety-videos.md')
-rw-r--r-- content/entry/robert-miles-makes-accessible-ai-safety-videos.md | 4 ++--
1 file changed, 2 insertions, 2 deletions
diff --git a/content/entry/robert-miles-makes-accessible-ai-safety-videos.md b/content/entry/robert-miles-makes-accessible-ai-safety-videos.md
index 1bae43a..e216184 100644
--- a/content/entry/robert-miles-makes-accessible-ai-safety-videos.md
+++ b/content/entry/robert-miles-makes-accessible-ai-safety-videos.md
@@ -4,11 +4,11 @@ date: 2023-04-26T00:00:00
 tags: ['computing']
 draft: false
 ---
-I remember being in class once introducing a small group of students to the AI safety problem as it pertains to long-term accidental risks. I was talking about some thought experiment like the [paperclip maximizer](https://www.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) while the group asked me many questions, each of which warranted their own discussion entirely.
+I remember being in class once introducing a small group of students to the AI safety problem as it pertains to long-term accidental risks. I was talking about some thought experiment like the [paperclip maximizer](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) while the group asked me many questions, each of which warranted their own discussion entirely.
 
 The questions were along the lines of "Would it be like Terminator?", "Why would it have a utility function?", "Wouldn't it be smart enough to realize maximizing paperclips is a dumb goal?", "Why would it want to acquire resources or self-improve?", "What makes you think it would become superintelligent?", "Why couldn't we just turn it off?", so on and so forth. All great questions, but I unfortunately didn't have the time to cover them all.
 
-I realized that the group I was trying to teach lacked the necessary background to understand why the paperclip maximizer would behave the way I was describing. It's not just lay people and students though. Many people who work in the field of AI are unaware of AI safety. Their job only requires them to think about how they can make their AI model less racially biased. It doesn't require that they consider AI as an [existential risk](https://www.wikipedia.org/wiki/Global_catastrophic_risk#Defining_existential_risks).
+I realized that the group I was trying to teach lacked the necessary background to understand why the paperclip maximizer would behave the way I was describing. It's not just lay people and students though. Many people who work in the field of AI are unaware of AI safety. Their job only requires them to think about how they can make their AI model less racially biased. It doesn't require that they consider AI as an [existential risk](https://en.wikipedia.org/wiki/Global_catastrophic_risk#Defining_existential_risks).
 
 Maybe you don't think it matters because that person isn't intending to work on artificial general intelligence (AI as smart as or much smarter than humans). I would argue that that's besides the point. We may live in a universe where the technological development path of AGI is such that it's highly probable that it gets invented accidentally. In other words, someone with no intentions to invent AGI and only rudimentary understanding of AI safety ends up inventing it. That scenario would be disastrous for humanity.