Diffstat (limited to 'content/entry/robert-miles-makes-accessible-ai-safety-videos.md')
-rw-r--r--  content/entry/robert-miles-makes-accessible-ai-safety-videos.md  1
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/content/entry/robert-miles-makes-accessible-ai-safety-videos.md b/content/entry/robert-miles-makes-accessible-ai-safety-videos.md
index 0c546d3..e306b7d 100644
--- a/content/entry/robert-miles-makes-accessible-ai-safety-videos.md
+++ b/content/entry/robert-miles-makes-accessible-ai-safety-videos.md
@@ -1,6 +1,7 @@
---
title: "Robert Miles Makes Accessible AI Safety Videos"
date: 2023-04-26T00:00:00
+tags: ['computing']
draft: false
---
I remember being in class once, introducing a small group of students to the AI safety problem as it pertains to long-term accidental risks. I was talking about some thought experiment like the [paperclip maximizer](https://www.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) while the group asked me many questions, each of which warranted its own discussion entirely.