Diffstat (limited to 'content/entry/the-privacy-implications-of-weak-ai.md')
-rw-r--r-- | content/entry/the-privacy-implications-of-weak-ai.md | 6 |
1 file changed, 3 insertions, 3 deletions
diff --git a/content/entry/the-privacy-implications-of-weak-ai.md b/content/entry/the-privacy-implications-of-weak-ai.md
index e7175bd..e4b89f0 100644
--- a/content/entry/the-privacy-implications-of-weak-ai.md
+++ b/content/entry/the-privacy-implications-of-weak-ai.md
@@ -5,7 +5,7 @@ tags: ['computing']
 draft: false
 ---
 # Introduction
-So a few days ago I started writing this entry titled "Societal Implications of Weak AI". Over the course of the next few days, I found out just how broad of a topic that is. I kept thinking of more topics and subtopics. With weak AI, there's so much to discuss. Eventually the entry ballooned to an unmanageable 30+ minute read. I couldn't figure out how to organize all the topics. So I just decided it would be best to split it up into separate, more digestible entries.
+So a few days ago I started writing this entry titled "Societal Implications of Weak AI". Over the course of the next few days, I found out just how broad of a topic that is. I kept thinking of more topics and subtopics. With weak AI, there's so much to discuss. Eventually the entry ballooned to an unmanageable 30-minute read. I couldn't figure out how to organize all the topics. So I just decided it would be best to split it up into separate, more digestible entries.
 
 I've chosen to limit the scope of this entry to weak AI only. I'm purposely omitting AGI because it warrants its own discussion. AGI, or general artificial intelligence, is AI with intelligence equal to or far exceeding human intelligence in every way that matters. Weak AI by contrast only handles narrowly-defined, limited tasks. But make no mistake. Just because it's limited doesn't mean it's not dangerous. This entry is all about how weak AI threatens our privacy and what we can do about it.
 
@@ -16,7 +16,7 @@ The 'nothing to hide' people don't understand this, but privacy is important for
 AI is already destroying our privacy in numerous ways. Just have a look at [awful-ai](https://github.com/daviddao/awful-ai), a git repo tracking scary usages of AI. AI can be used to infer criminality from a picture of a person's face. It can recreate a person's face from their voice alone. Everybody already knows about facial recognition which is a privacy disaster. Big retailers use it for tracking. China uses it to surveil Muslims. Any time you see 'AI' and 'privacy' in the same sentence, it's always bad news.
 
 # AI Will Become a Worse Privacy Disaster
-AI is already very bad for privacy and getting worse all the time. The most worrisome thing is we have no idea how good weak AI can get at privacy-invading use cases. The only limit in sight is how much personal information can theoretically be derived from input data. Can AI accurately predict the time frame when someone last had sex based on a 1 minute video of that person? What about how they've been feeling for the past week? It's hard to say what future AI will be able to predict given some data.
+AI is already very bad for privacy and getting worse all the time. The most worrisome thing is we have no idea how good weak AI can get at privacy-invading use cases. The only limit in sight is how much personal information can theoretically be derived from input data. Can AI accurately predict the time frame when someone last had sex based on a 1-minute video of that person? What about how they've been feeling for the past week? It's hard to say what future AI will be able to predict given some data.
 
 You may be publicly sharing information about yourself online now, knowingly or unknowingly, which a future AI Sherlock Holmes (just a metaphor) can use to derive information about you that you don't want anyone to know. Not only that, but it will be able to derive information about you that you don't even know. How much information will future AI be able to derive about me from these journal entries? What will it learn about me from my style of writing, what I write about, when I write about it, etc? I don't know. Just imagine what inferences future AI will be able to derive about someone given all the data from intelligence agencies and big tech. Imagine how that could be weaponized.
 
@@ -43,7 +43,7 @@ No matter how accurate future AI Sherlock is, there are a few things that will p
 * We must educate people about the importance of privacy and create political pressure to protect it.
 * [more items here...]
 
-If you notice, almost all of the above points are related to preventing data collection and not preventing AI use. AI is just software. To stop people using it would require extremely draconian measures that might undermine privacy anyways. I'm not saying draconian measures protect us from AI will never be justifiable. I'm just saying why resort to that when there are solutions that aren't draconian and will actually allow us to preserve our rights?
+If you notice, almost all of the above points are related to preventing data collection and not preventing AI use. AI is just software. To stop people using it would require extremely draconian measures that might undermine privacy anyway. I'm not saying draconian measures protect us from AI will never be justifiable. I'm just saying why resort to that when there are solutions that aren't draconian and will actually allow us to preserve our rights?
 
 The best way to stop privacy-invading AI is to stop the data collection. AI needs data to make predictions about people. Without data, AI can't make predictions. We should still allow mass data collection with AI to predict things like the weather. That doesn't violate anyone's privacy. The violation happens when there's collection of personally identifiable data about people, or collection of data which AI can later use to deduce personally identifiable information about people. That is what we have to prevent.