author Nicholas Johnson <nick@nicksphere.ch> 2022-05-23 00:00:00 +0000
committer Nicholas Johnson <nick@nicksphere.ch> 2022-05-23 00:00:00 +0000
commit 05fa3051e12acddfe320912a93e1927bcf1b64f6df2a14589594144df3b9f3e2 (patch)
tree e2f767706bbef2caf24a3fd5ea9147f6866d3fef2c0e732f9b481932e87d67ea /content/entry/the-privacy-implications-of-weak-ai.md
parent 44ef9882132619ead1f888778804893d848b7686a4833e038b67b263165eb933 (diff)
Fix spelling errors
Diffstat (limited to 'content/entry/the-privacy-implications-of-weak-ai.md')
-rw-r--r-- content/entry/the-privacy-implications-of-weak-ai.md | 14
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/content/entry/the-privacy-implications-of-weak-ai.md b/content/entry/the-privacy-implications-of-weak-ai.md
index ef7b1f7..815e6c7 100644
--- a/content/entry/the-privacy-implications-of-weak-ai.md
+++ b/content/entry/the-privacy-implications-of-weak-ai.md
@@ -9,15 +9,15 @@ So a few days ago I started writing this entry titled "Societal Implications of
I've chosen to limit the scope of this entry to weak AI only. I'm purposely omitting AGI because it warrants its own discussion. AGI, or artificial general intelligence, is AI with intelligence equal to or far exceeding human intelligence in every way that matters. Weak AI, by contrast, only handles narrowly defined, limited tasks. But make no mistake: just because it's limited doesn't mean it's not dangerous. This entry is all about how weak AI threatens our privacy and what we can do about it.
# Privacy Must Be Protected
-The 'nothing to hide' people don't understand this, but privacy is important for the healthy development of humans and other animals. Being watched all the time is psychologically hazardous. It's backed up by science. Without privacy, there's nowhere to make mistakes without judgement. Letting AI just destroy our privacy in the name of 'progress' is not an option.
+The 'nothing to hide' people don't understand this, but privacy is important for the healthy development of humans and other animals. Being watched all the time is psychologically hazardous. It's backed up by science. Without privacy, there's nowhere to make mistakes without judgment. Letting AI just destroy our privacy in the name of 'progress' is not an option.
# AI is Already a Privacy Disaster
-AI is already destroying our privacy in numerous ways. Just have a look at awful-ai[1], a git repo tracking scary usages of AI. AI can be used to infer criminality from a picture of a person's face. It can recreate a person's face from their voice alone. Everybody already knows about facial recognition which is a privacy disaster. Big retailers use it for tracking. China uses it to surveil muslims. Any time you see 'AI' and 'privacy' in the same sentence, it's always bad news.
+AI is already destroying our privacy in numerous ways. Just have a look at awful-ai[1], a git repo tracking scary usages of AI. AI can be used to infer criminality from a picture of a person's face. It can recreate a person's face from their voice alone. Everybody already knows about facial recognition which is a privacy disaster. Big retailers use it for tracking. China uses it to surveil Muslims. Any time you see 'AI' and 'privacy' in the same sentence, it's always bad news.
# AI Will Become a Worse Privacy Disaster
-AI is already very bad for privacy and getting worse all the time. The most worrisome thing is we have no idea how good weak AI can get at privacy-invading use cases. The only limit in sight is how much personal information can theoretically be derived from input data. Can AI accurately predict the timeframe when someone last had sex based on a 1 minute video of that person? What about how they've been feeling for the past week? It's hard to say what future AI will be able to predict given some data.
+AI is already very bad for privacy and getting worse all the time. The most worrisome thing is we have no idea how good weak AI can get at privacy-invading use cases. The only limit in sight is how much personal information can theoretically be derived from input data. Can AI accurately predict the time frame when someone last had sex based on a 1 minute video of that person? What about how they've been feeling for the past week? It's hard to say what future AI will be able to predict given some data.
-You may be publicly sharing information about yourself online now, knowingly or unknowingly, which a future AI Sherlock Holmes (just a metaphor) can use to derive information about you that you don't want anyone to know. Not only that, but it will be able to derive information about you that you don't even know. How much information will future AI be able to derive about me from these journal entries? What will it learn about me from my style of writing, what I write about, when I write about it, etcetera? I don't know. Just imagine what inferences future AI will be able to derive about someone given all the data from intelligence agencies and big tech. Imagine how that could be weaponized.
+You may be publicly sharing information about yourself online now, knowingly or unknowingly, which a future AI Sherlock Holmes (just a metaphor) can use to derive information about you that you don't want anyone to know. Not only that, but it will be able to derive information about you that you don't even know. How much information will future AI be able to derive about me from these journal entries? What will it learn about me from my style of writing, what I write about, when I write about it, etc? I don't know. Just imagine what inferences future AI will be able to derive about someone given all the data from intelligence agencies and big tech. Imagine how that could be weaponized.
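To make the Sherlock metaphor concrete, here is a minimal sketch of stylometry, one of the simplest writing-style inferences: fingerprint each author by their function-word frequencies, then attribute an anonymous text to the closest known profile. The authors and texts below are invented for illustration; real attribution systems use far richer features and models.

```python
# A minimal stylometry sketch: fingerprint authors by how often they use
# common function words, then match an unknown text to the closest profile.
# The sample authors and texts are invented for illustration.
from collections import Counter
import math

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it", "is", "was"]

def profile(text):
    """Return the relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    """Cosine similarity between two frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Known writing samples from hypothetical authors
known = {
    "alice": "the cat sat on the mat and the dog lay in the sun and it was warm",
    "bob": "to be or not to be that is the question it was asked of a prince",
}

anonymous = "it was the best of times and it was the worst of times"

# Attribute the anonymous text to the author with the most similar profile
anon_profile = profile(anonymous)
guess = max(known, key=lambda a: cosine(profile(known[a]), anon_profile))
print(guess)
```

Even this crude version sometimes works on longer texts, which is exactly the worry: every public writing sample shrinks your anonymity set.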
Future AI may not be able to explain to us humans how it reaches its conclusions. But that won't necessarily matter. As long as its conclusions are accurate, it will be dangerous. If it turns out that future AI Sherlock can derive troves of personal information from very little data, we'll need very strict privacy protections. If it turns out that AI Sherlock can't derive much information, then maybe we can relax protections a little.
@@ -52,7 +52,7 @@ There is cause for concern about such strong privacy laws though. For instance t
### Self-Driving Cars
How can you have self-driving cars if it's illegal to conduct persistent surveillance of the public? You can't. The cars must have external sensors and cameras in order to work. We could just not have them, but self-driving cars will save millions of lives. We don't want to block technological development that benefits humanity.
-For those cases, we need strict, legally enforceable data collection and data protection standards that businesses must adhere to and perhaps audits to ensure the standards are being followed. If your company builds technology which has the hardware cability to conduct persistent surveillance of the public, then there should be guidelines it has to follow:
+For those cases, we need strict, legally enforceable data collection and data protection standards that businesses must adhere to and perhaps audits to ensure the standards are being followed. If your company builds technology which has the hardware capability to conduct persistent surveillance of the public, then there should be guidelines it has to follow:
* The technology must be built with free hardware and run free software exclusively.
* The technology must not collect more data than necessary to achieve its ends.
@@ -65,7 +65,7 @@ Of course the guidelines will be technology-specific and they won't be perfect.
### Online AI Matchmaking
For another example, imagine an online AI matchmaking service which finds your perfect match. Suppose it's more successful than other existing matchmaking services, by any metric. Sounds great, right? But there's a catch. The reason it achieves such great results is that it creates huge dossiers on its users to feed into the AI matchmaking algorithm.
-You might be thinking "Well if you don't want your privacy invaded, just don't sign up." Ah but it's not so simple. None of us live in a privacy vaccuum. Every time you give up data about yourself, you risk giving up data about others even if you never explicitly offer data about them. As I already discussed, AI can deduce information about other people you're close to based on things it knows about you. Using privacy-invading services inevitably leaks some data about nonconsenting non-users.
+You might be thinking "Well if you don't want your privacy invaded, just don't sign up." Ah but it's not so simple. None of us live in a privacy vacuum. Every time you give up data about yourself, you risk giving up data about others even if you never explicitly offer data about them. As I already discussed, AI can deduce information about other people you're close to based on things it knows about you. Using privacy-invading services inevitably leaks some data about nonconsenting non-users.
It still makes sense to mitigate the privacy damage caused by AI matchmaking using the same sort of regulations I proposed for self-driving cars. Deciding not to use the service is an individual choice. But on a societal level, we have to decide whether it's okay for such a service to exist in the first place in an environment where AI Sherlock could use the data to derive personal information about nonconsenting non-users.
@@ -79,7 +79,7 @@ But maybe we can avoid making trade-offs. One reason to stay hopeful I haven't m
We could also regulate businesses running AI-driven services so they're legally required to collect as little user data as possible. For instance, if we figured out how to run the hypothetical AI matchmaking service on homomorphic encryption, without collecting plaintext data about users, then every AI matchmaking business providing equivalent or worse service would be legally required to offer that same level of privacy to its users.
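To make that concrete, here is a minimal, deliberately insecure sketch of the Paillier cryptosystem, an additively homomorphic scheme: the server multiplies ciphertexts to add the plaintexts inside them, without ever seeing the plaintexts. The tiny hardcoded primes and the "score" values are illustrative assumptions, not anything resembling a real deployment.

```python
# A toy Paillier cryptosystem showing the additively homomorphic property:
# a server can combine encrypted values it cannot read.
# Hardcoded tiny primes for demonstration only -- NOT secure.
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# --- Key generation (real keys use ~2048-bit moduli) ---
p, q = 293, 433              # toy primes
n = p * q                    # public modulus
n_sq = n * n
lam = lcm(p - 1, q - 1)      # private key component
mu = pow(lam, -1, n)         # with generator g = n + 1, mu = lam^-1 mod n

def encrypt(m, r):
    """Encrypt m with randomness r: c = (1+n)^m * r^n mod n^2."""
    assert 0 <= m < n and gcd(r, n) == 1
    return (pow(1 + n, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    """Recover m: L(c^lam mod n^2) * mu mod n, where L(x) = (x-1)//n."""
    x = pow(c, lam, n_sq)
    return ((x - 1) // n) * mu % n

# --- Homomorphic property: multiplying ciphertexts adds plaintexts ---
c1 = encrypt(20, r=1234)     # one user's private compatibility score
c2 = encrypt(22, r=5678)     # another user's private score
c_sum = (c1 * c2) % n_sq     # the server combines them without seeing 20 or 22
assert decrypt(c_sum) == 42  # only the key holder can read the result
print(decrypt(c_sum))        # -> 42
```

A real service would use a vetted library and full-size keys, but the principle is the same: the operator computes a useful result over data it can never read in the clear.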
-With that law in place, we could constantly step up privacy protections against AI and also online services that don't use AI as well. We could also avoid a 2-tier society of those benefitting from AI and those that aren't. Maybe cryptography can save us from being forced to pick and choose.
+With that law in place, we could constantly step up privacy protections against AI and also online services that don't use AI as well. We could also avoid a 2-tier society of those benefiting from AI and those that aren't. Maybe cryptography can save us from being forced to pick and choose.
# Summary
In summary, AI is a danger to privacy. It's getting more dangerous. To protect our privacy, we need to stop governments and businesses from collecting data about us and get them to purge data they already have. Stronger laws and regulations than currently exist anywhere in the world will need to be passed to protect user privacy in a meaningful way. If we're fortunate, advances in cryptography, particularly homomorphic encryption, could allow us to reap the benefits of AI without the privacy invasion.