author    | Nicholas Johnson <nick@nicholasjohnson.ch> | 2023-02-15 00:00:00 +0000
committer | Nicholas Johnson <nick@nicholasjohnson.ch> | 2023-02-15 00:00:00 +0000
commit    | 6e08b34756ab3e1a7575bed3c34352520d6927aa58dfae4f0e12e6018130ae23 (patch)
tree      | b92afde1db3ba55241806b2c21fc54f074ef8dee5371c23f64b53f6b3b13f52a /content/entry
parent    | e735196f94a3b8c386e70691452adcff3ee0030959d8b5446b85da0dd276334e (diff)
Convert refs: the-privacy-implications-of-weak-ai
Diffstat (limited to 'content/entry')
-rw-r--r-- | content/entry/the-privacy-implications-of-weak-ai.md | 26
1 file changed, 7 insertions(+), 19 deletions(-)
diff --git a/content/entry/the-privacy-implications-of-weak-ai.md b/content/entry/the-privacy-implications-of-weak-ai.md
index bcc9d2e..57458f1 100644
--- a/content/entry/the-privacy-implications-of-weak-ai.md
+++ b/content/entry/the-privacy-implications-of-weak-ai.md
@@ -2,7 +2,6 @@
 title: "The Privacy Implications of Weak AI"
 date: 2021-11-10T00:00:00
 draft: false
-makerefs: false
 ---
 # Introduction
 So a few days ago I started writing this entry titled "Societal Implications of Weak AI". Over the course of the next few days, I found out just how broad of a topic that is. I kept thinking of more topics and subtopics. With weak AI, there's so much to discuss. Eventually the entry ballooned to an unmanageable 30+ minute read. I couldn't figure out how to organize all the topics. So I just decided it would be best to split it up into separate, more digestible entries.
@@ -13,7 +12,7 @@ I've chosen to limit the scope of this entry to weak AI only. I'm purposely omit
 The 'nothing to hide' people don't understand this, but privacy is important for the healthy development of humans and other animals. Being watched all the time is psychologically hazardous. It's backed up by science. Without privacy, there's nowhere to make mistakes without judgment. Letting AI just destroy our privacy in the name of 'progress' is not an option.
 
 # AI is Already a Privacy Disaster
-AI is already destroying our privacy in numerous ways. Just have a look at awful-ai[1], a git repo tracking scary usages of AI. AI can be used to infer criminality from a picture of a person's face. It can recreate a person's face from their voice alone. Everybody already knows about facial recognition which is a privacy disaster. Big retailers use it for tracking. China uses it to surveil Muslims. Any time you see 'AI' and 'privacy' in the same sentence, it's always bad news.
+AI is already destroying our privacy in numerous ways. Just have a look at [awful-ai](https://github.com/daviddao/awful-ai), a git repo tracking scary usages of AI. AI can be used to infer criminality from a picture of a person's face. It can recreate a person's face from their voice alone. Everybody already knows about facial recognition, which is a privacy disaster. Big retailers use it for tracking. China uses it to surveil Muslims. Any time you see 'AI' and 'privacy' in the same sentence, it's always bad news.
 
 # AI Will Become a Worse Privacy Disaster
 AI is already very bad for privacy and getting worse all the time. The most worrisome thing is we have no idea how good weak AI can get at privacy-invading use cases. The only limit in sight is how much personal information can theoretically be derived from input data. Can AI accurately predict the time frame when someone last had sex based on a 1 minute video of that person? What about how they've been feeling for the past week? It's hard to say what future AI will be able to predict given some data.
@@ -31,9 +30,9 @@ No matter how accurate future AI Sherlock is, there are a few things that will p
 * Businesses must delete existing identifiable data about people.
 * There must be a law against infrastructure for persistent surveillance of the public (store surveillance cameras, Ring doorbells).
 * Police use of AI must be community-controlled.
-* People must use free software[2]. Non-free software often contains surveillance features.
-* People must stop using services as software substitutes (SaaSS)[3]. They're prone to surveillance.
-* People must use encrypted, metadata-resistant communications protocols. Preferably mixnets that prevent traffic analysis against global adversaries. See Nym.[4]
+* People must use [free software](https://www.gnu.org/philosophy/free-sw.en.html). Non-free software often contains surveillance features.
+* People must stop using [services as software substitutes](https://www.gnu.org/philosophy/who-does-that-server-really-serve.html) (SaaSS). They're prone to surveillance.
+* People must use encrypted, metadata-resistant communications protocols, preferably mixnets that prevent traffic analysis against global adversaries. See [Nym](https://nymtech.net/).
 * There must be a law against public sector jobs using non-free software, SaaSS, and insecure communications protocols.
 * Workplaces must stop requiring people to use non-free software, SaaSS, and insecure communications protocols.
 * There must be a law requiring businesses to accept anonymous forms of payment.
@@ -41,7 +40,7 @@ No matter how accurate future AI Sherlock is, there are a few things that will p
 * There must be a law against markets for personal data, the same way there are laws against markets for human organs.
 * Smartphone location tracking must end.
 * We must educate people about the importance of privacy and create political pressure to protect it.
-* <more items here...>
+* [more items here...]
 
 If you notice, almost all of the above points are related to preventing data collection, not preventing AI use. AI is just software. Stopping people from using it would require extremely draconian measures that might undermine privacy anyways. I'm not saying draconian measures to protect us from AI will never be justifiable. I'm just saying: why resort to that when there are solutions that aren't draconian and will actually allow us to preserve our rights?
 
@@ -59,7 +58,7 @@ For those cases, we need strict, legally enforceable data collection and data pr
 * The technology must not collect more data than necessary to achieve its ends.
 * The technology must securely delete said data after it's no longer needed.
 * The technology must securely encrypt all transmitted data.
-* <more items here...>
+* [insert more items here...]
 
 Of course the guidelines will be technology-specific and they won't be perfect. There will still be data leaks and hacks. But we have to collectively agree on certain trade-offs. There are going to be some benefits of AI we just can't have unless everybody agrees to sacrifice some level of privacy. We're not going to be able to have self-driving cars and all the benefits they come with unless we allow cars to drive around with cameras and sensors capturing everything going on around them.
 
@@ -76,7 +75,7 @@ The examples of self-driving cars and AI matchmaking were pretty mild in terms o
 If many useful services provided by AI simply cannot exist without collecting personal data on users, then we might end up with a 2-tier society. There will be those who sacrifice their privacy to reap the huge benefits of AI technology. Then there will be those who don't consent to giving up their privacy who will end up comparatively crippled. Dividing society in this way would be a very bad thing.
 
 ## Cryptography
-But maybe we can avoid making trade-offs. One reason to stay hopeful I haven't mentioned yet is how cryptography could protect privacy from AI. With advances in homomorphic encryption[5], differential privacy[6], zero-knowledge proofs[7], and other cryptographic tools, we might can have our AI/privacy cake and eat it too. Improvements in homomorphic encryption efficiency in particular could enable us to perform all computations encrypted, including training neural networks on encrypted data.[8] This would be great news for privacy. Since efficient homomorphic encryption would allow businesses to perform arbitrary computations on encrypted data, no business offering an internet service would have any excuse for collecting or storing plaintext user data.
+But maybe we can avoid making trade-offs. One reason to stay hopeful I haven't mentioned yet is how cryptography could protect privacy from AI. With advances in [homomorphic encryption](https://www.wikipedia.org/wiki/Homomorphic_encryption), [differential privacy](https://www.wikipedia.org/wiki/Differential_privacy), [zero-knowledge proofs](https://www.wikipedia.org/wiki/Zero-knowledge_proof), and other cryptographic tools, we might be able to have our AI/privacy cake and eat it too. Improvements in homomorphic encryption efficiency in particular could enable us to perform all computations encrypted, including [training neural networks on encrypted data](https://openaccess.thecvf.com/content_CVPRW_2019/papers/CV-COPS/Nandakumar_Towards_Deep_Neural_Network_Training_on_Encrypted_Data_CVPRW_2019_paper.pdf). This would be great news for privacy. Since efficient homomorphic encryption would allow businesses to perform arbitrary computations on encrypted data, no business offering an internet service would have any excuse for collecting or storing plaintext user data.
 
 We could also regulate businesses running AI-driven services so they're legally required to operate while collecting as little user data as possible. For instance, if we figured out how to use homomorphic encryption for the hypothetical AI matchmaking business without collecting plaintext data about users, then all AI matchmaking businesses providing equivalent or worse service would be legally required to provide that same level of privacy to users.
 
@@ -86,14 +85,3 @@ With that law in place, we could constantly step up privacy protections against
 In summary, AI is a danger to privacy. It's getting more dangerous. To protect our privacy, we need to stop governments and businesses from collecting data about us and get them to purge data they already have. Stronger laws and regulations than currently exist anywhere in the world will need to be passed to protect user privacy in a meaningful way. If we're fortunate, advances in cryptography, particularly homomorphic encryption, could allow us to reap the benefits of AI without the privacy invasion.
 
 It's too early to say how the future of privacy will play out. Anyone that claims to know is either full of themselves or lying. There are just too many unknowns. As I said earlier, we don't know how much predictive power future AI will have or how fast it will develop. We don't know which privacy laws will be rolled out or when. We don't know if or when cryptographic tools will become available that can alleviate some of the privacy concerns. We don't know how public attitudes towards privacy will adapt over time. So it's all up in the air for now.
-
-
-Link(s):
-[1: Awful AI](https://github.com/daviddao/awful-ai)
-[2: Free Software](https://www.gnu.org/philosophy/free-sw.en.html)
-[3: Service as a Software Substitute](https://www.gnu.org/philosophy/who-does-that-server-really-serve.html)
-[4: Nym](https://nymtech.net/)
-[5: Homomorphic Encryption](https://www.wikipedia.org/wiki/Homomorphic_encryption)
-[6: Differential Privacy](https://www.wikipedia.org/wiki/Differential_privacy)
-[7: Zero-Knowledge Proof](https://www.wikipedia.org/wiki/Zero-knowledge_proof)
-[8: Towards Deep Neural Network Training on Encrypted Data](https://openaccess.thecvf.com/content_CVPRW_2019/papers/CV-COPS/Nandakumar_Towards_Deep_Neural_Network_Training_on_Encrypted_Data_CVPRW_2019_paper.pdf)
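
The Cryptography section in the diff above rests on the idea that a server can compute on data it cannot read. A minimal sketch of that idea, using the Paillier cryptosystem: Paillier is only *additively* homomorphic, much weaker than the fully homomorphic schemes the entry alludes to, but it shows the core property. The primes and function names below are illustrative only, not from the entry or any particular library; a real deployment would use ~2048-bit moduli and a vetted implementation.

```python
# Toy Paillier cryptosystem (additively homomorphic) -- demo-sized keys only.
import math
import random

# --- key generation ---
p, q = 293, 433                 # tiny primes; gcd(pq, (p-1)(q-1)) must be 1
n = p * q
n_sq = n * n
lam = math.lcm(p - 1, q - 1)    # Carmichael's lambda(n)
g = n + 1                       # standard choice of generator
mu = pow(lam, -1, n)            # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    """E(m) = g^m * r^n mod n^2, for a random r coprime to n."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    """m = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return ((pow(c, lam, n_sq) - 1) // n) * mu % n

# The homomorphic property: multiplying ciphertexts adds plaintexts,
# so a server can compute this sum without ever decrypting.
a, b = 42, 99
c_sum = (encrypt(a) * encrypt(b)) % n_sq
assert decrypt(c_sum) == a + b   # 141
```

Differential privacy, also cited in that paragraph, takes the opposite approach: rather than hiding data from the computation, it adds calibrated noise to published results so that no individual's presence in the dataset can be inferred. A toy version of the textbook Laplace mechanism, again with hypothetical function names:

```python
# Toy Laplace mechanism for epsilon-differential privacy.
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponential draws is Laplace-distributed.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.
    A counting query has sensitivity 1 (one person joining or leaving
    changes it by at most 1), so the noise scale is 1/epsilon.
    Smaller epsilon means more noise and stronger privacy."""
    return true_count + laplace_noise(1 / epsilon)

# e.g. publish roughly how many users matched some criterion
print(private_count(8350, epsilon=0.5))
```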