Diffstat (limited to 'content/entry/the-privacy-implications-of-weak-ai.md')
-rw-r--r-- | content/entry/the-privacy-implications-of-weak-ai.md | 2 |
1 files changed, 1 insertions, 1 deletions
diff --git a/content/entry/the-privacy-implications-of-weak-ai.md b/content/entry/the-privacy-implications-of-weak-ai.md
index 7ef7681..e7175bd 100644
--- a/content/entry/the-privacy-implications-of-weak-ai.md
+++ b/content/entry/the-privacy-implications-of-weak-ai.md
@@ -76,7 +76,7 @@ The examples of self-driving cars and AI matchmaking were pretty mild in terms o
 If many useful services provided by AI simply cannot exist without collecting personal data on users, then we might end up with a 2-tier society. There will be those who sacrifice their privacy to reap the huge benefits of AI technology. Then there will be those who don't consent to giving up their privacy who will end up comparatively crippled. Dividing society in this way would be a very bad thing.
 
 ## Cryptography
-But maybe we can avoid making trade-offs. One reason to stay hopeful I haven't mentioned yet is how cryptography could protect privacy from AI. With advances in [homomorphic encryption](https://www.wikipedia.org/wiki/Homomorphic_encryption), [differential privacy](https://www.wikipedia.org/wiki/Differential_privacy), [zero-knowledge proofs](https://www.wikipedia.org/wiki/Zero-knowledge_proof), and other cryptographic tools, we might can have our AI/privacy cake and eat it too. Improvements in homomorphic encryption efficiency in particular could enable us to perform all computations encrypted, including [training neural networks on encrypted data](https://openaccess.thecvf.com/content_CVPRW_2019/papers/CV-COPS/Nandakumar_Towards_Deep_Neural_Network_Training_on_Encrypted_Data_CVPRW_2019_paper.pdf). This would be great news for privacy. Since efficient homomorphic encryption would allow businesses to perform arbitrary computations on encrypted data, no business offering an internet service would have any excuse for collecting or storing plaintext user data.
+But maybe we can avoid making trade-offs. One reason to stay hopeful I haven't mentioned yet is how cryptography could protect privacy from AI. With advances in [homomorphic encryption](https://en.wikipedia.org/wiki/Homomorphic_encryption), [differential privacy](https://en.wikipedia.org/wiki/Differential_privacy), [zero-knowledge proofs](https://en.wikipedia.org/wiki/Zero-knowledge_proof), and other cryptographic tools, we might can have our AI/privacy cake and eat it too. Improvements in homomorphic encryption efficiency in particular could enable us to perform all computations encrypted, including [training neural networks on encrypted data](https://openaccess.thecvf.com/content_CVPRW_2019/papers/CV-COPS/Nandakumar_Towards_Deep_Neural_Network_Training_on_Encrypted_Data_CVPRW_2019_paper.pdf). This would be great news for privacy. Since efficient homomorphic encryption would allow businesses to perform arbitrary computations on encrypted data, no business offering an internet service would have any excuse for collecting or storing plaintext user data. We could also regulate businesses running AI-driven services so they're legally required to operate it collecting as minimal user data as possible. For instance, if we figured out how to use homomorphic encryption for the hypothetical AI matchmaking business without collecting plaintext data about users, it would then be legally required of all AI matchmaking businesses providing worse or equivalent service to provide that same level of privacy to users.
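The central claim in the changed paragraph is that a service can compute on data it cannot read. That idea can be illustrated with a toy additively homomorphic scheme: below is a minimal Paillier-style sketch in Python, using tiny hypothetical parameters that are strictly for demonstration (real deployments need ~2048-bit primes and a vetted library, not hand-rolled code).

```python
import math
import random

# Toy Paillier cryptosystem. The parameters are hypothetical and far too
# small to be secure; this only demonstrates the homomorphic property.
p, q = 17, 19                 # tiny demo primes
n = p * q
n2 = n * n
g = n + 1                     # standard choice g = n + 1
lam = math.lcm(p - 1, q - 1)  # Carmichael's lambda(n)
mu = pow(lam, -1, n)          # with g = n + 1, mu = lam^-1 mod n

def encrypt(m: int) -> int:
    """Encrypt m in [0, n) as c = g^m * r^n mod n^2 for random r."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Recover m via L(c^lam mod n^2) * mu mod n, where L(x) = (x-1)/n."""
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

# A server holding only ciphertexts can add the underlying plaintexts
# by multiplying ciphertexts mod n^2, without ever decrypting.
c_sum = (encrypt(12) * encrypt(30)) % n2
assert decrypt(c_sum) == 42
```

Paillier is only additively homomorphic; fully homomorphic encryption extends this kind of ciphertext arithmetic to arbitrary computations, which is what work like the linked encrypted-training paper builds on, at a steep (though improving) efficiency cost.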