author    Nicholas Johnson <nick@nicksphere.ch>  2022-05-23 00:00:00 +0000
committer Nicholas Johnson <nick@nicksphere.ch>  2022-05-23 00:00:00 +0000
commit    05fa3051e12acddfe320912a93e1927bcf1b64f6df2a14589594144df3b9f3e2 (patch)
tree      e2f767706bbef2caf24a3fd5ea9147f6866d3fef2c0e732f9b481932e87d67ea /content/entry/implications-of-synthetic-media.md
parent    44ef9882132619ead1f888778804893d848b7686a4833e038b67b263165eb933 (diff)
Fix spelling errors
Diffstat (limited to 'content/entry/implications-of-synthetic-media.md')
-rw-r--r--  content/entry/implications-of-synthetic-media.md  |  10
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/content/entry/implications-of-synthetic-media.md b/content/entry/implications-of-synthetic-media.md
index 07ae9ab..ce4c914 100644
--- a/content/entry/implications-of-synthetic-media.md
+++ b/content/entry/implications-of-synthetic-media.md
@@ -7,11 +7,11 @@ A few months ago, I wrote an entry titled "The Privacy Implications of Weak AI".
A.I. and automation are subjects people avoid thinking about because they're scary. I can't fault anybody for that because they're right. The way weak AI is already being used is extremely worrying. It doesn't bode well for the future, but we can't find solutions without discussing the problem. So today, I thought I'd explore another way weak A.I. might disrupt society.
-In case you're not familiar with the term "deepfake", it refers to AI-generated media[2] (synthetic media) where a person in a picture or video is digitally replaced with somebody else. The goal is for the replacement to be so seemless that it's impossible to tell the difference. Right now, deepfakes[3] are pretty good and they're getting better all the time. This has huge implications.
+In case you're not familiar with the term "deepfake", it refers to AI-generated media[2] (synthetic media) where a person in a picture or video is digitally replaced with somebody else. The goal is for the replacement to be so seamless that it's impossible to tell the difference. Right now, deepfakes[3] are pretty good and they're getting better all the time. This has huge implications.
# Plausible Deniability
## Blackmail
-You might initially think, as I did, that blackmail will get a lot easier. You won't even need real incriminating photos or videos of someone any more. You can just generate it as needed. But the problem is, every semi-computer-literate person will be able to generate convincing deepfakes. As deepfakes become more common and the public becomes more aware of them, blackmail using photos, videos, audio, etcetera will become impossible because the victim can always plausibly deny it.
+You might initially think, as I did, that blackmail will get a lot easier. You won't even need real incriminating photos or videos of someone any more. You can just generate it as needed. But the problem is, every semi-computer-literate person will be able to generate convincing deepfakes. As deepfakes become more common and the public becomes more aware of them, blackmail using photos, videos, audio, etc. will become impossible because the victim can always plausibly deny it.
Even if you have real blackmail material on someone, all the victim needs to do is claim it's deepfaked and it will be impossible for a third-party to be sure one way or the other without more context. So blackmail will become harder, not easier.
@@ -35,11 +35,11 @@ I imagine it like that scene in the first Terminator movie where terminators can
On the other side of the law, black hat hackers will certainly use deepfakes to social engineer corporations and institutions. In fact, it already happened when a voice deepfake was used to scam a CEO out of $243,000.[6]
# The Infopocalypse
-The central subject which we seem to be orbiting is the infopocalypse. That is, when sockpuppets and deepfakes become absolutely pervasive everywhere on the internet. And I have to mention sockpuppets because they go hand in hand with deepfakes in an important way.
+The central subject which we seem to be orbiting is the infopocalypse. That is, when sock puppets and deepfakes become absolutely pervasive everywhere on the internet. And I have to mention sock puppets because they go hand in hand with deepfakes in an important way.
Right now, what prevents bots from overtaking the internet is mainly CAPTCHA[7], phone registration, and bot detection systems. CAPTCHA is a technique to tell humans and computers apart. As A.I. improves, bots will eventually be able to do all the things that humans can do, including passing CAPTCHA. They'll also be able to bypass bot detection and, with some money, buy phone numbers.
-We have to assume that as time passes, it will take less and less resources for anyone to create their own personal army of convincing bots. Combining this with deepfakes will make it nearly impossible to tell human from machine. Unless new techniques for bot prevention are developed, online platforms may run rampant with spam, disinformation, and sockpuppets.
+We have to assume that as time passes, it will take less and less resources for anyone to create their own personal army of convincing bots. Combining this with deepfakes will make it nearly impossible to tell human from machine. Unless new techniques for bot prevention are developed, online platforms may run rampant with spam, disinformation, and sock puppets.
So new techniques will have to be developed to tell humans and machines apart and, hopefully, those techniques still allow for online anonymity. Internet protocols and applications will have to be adapted to defend against this new threat model.
@@ -50,7 +50,7 @@ Now, broadening the subject even more to synthetic media as a whole, not just de
Maintaining relationships with real people takes effort. With synthetic media and convincing chat bots, a lot of people will probably opt for relationships with synthetic, digital A.I. systems instead of other human beings. This could be really destructive to the social fabric. The word "loner" will take on a whole new meaning.
-What worries me the most is how addictive these A.I. chatbots could potentially be. We've already seen how bad social media and smartphone addiction is. Maybe it's too early to worry about this, but if A.I. chatbots pass the Turing test[8] and become capable of real-time audio and video calls, there will probably be less human connection in society.
+What worries me the most is how addictive these A.I. chat bots could potentially be. We've already seen how bad social media and smartphone addiction is. Maybe it's too early to worry about this, but if A.I. chat bots pass the Turing test[8] and become capable of real-time audio and video calls, there will probably be less human connection in society.
If you're looking for some inspiration, two good films depicting human-bot relationships are Her[9] and Ex Machina[10]. Those films both depict A.I. taking human form, which goes a bit outside the scope of synthetic media, but synthetic media by itself probably wouldn't make good film.