author    Nicholas Johnson <nick@nicholasjohnson.ch>  2023-04-26 00:00:00 +0000
committer Nicholas Johnson <nick@nicholasjohnson.ch>  2023-04-27 00:00:00 +0000
commit    f0fb4eac729a51fc3456e7c0c0f794e72745913235bbfd0c452bb752a59598cd (patch)
tree      bec59a76804303dd77ef64b506a1bb605af5205b539bce570c62e010b7c36ee9 /content/entry
parent    fae9e6595cb1424643d8df62bdcf165af8699bb42edf70efa15573b8972e5d93 (diff)
New entry: predicting-the-near-term-consequences-of-ai
Diffstat (limited to 'content/entry')
-rw-r--r--  content/entry/predicting-the-near-term-consequences-of-ai.md | 89
1 file changed, 89 insertions, 0 deletions
diff --git a/content/entry/predicting-the-near-term-consequences-of-ai.md b/content/entry/predicting-the-near-term-consequences-of-ai.md
new file mode 100644
index 0000000..b20ba92
--- /dev/null
+++ b/content/entry/predicting-the-near-term-consequences-of-ai.md
@@ -0,0 +1,89 @@
+---
+title: "Predicting The Near-Term Consequences of AI"
+date: 2023-04-27T00:00:00
+draft: false
+---
+I thought I'd have at least a few more years before AI became this good, but it's gotten to the point where I'm starting to feel like I'm in a [Black Mirror](https://libremdb.iket.me/title/tt2085059/) episode. As AI technology improves, the near-term future becomes increasingly uncertain. Nevertheless, I think it's necessary to at least try to predict what will happen so we can prepare ourselves.
+
+So far on this journal, I've made several predictions about the effects of AI on society in the near term. I'd like to summarize the major ones in this entry to make them more easily accessible, add a few new ones, make some judgments, and suggest how humanity might proceed. Keep in mind that many of these predictions are highly speculative and may not pan out exactly as described. They'll only apply if AI doesn't immediately drive us into a utopia or dystopia scenario. My predictions are by no means comprehensive, and they're not in any particular order.
+
+## The Internet
+Let's begin with some predictions I started thinking about in "[Implications of Synthetic Media](/2022/04/24/implications-of-synthetic-media/)".
+
+I believe that free online service providers such as social media networks will be forced to take extreme actions to avoid being inundated with sock puppet accounts, probably including mandatory identity verification. Email providers will have to enforce sender whitelists to keep users from being flooded with spam. Since human users will easily be able to generate synthetic (AI-generated) media, and will have incentives to do so, everything one sees from unknown online sources will have to be treated with the highest degree of skepticism.
+
+One good thing about synthetic media is that it may make online extortion and blackmail harder. Since every computer-literate person will be able to generate nude photos and other embarrassing or incriminating material depicting anyone else, everyone will have plausible deniability, because it'll be impossible to prove that any given piece of data is real.
+
+With AIs that can help hackers find software vulnerabilities and fake voices for social engineering, businesses will have to spend more resources on cybersecurity and cybersecurity training. Detecting cheaters in video games will become hard, if not impossible, and gaming companies may have to take extreme measures to prevent cheating.
+
+AI will open up new forms of art and self-expression. I predict that AI will completely undermine intellectual property. It's already happening: artists' work is being remixed and reused without their permission, and free software is being laundered through AI into proprietary software. I think intellectual property was always a mistake, and the only sensible way forward, especially given recent developments in AI, is to abolish it, put everything in the public domain, and set up a fund to reimburse artists, drug companies, movie producers, and anyone else who may depend on it for their livelihood.
+
+If intellectual property rights do continue to exist in the same capacity as they do today, I predict that laws regarding AI and intellectual property will be ineffective and unenforceable.
+
+## Human Relationships
+Humans will begin to form relationships with AIs, like in the movie [Her](https://libremdb.iket.me/title/tt1798709). Even though the AIs won't be as good as the one in the movie, it won't matter: they'll be good enough for many purposes. They'll be people's friends, significant others, therapists, life coaches, teachers, and everything in between. This may cause human-to-human relationships to become less common or less important.
+
+## Privacy
+I think AI will be a privacy disaster in two separate ways. First, there will be more [AI-based privacy-invading technology](https://github.com/daviddao/awful-ai). I'm specifically concerned about:
+
+1. AI causing private information disclosure through scarily accurate inferential capabilities
+2. AI surveillance being used on groups of people in a way that exacerbates unjust power differentials
+
+Second, in my entry "[AI Poses a Threat to Privacy](/2023/03/28/ai-poses-a-threat-to-privacy/)", I expressed concern that AI would harm privacy in the same way smartphones do. Currently, the only way to benefit from the most powerful AIs is to hand your private data to the service that provides the AI. If this remains true, it may create a two-tier society in which the small minority who chooses to forgo the benefits of AI to preserve their privacy faces an intolerably difficult life.
+
+There won't be any law saying "You must use AI," just as there's no law saying "You must own a smartphone." It'll just be too difficult to function in society without it. For example, it'll be impossible to compete in the workforce against people who are willing to use AI to augment their abilities if you aren't. Thus, agreement to the AI service providers' terms of service will be [coerced](/2021/08/21/manufacturing-agreement/).
+
+Since this implicit coercion issue isn't discussed at all for smartphones, I expect it won't get any attention for AI either. Therefore, if AI somehow doesn't end up harming privacy and undermining consent in the way I just described, it'll be a matter of luck rather than careful planning.
+
+## Attention Engineering / Manipulation
+AI-powered social media sites are partially responsible for [destroying people's ability to pay attention](/2022/12/06/book-stolen-focus-why-you-cant-pay-attention-and-how-to-think-deeply-again/) and making them depressed and angry. In case you've been living under a rock, it has now become normalized for everyone to be addicted to their smartphone, checking social media hundreds of times per day. For that reason, I call social media networks "digital [Skinner boxes](https://www.wikipedia.org/wiki/Operant_conditioning_chamber)".
+
+[I don't carry a smartphone](/2021/12/26/why-i-dont-have-a-smartphone/) because I don't want to be a part of that. Unfortunately, since everybody else has one, I'm often tempted to borrow other people's smartphones and get sucked in anyway. The pull of social media is very strong, even for someone like me who goes out of their way to avoid it. If social media becomes any more addictive than it already is, and it almost certainly will since AI will only improve, then I think humanity is going to have an even bigger attention crisis on its hands.
+
+## Autonomous Weapons
+I won't go into too much detail about [AI-driven lethal autonomous weapons](https://www.wikipedia.org/wiki/Lethal_autonomous_weapon). Instead, there's a short video that captures my concern better than anything I could write here: "[Slaughterbots](https://yewtu.be/embed/9CO6M2HsoIA?local=true)". If you haven't seen it, I highly recommend it.
+
+I haven't researched this area enough to make any solid predictions. All I can say is that I hope we don't end up in a situation like the one in the video, where everyone has to stay indoors all the time and nowhere is safe.
+
+## Jobs
+I predict that all major useful [proprietary software](https://www.wikipedia.org/wiki/Proprietary_software) will be reverse engineered with AI assistance. Translation software will become good enough that no one will need to learn foreign languages unless they want to. As I mentioned in "[Automation, Bullshit Jobs, And Work](/2022/01/22/automation-bullshit-jobs-and-work/)", so much human labor will be automated that only two practical possibilities will remain:
+
+1. In countries that stubbornly maintain a poor social safety net, loads of [bullshit jobs](https://www.wikipedia.org/wiki/Bullshit_Jobs) will be created to prevent mass homelessness, starvation, and ultimately revolution.
+2. Alternatively, a socialist program like [universal basic income](https://www.wikipedia.org/wiki/Universal_basic_income) will be implemented so that people don't have to work to survive and are free to do other things.
+
+Perhaps some forms of automation could be banned to prevent mass unemployment, but I'm skeptical that would work since it might make one's country unable to compete in the global economy. I don't know enough about that to make any definitive claims though.
+
+## Life Purpose
+In my entry "[Automation and The Meaning of Work](/2022/09/07/automation-and-the-meaning-of-work/)", I predicted how automation would affect how people find meaning. I think it will have some benefits, like ending child labor, freeing people from miserable and dangerous jobs, and giving people more time to do things they enjoy. But it will also have negative effects, such as taking away work people find meaningful. I predict some jobs will still remain, specifically those that human workers enjoy doing and that the people who benefit from the labor prefer to have done by humans.
+
+I predict that if nothing is done to incentivize students, they'll be discouraged from attending higher education since their future jobs will be automated anyway. Perhaps students won't be discouraged, though, if going to university is more of a sociocultural expectation than a rational economic choice.
+
+With the dramatic reduction in useful human labor, I predict that culture will be forced to adapt so that human meaning is no longer associated with what one does for money.
+
+## The Law
+I'm very concerned about how AI will affect the (in)justice system. There are worrying trends that I hope reverse themselves, such as AI surveillance taking U.S. prisons by storm. That terrifies me because, unlike [reasonable prison systems](/2021/02/03/documentary-the-norden-prison/), U.S. prisons are already farcically punitive, and [there are far too many Americans in jail](/2022/03/05/website-visualizing-wealth-inequality-and-mass-incarceration/), many of whom haven't even been convicted, and many of whom have been convicted only of breaking [unjust laws](/2020/11/08/legalize-all-drugs/).
+
+I predict that AI will make the illegal practice of [parallel construction](/2020/12/04/shining-light-on-the-dark-side-of-law-enforcement/) more effective and potentially more common. Perfect or near-perfect enforcement of laws would be highly undesirable or, to put it less diplomatically, a total fucking nightmare. I think we need to be very cautious in deciding which AI technologies, if any, police are permitted to use.
+
+As for the court system, I predict that it'll be so easy to create synthetic media that photos, videos, audio, and other digital evidence will no longer be taken seriously. We will have to revert to relying more on other forms of evidence, such as impartial witnesses, contextual information, and DNA.
+
+## Scientific Research
+AI is already revolutionizing scientific research. We can expect this trend to continue into the future. There are a few ideas floating around that try to make sure this new scientific understanding and technology helps mitigate [existential risk](https://www.wikipedia.org/wiki/Global_catastrophic_risk#Defining_existential_risks) rather than increase it.
+
+Two ideas I'm in favor of are [differential technological development](https://www.wikipedia.org/wiki/Differential_technological_development) and [differential intellectual progress](https://www.lesswrong.com/tag/differential-intellectual-progress). The former says we should prioritize developing existential-risk-reducing technologies over existential-risk-increasing ones. The latter says we should increase our philosophical sophistication and wisdom before proceeding with technological progress.
+
+Global coordination would help accomplish these goals, but humanity currently lacks it, so it's going to be challenging to get everyone to agree to differentially pursue technological development. Even if international treaties are signed, it's hard to be sure that governments aren't secretly pursuing banned technologies, especially if those technologies would give them an edge.
+
+## Government
+With a higher rate of technological development than in the past, governments will have to adopt more agile decision-making frameworks, or else they won't keep pace with technological progress and won't be able to govern effectively. Computer-illiterate elderly government officials who can't keep up with smartphones or social media just aren't going to cut it in the age of rapidly advancing AI. We need leadership that can understand new technology.
+
+## Conclusion
+There's so much more I wish I could get to, but I don't have the time. For instance, I didn't even mention any propositions concerning digital minds. That may be a more long-term issue, but I would argue it's relevant now because we will soon build AIs that constitute primitive digital minds. Fortunately, people like [Nick Bostrom](https://nickbostrom.com/) and [Carl Shulman](https://www.fhi.ox.ac.uk/team/carl-shulman/) have made some headway on digital minds in their paper "[Propositions Concerning Digital Minds and Society](https://nickbostrom.com/propositions.pdf)".
+
+Anyway, thank you for reading my journal entries and considering these issues with me. I hope to write more about AI in the future. Sometimes I look at the work of people like Nick Bostrom and think, "Wow! I am so underqualified to write about this. Should I even bother?" But then I remind myself that:
+
+1. He writes academic papers while I'm just writing a blog, so expectations of rigor are different
+2. I have decent reasoning skills and more thinking is needed on this subject
+3. There are people out there with far greater reach than me who are even less qualified and are publicly thinking about AI
+
+So based on that, I don't think I'm out of bounds here. As I said, and it bears repeating: my predictions are highly speculative. Nobody knows exactly what's going to happen in the near-term future. All we can do is make our best guess, and this is mine. If anyone has constructive criticism, feel free to [get in contact with me](/about/) and share it.