author     Nicholas Johnson <nick@nicholasjohnson.ch>  2024-05-27 00:00:00 +0000
committer  Nicholas Johnson <nick@nicholasjohnson.ch>  2024-05-27 00:00:00 +0000
commit     628046738b0e4f410c639dd4844925ff044c79d2fb14b0e42722f1bee733f1ad (patch)
tree       cc1af60eedfa34aca0c24a6f1f6edfc554b6912715dc090bc8f124527e857caf /content/entry/predicting-the-near-term-consequences-of-ai.md
parent     46e98fe4f8c4c373ccb42427122f1fe032cc68038ec3e13dcf43dec31b874a8a (diff)
Fix tons of links
Diffstat (limited to 'content/entry/predicting-the-near-term-consequences-of-ai.md')
-rw-r--r--  content/entry/predicting-the-near-term-consequences-of-ai.md | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/content/entry/predicting-the-near-term-consequences-of-ai.md b/content/entry/predicting-the-near-term-consequences-of-ai.md
index aabab0a..79f04d1 100644
--- a/content/entry/predicting-the-near-term-consequences-of-ai.md
+++ b/content/entry/predicting-the-near-term-consequences-of-ai.md
@@ -37,20 +37,20 @@ There won't be any law saying "You must use AI." just as there's no law saying "
 Since this implicit coercion issue isn't discussed at all for smartphones, I expect it won't get any attention for AI either. Therefore if AI somehow doesn't end up harming privacy and undermining consent in the way I just described, it'll be a matter of luck rather than careful planning.
 
 ## Attention Engineering / Manipulation
-AI-powered social media sites are partially responsible for [destroying people's ability to pay attention](/2022/12/06/book-stolen-focus-why-you-cant-pay-attention-and-how-to-think-deeply-again/) and making them depressed and angry. In case you've been living under a rock, it has now become normalised for everyone to be addicted to their smartphone, checking social media hundreds of times per day. For that reason, I call social media networks, "digital [Skinner boxes](https://www.wikipedia.org/wiki/Operant_conditioning_chamber)".
+AI-powered social media sites are partially responsible for [destroying people's ability to pay attention](/2022/12/06/book-stolen-focus-why-you-cant-pay-attention-and-how-to-think-deeply-again/) and making them depressed and angry. In case you've been living under a rock, it has now become normalised for everyone to be addicted to their smartphone, checking social media hundreds of times per day. For that reason, I call social media networks, "digital [Skinner boxes](https://en.wikipedia.org/wiki/Operant_conditioning_chamber)".
 
 [I don't carry a smartphone](/2021/12/26/why-i-dont-have-a-smartphone/) because I didn't want to be a part of that. Unfortunately, since everybody else has them, I'm often tempted to borrow other people's smartphones and get sucked in anyways. The pull of social media is very strong even for someone like me who goes out of their way to avoid it. If social media becomes any more addictive than it already is, and it almost certainly will since AI will only improve, then I think humanity is going to have an even bigger attention crisis on its hands.
 
 ## Autonomous Weapons
-I won't go into too much detail about [AI-driven lethal autonomous weapons](https://www.wikipedia.org/wiki/Lethal_autonomous_weapon). Rather, I have a short video which captures my concern better than anything I could write here. It's called "[Slaughterbots](https://yewtu.be/embed/9CO6M2HsoIA?local=true)". If you haven't seen it, I would highly recommend it.
+I won't go into too much detail about [AI-driven lethal autonomous weapons](https://en.wikipedia.org/wiki/Lethal_autonomous_weapon). Rather, I have a short video which captures my concern better than anything I could write here. It's called "[Slaughterbots](https://yewtu.be/embed/9CO6M2HsoIA?local=true)". If you haven't seen it, I would highly recommend it.
 
 I haven't researched this area enough to make any solid predictions. All I can say is that I hope we don't end up in a situation like in the video where everyone has to stay indoors all the time, nowhere is safe, etc.
 
 ## Jobs
-I predict that all major useful [proprietary software](https://www.wikipedia.org/wiki/Proprietary_software) will be reverse engineered with AI assistance. Translation software will become good enough that no one will need to learn foreign languages unless they want to. As I mentioned in "[Automation, Bullshit Jobs, And Work](/2022/01/22/automation-bullshit-jobs-and-work/)", so much human labor will be automated that only two practical possibilities will remain:
+I predict that all major useful [proprietary software](https://en.wikipedia.org/wiki/Proprietary_software) will be reverse engineered with AI assistance. Translation software will become good enough that no one will need to learn foreign languages unless they want to. As I mentioned in "[Automation, Bullshit Jobs, And Work](/2022/01/22/automation-bullshit-jobs-and-work/)", so much human labor will be automated that only two practical possibilities will remain:
 
-1. In countries that stubbornly maintain a poor social safety net, loads of [bullshit jobs](https://www.wikipedia.org/wiki/Bullshit_Jobs) will be created to prevent mass homelessness, starvation, and ultimately revolution.
-2. Alternatively, a socialist program like [universal basic income](https://www.wikipedia.org/wiki/Universal_basic_income) will be implemented so that people don't have to work to survive and are free to do other things.
+1. In countries that stubbornly maintain a poor social safety net, loads of [bullshit jobs](https://en.wikipedia.org/wiki/Bullshit_Jobs) will be created to prevent mass homelessness, starvation, and ultimately revolution.
+2. Alternatively, a socialist program like [universal basic income](https://en.wikipedia.org/wiki/Universal_basic_income) will be implemented so that people don't have to work to survive and are free to do other things.
 
 Perhaps some forms of automation could be banned to prevent mass unemployment, but I'm skeptical that would work since it might make one's country unable to compete in the global economy. I don't know enough about that to make any definitive claims though.
@@ -69,9 +69,9 @@ I predict that AI will make the illegal practice of [parallel construction](/202
 As for the court system, I predict that it'll be so easy to create synthetic media that photos, videos, audio, and other digital evidence will not be taken seriously any more. We will have to revert back to relying more on other forms of evidence such as impartial witnesses, contextual information, and DNA.
 
 ## Scientific Research
-AI is already revolutionising scientific research. We can expect this trend to continue into the future. There are a few ideas floating around that try to make sure this new scientific understanding and technology helps mitigate [existential risk](https://www.wikipedia.org/wiki/Global_catastrophic_risk#Defining_existential_risks) rather than increasing it.
+AI is already revolutionising scientific research. We can expect this trend to continue into the future. There are a few ideas floating around that try to make sure this new scientific understanding and technology helps mitigate [existential risk](https://en.wikipedia.org/wiki/Global_catastrophic_risk#Defining_existential_risks) rather than increasing it.
 
-Two ideas I'm in favor of are [differential technological development](https://www.wikipedia.org/wiki/Differential_technological_development) and [differential intellectual progress](https://www.lesswrong.com/tag/differential-intellectual-progress). The idea of the former is to develop existential-risk-reducing technologies rather than existential-risk-increasing technologies. The idea of the latter is that we should increase our philosophical sophistication and wisdom before proceeding with technological progress.
+Two ideas I'm in favor of are [differential technological development](https://en.wikipedia.org/wiki/Differential_technological_development) and [differential intellectual progress](https://www.lesswrong.com/tag/differential-intellectual-progress). The idea of the former is to develop existential-risk-reducing technologies rather than existential-risk-increasing technologies. The idea of the latter is that we should increase our philosophical sophistication and wisdom before proceeding with technological progress.
 
 It helps to have global coordination to accomplish these goals. Humanity currently lacks global cooperation, so it's going to be challenging to get everyone to agree to differentially pursue technological development. Even if international treaties are signed, it's hard to be sure that governments aren't secretly pursuing the banned technology, especially if it would give them an edge.
@@ -79,7 +79,7 @@ It helps to have global coordination to accomplish these goals. Humanity current
 With a higher rate of technological development than in the past, governments will have to adopt more agile decision-making frameworks or else they won't keep pace with technological progress and won't be able to effectively govern. Computer-illiterate elderly government officials that can't keep up with smartphones nor social media just aren't going to cut it in the age of rapidly-advancing AI. We need leadership that can understand new technology.
 
 ## Conclusion
-There's so much more that I wish I could get to, but I don't have the time. For instance, I didn't even mention any propositions concerning digital minds. That may be a more long-term issue, but I would argue that it's relevant now because we will soon build AIs that constitute primitive digital minds. Fortunately people like [Nick Bostrom](https://nickbostrom.com/) and [Carl Shulman](https://www.fhi.ox.ac.uk/team/carl-shulman/) have made some headway on digital minds in their paper "[Propositions Concerning Digital Minds and Society](https://nickbostrom.com/propositions.pdf)".
+There's so much more that I wish I could get to, but I don't have the time. For instance, I didn't even mention any propositions concerning digital minds. That may be a more long-term issue, but I would argue that it's relevant now because we will soon build AIs that constitute primitive digital minds. Fortunately people like [Nick Bostrom](https://nickbostrom.com/) and [Carl Shulman](https://web.archive.org/web/20230418235430if_/https://www.fhi.ox.ac.uk/team/carl-shulman/) have made some headway on digital minds in their paper "[Propositions Concerning Digital Minds and Society](https://nickbostrom.com/propositions.pdf)".
 
 Anyways, I thank you for reading my journal entries and considering these issues with me. I hope to write more about AI in the future. Sometimes I look at the work of the people like Nick Bostrom and think "Wow! I am so underqualified to write about this. Should I even bother?" but then I remind myself that: