Diffstat (limited to 'content/entry/re-pascals-mugging.md')
-rw-r--r-- content/entry/re-pascals-mugging.md | 10
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/content/entry/re-pascals-mugging.md b/content/entry/re-pascals-mugging.md
index 9ff6e4e..4d25ce8 100644
--- a/content/entry/re-pascals-mugging.md
+++ b/content/entry/re-pascals-mugging.md
@@ -3,13 +3,13 @@ title: "Re: Pascal's Mugging"
date: 2023-06-21T00:00:00
draft: false
---
-For those who are unfamiliar with Pascal's Mugging, here's an excerpt from [Wikipedia](https://www.wikipedia.org/wiki/Pascal%27s_mugging "Pascal's Mugging"):
+For those who are unfamiliar with Pascal's Mugging, here's an excerpt from [Wikipedia](https://en.wikipedia.org/wiki/Pascal%27s_mugging "Pascal's Mugging"):
> "In Bostrom's description, Blaise Pascal is accosted by a mugger who has forgotten their weapon. However, the mugger proposes a deal: the philosopher gives them his wallet, and in exchange the mugger will return twice the amount of money tomorrow. Pascal declines, pointing out that it is unlikely the deal will be honoured. The mugger then continues naming higher rewards, pointing out that even if it is just one chance in 1000 that they will be honourable, it would make sense for Pascal to make a deal for a 2000 times return. Pascal responds that the probability of that high return is even lower than one in 1000. The mugger argues back that for any low but strictly greater than 0 probability of being able to pay back a large amount of money (or pure utility) there exists a finite amount that makes it rational to take the bet. In one example, the mugger succeeds by promising Pascal 1,000 quadrillion happy days of life. Convinced by the argument, Pascal gives the mugger the wallet."
-The justification for the possibility of the mugger giving Pascal 1,000 quadrillion happy days of life is basically that "anything's possible", however unlikely it may be. For the sake of the argument, I'll grant that premise. There's always this general possibility that there's something we don't understand that goes beyond where reasoning, evidence, and science can take us. Maybe we're in a simulation. Maybe I'm a [Boltzmann brain](https://www.wikipedia.org/wiki/Boltzmann_brain). Maybe an [evil demon](https://www.wikipedia.org/wiki/Evil_demon "Evil Demon") is tricking me about everything. I'm fine with that part of the thought experiment.
+The justification for the possibility of the mugger giving Pascal 1,000 quadrillion happy days of life is basically that "anything's possible", however unlikely it may be. For the sake of the argument, I'll grant that premise. There's always this general possibility that there's something we don't understand that goes beyond where reasoning, evidence, and science can take us. Maybe we're in a simulation. Maybe I'm a [Boltzmann brain](https://en.wikipedia.org/wiki/Boltzmann_brain). Maybe an [evil demon](https://en.wikipedia.org/wiki/Evil_demon "Evil Demon") is tricking me about everything. I'm fine with that part of the thought experiment.
-The problem I see with Nick Bostrom's version of Pascal's mugging is that it suggests a [false dichotomy](https://www.wikipedia.org/wiki/False_dilemma "False Dichotomy"): either Pascal just loses his wallet or he gets 1,000 quadrillion happy days of life. Clearly, these are not the only two possibilities given the justification of "anything is possible".
+The problem I see with Nick Bostrom's version of Pascal's mugging is that it suggests a [false dichotomy](https://en.wikipedia.org/wiki/False_dilemma "False Dichotomy"): either Pascal just loses his wallet or he gets 1,000 quadrillion happy days of life. Clearly, these are not the only two possibilities given the justification of "anything is possible".
Since there's no evidence that the mugger actually has the ability to grant happy days, I would assign the 1,000 quadrillion happy days outcome the same probability as an outcome where Pascal suffers for 1,000 quadrillion miserable days after giving the mugger his wallet. And I could also justify the possibility of the misery outcome using the same "anything's possible" justification. In fact, any outcome the mugger suggests with massive reward for giving up the wallet can be countered by suggesting an equally unlikely counterfactual which is just as bad as the good outcome is good, thus cancelling any expected gains from Pascal giving up his wallet.
@@ -33,6 +33,6 @@ Whatever the true reason is, I don't see Pascal's Mugging nor my thought experim
Others have suggested that, to resolve the apparent paradox of Pascal's Mugging, we should bound utility functions, penalize prior probabilities, or "abandon quantitative decision procedures in the presence of extremely large risks". I'm fine with these measures, but only insofar as they reflect the way human values behave quantitatively and not for any other justification.
-To conclude, I want to say something about the importance of Pascal's Mugging, [The Trolley Problem](https://www.wikipedia.org/wiki/Trolley_problem), and moral thought experiments in general.
+To conclude, I want to say something about the importance of Pascal's Mugging, [The Trolley Problem](https://en.wikipedia.org/wiki/Trolley_problem), and moral thought experiments in general.
-They can seem very theoretical, but they're actually very practical in that they help us figure out what we value by asking us to imagine extreme scenarios. The question of what our values are is relevant because, if we don't destroy ourselves, we'll eventually create [artificial general intelligence](https://www.wikipedia.org/wiki/Artificial_general_intelligence "Artificial General Intelligence") and we really need it to be [aligned](https://www.wikipedia.org/wiki/AI_alignment "AI Alignment") with those values.
+They can seem very theoretical, but they're actually very practical in that they help us figure out what we value by asking us to imagine extreme scenarios. The question of what our values are is relevant because, if we don't destroy ourselves, we'll eventually create [artificial general intelligence](https://en.wikipedia.org/wiki/Artificial_general_intelligence "Artificial General Intelligence") and we really need it to be [aligned](https://en.wikipedia.org/wiki/AI_alignment "AI Alignment") with those values.