From 63e5c7819dd3170a1713195c1e57a1d60ffe4c2a4f39b420335e2f6bf3c298b7 Mon Sep 17 00:00:00 2001
From: Nicholas Johnson
Date: Wed, 5 Feb 2025 00:00:00 +0000
Subject: Replace 'whether or not' with 'whether'

'whether' is shorter.
---
 content/entry/newcombs-paradox-resolved.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/entry/newcombs-paradox-resolved.md b/content/entry/newcombs-paradox-resolved.md
index 7140b9e..7c44fff 100644
--- a/content/entry/newcombs-paradox-resolved.md
+++ b/content/entry/newcombs-paradox-resolved.md
@@ -70,7 +70,7 @@ Long answer: There is a very subtle contradiction in the definition of Newcomb's
 Meanwhile taking only box B is supported by mathematical expected value, which doesn't rely on free choice being available after the prediction. It just says "If you take only box B, you can expect $1,000,000. If you take both boxes, you can expect $1,000". There's no notion of free will there. It's a purely statistical argument. The strategic dominance principle only seems appealing because of the strong intuition of having a free choice after the predictor has made the prediction. While [retrocausality](https://en.wikipedia.org/wiki/Retrocausality) doesn't actually occur in Newcomb's Paradox, it's not a bad mental model for thinking about the problem. Since the predictor is infallible, it has effective retrocausality. What the predictor did in the past is based on the box it already knows you're going to take. There's no real paradox, you just can't outwit the predictor even though your intuitions tell you that you "feel free".
 
-You might think it doesn't make sense to prescribe players the strategy of choosing box B only, since they have "already made the choice" whether or not to take only box B. But, consider that by the same token, we have "already made the choice" whether or not to prescribe the player the strategy to take box B. So, it is equally coherent for us to prescribe the player to take box B as it is for the player to actually take box B. Saying there's no point in prescribing the player a course of action is akin to saying you'll just stay in bed all day since you have no free will. The "choice" to do nothing is also not of your own free will. In other words, you're not escaping your lack of free will by doing nothing. We aren't escaping the lack of the player's free will by not prescribing them a best course of action as we don't have free will either. So, there's no reason not to tell the player to take only box B.
+You might think it doesn't make sense to prescribe players the strategy of choosing box B only, since they have "already made the choice" whether to take only box B. But, consider that by the same token, we have "already made the choice" whether to prescribe the player the strategy to take box B. So, it is equally coherent for us to prescribe the player to take box B as it is for the player to actually take box B. Saying there's no point in prescribing the player a course of action is akin to saying you'll just stay in bed all day since you have no free will. The "choice" to do nothing is also not of your own free will. In other words, you're not escaping your lack of free will by doing nothing. We aren't escaping the lack of the player's free will by not prescribing them a best course of action as we don't have free will either. So, there's no reason not to tell the player to take only box B.
 
 # Closing
 
 Some of the points I've written down in this post come from my own intuition. I couldn't write a single methodology for how I come up with it all. In philosophy, it's hard to define a single methodology that can solve problems since each problem is unique and touches on many different things. Maybe some day someone will come up with an algorithm for doing philosophy. Although that would be equivalent to finding an [algorithm for truth](https://yewtu.be/embed/leX541Dr2rU?local=true), so no one would be able to agree that it actually worked.
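
The context quoted in the hunk above summarizes the post's expected-value argument. As a minimal illustrative sketch (not part of the patch or the post), here is that arithmetic with the predictor's accuracy made an explicit, assumed parameter `p`; the post's infallible predictor is the `p = 1` case, which reproduces the quoted $1,000,000 vs. $1,000 figures.

```python
# Illustrative sketch only: Newcomb expected values as a function of
# predictor accuracy p (an assumed parameter; the post uses p = 1).

def expected_values(p: float) -> tuple[float, float]:
    """Return (one_box_ev, two_box_ev) for a predictor with accuracy p."""
    # One-boxer: box B holds $1,000,000 only if the prediction was
    # "one box", which happens with probability p.
    one_box = 1_000_000 * p
    # Two-boxer: always gets the $1,000 in box A; box B is full only if
    # the predictor erred (probability 1 - p).
    two_box = 1_000 * p + 1_001_000 * (1 - p)
    return one_box, two_box

for p in (1.0, 0.99, 0.5005, 0.5):
    one, two = expected_values(p)
    print(f"p={p}: one-box ${one:,.0f} vs two-box ${two:,.0f}")
```

Setting the two expected values equal gives the crossover 2,000,000p = 1,001,000, i.e. p ≈ 0.5005, so under these assumptions one-boxing has the higher expected value for any merely better-than-chance predictor, not only the infallible one the post describes.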