How to Solve Newcomb’s “Paradox”


There is only one solution

A godlike being (let’s call her Omega) presents you with two boxes. Box A is open and contains €1,000. Box B, however, is closed; Omega tells you it contains either €1,000,000 or nothing at all. You have a choice: take both boxes, or take only box B.

Easy, right? You don’t know what box B contains, but whatever it contains, taking both boxes (“two-boxing”) will get you €1,000 more than taking only box B (“one-boxing”).

But wait, there’s a catch: Omega has predicted what you would do. Of course, you don’t know her prediction; Omega, however, does tell you that she put the €1,000,000 in box B if and only if she predicted that you would one-box.

Let’s assume Omega can perfectly predict your choice in this dilemma, and has played this game 1,000 times before, always predicting accurately: every one-boxer found €1,000,000 in box B, while every two-boxer earned only €1,000. To win as much money as possible, should you one-box or two-box?
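To keep the payoffs straight, here are all four combinations (as we’ll see, a perfect predictor makes the mismatched ones impossible):

```
                Omega predicted one-box    Omega predicted two-box
                (box B: €1,000,000)        (box B: empty)
You one-box     €1,000,000                 €0
You two-box     €1,001,000                 €1,000
```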

The “paradox”

The supposed paradox in this problem — called Newcomb’s problem — comes from the fact that there are two arguments that both seem reasonable, but lead to opposite conclusions.

The “expected utility” argument

Historically, one-boxers have earned more than two-boxers: because Omega always predicts a player’s choice accurately, every one-boxer has made €1,000,000 while every two-boxer has earned only €1,000. Based on this, you’re more likely to get a big payoff if you one-box; this argument therefore suggests one-boxing.
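To make this concrete, here’s a minimal sketch (my own illustration, not part of the original problem) of the expected payoffs when Omega predicts with accuracy p:

```python
# Expected payoff of each choice when Omega predicts with accuracy p.
# Omega fills box B with €1,000,000 iff she predicts one-boxing.

def expected_payoffs(p: float) -> dict:
    one_box = p * 1_000_000                # B is full only when Omega correctly foresaw one-boxing
    two_box = 1_000 + (1 - p) * 1_000_000  # box A's €1,000, plus box B only on a misprediction
    return {"one-box": one_box, "two-box": two_box}

print(expected_payoffs(1.0))   # {'one-box': 1000000.0, 'two-box': 1000.0}
print(expected_payoffs(0.99))  # {'one-box': 990000.0, 'two-box': 11000.0}
```

Note that even a merely very accurate predictor (p = 0.99 here) makes one-boxing the higher-expected-value choice.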

The “strategic dominance” argument

Strategic dominance might sound complicated, but we already saw this argument in the introduction to the problem. It goes as follows: when you make your choice, Omega has already put either €1,000,000 or nothing in box B. The contents of this box are now fixed. Whatever box B contains, taking both boxes A and B will always get you €1,000 more than taking box B alone. This argument therefore says you should two-box.
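The same reasoning as a quick sketch, again just for illustration: hold box B’s (already fixed) contents constant and compare the two choices:

```python
# Dominance: whatever box B already contains, two-boxing nets €1,000 more.
for box_b in (0, 1_000_000):     # the two possible, already-fixed contents of box B
    one_box = box_b              # you take only box B
    two_box = 1_000 + box_b      # you take box A (€1,000) as well
    print(f"box B = €{box_b}: two-boxing beats one-boxing by €{two_box - one_box}")
```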

The solution

The strategic dominance argument fails to incorporate the link between the player and Omega: Omega predicts what the player will do. This essentially means that a situation where you two-box and find €1,000,000 in box B is impossible: Omega will have predicted you would two-box and kept box B empty. One-boxing and not getting the €1,000,000 is impossible as well, because of that same link.

So it’s the expected utility argument that wins? No, that argument just “got lucky”: it reaches the right conclusion, but via the wrong reasoning. The argument rests on a statistical relation (a correlation) between one-boxing and getting €1,000,000, but as you might know, correlation doesn’t necessarily imply causation. The advent of cold weather might cause both increased glove sales and higher frostbite rates, but from this you shouldn’t conclude not to buy gloves, even though glove buying is correlated with frostbite. Similarly, while there is a correlation between one-boxing and earning €1,000,000, that alone is not enough reason to one-box.

So what is the correct reasoning then? The crucial point in Newcomb’s problem is the predictive power of Omega. This power isn’t magic: you have it too when you predict what a calculator will answer if you feed it “2 + 2”. And that’s the point: you know how to add numbers, and therefore know (on a functional level, if not on a technical level) how the calculator arrives at its answer “4”. Similarly, Omega can predict your decisions, and therefore seems to have a functional model of how you make decisions.

If you and your calculator are both perfect at adding numbers, there’s no way for your calculator to give a different answer than you predicted on any given addition problem. Likewise, Omega is perfect at predicting your choice: you can’t possibly decide to two-box while she predicted you’d one-box, or vice versa. In a sense, what you decide is what Omega predicted you would decide: two-boxing means Omega predicted you would two-box, and therefore means there’s nothing in box B, while one-boxing means there’s €1,000,000 in box B. Therefore, the only correct choice is to one-box.
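One way to see why this is the only consistent outcome is a toy simulation (my own sketch, which assumes a deterministic player): Omega predicts by running the very same decision procedure that later produces your choice, so prediction and choice can never come apart:

```python
def decide() -> str:
    """The player's (deterministic) decision procedure."""
    return "one-box"   # change to "two-box" and rerun to see the other branch

def fill_box_b(player_model) -> int:
    """Omega sets box B by running a functional model of the player."""
    return 1_000_000 if player_model() == "one-box" else 0

box_b = fill_box_b(decide)   # Omega predicts before you choose...
choice = decide()            # ...but the same procedure generates your actual choice
payout = box_b if choice == "one-box" else 1_000 + box_b
print(f"{choice} earns €{payout}")   # one-box earns €1000000
```

Editing decide() to return “two-box” empties box B before you ever open it: the €1,001,000 outcome the dominance argument banks on simply never occurs.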

If you liked this analysis, consider visiting my publication How to Build an ASI. For now, thanks for reading!

