Deploying Turing to see if we have free will
I'm sorry, Dave ...
Alan Turing didn't just lay the theoretical basis for modern computing and help save Britain in World War Two by defeating German cryptography: one of his most famous results also provides a theoretical basis for understanding free will, according to MIT quantum theorist Seth Lloyd.
Given the number of biologists – particularly in the neurosciences – who have decided humans don't have free will, it's an interesting idea. The nub of Lloyd's idea is in this passage: “decision-making systems … can not in general predict the outcome of their decision-making process”.
That means, he writes in this paper on arXiv, that: “The inability of the decider to predict her decision beforehand holds whether the decision-making process is deterministic or not”.
So where does Turing come into the picture? Via what's known as the “halting problem”, which is stated on Wikipedia as “given a description of an arbitrary computer program, decide whether the program finishes running or continues to run forever”. Turing proved that no general algorithm exists to solve the problem for all possible program-input pairs.
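Turing's proof is a short diagonal argument, and it can be sketched in a few lines of code. The function names here (`halts`, `troublemaker`) are illustrative inventions, and the “oracle” is stubbed with a fixed answer so the demonstration actually terminates – a real, always-correct `halts` is exactly what Turing showed cannot exist:

```python
def halts(program, data):
    # Hypothetical universal halting oracle -- cannot exist in general.
    # Stubbed to always answer False ("will loop forever") so this
    # demonstration terminates; any fixed strategy fails the same way.
    return False

def troublemaker(program):
    # Does the opposite of whatever the oracle predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:        # oracle said "halts", so loop forever
            pass
    return "halted"        # oracle said "loops", so halt immediately

# Feed troublemaker its own description and compare prediction to reality.
prediction = halts(troublemaker, troublemaker)   # oracle: "will loop"
actual = troublemaker(troublemaker)              # ...but it halts
```

Whichever answer `halts` gives about `troublemaker`, the program does the opposite, so no implementation of the oracle can be correct on every input – which is the lever Lloyd applies to self-predicting deciders.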
The same, Lloyd argues, can even apply to human decision making:
“when a decider uses recursive reasoning to arrive at a decision, then:
- (a) No general technique exists to determine whether or not the decider will come to a decision at all (the halting problem).
- (b) If the decider is time-limited, then any general technique for determining the decider’s decision must sometimes take longer than the decider herself.
- (c) A computationally universal decider can not answer all questions about her future behavior.
- (d) A time-limited computationally universal decider takes longer to simulate her decision making process than it takes her to perform that process directly.”
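Points (b) and (d) can be made concrete with a toy model (my invented example, not one from Lloyd's paper): a simulator that predicts a decider's output in the general way – by replaying its steps – must do at least as much work as the decider itself, so its “prediction” arrives no sooner than the real decision:

```python
def decider(n):
    """A trivial decision process: grind through n steps, then decide."""
    steps = 0
    while n > 0:
        n -= 1
        steps += 1
    return "yes", steps

def simulator(n):
    """Predicts decider(n) the only general way available: by executing
    the process itself, plus bookkeeping overhead per replayed step."""
    decision, steps = decider(n)   # must run the whole process
    overhead = steps               # at minimum, record each step replayed
    return decision, steps + overhead

decision, cost = decider(1000)           # the real decision: 1000 steps
prediction, sim_cost = simulator(1000)   # correct, but costs 2000 steps
```

The simulator is never wrong, but it is never faster either – the prediction is only available after doing the deciding, which is Lloyd's point about time-limited deciders.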
The process of recursive reasoning, Lloyd argues, is something which in theory could be simulated by a computer – even though simulating an entire human is somewhat beyond computer science today.
Because the decision-making process can be simulated, he says, a Turing test can be contrived to answer the question “do I have free will?”
- Am I a decider? (Even a thermostat counts as a decider.)
- Do I make my decisions using recursive reasoning? (Could some kind of Turing machine, even one beyond today's technology, be theorised to model the decision-making?)
- Can I model and simulate – at least partially – my own behavior and that of other deciders?
- Can I predict my own decisions beforehand?
The last question, Lloyd notes, is something of a trick question: if you answered yes to all four questions, your answer to the last question is a lie; if you answer “yes, yes, yes and no”, then “you are likely to believe you have free will”.
And now that free-will advocates are tossing their hats in the air, Lloyd delivers the kicker to his Turing test: “Indeed, as computers and operating systems become more powerful, they become unpredictable – even imperious – in ways that are all too human.”
In other words, if a computer or even an iPhone answered the four questions the right way, as it probably would, then it would believe it has just as much decision-making free will as you have. ®