This is a follow-on from my last post, Discoursing the Multiverse.

In that post, I argued that advances in quantum computing make the Everett interpretation of quantum mechanics more plausible. In part this is unsurprising, since the whole field was set off by Deutsch, a firm supporter of Everett, in his groundbreaking Royal Society paper back in 1985.

In a quantum computer, we harness parallel universes to do calculations faster than we could in a single one. Quantum computers require us to maintain qubits in a highly isolated state for the duration of the computation, so that the submanifold of the multiverse spanned by the qubits does not interact with the information in the vast number of other universes that coexist with each unique configuration of the qubits.
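As a rough illustration, here is a toy numpy sketch (the register size and the library are my choices, not anything from the argument above): a register of n qubits is described by a state vector over 2^n basis configurations, and measuring it leaves us correlated with just one of them.

```python
import numpy as np

n_qubits = 3
dim = 2 ** n_qubits          # 8 basis configurations

# A Hadamard on every qubit spreads amplitude evenly across all configurations.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H_all = H
for _ in range(n_qubits - 1):
    H_all = np.kron(H_all, H)

state = np.zeros(dim)
state[0] = 1.0               # start in |000>
state = H_all @ state        # uniform superposition, amplitude 1/sqrt(8) each

# Measurement correlates us with exactly one of the configurations.
probs = np.abs(state) ** 2
outcome = np.random.choice(dim, p=probs)
print(f"measured configuration {outcome:03b}, probabilities {np.round(probs, 3)}")
```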

[Figure qmimage: the qubits of the quantum computer isolated from the bits, shown as X X X X …, that span the rest of the multiverse]

Whilst the computation is going on, the qubits in the machine are kept away from the bits spanning the rest of the multiverse. There are constraints on the bits that I have shown as X X X X …: they have to represent a universe that is compatible with the existence of the quantum computer in the first place. But, given that, there are huge numbers of microscopically different universes that could have created the machine.

Once the machine finishes and produces an answer, we measure the qubits and in the process establish correlations between these and some of the bits shown as Xs.

Suppose the computer was being used to crack a code used for secure banking transactions. In the configurations in which the computer breaks the code, the Xs will diverge from those in which it does not: the hacker succeeds in draining somebody’s bank account. In those where the code is not broken, the owner keeps the money.

This would be true even if the quantum computer just produced a random bitstring by some quantum-driven process. There will be a configuration in which the random number is the true cipher key and vastly more in which the key is wrong. All the quantum machine does is change the odds that the random number will be the correct one: it biases the random number generator.
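To see the scale of the bias, here is a toy sketch. It assumes a Grover-style amplitude amplification, which is my choice of mechanism rather than anything specified above, and an invented 8-bit key space: an unbiased machine would output the right key with probability 1/256, while after roughly (π/4)√N iterations the probability is close to one.

```python
import numpy as np

n_bits = 8                      # toy key space of 2^8 = 256 keys
N = 2 ** n_bits
correct_key = 173               # hypothetical secret being searched for

# Uniform superposition: an unbiased quantum RNG outputs the right key with probability 1/N.
amps = np.full(N, 1 / np.sqrt(N))

# Each Grover iteration: flip the sign of the correct key's amplitude (oracle),
# then reflect every amplitude about the mean (diffusion).
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    amps[correct_key] *= -1                 # oracle
    amps = 2 * amps.mean() - amps           # diffusion

print(f"unbiased probability of the right key: {1 / N:.4f}")
print(f"after {iterations} iterations:         {amps[correct_key] ** 2:.4f}")
```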

The example of key cracking shows that whilst the universes initially spanned by the qubits are, in physical terms, very close to one another, the divergence after measurement can be significant on the macroscale. I have given an engineered example, in the literal sense that very precise engineering would be required to set up such a computing machine. But it has more general implications. Does the many-worlds hypothesis allow for macroscale changes in the normal course of events?

Are there universes in which Napoleon was never emperor of France, universes in which he was never born?

I think we have to say yes. Our DNA can be damaged by radiation, the result of random quantum processes. So there will be universes in which the egg that later grew into Napoleon incurred a fatal mutation before cell division started. More generally, the genetic structure of current populations is the result of an interaction between Darwinian selection and random mutation. We do not know that other, slightly different genetic structures could not have arisen and been selected for.

The classic Darwinian position is that, given the same environmental pressures, the same set of features will tend to be selected out of the random noise of mutation. So regardless of which of our ancestors in Europe first developed lactose tolerance, and regardless of how often the mutation arose, lactose-tolerant people were bound to make up the majority of the population in cattle-raising areas. Convergence means that regardless of which land vertebrate took to the seas, whether reptile, bird or mammal, they would all end up streamlined, with forelimbs adapted as paddles.
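A toy one-locus selection calculation illustrates this insensitivity to initial conditions (the selective advantage and starting frequencies below are invented for illustration, not estimates for lactose tolerance): whatever the frequency at which a beneficial allele first appears, the same environmental pressure drives it towards predominance.

```python
def allele_frequency(p0, s=0.05, generations=300):
    """Deterministic one-locus selection: each generation p -> p(1+s) / (1 + s*p)."""
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (1 + s * p)
    return p

for p0 in (0.0001, 0.001, 0.01):
    print(f"starting frequency {p0:.4f} -> after 300 generations {allele_frequency(p0):.3f}")
```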

The classic Marxian position in the Napoleon case is that, whether or not he had been born, the new social organisation corresponding to the bourgeois revolution in France would have given the revolutionary armies a decisive advantage over the old monarchies (Wintringham, Tom, and John Blashford-Snell. Weapons and Tactics. Penguin, 1943). If we take historical materialism at face value, it is stating something very like the classic Darwinian position: the shape of the end result is relatively insensitive to random variations in initial conditions. In the current multi-universe formulation, it means that a large portion of the alternative possible histories will be very similar. The individual personalities will be different, but the broad features of societies will stay the same.

We do have to abandon a simple unilinear model of historical change like this:

[Figure mmode: unilinear model of historical change]

for one based on Markov processes like this:

[Figure modes: Markov model representation of transitions between forms of economy. Examples of the labelled transitions: (a) Mongolia; (b) Germany transition to feudalism; (c) slave economy West Africa; (d) Chinese revolution; (e) Britain; (f) East Germany; (g) Roman republic; (h) late West Roman Empire; (i) Russia.]

where there are multiple possible base states with transition probabilities (the small letters) linking them. The problem is that if we want to parameterise our model of historical materialism, we do not have access to all possible histories. We only have the imperfectly known subset of histories compatible with our records. We have instances of many of these transitions, and we could hope to get rough estimates of the weights to be allocated to the arcs. But the estimates are subject to such noise from the small sample set that we have to be very cautious about predicting future developments.
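As a sketch of what that parameterisation would look like (the state names and transition counts below are hypothetical, not real historical data), one could put a Dirichlet posterior over each row of the transition matrix; with only a handful of recorded transitions, the credible intervals on the estimated probabilities come out very wide.

```python
import numpy as np

rng = np.random.default_rng(0)
states = ["nomadic", "slave", "feudal", "capitalist", "socialist"]

# Hypothetical counts of recorded transitions out of the "feudal" state.
observed = {"capitalist": 5, "socialist": 2, "feudal": 1}
counts = np.array([observed.get(s, 0) for s in states])

# Bayesian estimate of the transition row: Dirichlet(1 + counts) posterior.
samples = rng.dirichlet(1 + counts, size=10_000)

for i, s in enumerate(states):
    lo, hi = np.percentile(samples[:, i], [5, 95])
    mean = samples[:, i].mean()
    print(f"P(feudal -> {s:<10}) ~ {mean:.2f}  (90% interval {lo:.2f} to {hi:.2f})")
```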
