A week or two ago I received a message from (KP) about the multiverse, which sparked off the exchange below. I have for some time been intending to produce some blog posts on issues relating to philosophical materialism, and this initial exchange seems as good a starting point as any. The entire exchange has been set in italics. Checking later, I realised that I had misremembered Hume in my original reply, as I will explain below. But the exchange does serve to open up the subject. Lower down I will go on to say why arguments about multiverses may be a) relevant to the materialist theory of history, and b) crucial to future technology.

The initial exchange

Hello Mr Cockshott. A friend of mine recently asked me about the multiverse hypothesis, and I am not adequately equipped to answer the question. I have posited that a chain multiverse already seems to be entailed by QM. Additionally, we observe that there are frequently many moons around a planet, frequently many planets around a star, always many stars in a galaxy, always many galaxies in a cluster, always many clusters in the cosmos. And the cosmos appears to stretch to vast quantities of such contents, possibly infinitely. The probability that this pattern continues is therefore higher than 50/50: we should expect there to be many universes. This would not be the case on a single star-planet system. Therefore the actual pattern we observe increases the probability of multiverse theory by exactly as much as observations not exhibiting that pattern would have decreased it. However, I am not a physicist, and could not adequately answer this question. There is so far no argument for or against the multiverse hypothesis. I would like to know what your thoughts on this subject are. (KP)

I think you have to define what a multiverse or many worlds hypothesis would amount to. In one sense they clearly are based on observation. So the recognition that stars are suns, along with an assumption about the infinity of space, was enough for Hume’s version of it. Before stars were seen to be suns there was no chance of formulating this. But it is not apparent that you need multiple levels of nested structure to justify the hypothesis. The QM multiverse is of a quite different order. Humean ones depend on configuration possibilities growing with the cube of the linear dimension of the universe. Quantum configurations grow exponentially with the number of multi-state observables, and since these will, by the holographic principle, be proportional to the square of the linear dimensions of the space considered, the number of configurations, or their basis, grows as the exponential of the square of the linear dimensions. This is much faster than Humean multiverses grow.
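In rough symbols, and purely as a sketch of the scaling just described (c is a schematic constant fixed by the holographic bound, L the linear size of the region considered):

```latex
% A sketch of the two scaling claims; c is a schematic constant,
% L the linear size of the region considered.
\begin{align*}
  N_{\mathrm{Hume}} &\;\propto\; L^{3}
      && \text{configurations grow with the volume of space,}\\
  N_{\mathrm{QM}}   &\;\sim\; e^{\,c L^{2}}
      && \text{basis states grow exponentially in the bounding area.}
\end{align*}
```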

Thanks for answering! I might be overstepping what I do know here, but perhaps what the many worlds would entail would depend on the initial state. Quantum mechanics entails that some things are more likely than other things; if whatever the fundamental structure is that causes quantum mechanics to work didn't exist, then some things would not be more likely than other things. Everything would be as likely as anything else. Because the only way to make one thing more likely than something else is for something to exist that makes the one thing more likely than the other. In some cases, logical necessity can do that. But not in every case. The number of universes that exist, for example. There is no logical necessity for there to be only one universe. Or any other specific number of them. And if nothing exists to decide how many there will be, all possible outcomes are equally likely. There being just one universe will be just as likely as there being seven of them, or a million of them, or any other number of them. And if we count all configurations, then smaller numbers actually become less probable than larger ones.

If I were to try to put it under an epistemological argument, I would assume the initial state to be that of nothing except that which is logically necessary. We have no knowledge of any law of physics that would prevent there being other universes (and no means of seeing if there are none), so the probability that there are is exactly what that probability would be if the number of universes that exist were selected at random. Of all the possible conditions that could obtain (no universe; just one universe; two universes; three; four; etc., all the way to infinitely many universes), that there would be only one universe is only one out of infinitely many alternatives. This entails it is effectively 100 percent certain an infinite multiverse exists because the probability of there being only one universe is then 1/INFINITY, which is approximately 0 percent. In fact, for any finite number n of universes, the probability of having only that many or less is n/INFINITY, which is still approximately 0 percent. If the probability of having any finite number of universes is always approximately 0 percent, then the probability that there is an infinite multiverse is approximately 100 percent. This further entails we have no need to explain why there is something rather than nothing: as then nothing (a state of exactly zero universes) also has a probability of 1/INFINITY, which is again [approximately] 0 percent. The probability that there will be something rather than nothing is therefore approximately 100 percent. This conclusion can only be averted if something were proved to exist that would change any of these probabilities, thereby making nothing (or only one thing) more likely than any other logical possibility. But we know of no such thing. Therefore, so far as we must conclude given what we actually know, there is an infinite multiverse, and there must necessarily be an infinite multiverse (both to a certainty of approximately 100 percent).

I am not sure if I am making any sense; if I am not appropriately answering your question or have misunderstood, feel free to tell me. (KP)

Well, there are lots of questions in what you write. What would be the ‘logic’ you use for this? What are its axioms and rules of inference? I doubt it can be done that way; you are almost certainly going to use natural language for part of your argument, and as such will depend on assumptions about meanings that you carry over from natural language. It also depends on what multiple universes mean. Are they an abstract logical possibility hypothesized by a philosopher like Hume, or are they real? The quantum ones are not hypothetical, since we observe their effects in interference or in quantum algorithms.

Continuation

A week or so later this discussion continues:

Hello Mr Cockshott! It has been quite some time, I apologise for the long delay. I hope you do not hold it against me, and I certainly hope I have not prevented you from beginning your blog posts on the topic of materialist conceptions of the universe!

One of the first axioms I work from is from the Principia Mathematica, which is that if all logically necessary truths, being logically necessary, always exist (i.e. there can never be a state of being, not even a state of nothing, where logically necessary truths are not true), then mathematical truths always exist. (KP)

What does the statement about logically necessary truths actually mean? A logically necessary truth is one which can, by a finite series of deductions or transformations, be derived from an initial set of axioms and rules of inference. The rules of inference themselves can be thought of as additional axioms of the formal system. A materialist take on this is to say that any physical system that is TM (Turing machine) equivalent, provided with the axioms and the ‘logical truth’ or theorem, and supplied with a suitable energy source, will terminate in a finite time having derived the theorem from the axioms.

But this is just a statement about physical systems that are TM equivalent. The mathematical truths do not ‘always exist’, since such deduction systems do not always physically exist, but you can make the prediction that, having made the derivation once, an equivalent derivation can be made if another equivalent deduction system, with a source of energy etc., is set in motion.
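As a toy illustration of that prediction – a minimal sketch only, with the axioms, rules and theorem as arbitrary placeholders – any TM-equivalent machine supplied with energy can grind out the derivation by brute-force forward chaining and halt once the theorem is reached:

```python
# A toy TM-equivalent deduction system: axioms plus simple implication rules,
# run by brute-force forward chaining until the target theorem is derived.
def forward_chain(axioms, rules, theorem, max_steps=1000):
    """axioms: set of propositions; rules: set of (premise, conclusion) pairs."""
    derived = set(axioms)
    for _ in range(max_steps):          # a finite 'energy budget'
        new = {concl for prem, concl in rules if prem in derived} - derived
        if not new:
            break                       # nothing further is derivable
        derived |= new
        if theorem in derived:
            return True                 # terminated having derived the theorem
    return theorem in derived

# Placeholder example: from axiom A and rules A->B, B->C we derive the 'theorem' C.
print(forward_chain({"A"}, {("A", "B"), ("B", "C")}, "C"))   # True
```

Run it again on another machine with the same axioms and rules and it reaches the same theorem: that is the only sense in which the truth ‘always exists’.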

Another is the axiom regarding probability. Essentially, you can assign probabilities to the integers in such a way as to make them equally likely, when you use infinitesimals of a corresponding cardinality. This is actually a geometric problem, not a problem in number theory. The question is: can you divide an area into infinitely many areas of equal size? The answer is yes, and remains yes for any cardinality of infinity.

For example, you can divide a space into an aleph-0 number of equally sized sections if each section has a proportional area equal to the aleph-0 infinitesimal. Likewise an aleph-aleph number of sections, if each section is an aleph-aleph infinitesimal. And so on for all cardinalities. Mathematically, {aleph-0} x {aleph-0-corresponding infinitesimal} = 1 (just as any {n/m} x {m/n} = 1). And yet every single slice is exactly the same size (the same exact infinitesimal), therefore exactly the same probability (which is produced by the ratio of that section's area to the area of the whole space).

We might not have the mathematical means to determine what the probability is of any particular number being selected. We can see it would be an infinity of infinities close to zero, and as there is no highest infinity, the actual probability cannot be defined. But that does not mean it does not exist (its existence, after all, can be logically proven, using the proofs of infinitesimal cardinality). Point taken on language. Ignore numbers as digits (that is an artificial human language). Instead, think of elements and sets. There is a set of all natural numbers, and there are sets that can be placed in one-to-one correspondence with that set. Those sets have a quantity aleph-0. But there are also sets that can't be placed in one-to-one correspondence with that set, sets that have more elements in them than all the natural numbers (more than infinitely many numbers, even though there is no such thing as a highest number, yet there is still a greater quantity of elements than that; this is weird; yet it is formally proven to be true). Aleph-1, for example, has at least one more element in it than the set of all natural numbers. Aleph-2 has at least one more element in it than Aleph-1. And so on.

If a set can have one more element in it, and a universe can be an element (and it can), then there are sets of universes to which no natural number can be assigned, i.e. the number of universes in that set is greater than any natural number. Again, even though there is not supposed to be a number higher than the highest number, because there is no highest number, yet Cantor's proof proves that in fact there is. This is hard for human meat brains to wrap their heads around, but there it is. It's nevertheless true. I hope I have cleared some points up. It is quite late here, so perhaps I have missed some things or I have not made myself clear and it came out muddled. If that is the case, do tell. (KP)

You made me go back and reread the first two chapters of Kolmogorov’s Foundations of Probability on this. What I understand from his argument is that one can go from a theory of probabilities over finite fields to a set of theorems about infinite probability fields (Borel fields) – for example, those given by distribution functions over the real number line – but that these only have a mathematical existence, in the sense of being steps in proofs about things one wants to demonstrate for finite probability fields.

Remark: Even if the sets (events) A of ℑ can be interpreted as actual and (perhaps only approximately) observable events, it does not, of course, follow from this that the sets of the extended field Bℑ reasonably admit of such an interpretation. Thus there is the possibility that while a field of probability (ℑ, P) may be regarded as the image (idealized, however) of actual random events, the extended field of probability (Bℑ, P) will still remain merely a mathematical structure.

Thus sets of Bℑ are generally merely ideal events to which nothing corresponds in the outside world. However, if reasoning which utilizes the probabilities of such ideal events leads us to a determination of the probability of an actual event of ℑ, then, from an empirical point of view also, this determination will automatically fail to be contradictory. (Kolmogorov, Foundations of Probability, pp. 17-18)

You also have to consider what these probabilities in the axiomatic theory of probability are: they are real numbers associated with sets of what he calls atomic events. One can reasonably easily go from his theory of finite probability fields to actual events in the real world – he gives examples like sequences of coin tosses. Once you go on to infinite fields and distribution functions, the relationship becomes looser. They are real numbers associated with each subset of infinite sets. For example, in quantum mechanics one defines probability density functions of events occurring over space: photons impinging on a sensor, for example. The maths is in terms that can be mapped into probabilities over a real-valued space of position. But any sensor is of finite resolution: grains in a silver iodide emulsion, pixels on an electronic sensor. So the real-valued probability density function is just an algorithmic step towards predicting what is actually a finite probability distribution.
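A minimal sketch of that last point (the Gaussian ‘beam profile’ and the pixel counts are arbitrary placeholders): the real-valued density only ever meets experiment after being integrated over finite pixels, giving a finite probability field of the kind Kolmogorov starts from.

```python
# Sketch: a continuous probability density over position is, in practice,
# reduced to a finite probability distribution over sensor pixels.
import numpy as np

def pixel_probabilities(density, x_min, x_max, n_pixels, samples_per_pixel=100):
    """Integrate a 1-D density over equal-width pixels (crude trapezoidal sums)."""
    edges = np.linspace(x_min, x_max, n_pixels + 1)
    probs = []
    for a, b in zip(edges[:-1], edges[1:]):
        xs = np.linspace(a, b, samples_per_pixel)
        probs.append(np.trapz(density(xs), xs))
    probs = np.array(probs)
    return probs / probs.sum()          # renormalise over the finite field

# Placeholder density: a Gaussian 'beam profile' falling on the sensor.
gaussian = lambda x: np.exp(-x**2 / 2.0)
p = pixel_probabilities(gaussian, -4.0, 4.0, n_pixels=16)
print(p.round(3), p.sum())              # a finite, 16-outcome distribution
```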

What are the probabilities you are talking about? You say you can associate a probability with each natural number such that the probability is the same for every number and the sum of the probabilities is 1. This would meet the requirements of being something that could be modelled by Kolmogorov’s axioms, but it would be only one of an infinite number of such associations between the real numbers and the natural numbers. I could, for example, say that we set up an association such that the sum of the probabilities of all even numbers = 1 and the sum of the probabilities of all odd ones = 0. Or we could say that the sum of the probabilities of primes = 1, or that the sum of the probabilities of the numbers 200 to 1207 = 1.

The association between the numbers and the reals is arbitrary provided that the axioms are met and probability sums to 1.

So, yes, you can hypothesize an equal probability for all natural numbers, or any other distribution function, but these are just abstract associations between sets that you hypothesise. They do not become actual real probabilities unless there is a physical process that generates the events. But you cannot set up any physical process that will select any natural number 0..∞ with equal probability. Physical processes that select any integer in a finite range with equal probability can exist, and games of chance are founded on them, but there is no roulette wheel that will output any natural number with equal frequency.
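The contrast can be made concrete with a sketch (the roulette-wheel analogy and the coin-toss sampler are illustrative only): a physical or pseudo-random process can select uniformly from any finite range, but any realisable process defined over all the naturals has to weight some numbers more heavily than others.

```python
# Uniform selection over a finite range is physically realisable (a roulette wheel);
# a realisable sampler over all the naturals cannot give them equal weight.
import random

def roulette(n):
    """Select any integer 0..n-1 with equal probability."""
    return random.randrange(n)

def coin_toss_natural():
    """Select a natural number by repeated coin tosses: P(k) = 2**-(k+1).
    Every natural can occur, but the probabilities are far from equal."""
    k = 0
    while random.random() < 0.5:
        k += 1
    return k

print([roulette(37) for _ in range(5)])            # finite, equiprobable outcomes
print([coin_toss_natural() for _ in range(5)])     # unbounded, but biased towards small k
```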

If we go on to your conclusion: ‘If a set can have one more element in it, and a universe can be an element (and it can), then there are sets of universes to which no natural number can be assigned, i.e. the number of universes in that set is greater than any natural number.’

This seems to me to be an invalid argument even at the formal level. Consider the following rewriting of your argument:

If a set can have one more element in it, and x can be an element (and it can), then there are sets of xs to which no natural number can be assigned, i.e. the number of xs in that set is greater than any natural number.

Now substitute the number 7 for x:

If a set can have one more element in it, and 7 can be an element (and it can), then there are sets of 7s to which no natural number can be assigned, i.e. the number of 7s in that set is greater than any natural number.

So you would be claiming that there is an infinite number of 7s in the set of natural numbers, which is obviously wrong.

I think the false step is to go from saying that ‘a set can have one more element in it’, to implicitly inferring that ‘all sets can have one more element in them’. The first is highly questionable, the second clearly wrong. In standard set theory, the set formed by the addition of an additional element is a new set of which the original set is a subset.
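A trivial illustration of that last point, with Python's frozenset standing in for a set in the mathematical sense:

```python
# Adding an element does not enlarge the original set; it yields a new set
# of which the original is a proper subset.
s = frozenset({1, 2, 3})
t = s | {4}                 # 'the set with one more element in it'
print(s < t, s, t)          # True: s is a proper subset of the new set t
```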

Reflection

Figure 1: Hubble deep field image, 3000 galaxies in a tiny area of sky.

On reflection I checked, and Hume uses the many worlds concept in two distinct ways. One is the variant I attributed to him above:

Where is the difficulty in conceiving, that the same powers or principles, whatever they were, which formed this visible world, men and animals, produced also a species of intelligent creatures, of more refined substance and greater authority than the rest? That these creatures may be capricious, revengeful, passionate, voluptuous, is easily conceived; nor is any circumstance more apt, among ourselves, to engender such vices, than the licence of absolute authority. And in short, the whole mythological system is so natural, that, in the vast variety of planets and worlds, contained in this universe, it seems more than probable, that, somewhere or other, it is really carried into execution.( Natural History, N 11.1, Bea 65)

Here Hume is calling on the infinity of space to justify the assumption that there will be other worlds with similar or more sophisticated beings than ourselves. But he also resorts to the infinity of time:

But were this world ever so perfect a production, it must still remain uncertain, whether all the excellencies of the work can justly be ascribed to the workman. If we survey a ship, what an exalted idea must we form of the ingenuity of the carpenter, who framed so complicated, useful and beautiful a machine? And what surprise must we feel, when we find him a stupid mechanic, who imitated others, and copied an art, which, through a long succession of ages, after multiplied trials, mistakes, corrections, deliberations, and controversies, had been gradually improving? Many worlds might have been botched and bungled, throughout an eternity, ere this system was struck out: much labour lost: many fruitless trials made: and a slow, but continued improvement carried on during infinite ages in the art of world-making. In such subjects, who can determine, where the truth; nay, who can conjecture where the probability, lies; amidst a great number of hypotheses which may be proposed, and a still greater number, which may be imagined? (Dialogues D 5.7, KS 167)

We now have the Hubble deep field image, which reveals over 3000 distant galaxies in a tiny area, 1/24,000,000 of the whole sky. This implies that were Hubble to scan the whole sky it would reveal of the order of 72 billion galaxies. With 100 billion stars per galaxy and several planets per star, we are talking of the order of 10^22 worlds within that portion of the universe visible to current astronomy. That seems plenty to allow for creatures as ‘capricious, revengeful, passionate and voluptuous’ as us. It even leaves, Hume argued, plenty of space for worlds on which the myths of the Olympian gods are true.
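The arithmetic behind those figures, as a rough back-of-envelope calculation (the ‘several planets per star’ factor is, of course, only a guess):

```python
# Back-of-envelope count of 'worlds' implied by the Hubble deep field.
galaxies_in_field   = 3_000
fields_in_whole_sky = 24_000_000        # the deep field covers ~1/24,000,000 of the sky
stars_per_galaxy    = 100e9
planets_per_star    = 3                  # 'several' -- an assumed placeholder

galaxies = galaxies_in_field * fields_in_whole_sky
worlds   = galaxies * stars_per_galaxy * planets_per_star
print(f"{galaxies:.1e} galaxies, ~{worlds:.0e} worlds")   # ~7.2e10 galaxies, ~2e22 worlds
```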

Figure 2: Hubble Extreme Deep Field image sorted by galaxy ages.

Hume’s infinity of time may no longer be accepted by most astronomers, but the Hubble image looks back in time for billions of years, which leaves plenty of time for worlds to have been botched and bungled, by our human standards. Indeed, for most of the Earth’s existence it was ‘bungled’: uninhabitable for mammals, with an unbreathable atmosphere.

But this is all still the sense of worlds as planets, and the discussion is of multiple planets which may be habitable and may host clades analogous to mammals, primates and hominids. This is different from the many universes postulated in the Everett, or ‘many worlds’, interpretation of quantum mechanics.

Everett’s many worlds are assumed to coexist in the same space and time and, in subtle ways, to interact.

Figure 3: The naive presentation of the Everett interpretation (left) and the double-slit experiment (right).

The naive account of the Everett interpretation is that the universe splits into two every time a quantum process yields a binary choice. So in the famous thought experiment with a cat, poison and a radioactive source, the universe is seen as splitting into two, one with the cat alive and one with it dead. This would seem to imply an exponential growth in the number of universes as time passes, but the picture in Fig. 3 is a simplification for presentation purposes.

Think about it in more detail and it is evident that the dead-cat outcome is itself the confluence of many different microscopic timelines. The radioactive source will contain multiple atoms, any one of which could have decayed to produce the Geiger counter click. So long as there is no process that could distinguish which of the atoms decayed, the multiple different atomic decays yield the same macroscopic state of the universe. But down at the atomic level, there are millions of different micro timelines that have ended in the same place.
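A sketch of that counting point (the number of atoms is arbitrary): many distinct micro-histories, differing only in which atom decayed, coarse-grain to the one macroscopic record ‘cat dead’.

```python
# Many micro-timelines, one macro-outcome: if any one of N atoms decaying
# triggers the same Geiger click, the 'dead cat' macrostate is the
# coarse-graining of N distinct micro-histories.
from collections import Counter

N_ATOMS = 1_000_000
micro_histories = (("atom %d decayed" % i, "cat dead") for i in range(N_ATOMS))
macro = Counter(macro_state for _, macro_state in micro_histories)
print(macro)    # Counter({'cat dead': 1000000}): one macrostate, a million micro-histories
```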

The splitting and merging of parallel universes applies even to a simple two-slit interference experiment. In the Everett view the emitted electrons each pass through both slits, interfere, and produce the pattern. But given a sensitive enough detector and a slow enough flux of electrons, each impact building up the interference pattern can be associated with a definite pixel position on the screen. So we have the process: the electron is emitted, it passes through each slit (two timelines), then the timelines interfere and produce a sensor reading at one position – merged into a single timeline.

Viewed at another level of detail, the Everett theory would say that, since there is a multiplicity of pixels on the screen, many of which have a non-zero chance of being activated by the electron (the interference pattern being stochastic), each of the different pixel activations that could take place corresponds to a distinct timeline with an associated amplitude. The merging/splitting process is messy here because we are dealing with very complex systems with high degrees of freedom. In quantum computing proposals like Grover’s algorithm[2] the process is more tightly controlled.
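A minimal sketch of that picture (idealised point slits, arbitrary units): each pixel's probability comes from adding the amplitudes of the two ‘timelines’ through the slits and squaring.

```python
# Two-slit sketch in the amplitude picture: one 'timeline' through each slit,
# amplitudes summed, then squared to give a finite probability per pixel.
import numpy as np

wavelength  = 1.0
slit_sep    = 5.0
screen_dist = 100.0
pixels      = np.linspace(-40, 40, 81)              # a finite-resolution sensor

r1 = np.hypot(screen_dist, pixels - slit_sep / 2)   # path length via slit 1
r2 = np.hypot(screen_dist, pixels + slit_sep / 2)   # path length via slit 2
amp = np.exp(2j * np.pi * r1 / wavelength) + np.exp(2j * np.pi * r2 / wavelength)

probs = np.abs(amp) ** 2
probs /= probs.sum()                                 # each electron lands at exactly one pixel
print(probs.argmax(), probs.max())                   # the most likely pixel and its probability
```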

In this, you search a database for the value that meets some criterion. The criterion is a binary test that can be applied to a number – for example, does a particular symmetric decryption key yield a valid decryption of a message? (A visualisation of Grover’s algorithm is available online.)

If you have 64-bit keys, running through all the possible keys would require 2^64 tests on a conventional machine. Using a quantum computer with Grover’s algorithm you can find the correct answer in about 2^32 tests.

The algorithm first sets 64 qubits into an equal superposition of states, so that each bit has a 50% probability of being true or false; in the underlying Hilbert space the two states of each qubit have amplitudes of 1/√2, which when squared gives a probability of 1/2.

If you have 64 qubits prepared this way, then the Hilbert space you are working in has 2^64 dimensions. In the Everett interpretation, you are doing your calculations using 2^64 parallel universes. The amplitude in each of these universes, or along each of these dimensions, is thus 1/√(2^64) = 1/2^32, so each has a probability of 1/2^64. As the calculation progresses, you gradually drain amplitude from all of the universes other than the one containing the right answer. When you are finished, you do a measurement operation and, with a high probability, you will have the correct decryption key. At the moment 64-bit keys are beyond practical experimental work, but the algorithm was demonstrated for smaller numbers of qubits more than a decade ago[1].
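A small classical simulation gives a feel for how the amplitude is drained towards the answer. This is a sketch, not an implementation of any particular experiment: 8 qubits instead of 64 (a 64-qubit state vector would need 2^64 amplitudes), and the ‘correct key’ is an arbitrary placeholder.

```python
# Classical state-vector sketch of Grover's algorithm for a small key search.
import numpy as np

n_qubits = 8
N        = 2 ** n_qubits
target   = 0b10110101                     # the 'correct key' -- an arbitrary placeholder

amp = np.full(N, 1 / np.sqrt(N))          # equal superposition: amplitude 1/sqrt(N) each

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))   # about sqrt(N) iterations, here 12
for _ in range(iterations):
    amp[target] *= -1                     # oracle: flip the sign of the marked state
    amp = 2 * amp.mean() - amp            # diffusion: reflect every amplitude about the mean

print(iterations, (np.abs(amp[target]) ** 2).round(4))   # probability of the right answer ~1
```

After those 12 iterations the marked state carries almost all the probability, which is the amplitude-draining described above.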

It is worth stopping to think about this.

  1. The first point is that we are reaching the stage where we can practically manipulate and make use of multiple parallel universes to achieve results.
  2. The second point is that the multiple universes interact by interference. The boost in the amplitude of the ‘answer’ with each iteration is because the sum of the amplitudes of all of the sub-universes has to be 1. We are able to get the sub-universes to gradually add their amplitude to the one that we want – the one in which the computer comes up with the right answer.
  3. Next, we can see that the operation of such a computation has the effect of placing us, the user of the computer, in just that universe in which the answer is found. The old Machist Copenhagen interpretation, in which the observer collapses a wave function, was already repudiated by Everett, but in the case of quantum computation it becomes totally implausible. Why should the observer be able to so collapse the wave function that it delivers a semantically meaningful answer – one that allows us to decode a text, for example, or to extract prime factors with the Shor algorithm[4]? The Copenhagen interpretation already gave a somewhat mystical role to the observer; in this case, holding to Copenhagen implies a miraculous clairvoyance on the observer’s part. Under the Everett interpretation, the operation of the quantum computer fades out all the observers who are in universes in which no answer is given. The universes which give the wrong answer, along with all the observers in them, fade away to vanishingly low probability.
  4. The final point is the exponential vastness of the quantum multiverse. The number of dimensions of the machine’s Hilbert space is exponential in the number of qubits. One style of quantum computer uses single ions to store each qubit[3]. Each ion you add to the computation doubles the dimensionality of the Hilbert space, and thus the number of parallel universes used in the computation. But if you go from the highly controlled environment of the computer to just an ordinary collection of atoms which have multiple excitations or spin states, it is clear that under the Everett interpretation the number of parallel universes must be at least exponential in the number of atoms. So a mere gram of matter must coexist in something of the order of e^(N_A) universes, where N_A ≈ 6×10^23 is Avogadro’s number. That is why I said earlier that the quantum multiverse grows exponentially with the volume of the individual universes. Much, much faster than the cosmological many worlds that Hume relied on. A short calculation after this list gives a feel for these magnitudes.
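As a rough illustration of that final point (taking, purely for the sake of the count, two accessible states per particle):

```python
# How fast the Hilbert-space dimension grows: each two-state particle added
# doubles it, so a gram-scale lump of matter dwarfs any astronomical count.
import math

ions = 64
print(f"{ions} trapped ions -> dimension 2**{ions} = {2 ** ions:.2e}")

N_A = 6.022e23                            # Avogadro's number: of the order of the atom count in a gram of matter
digits = N_A * math.log10(2)              # decimal digits of 2**N_A
print(f"2**N_A has about {digits:.2e} decimal digits")
```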

To be continued.

References

[1]
K-A Brickman, PC Haljan, PJ Lee, M Acton, L Deslauriers, and C Monroe. Implementation of Grover's quantum search algorithm in a scalable system. Physical Review A, 72(5):050306, 2005.

[2]
L.K. Grover. Quantum mechanics helps in searching for a needle in a haystack. Physical Review Letters, 79(2):325-328, 1997.

[3]
Christopher Monroe and Jungsang Kim. Scaling the ion trap quantum processor. Science, 339(6124):1164-1169, 2013.

[4]
Thomas Monz, Daniel Nigg, Esteban A Martinez, Matthias F Brandl, Philipp Schindler, Richard Rines, Shannon X Wang, Isaac L Chuang, and Rainer Blatt. Realization of a scalable Shor algorithm. Science, 351(6277):1068-1070, 2016.

 

