Philip Goff
Aeon
In the past 40 or so years, a strange fact about our Universe gradually made itself known to scientists: the laws of physics, and the initial conditions of our Universe, are fine-tuned for the possibility of life. It turns out that, for life to be possible, the numbers in basic physics – for example, the strength of gravity, or the mass of the electron – must have values falling in a certain range. And that range is an incredibly narrow slice of all the possible values those numbers can have. It is therefore incredibly unlikely that a universe like ours would have the kind of numbers compatible with the existence of life. But, against all the odds, our Universe does.
Here are a few examples of this fine-tuning for life:
- The strong nuclear force (the force that binds together the elements in the nucleus of an atom) has a value of 0.007. If that value had been 0.006 or less, the Universe would have contained nothing but hydrogen. If it had been 0.008 or higher, the hydrogen would have fused to make heavier elements. In either case, any kind of chemical complexity would have been physically impossible. And without chemical complexity there can be no life.
- The physical possibility of chemical complexity is also dependent on the masses of the basic components of matter: electrons and quarks. If the mass of a down quark had been greater by a factor of 3, the Universe would have contained only hydrogen. If the mass of an electron had been greater by a factor of 2.5, the Universe would have contained only neutrons: no atoms at all, and certainly no chemical reactions.
- Gravity seems a momentous force but it is actually much weaker than the other forces that affect atoms, by a factor of about 10^36. If gravity had been only slightly stronger, stars would have formed from smaller amounts of material, and consequently would have been smaller, with much shorter lives. A typical sun would have lasted around 10,000 years rather than 10 billion years, not allowing enough time for the evolutionary processes that produce complex life. Conversely, if gravity had been only slightly weaker, stars would have been much colder and hence would not have exploded into supernovae. This also would have rendered life impossible, as supernovae are the main source of many of the heavy elements that form the ingredients of life.
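The gravity comparison above can be checked with a rough back-of-the-envelope calculation. As a sketch, the ratio of the electrostatic repulsion to the gravitational attraction between two protons comes out near 10^36; the constants below are standard physical values, rounded:

```python
# Ratio of electrostatic repulsion to gravitational attraction
# between two protons. The distance between them cancels out,
# since both forces fall off as 1/r^2.
k = 8.988e9      # Coulomb constant, N m^2 / C^2
e = 1.602e-19    # elementary charge, C
G = 6.674e-11    # gravitational constant, N m^2 / kg^2
m_p = 1.673e-27  # proton mass, kg

ratio = (k * e**2) / (G * m_p**2)
print(f"electric/gravitational force ratio: {ratio:.2e}")  # ~1.2e36
```

The distance-independence of the ratio is why a single number like 10^36 can express gravity's relative weakness.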
Some take the fine-tuning to be simply a basic fact about our Universe: fortunate perhaps, but not something requiring explanation. But like many scientists and philosophers, I find this implausible. In The Life of the Cosmos (1999), the physicist Lee Smolin has estimated that, taking into account all of the fine-tuning examples considered, the chance of life existing in the Universe is 1 in 10^229, from which he concludes:
In my opinion, a probability this tiny is not something we can let go unexplained. Luck will certainly not do here; we need some rational explanation of how something this unlikely turned out to be the case.
The two standard explanations of the fine-tuning are theism and the multiverse hypothesis. Theists postulate an all-powerful and perfectly good supernatural creator of the Universe, and then explain the fine-tuning in terms of the good intentions of this creator. Life is something of great objective value; God in Her goodness wanted to bring about this great value, and hence created laws with constants compatible with its physical possibility. The multiverse hypothesis postulates an enormous, perhaps infinite, number of physical universes other than our own, in which many different values of the constants are realised. Given a sufficient number of universes realising a sufficient range of the constants, it is not so improbable that there will be at least one universe with fine-tuned laws.
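The probabilistic logic behind the multiverse hypothesis can be made concrete with a toy calculation. The numbers here are purely illustrative, not drawn from physics: even if each individual universe has only a minuscule chance p of being fine-tuned, the probability that at least one of N independent universes is fine-tuned approaches certainty as N grows.

```python
import math

def prob_at_least_one(p, n):
    """Probability that at least one of n independent universes
    is fine-tuned, given each has probability p of being so.
    Uses log1p to stay numerically accurate when p is tiny."""
    return 1.0 - math.exp(n * math.log1p(-p))

p = 1e-12  # illustrative chance that a single universe is fine-tuned
for n in (1e6, 1e12, 1e15):
    print(f"N = {n:.0e}: P(at least one fine-tuned) = "
          f"{prob_at_least_one(p, n):.6f}")
```

With a million universes the probability is still negligible; with a thousand times more universes than 1/p, it is effectively 1. This is why the hypothesis needs the ensemble to be enormous.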
Both of these theories are able to explain the fine-tuning. The problem is that, on the face of it, they also make false predictions. For the theist, the false prediction arises from the problem of evil. If one were told that a given universe was created by an all-loving, all-knowing and all-powerful being, one would not expect that universe to contain enormous amounts of gratuitous suffering. One might not be surprised to find it contained intelligent life, but one would be surprised to learn that life had come about through the gruesome process of natural selection. Why would a loving God who could do absolutely anything choose to create life that way? Prima facie theism predicts a universe that is much better than our own and, because of this, the flaws of our Universe count strongly against the existence of God.
Turning to the multiverse hypothesis, the false prediction arises from the so-called Boltzmann brain problem, named after the 19th-century Austrian physicist Ludwig Boltzmann, who first formulated the paradox of the observed universe. Assuming there is a multiverse, you would expect our Universe to be a fairly typical member of the universe ensemble, or at least a fairly typical member of the universes containing observers (since we couldn’t find ourselves in a universe in which observers are impossible). However, in The Road to Reality (2004), the physicist and mathematician Roger Penrose has calculated that in the kind of multiverse most favoured by contemporary physicists – based on inflationary cosmology and string theory – for every observer who observes a smooth, orderly universe as big as ours, there are 10^(10^123) who observe a smooth, orderly universe that is just 10 times smaller. And by far the most common kind of observer would be a ‘Boltzmann brain’: a functioning brain that has by sheer fluke emerged from a disordered universe for a brief period of time. If Penrose is right, then the odds of an observer in the multiverse theory finding itself in a large, ordered universe are astronomically small. And hence the fact that we are ourselves such observers is powerful evidence against the multiverse theory.
Neither of these are knock-down arguments. Theists can try to come up with reasons why God would allow the suffering we find in the Universe, and multiverse theorists can try to fine-tune their theory such that our Universe is less unlikely. However, both of these moves feel ad hoc, fiddling to try to save the theory rather than accepting that, on its most natural interpretation, the theory is falsified. I think we can do better.
In the public mind, physics is on its way to giving us a complete account of the nature of space, time and matter. We are not there yet of course; for one thing, our best theory of the very big – general relativity – is inconsistent with our best theory of the very small – quantum mechanics. But it is standardly assumed that one day these challenges will be overcome and physicists will proudly present an eager public with the Grand Unified Theory of everything: a complete story of the fundamental nature of the Universe.
In fact, for all its virtues, physics tells us precisely nothing about the nature of the physical Universe. Consider Isaac Newton’s theory of universal gravitation:

F = G(m₁m₂/r²)
The variables m₁ and m₂ stand for the masses of the two objects whose gravitational attraction we want to work out; F is the gravitational attraction between those two masses; G is the gravitational constant (a number we know from observation); and r is the distance between m₁ and m₂. Notice that this equation doesn’t provide us with definitions of what ‘mass’, ‘force’ and ‘distance’ are. And this is not something peculiar to Newton’s law. The subject matter of physics is the basic properties of the physical world: mass, charge, spin, distance, force. But the equations of physics do not explain what these properties are. They simply name them in order to assert equations between them.
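The point about prediction without definition can be illustrated by simply applying the formula. Here is Newton's law computed for the Earth and Moon, using standard textbook values for the masses and mean distance: the equation delivers a precise number while saying nothing about what mass or force intrinsically are.

```python
G = 6.674e-11   # gravitational constant, N m^2 / kg^2
m1 = 5.972e24   # mass of the Earth, kg
m2 = 7.342e22   # mass of the Moon, kg
r = 3.844e8     # mean Earth-Moon distance, m

# Newton's law of universal gravitation: F = G * m1 * m2 / r^2.
# The calculation relates the quantities to one another without
# defining what 'mass', 'force' or 'distance' really are.
F = G * (m1 * m2) / r**2
print(f"Earth-Moon gravitational attraction: {F:.2e} N")  # ~2.0e20 N
```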
If physics is not telling us the nature of physical properties, what is it telling us? The truth is that physics is a tool for prediction. Even if we don’t know what ‘mass’ and ‘force’ really are, we are able to recognise them in the world. They show up as readings on our instruments, or otherwise impact on our senses. And by using the equations of physics, such as Newton’s law of gravity, we can predict what’s going to happen with great precision. It is this predictive capacity that has enabled us to manipulate the natural world in extraordinary ways, leading to the technological revolution that has transformed our planet. We are now living through a period of history in which people are so blown away by the success of physical science, so moved by the wonders of technology, that they feel strongly inclined to think that the mathematical models of physics capture the whole of reality. But this is simply not the job of physics. Physics is in the business of predicting the behaviour of matter, not revealing its intrinsic nature.
Given that physics tells us nothing of the nature of physical reality, is there anything we do know? Are there any clues as to what is going on ‘under the bonnet’ of the engine of the Universe? The English astronomer Arthur Eddington was the first scientist to confirm general relativity, and also to formulate the Boltzmann brain problem discussed above (albeit in a different context). Reflecting on the limitations of physics in The Nature of the Physical World (1928), Eddington argued that the only thing we really know about the nature of matter is that some of it has consciousness; we know this because we are directly aware of the consciousness of our own brains:
We are acquainted with an external world because its fibres run into our own consciousness; it is only our own ends of the fibres that we actually know; from those ends, we more or less successfully reconstruct the rest, as a palaeontologist reconstructs an extinct monster from its footprint.
We have no direct access to the nature of matter outside of brains. But the most reasonable speculation, according to Eddington, is that the nature of matter outside of brains is continuous with the nature of matter inside of brains. Given that we have no direct insight into the nature of atoms, it is rather ‘silly’, argued Eddington, to declare that atoms have a nature entirely removed from mentality, and then to wonder where mentality comes from. In my book Consciousness and Fundamental Reality (2017), I developed these considerations into an extensive argument for panpsychism: the view that all matter has a consciousness-involving nature.
There are two ways of developing the basic panpsychist position. One is micropsychism, the view that the smallest parts of the physical world have consciousness. Micropsychism is not to be equated with the absurd view that quarks have emotions or that electrons feel existential angst. In human beings, consciousness is a sophisticated thing, involving subtle and complex emotions, thoughts and sensory experiences. But there seems nothing incoherent in the idea that consciousness might exist in some extremely basic forms. We have good reason to think that the conscious experience of a horse is much less complex than that of a human being, and the experiences of a chicken less complex than those of a horse. As organisms become simpler, perhaps at some point the light of consciousness suddenly switches off, with simpler organisms having no experience at all. But it is also possible that the light of consciousness never switches off entirely, but rather fades as organic complexity reduces, through flies, insects, plants, amoebae and bacteria. For the micropsychist, this fading-while-never-turning-off continuum further extends into inorganic matter, with fundamental physical entities – perhaps electrons and quarks – possessing extremely rudimentary forms of consciousness, to reflect their extremely simple nature.
However, a number of scientists and philosophers of science have recently argued that this kind of ‘bottom-up’ picture of the Universe is outdated, and that contemporary physics suggests that in fact we live in a ‘top-down’ – or ‘holist’ – Universe, in which complex wholes are more fundamental than their parts. According to holism, the table in front of you does not derive its existence from the sub-atomic particles that compose it; rather, those sub-atomic particles derive their existence from the table. Ultimately, everything that exists derives its existence from the ultimate complex system: the Universe as a whole.
Holism has a somewhat mystical association, in its commitment to a single unified whole being the ultimate reality. But there are strong scientific arguments in its favour. The American philosopher Jonathan Schaffer argues that the phenomenon of quantum entanglement is good evidence for holism. Entangled particles behave as a whole, even if they are separated by such large distances that it is impossible for any kind of signal to travel between them. According to Schaffer, we can make sense of this only if, in general, we are in a Universe in which complex systems are more fundamental than their parts.
If we combine holism with panpsychism, we get cosmopsychism: the view that the Universe is conscious, and that the consciousness of humans and animals is derived not from the consciousness of fundamental particles, but from the consciousness of the Universe itself. This is the view I ultimately defend in Consciousness and Fundamental Reality.
The cosmopsychist need not think of the conscious Universe as having human-like mental features, such as thought and rationality. Indeed, in my book I suggested that we think of the cosmic consciousness as a kind of ‘mess’ devoid of intellect or reason. However, it now seems to me that reflection on the fine-tuning might give us grounds for thinking that the mental life of the Universe is just a little closer than I had previously thought to the mental life of a human being.
The Canadian philosopher John Leslie proposed an intriguing explanation of the fine-tuning, which in Universes (1989) he called ‘axiarchism’. What strikes us as so incredible about the fine-tuning is that, of all the possible values the constants in our laws might have had, they ended up with exactly those values required for something of great value: life, and ultimately intelligent life. If the laws had not, against huge odds, been fine-tuned, the Universe would have had infinitely less value; some say it would have had no value at all. Leslie proposes that this proper understanding of the problem points us in the direction of the best solution: the laws are fine-tuned because their being so leads to something of great value. Leslie is not imagining a deity mediating between the facts of value and the cosmological facts; the facts of value, as it were, reach out and fix the values directly.
It can hardly be denied that axiarchism is a parsimonious explanation of fine-tuning, as it posits no entities whatsoever other than the observable Universe. But it is not clear that it is intelligible. Values don’t seem to be the right kind of things to have a causal influence on the workings of the world, at least not independently of the motives of rational agents. It is rather like suggesting that the abstract number 9 caused a hurricane.
But the cosmopsychist has a way of rendering axiarchism intelligible, by proposing that the mental capacities of the Universe mediate between value facts and cosmological facts. On this view, which we can call ‘agentive cosmopsychism’, the Universe itself fine-tuned the laws in response to considerations of value. When was this done? In the first 10^-43 seconds, known as the Planck epoch, our current physical theories, in which the fine-tuned laws are embedded, break down. The cosmopsychist can propose that during this early stage of cosmological history, the Universe itself ‘chose’ the fine-tuned values in order to make possible a universe of value.
Making sense of this requires two modifications to basic cosmopsychism. Firstly, we need to suppose that the Universe acts through a basic capacity to recognise and respond to considerations of value. This is very different from how we normally think about things, but it is consistent with everything we observe. The Scottish philosopher David Hume long ago noted that all we can really observe is how things behave – the underlying forces that give rise to those behaviours are invisible to us. We standardly assume that the Universe is powered by a number of non-rational causal capacities, but it is also possible that it is powered by the capacity of the Universe to respond to considerations of value.
How are we to think about the laws of physics on this view? I suggest that we think of them as constraints on the agency of the Universe. Unlike the God of theism, this is an agent of limited power, which explains the manifest imperfections of the Universe. The Universe acts to maximise value, but is able to do so only within the constraints of the laws of physics. The beneficence of the Universe does not much reveal itself these days; the agentive cosmopsychist might explain this by holding that the Universe is now more constrained than it was in the unique circumstances of the first split second after the Big Bang, when currently known laws of physics did not apply.
Ockham’s razor is the principle that, all things being equal, more parsimonious theories – that is to say, theories with relatively few postulations – are to be preferred. Is it not a great cost in terms of parsimony to ascribe fundamental consciousness to the Universe? Not at all. The physical world must have some nature, and physics leaves us completely in the dark as to what it is. It is no less parsimonious to suppose that the Universe has a consciousness-involving nature than that it has some non-consciousness-involving nature. If anything, the former proposal is more parsimonious insofar as it is continuous with the only thing we really know about the nature of matter: that brains have consciousness.
Having said that, the second and final modification we must make to cosmopsychism in order to explain the fine-tuning does come at some cost. If the Universe, way back in the Planck epoch, fine-tuned the laws to bring about life billions of years in its future, then the Universe must in some sense be aware of the consequences of its actions. This is the second modification: I suggest that the agentive cosmopsychist postulate a basic disposition of the Universe to represent the complete potential consequences of each of its possible actions. In a sense, this is a simple postulation, but it cannot be denied that the complexity involved in these mental representations detracts from the parsimony of the view. However, this commitment is arguably less profligate than the postulations of the theist or the multiverse theorist. The theist postulates a supernatural agent while the agentive cosmopsychist postulates a natural agent. The multiverse theorist postulates an enormous number of distinct, unobservable entities: the many universes. The agentive cosmopsychist merely adds to an entity that we already believe in: the physical Universe. And most importantly, agentive cosmopsychism avoids the false predictions of its two rivals.
The idea that the Universe is a conscious mind that responds to value strikes us as a ludicrously extravagant cartoon. But we must judge the view not on its cultural associations but on its explanatory power. Agentive cosmopsychism explains the fine-tuning without making false predictions; and it does so with a simplicity and elegance unmatched by its rivals. It is a view we should take seriously.
This essay was made possible through the support of a grant from Templeton Religion Trust to Aeon and a separate grant from the Templeton-funded ‘Pantheism and Panentheism’ project to the author. The opinions expressed in this publication are those of the author(s) and do not necessarily reflect the views of Templeton Religion Trust.
Philip Goff is associate professor in philosophy at the Central European University in Budapest. His research interest is in consciousness and he blogs at Conscience and Consciousness.