Because the available field-theoretical and string-theoretical techniques are not yet ready to answer the deepest questions about the beginning of the Universe, it is not surprising that this topic is inevitably a speculative one. But the papers about the particular question of the low-entropy beginning of the Universe seem to be more than just speculative.
I feel that most of the papers written about this topic - with authors including great names such as Roger Penrose, Lenny Susskind, Don Page, and many others - are somewhat irrational texts that try to solve non-existent problems and propose absurd scenarios that we simply know to be incorrect and whose starting point seems to be some ideologically-driven confusion. It is often difficult to figure out whether certain ideas in these papers were proposed seriously or as a satire. The papers usually disagree with each other in details because they draw different boundaries between serious statements and jokes. But there are so far no sharp boundaries in this business. ;-)
Second law of thermodynamics
Let me first try to explain why I think that there is no real paradox to talk about.
A perpetuum mobile of the second kind is impossible. This gadget would be able to cool down an object and transform all the "saved" heat into work. If this machine could be constructed, we might simultaneously cool down the Earth, stop global warming, and satisfy all energy demands of mankind. ;-)
The pessimistic hypothesis that it is impossible to create such a machine has survived for centuries and it has been reincarnated into a mathematical statement called the second law of thermodynamics. The entropy - a quantity that measures the amount of disorder of a macroscopic system - never decreases. It usually increases.
Is that a fundamental law of Nature? Well, not quite. In fact, if you realize that matter is composed of a finite number of atoms or other building blocks, you can see that the statement is not even quite true because the entropy is a wiggly function of time if you measure it accurately enough. Sometimes it goes up, sometimes it goes down. Only if you average the entropy over a large enough number of degrees of freedom and a long enough time will you see that the average value increases.
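You can watch this wiggliness in a toy model. Here is a minimal simulation of the classic Ehrenfest urn (my own choice of illustration, not from the text above): N particles hop randomly between two halves of a box, the entropy is the log of the number of microstates, and although it fluctuates downward on many individual steps, its net trend from a low-entropy start is up.

```python
import math
import random

random.seed(0)

def entropy(n_left, n_total):
    """Boltzmann entropy S = ln(multiplicity) for n_left particles
    in the left half of a box (binomial counting of microstates)."""
    return math.log(math.comb(n_total, n_left))

# Ehrenfest urn: at each step, one randomly chosen particle hops
# to the other half of the box.
N = 50
n_left = N            # start far from equilibrium: everything on the left
steps = 10_000
dips = 0              # steps on which the entropy momentarily decreased
s_prev = entropy(n_left, N)

for _ in range(steps):
    if random.random() < n_left / N:
        n_left -= 1   # a left-half particle hops right
    else:
        n_left += 1   # a right-half particle hops left
    s = entropy(n_left, N)
    if s < s_prev:
        dips += 1
    s_prev = s

print(f"entropy decreased on {dips} of {steps} steps")
print(f"final entropy {s_prev:.1f} vs initial 0.0")
```

Near equilibrium, roughly half of the steps lower the entropy, yet the coarse-grained trend from the ordered initial state is a large net increase, exactly the averaged statement above.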
You shouldn't be surprised that the entropy can't be exactly an increasing function of time because the time-evolution of a physical system is fully described by certain equations - either the classical equations of motion or something like the Schrödinger equation in quantum mechanics. Because these equations completely determine the future values of all quantities that can be used to make predictions, there is no room for additional laws. You can't add extra conditions or inequalities to the equations at time "T". Because you can't derive the inequality behind the second law of thermodynamics either - this inequality is time-asymmetric so you obviously can't get it from any time-reversal-symmetric or CPT-symmetric fundamental equations - you might think that the second law of thermodynamics is a strange addition to the laws of physics.
Can it be reconciled with the fundamental laws? You bet.
All facts that are conventionally described by the second law of thermodynamics can be derived from a simple assumption, namely a very low value of the entropy in the past. The dynamical equations of the Standard Model, electromagnetism, or Newtonian mechanics only describe the evolution of a physical system. They don't determine the initial conditions. If you want to predict what the Universe will look like, you need to know both the dynamical equations as well as the initial conditions. Reasonable initial conditions never contradict our beloved dynamical equations. These two packages of information work together quite happily.
In fact, we know that a low value of the entropy in the past is certainly true because it is the only way the second law of thermodynamics can be guaranteed for macroscopic systems. And we know that this law is true because it is responsible for the arrow of time. People are usually getting older but they are rarely getting younger. Eggs break but they never unbreak. The crackpots at Not Even Wrong are getting dumber but they never learn anything, and so on.
The arrow of time is a part of common sense
I think that most farmers, drivers, and supermodels realize that there is a difference between the future and the past. In other words, they informally know that the second law of thermodynamics is true. If you assume that they can go through the usual reasoning, it follows that most farmers, drivers, and supermodels realize that the Universe had to have a much lower entropy billions of years ago than its present value. This fact has been tested at least as carefully as the dynamical equations of physics. Can you count how many broken eggs people have seen? Have they seen an egg that unbroke? This experience can't be neglected or belittled because the arrow of time is a very important ingredient that underlies not only life but a decent behavior of the Cosmos at the macroscopic scales. What I really want to argue is the following:
Every single person who has ever argued that the low-entropy beginning of the Universe is a paradox neglects a huge body of observational evidence that is accessible to most supermodels, namely observations showing that there is a difference between the past and the future.
The processes such as natural selection are essential for life as we know it. I don't care whether the low-entropy beginning of the Universe is counted as the law #1 or law #7 by the labor unions of the creators of the universes but this rule is nevertheless essential for them. All papers that play with the idea that the entropy should be maximized now - and maybe always if you also want to preserve the time-translational symmetry of your laws - or that the entropy should be maximized at the moment of the Big Bang haven't appreciated the difference between the past and the future. They study a Universe that can't be ours. They study theories that have been very safely falsified.
The entropy is generally maximized in the future. The previous sentence is really equivalent to the statement that the entropy had to be lower in the past, as long as you agree that the past is the opposite of the future. ;-) The concept of thermal equilibrium applies to physical systems that have had enough time to reach such an equilibrium. This usually happens in the future. Indeed, cosmology used to have a paradox: it failed to explain why the CMB was so uniform. This paradox was only resolved by cosmic inflation. The idea of thermal equilibrium simply has no reason to be relevant for the moment when an object - or a universe - is created. Quite on the contrary: it is reasonable to expect that such an object or universe will not be in thermal equilibrium at the beginning. It could still happen that the equations of thermal quantum field theory will be useful for the Big Bang but there is no justifiable physical reason to expect that this should be the case.
I am sure that most blonde supermodels would agree with me on this point. The strawberry ice cream they like to add to their tomato soup has a different temperature at the beginning and only reaches the same temperature at the end. ;-)
Religious problems with this obvious fact
In the previous paragraphs, I argued that some people overestimate the importance of the time-reversal-symmetric microscopic laws relative to other important facts about the Cosmos we know - such as the difference between the future and the past. But there is one more reason why many people have a problem with this obvious insight. This reason is called religion:
- the idea of special initial conditions reminds us of God, and because these people really hate the idea of God, they also hate the idea of special initial conditions
It's one of many examples showing that a left-wing or anti-religious bias can lead to ideas that are just as unscientific as those produced by a right-wing or religious bias.
As a Christian atheist, I am no real believer but if special initial conditions were offered as a definition of God, then I would argue that God has been experimentally proven beyond any reasonable doubt. How does it all work in my picture?
As explained above, there is no contradiction whatsoever between the local dynamical laws on one side and low-entropy initial conditions on the other side. Moreover, both sets of laws have been verified by observations. The dynamical laws have been verified by observing falling apples - and the Tevatron - while the second law of thermodynamics has been verified by observing falling eggs.
Someone may have hyper-atheist preconceptions that make any special initial conditions look "extremely unlikely". Well, others may have other irrational reasons that make it "extremely unlikely" that the Universe obeys some dynamical laws at each moment of time. These two groups of people come from different spiritual camps but I think that both of them defend a demonstrably false thesis.
Defining ourselves
If I use the interpretation of quantum mechanics based on consistent histories - my favorite interpretation - and if there is ever a need to define "myself" for some subtle cosmological argument, I define "myself" not only by requiring a more or less right configuration of some atoms or cells that make up my brain or other parts of the body. I also require this configuration to satisfy the second law of thermodynamics at the macroscopic scale in the past.
The projection operator or the density matrix that imposes the condition that "I" am in the initial state simply doesn't allow any eggs to unbreak five minutes earlier. Equivalently, "I" am defined as a particular physical state that could have evolved from a low-entropy initial state. If you wish, "I" am defined not only by the specific synapses in my brain or my DNA code but also by the condition that "I" was created by God. Only experimentally verified notions of God are used in this condition. ;-)
This approach is legal, it makes sense, and it is kind of inevitable to avoid some obviously absurd conclusions. One of them is called:
Boltzmann's brain
What is Boltzmann's brain? Boltzmann's brain is the brain of a famous 19th century physicist who committed suicide 101 years ago. An exact replica of this brain has appeared as a random fluctuation in a carbon-rich gas cloud that has existed for a huge period of time. While it is unlikely that an exact copy of Boltzmann's brain suddenly appears in the lower troposphere by the end of this week, it becomes rather likely that such a brain materializes as a random thermal fluctuation if the gas has an exponentially long time to try all random arrangements of the atoms. If those gas clouds exist for effectively infinite periods of time, it is more or less inevitable that Boltzmann's brain with all of his memories and feelings will be created there at least once. And if you multiply the available time by 100, it becomes roughly 100 times more likely that Boltzmann's brain will occur as a random thermal fluctuation than by the conventional method that starts with God, a low-entropy Big Bang, evolution, and a wild party attended by Boltzmann's parents.
The gas cloud is an obsolete, 19th century technology to create this brain. In the 21st century, we usually expect Boltzmann's brain to be created inside a spaceship: this incremental improvement is a gift of the 1960s. The spaceship including the brain appears as a thermal fluctuation in de Sitter space - something that resembles our Universe and has a permanent nonzero temperature.
OK, is this the way brains are actually created?
I hope that most readers will agree that the answer is obviously "No". How do we know it is not the case? Well, your humble correspondent is defined by several conditions including a low-entropy initial state of his environment. So it is easy in my case: I can't exist as a thermal fluctuation. ;-) How sure can you be that you are not a thermal fluctuation either? Well, even if you don't know that you were created by God, you should notice that most things in your brain make more sense than what you would expect if your brain were a fluke. You remember many events that seem to follow from each other in agreement with certain highly restrictive laws.
If your brain were a random fluctuation, it would be likely that the memories stored in your brain would be random, too. That's why you would expect that they would make much less sense than your memories actually do. While some combinations of events in your brain may look illogical or absurd, most of them look sensible. This fact allows you to discard the hypothesis that our Universe, our planet, or you personally were created a short time ago as a thermal fluctuation. The more you know and the more carefully you organize the events in your memory, the more reliably you will be able to falsify the hypothesis that you are a thermal fluctuation. For average brains of physics PhDs, the confidence level with which you can eliminate the hypothesis that you're just a noise in de Sitter space is remarkably close to 100%.
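The confidence estimate can be made slightly more quantitative with a toy Bayesian calculation (the numbers below are my own illustrative assumptions, not from the text): suppose each of n independent remembered events would look coherent only with probability p if the brain were a random fluctuation, but with probability close to one if it arose from an ordinary low-entropy past.

```python
import math

# Toy Bayesian comparison of two hypotheses for the origin of a brain:
# "random fluctuation" vs. "ordinary low-entropy history".
n = 1000                  # independent, mutually consistent memories (hypothetical)
p = 0.5                   # chance a fluctuation-generated memory happens to fit
prior_log10_odds = 0.0    # agnostic prior: both hypotheses equally likely

# Each coherent memory multiplies the odds for the fluctuation
# hypothesis by p (and by ~1 for the ordinary-history hypothesis).
posterior_log10_odds = prior_log10_odds + n * math.log10(p)

print(f"posterior log10 odds for the fluctuation hypothesis: "
      f"{posterior_log10_odds:.0f}")   # → -301
```

Even with these very generous numbers, a mere thousand coherent memories disfavor the fluctuation hypothesis by a factor of order 10^{301}, which is why the confidence level is "remarkably close to 100%".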
Instead, you may figure out that you were created in processes that satisfied the second law of thermodynamics, i.e. processes that started from a low-entropy initial state. You know very well that it is true. Every theory and every interpretation of an existing theory that disagrees with the fact that people were created by processes that obeyed the second law of thermodynamics should be instantly discarded. If you care about observations, we have made more than enough of them to be sure that what we see can't be a random de Sitter fluctuation.
If something like Boltzmann's brain in a spaceship had ever appeared without any respect for the second law of thermodynamics, we would know that such an artificial physicist would almost certainly find the whole world confusing and his science would almost certainly lead nowhere. If we want to assume that his science could be getting increasingly sophisticated and coherent, we need to make ever stronger assumptions about the initial state of his spaceship and especially his brain. Such increasingly strong assumptions are increasingly unlikely and for a rational scientist, such an exercise should therefore be increasingly uninteresting.
Boltzmann's brain in the landscape
The strange ideas about Boltzmann's brain have reappeared in the context of the discussions about the landscape. I think that this very link is irrational, too. String theory describes details of ultrashort distance physics, parameters of effective field theories and a more detailed origin of their degrees of freedom, and small effects that are responsible for the conservation of information stored in an evaporating black hole, among several other things.
As far as we know, string theory has so far nothing to change about some general characteristics of Nature such as the second law of thermodynamics or the postulates of quantum mechanics. That's why I think that the colleagues who think that there exists an unsolved paradox of Boltzmann's brain would have held the same opinion even before the era of string theory.
Has string theory really changed anything about these matters? What about the large landscape?
Well, string theory may predict 10^{350} four-dimensional long-lived semi-realistic vacua. Does this fact change anything about the interpretation of the arrow of time? I think that the answer is a resounding "No". 10^{350} is not such a large number, after all. 10^{-350} is the probability that roughly 580 bases of a random piece of DNA will match another random string of the same length (each base matches with probability 1/4, and 4^{580} is about 10^{350}).
These are very short pieces of the DNA molecule, much shorter than the genome of any animal you know. While it is unlikely that these pieces will agree, the probability that a whole brain or a spaceship will be found in a certain state is much smaller still. The entropy of the Universe is about 10^{100}, a googol. That means that the probability that such a large Universe will be found in a particular microstate is about
- 1 / 10^{10^{100}},
an inverse googolplex. The probability that you find a spaceship in a certain state is slightly higher - the inverse probability is a somewhat lower number - but it is still morally closer to the inverse googolplex than to the inverse googol. These are completely different numbers.
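A quick sanity check on both comparisons (a sketch, using the orders of magnitude quoted above; the probabilities themselves are far too small to represent as floats, so everything is done on a log10 scale):

```python
from math import log10

# How many matching DNA bases give probability 10^-350?
# Each base matches independently with probability 1/4, so 4^(-n) = 10^(-350).
n_bases = 350 / log10(4)
print(f"{n_bases:.0f} matching bases correspond to 10^-350")   # → 581

# Landscape guess vs. microstate guess, as exponents of 10:
log10_p_vacuum = -350           # one random pick among 10^350 vacua
log10_p_microstate = -10**100   # one microstate of a system with entropy ~ googol

# Even the *exponents* differ by a factor of nearly 10^98:
print(log10_p_microstate / log10_p_vacuum)   # ≈ 2.9e97
```

The point of the last line: the landscape degeneracy does not even move the needle on a log scale compared to the improbability of a particular macroscopic microstate. A googol and a googolplex really are completely different numbers.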
The crackpots who have marketed themselves as "critics of string theory" have produced such an astonishing amount of publicity in 2006 largely because most of the people who listen to them, including the foam of the journalists, are mathematically retarded folks who have genuine problems in dealing with many simple mathematical concepts including large numbers such as googol. If you tell them about a large number that exceeds their individual ability to imagine it, they explode, start to behave like animals, and write breathtaking stupidities in the newspapers.
Science is compatible with large numbers
Of course, theoretical physicists should have no problem dealing with numbers such as 10^{100} or 10^{350}. The order of the largest sporadic finite group is nearly 10^{54}. Large numbers are common in mathematics and high-energy physics and having psychological problems with large numbers is indeed a severe limitation that prevents anyone from having rational opinions about any problem that depends on these numbers.
Once again, the probability that you choose the right vacuum in the landscape, even if you are selecting one vacuum randomly, is much closer to one than the probability that a macroscopic piece of the Universe such as the brain will be found in a particular state. This is why the degeneracy of the landscape doesn't change anything qualitative about the calculation of the probabilities associated with considerations about the second law of thermodynamics: the probabilities of having a particular microstate of a macroscopic system have always been much much smaller than the probability of choosing the right vacuum in the landscape.
In this sense, the degeneracy of the landscape means that string theory as we know it today is a theory of the whole Universe except for about 580 bases of a DNA molecule located in a random cave. This degeneracy is simply not a big deal.
Future of the second law of thermodynamics
In this text, I have explained why I think that it is irrational to think about theories where the humans appear as random fluctuations because we simply know that these theories are wrong. So why are they considered? Do they follow from the current picture of physics? I don't think so.
Occam's razor really tells us to forget about any causal relations between the properties of the Universe we observe and some hypothetical larger multiverse in which our Universe was created as a bubble. I find it obvious that the macroscopic details of the multiverse or the father-Cosmos of our Cosmos will have no impact on the macroscopic details of our Universe. This statement is morally equivalent to the observation that the incoming matter that has collapsed into a black hole won't influence the macroscopic nature of the Hawking radiation from the resulting black hole.
The microscopic states may be correlated but one needs a full definition of the quantum theory of gravity to say anything particular about such correlations. Also, I think that without such a complete theory, we shouldn't talk about hypothetical paradoxes that depend on assumptions about the full theory. If we only have a certain approximation, we should only talk about paradoxes that can be revealed by experiments that are described by this approximation. Most papers about the subject violate this principle, too. For example, the character and details of the de Sitter horizon entropy have no implications for the observable physics of de Sitter space as it can be described by currently available formalisms, which is why it's not right to argue that inflation revives some entropic paradoxes caused by the horizon entropy.
From a macroscopic or semiclassical viewpoint, we should simply eliminate all information about everything that happened "before" our Universe was created - even if it were created by a tunnelling process starting in a different vacuum. The idea that the properties of our Universe depend on some statistical features of different vacua in the multiverse - as recently advocated e.g. by Raphael Bousso - depends on the assumption that the different vacua included in the bubbling multiverse achieve some version of a universal probability measure - a counterpart of the Maxwell-Boltzmann distribution - that usually follows from thermal equilibrium.
But there is no thermal equilibrium in between the different vacua in the bubbling multiverse. The different bubbles are causally disconnected from each other in most cases. There is no known way the equilibrium with its privileged probability measure could be achieved in this setup. You shouldn't expect any equilibrium in between the worldlines either - the details of your calculations of the measure don't really matter because all of these calculations share a certain flawed idea.
There could exist an unknown way to achieve such an equilibrium ;-) but given that the hypothesis assuming such a special probability measure on the space of vacua doesn't clarify any known observations that can be made quantitatively and doesn't make our theories more mathematically robust either, I think that it is a textbook example of a hypothesis with an unnecessary theoretical superstructure that should be removed with the help of Occam's razor.
A remaining enigma: details of initial conditions
When you do so, you end up with a general picture that I find obviously correct. All semi-classical or macroscopic features of our Universe only depend on the dynamical laws plus some assumptions about the initial conditions of our Universe when it was Planckian. Nothing that occurred "before" our bubble was created can ever influence phenomena inside our Universe at the semiclassical level, and it shouldn't influence our semiclassical reasoning about our Universe either, because our reasoning should be based on the causal relations that actually exist as opposed to causal relations that probably can't exist.
From this viewpoint, the only question we should be trying to answer in this context is related to the initial conditions in our particular Universe when it was small. Inflation partially solves this problem because its final product is largely independent of the initial conditions before inflation. This fact is a good thing for anyone who actually wants to make predictions because inflation simply reduces the importance of the Planckian initial conditions for such predictions. Consequently, it reduces our uncertainty, too.
In effective field theory, the amplitudes are only known up to some small corrections that go to zero as the cutoff scale is sent to infinity. One may describe an analogous statement involving inflation: physics after inflation can be predicted without knowing the initial conditions up to errors that go to zero as the number of e-foldings becomes large.
In the effective field theory example, we of course know the full theory that allows us to get the exact results equivalent to an arbitrarily high cutoff scale: it is called string theory. In the case of cosmology, we need to know both string theory as well as the exact initial conditions in order to eliminate the uncertainties that fail to be diluted away by the inflationary era.
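To put a scale on the inflationary half of the analogy, here is a back-of-the-envelope sketch (my own toy numbers): a relic pre-inflationary inhomogeneity that dilutes like 1/a^2 - spatial curvature behaves this way - is suppressed by a factor e^{-2N} after N e-foldings, so the "cutoff errors" really do go to zero very fast.

```python
from math import exp, log10

def residual(n_efolds, initial=1.0):
    """Residual amplitude of a quantity diluting like 1/a^2 after
    n_efolds e-foldings (the scale factor a grows by e^n_efolds)."""
    return initial * exp(-2 * n_efolds)

# Suppression for a few illustrative numbers of e-foldings:
for N in (10, 30, 60):
    print(f"N = {N:2d}: suppressed to ~ 10^{log10(residual(N)):.0f}")
```

With the often-quoted N ~ 60 e-foldings, an order-one initial inhomogeneity is diluted to roughly one part in 10^{52}, which is why the post-inflationary predictions barely remember the Planckian initial data.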
Maybe we will never find a way to eliminate these uncertainties. Maybe we will. Maybe we will always have to rely on observations if our task is to deduce which vacuum is the right one. And maybe we will find powerful theoretical tools that can answer this question.
But I suspect that the knowledge of physics and statistics of other vacua that are only connected to our Universe via "miraculous" processes such as quantum tunneling will never help us to learn more about our Universe. For physicists who care about reality, the only privileged probability distribution on the space of vacua will always be a Kronecker-delta distribution located at our vacuum. Everything else is an error. Only the first astronauts who will be able to visit other vacua and survive the first-order phase transition may start to view the Kronecker-delta picture as an anachronism. If inflation is generic in the new bubbles, there will never be such astronauts, even in principle. ;-)
Meanwhile, the low-entropy beginning of the Universe will survive as a principle and the future physicists will probably explain it by a full understanding of the Hartle-Hawking wavefunction embedded into string theory: this wavefunction is a pure state and its entropy is therefore zero. Even though Hawking would certainly find this interpretation annoying, I think that the insights of Hawking and Hartle, when developed really quantitatively in the full theory, will eventually prove the existence of God as defined in the first paragraphs of this text. ;-)
And that's the memo.