
Physicists Want to Rebuild Quantum Theory from Scratch

SCIENTISTS HAVE BEEN using quantum theory for almost a century now, but embarrassingly they still don’t know what it means. An informal poll taken at a 2011 conference on Quantum Physics and the Nature of Reality showed that there’s still no consensus on what quantum theory says about reality—the participants remained deeply divided about how the theory should be interpreted.

Some physicists just shrug and say we have to live with the fact that quantum mechanics is weird. So particles can be in two places at once, or communicate instantaneously over vast distances? Get over it. After all, the theory works fine. If you want to calculate what experiments will reveal about subatomic particles, atoms, molecules and light, then quantum mechanics succeeds brilliantly.

But some researchers want to dig deeper. They want to know why quantum mechanics has the form it does, and they are engaged in an ambitious program to find out. It is called quantum reconstruction, and it amounts to trying to rebuild the theory from scratch based on a few simple principles.

If these efforts succeed, it’s possible that all the apparent oddness and confusion of quantum mechanics will melt away, and we will finally grasp what the theory has been trying to tell us. “For me, the ultimate goal is to prove that quantum theory is the only theory where our imperfect experiences allow us to build an ideal picture of the world,” said Giulio Chiribella, a theoretical physicist at the University of Hong Kong.

There’s no guarantee of success—no assurance that quantum mechanics really does have something plain and simple at its heart, rather than the abstruse collection of mathematical concepts used today. But even if quantum reconstruction efforts don’t pan out, they might point the way to an equally tantalizing goal: getting beyond quantum mechanics itself to a still deeper theory. “I think it might help us move towards a theory of quantum gravity,” said Lucien Hardy, a theoretical physicist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada.

The Flimsy Foundations of Quantum Mechanics

The basic premise of the quantum reconstruction game is summed up by the joke about the driver who, lost in rural Ireland, asks a passer-by how to get to Dublin. “I wouldn’t start from here,” comes the reply.

Where, in quantum mechanics, is “here”? The theory arose out of attempts to understand how atoms and molecules interact with light and other radiation, phenomena that classical physics couldn’t explain. Quantum theory was empirically motivated, and its rules were simply ones that seemed to fit what was observed. It uses mathematical formulas that, while tried and trusted, were essentially pulled out of a hat by the pioneers of the theory in the early 20th century.

Take Erwin Schrödinger’s equation for calculating the probabilistic properties of quantum particles. The particle is described by a “wave function” that encodes all we can know about it. It’s basically a wavelike mathematical expression, reflecting the well-known fact that quantum particles can sometimes seem to behave like waves. Want to know the probability that the particle will be observed in a particular place? Just calculate the square of the wave function (or, to be exact, the square of its absolute value), and from that you can deduce how likely you are to detect the particle there. The probability of measuring some of its other observable properties can be found by, crudely speaking, applying a mathematical function called an operator to the wave function.
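As a minimal illustration (not from the original article), here is how that rule looks numerically for a toy particle that can turn up at one of five positions; the complex amplitudes are invented purely for the example:

```python
import numpy as np

# Toy wave function for a particle that can be detected at five positions.
# The complex amplitudes are invented purely for this illustration.
psi = np.array([0.1 + 0.2j, 0.5 - 0.1j, 0.6 + 0.0j, 0.3 + 0.3j, 0.2 - 0.2j])

# Normalize so the probabilities sum to 1.
psi = psi / np.linalg.norm(psi)

# Born rule: the probability of finding the particle at position i is the
# squared absolute value of the wave function there.
probabilities = np.abs(psi) ** 2

print(probabilities)         # one probability per position
print(probabilities.sum())   # 1.0
```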

But this so-called rule for calculating probabilities was really just an intuitive guess by the German physicist Max Born. So was Schrödinger’s equation itself. Neither was supported by rigorous derivation. Quantum mechanics seems largely built of arbitrary rules like this, some of them—such as the mathematical properties of operators that correspond to observable properties of the system—rather arcane. It’s a complex framework, but it’s also an ad hoc patchwork, lacking any obvious physical interpretation or justification.

Compare this with the ground rules, or axioms, of Einstein’s theory of special relativity, which was as revolutionary in its way as quantum mechanics. (Einstein launched them both, rather miraculously, in 1905.) Before Einstein, there was an untidy collection of equations to describe how light behaves from the point of view of a moving observer. Einstein dispelled the mathematical fog with two simple and intuitive principles: that the speed of light is constant, and that the laws of physics are the same for two observers moving at constant speed relative to one another. Grant these basic principles, and the rest of the theory follows. Not only are the axioms simple, but we can see at once what they mean in physical terms.

What are the analogous statements for quantum mechanics? The eminent physicist John Wheeler once asserted that if we really understood the central point of quantum theory, we would be able to state it in one simple sentence that anyone could understand. If such a statement exists, some quantum reconstructionists suspect that we’ll find it only by rebuilding quantum theory from scratch: by tearing up the work of Bohr, Heisenberg and Schrödinger and starting again.

Quantum Roulette

One of the first efforts at quantum reconstruction was made in 2001 by Hardy, then at the University of Oxford. He ignored everything that we typically associate with quantum mechanics, such as quantum jumps, wave-particle duality and uncertainty. Instead, Hardy focused on probability: specifically, the probabilities that relate the possible states of a system with the chance of observing each state in a measurement. Hardy found that these bare bones were enough to get all that familiar quantum stuff back again.

Lucien Hardy, a physicist at the Perimeter Institute, was one of the first to derive the rules of quantum mechanics from simple principles.
GABRIELA SECARA, PERIMETER INSTITUTE FOR THEORETICAL PHYSICS

In quantum mechanics, however, a particle can exist not just in distinct states, like the heads and tails of a coin, but in a so-called superposition—roughly speaking, a combination of those states. In other words, a quantum bit, or qubit, can be not just in the binary state of 0 or 1, but in a superposition of the two.

But if you make a measurement of that qubit, you’ll only ever get a result of 1 or 0. That is the mystery of quantum mechanics, often referred to as the collapse of the wave function: Measurements elicit only one of the possible outcomes. To put it another way, a quantum object commonly has more options for measurements encoded in the wave function than can be seen in practice.
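A small simulation sketch (again illustrative, not from the article) makes the point concrete: however the amplitudes are chosen, each individual measurement returns only 0 or 1, and the superposition shows up only in the statistics of many identically prepared runs.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# A qubit in a superposition a|0> + b|1>; the amplitudes are chosen for illustration.
a = np.sqrt(0.3)
b = np.sqrt(0.7) * 1j
state = np.array([a, b])

# Born-rule probabilities for the two possible measurement outcomes.
p = np.abs(state) ** 2          # [0.3, 0.7]

# Each single measurement "collapses" to 0 or 1; the superposition shows
# up only in the statistics of many repeated, identically prepared runs.
outcomes = rng.choice([0, 1], size=10_000, p=p)
print(np.bincount(outcomes) / outcomes.size)   # roughly [0.3, 0.7]
```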

Hardy’s rules governing possible states and their relationship to measurement outcomes acknowledged this property of quantum bits. In essence the rules were (probabilistic) ones about how systems can carry information and how they can be combined and interconverted.

Hardy then showed that the simplest possible theory to describe such systems is quantum mechanics, with all its characteristic phenomena such as wavelike interference and entanglement, in which the properties of different objects become interdependent. “Hardy’s 2001 paper was the ‘Yes, we can!’ moment of the reconstruction program,” Chiribella said. “It told us that in some way or another we can get to a reconstruction of quantum theory.”

More specifically, it implied that the core trait of quantum theory is that it is inherently probabilistic. “Quantum theory can be seen as a generalized probability theory, an abstract thing that can be studied detached from its application to physics,” Chiribella said. This approach doesn’t address any underlying physics at all, but just considers how outputs are related to inputs: what we can measure given how a state is prepared (a so-called operational perspective). “What the physical system is is not specified and plays no role in the results,” Chiribella said. These generalized probability theories are “pure syntax,” he added — they relate states and measurements, just as linguistic syntax relates categories of words, without regard to what the words mean. In other words, Chiribella explained, generalized probability theories “are the syntax of physical theories, once we strip them of the semantics.”

The general idea for all approaches in quantum reconstruction, then, is to start by listing the probabilities that a user of the theory assigns to each of the possible outcomes of all the measurements the user can perform on a system. That list is the “state of the system.” The only other ingredients are the ways in which states can be transformed into one another, and the probability of the outputs given certain inputs. This operational approach to reconstruction “doesn’t assume space-time or causality or anything, only a distinction between these two types of data,” said Alexei Grinbaum, a philosopher of physics at the CEA Saclay in France.

To distinguish quantum theory from a generalized probability theory, you need specific kinds of constraints on the probabilities and possible outcomes of measurement. But those constraints aren’t unique. So lots of possible theories of probability look quantum-like. How then do you pick out the right one?

“We can look for probabilistic theories that are similar to quantum theory but differ in specific aspects,” said Matthias Kleinmann, a theoretical physicist at the University of the Basque Country in Bilbao, Spain. If you can then find postulates that select quantum mechanics specifically, he explained, you can “drop or weaken some of them and work out mathematically what other theories appear as solutions.” Such exploration of what lies beyond quantum mechanics is not just academic doodling, for it’s possible—indeed, likely—that quantum mechanics is itself just an approximation of a deeper theory. That theory might emerge, as quantum theory did from classical physics, from violations in quantum theory that appear if we push it hard enough.

Bits and Pieces

Some researchers suspect that ultimately the axioms of a quantum reconstruction will be about information: what can and can’t be done with it. One such derivation of quantum theory based on axioms about information was proposed in 2010 by Chiribella, then working at the Perimeter Institute, and his collaborators Giacomo Mauro D’Ariano and Paolo Perinotti of the University of Pavia in Italy. “Loosely speaking,” explained Jacques Pienaar, a theoretical physicist at the University of Vienna, “their principles state that information should be localized in space and time, that systems should be able to encode information about each other, and that every process should in principle be reversible, so that information is conserved.” (In irreversible processes, by contrast, information is typically lost—just as it is when you erase a file on your hard drive.)

What’s more, said Pienaar, these axioms can all be explained using ordinary language. “They all pertain directly to the elements of human experience, namely, what real experimenters ought to be able to do with the systems in their laboratories,” he said. “And they all seem quite reasonable, so that it is easy to accept their truth.” Chiribella and his colleagues showed that a system governed by these rules shows all the familiar quantum behaviors, such as superposition and entanglement.

Giulio Chiribella, a physicist at the University of Hong Kong, reconstructed quantum theory from ideas in information theory.
COURTESY OF CIFAR

One challenge is to decide what should be designated an axiom and what physicists should try to derive from the axioms. Take the quantum no-cloning rule, which is another of the principles that naturally arises from Chiribella’s reconstruction. One of the deep findings of modern quantum theory, this principle states that it is impossible to make a duplicate of an arbitrary, unknown quantum state.

It sounds like a technicality (albeit a highly inconvenient one for scientists and mathematicians seeking to design quantum computers). But in an effort in 2002 to derive quantum mechanics from rules about what is permitted with quantum information, Jeffrey Bub of the University of Maryland and his colleagues Rob Clifton of the University of Pittsburgh and Hans Halvorson of Princeton University made no-cloning one of three fundamental axioms. One of the others was a straightforward consequence of special relativity: You can’t transmit information between two objects more quickly than the speed of light by making a measurement on one of the objects. The third axiom was harder to state, but it also crops up as a constraint on quantum information technology. In essence, it limits how securely a bit of information can be exchanged without being tampered with: The rule is a prohibition on what is called “unconditionally secure bit commitment.”

These axioms seem to relate to the practicalities of managing quantum information. But if we consider them instead to be fundamental, and if we additionally assume that the algebra of quantum theory has a property called non-commutation, meaning that the order in which you do calculations matters (in contrast to the multiplication of two numbers, which can be done in any order), Clifton, Bub and Halvorson have shown that these rules too give rise to superposition, entanglement, uncertainty, nonlocality and so on: the core phenomena of quantum theory.
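A standard textbook way to see non-commutation (not specific to the Clifton-Bub-Halvorson paper) is to multiply two of the Pauli matrices that represent qubit observables in the two possible orders:

```python
import numpy as np

# Two quantum observables represented as operators (here, Pauli matrices).
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# Ordinary numbers commute: 3 * 5 == 5 * 3.
# These operators do not: XZ and ZX give different results.
print(X @ Z)                          # [[0, -1], [1, 0]]
print(Z @ X)                          # [[0, 1], [-1, 0]]
print(np.array_equal(X @ Z, Z @ X))   # False
```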

Another information-focused reconstruction was suggested in 2009 by Borivoje Dakić and Časlav Brukner, physicists at the University of Vienna. They proposed three “reasonable axioms” having to do with information capacity: that the most elementary component of all systems can carry no more than one bit of information, that the state of a composite system made up of subsystems is completely determined by measurements on its subsystems, and that you can convert any “pure” state to another and back again (like flipping a coin between heads and tails).

Dakić and Brukner showed that these assumptions lead inevitably to classical and quantum-style probability, and to no other kinds. What’s more, if you modify axiom three to say that states get converted continuously—little by little, rather than in one big jump—you get only quantum theory, not classical. (Yes, it really is that way round, contrary to what the “quantum jump” idea would have you expect—you can interconvert states of quantum spins by rotating their orientation smoothly, but you can’t gradually convert a classical heads to a tails.) “If we don’t have continuity, then we don’t have quantum theory,” Grinbaum said.
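A minimal sketch of that continuity point: a qubit can be steered smoothly from one pure state to another, passing through genuine superpositions on the way, whereas a classical bit can only jump from heads to tails. The rotation below is a standard construction, used here purely for illustration.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])   # the qubit state |0>

def rotate(theta):
    """Rotate |0> by angle theta: theta = 0 gives |0>, theta = pi gives |1>."""
    R = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                  [np.sin(theta / 2),  np.cos(theta / 2)]])
    return R @ ket0

# The state changes little by little, passing through genuine superpositions.
for theta in np.linspace(0, np.pi, 5):
    state = rotate(theta)
    print(f"theta={theta:4.2f}  P(0)={abs(state[0])**2:.2f}  P(1)={abs(state[1])**2:.2f}")
```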

Quantum physicist Christopher Fuchs stands for a portrait inside the Integrated Sciences building on the campus of UMass Boston. Photo by Katherine Taylor for Quanta Magazine.

A further approach in the spirit of quantum reconstruction is called quantum Bayesianism, or QBism. Devised by Carlton Caves, Christopher Fuchs and Rüdiger Schack in the early 2000s, it takes the provocative position that the mathematical machinery of quantum mechanics has nothing to do with the way the world really is; rather, it is just the appropriate framework that lets us develop expectations and beliefs about the outcomes of our interventions. It takes its cue from the Bayesian approach to classical probability developed in the 18th century, in which probabilities stem from personal beliefs rather than observed frequencies. In QBism, quantum probabilities calculated by the Born rule don’t tell us what we’ll measure, but only what we should rationally expect to measure.

In this view, the world isn’t bound by rules—or at least, not by quantum rules. Indeed, there may be no fundamental laws governing the way particles interact; instead, laws emerge at the scale of our observations. This possibility was considered by John Wheeler, who dubbed the scenario Law Without Law. It would mean that “quantum theory is merely a tool to make comprehensible a lawless slicing-up of nature,” said Adán Cabello, a physicist at the University of Seville. Can we derive quantum theory from these premises alone?

“At first sight, it seems impossible,” Cabello admitted—the ingredients seem far too thin, not to mention arbitrary and alien to the usual assumptions of science. “But what if we manage to do it?” he asked. “Shouldn’t this shock anyone who thinks of quantum theory as an expression of properties of nature?”

Making Space for Gravity

In Hardy’s view, quantum reconstructions have been almost too successful, in one sense: Various sets of axioms all give rise to the basic structure of quantum mechanics. “We have these different sets of axioms, but when you look at them, you can see the connections between them,” he said. “They all seem reasonably good and are in a formal sense equivalent because they all give you quantum theory.” And that’s not quite what he’d hoped for. “When I started on this, what I wanted to see was two or so obvious, compelling axioms that would give you quantum theory and which no one would argue with.”

So how do we choose between the options available? “My suspicion now is that there is still a deeper level to go to in understanding quantum theory,” Hardy said. And he hopes that this deeper level will point beyond quantum theory, to the elusive goal of a quantum theory of gravity. “That’s the next step,” he said. Several researchers working on reconstructions now hope that this axiomatic approach will help us see how to pose quantum theory in a way that forges a connection with the modern theory of gravitation—Einstein’s general relativity.

Look at the Schrödinger equation and you will find no clues about how to take that step. But quantum reconstructions with an “informational” flavor speak about how information-carrying systems can affect one another, a framework of causation that hints at a link to the space-time picture of general relativity. Causation imposes chronological ordering: An effect can’t precede its cause. But Hardy suspects that the axioms we need to build quantum theory will be ones that embrace a lack of definite causal structure—no unique time-ordering of events—which he says is what we should expect when quantum theory is combined with general relativity. “I’d like to see axioms that are as causally neutral as possible, because they’d be better candidates as axioms that come from quantum gravity,” he said.

Hardy first suggested that quantum-gravitational systems might show indefinite causal structure in 2007. And in fact only quantum mechanics can display that. While working on quantum reconstructions, Chiribella was inspired to propose an experiment to create causal superpositions of quantum systems, in which there is no definite series of cause-and-effect events. This experiment has now been carried out by Philip Walther’s lab at the University of Vienna—and it might incidentally point to a way of making quantum computing more efficient.

“I find this a striking illustration of the usefulness of the reconstruction approach,” Chiribella said. “Capturing quantum theory with axioms is not just an intellectual exercise. We want the axioms to do something useful for us—to help us reason about quantum theory, invent new communication protocols and new algorithms for quantum computers, and to be a guide for the formulation of new physics.”

But can quantum reconstructions also help us understand the “meaning” of quantum mechanics? Hardy doubts that these efforts can resolve arguments about interpretation—whether we need many worlds or just one, for example. After all, precisely because the reconstructionist program is inherently “operational,” meaning that it focuses on the “user experience”—probabilities about what we measure—it may never speak about the “underlying reality” that creates those probabilities.

“When I went into this approach, I hoped it would help to resolve these interpretational problems,” Hardy admitted. “But I would say it hasn’t.” Cabello agrees. “One can argue that previous reconstructions failed to make quantum theory less puzzling or to explain where quantum theory comes from,” he said. “All of them seem to miss the mark for an ultimate understanding of the theory.” But he remains optimistic: “I still think that the right approach will dissolve the problems and we will understand the theory.”

Maybe, Hardy said, these challenges stem from the fact that the more fundamental description of reality is rooted in that still undiscovered theory of quantum gravity. “Perhaps when we finally get our hands on quantum gravity, the interpretation will suggest itself,” he said. “Or it might be worse!”

Right now, quantum reconstruction has few adherents—which pleases Hardy, as it means that it’s still a relatively tranquil field. But if it makes serious inroads into quantum gravity, that will surely change. In the 2011 poll, about a quarter of the respondents felt that quantum reconstructions will lead to a new, deeper theory. A one-in-four chance certainly seems worth a shot.

Grinbaum thinks that the task of building the whole of quantum theory from scratch with a handful of axioms may ultimately be unsuccessful. “I’m now very pessimistic about complete reconstructions,” he said. But, he suggested, why not try to do it piece by piece instead—to just reconstruct particular aspects, such as nonlocality or causality? “Why would one try to reconstruct the entire edifice of quantum theory if we know that it’s made of different bricks?” he asked. “Reconstruct the bricks first. Maybe remove some and look at what kind of new theory may emerge.”

“I think quantum theory as we know it will not stand,” Grinbaum said. “Which of its feet of clay will break first is what reconstructions are trying to explore.” He thinks that, as this daunting task proceeds, some of the most vexing and vague issues in standard quantum theory—such as the process of measurement and the role of the observer—will disappear, and we’ll see that the real challenges are elsewhere. “What is needed is new mathematics that will render these notions scientific,” he said. Then, perhaps, we’ll understand what we’ve been arguing about for so long.

 

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Posted by Sc13t4 in Atomic, Design, Mathematics, Space/Time, Theoretical Physics, 0 comments
Proof Claimed for Deep Connection between Prime Numbers

If true, a solution to the “abc” conjecture about whole numbers would be “one of the most astounding achievements of mathematics of the 21st century”

The usually quiet world of mathematics is abuzz with a claim that one of the most important problems in number theory has been solved.

Mathematician Shinichi Mochizuki of Kyoto University in Japan has released a 500-page proof of the abc conjecture, which proposes a relationship between whole numbers — a ‘Diophantine’ problem.

The abc conjecture, proposed independently by David Masser and Joseph Oesterlé in 1985, might not be as familiar to the wider world as Fermat’s Last Theorem, but in some ways it is more significant. “The abc conjecture, if proved true, at one stroke solves many famous Diophantine problems, including Fermat’s Last Theorem,” says Dorian Goldfeld, a mathematician at Columbia University in New York. “If Mochizuki’s proof is correct, it will be one of the most astounding achievements of mathematics of the twenty-first century.”

By Philip Ball, from Nature magazine. Image credit: Flickr/Center for Image in Science and Art _ UL

Like Fermat’s theorem, the abc conjecture refers to equations of the form a+b=c. It involves the concept of a square-free number: one that cannot be divided by the square of any number greater than 1. Fifteen and 17 are square-free numbers, but 16 and 18 — being divisible by 4^2 and 3^2, respectively — are not.

The ‘square-free’ part of a number n, sqp(n), is the largest square-free number that can be formed by multiplying the factors of n that are prime numbers. For instance, sqp(18)=2×3=6.

If you’ve got that, then you should get the abc conjecture. It concerns a property of the product of the three integers a×b×c, or abc — or more specifically, of the square-free part of this product, which involves their distinct prime factors. It states that for integers a+b=c, the ratio sqp(abc)^r/c always has some minimum value greater than zero for any value of r greater than 1. For example, if a=3 and b=125, so that c=128, then sqp(abc)=30 and sqp(abc)^2/c = 900/128, or about 7.03. In this case, in which r=2, sqp(abc)^r/c is nearly always greater than 1, and always greater than zero.
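For readers who want to check those numbers, the following short Python sketch (not part of the original article) computes the square-free part sqp(n) and reproduces the example above:

```python
def sqp(n):
    """Square-free part (radical) of n: the product of its distinct prime factors."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            result *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:          # whatever remains is itself a prime factor
        result *= n
    return result

a, b = 3, 125
c = a + b                         # 128
print(sqp(18))                    # 2 x 3 = 6
print(sqp(a * b * c))             # 2 x 3 x 5 = 30
print(sqp(a * b * c) ** 2 / c)    # 900 / 128 = 7.03125  (the r = 2 case)
```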

Deep connection
It turns out that this conjecture encapsulates many other Diophantine problems, including Fermat’s Last Theorem (which states that a^n + b^n = c^n has no integer solutions if n>2). Like many Diophantine problems, it is all about the relationships between prime numbers. According to Brian Conrad of Stanford University in California, “it encodes a deep connection between the prime factors of a, b and a+b”.

Many mathematicians have expended a great deal of effort trying to prove the conjecture. In 2007, French mathematician Lucien Szpiro, whose work in 1978 led to the abc conjecture in the first place, claimed to have a proof of it, but it was soon found to be flawed.

Like Szpiro, and also like British mathematician Andrew Wiles, who proved Fermat’s Last Theorem in 1994, Mochizuki has attacked the problem using the theory of elliptic curves — the smooth curves generated by algebraic relationships of the sort y^2 = x^3 + ax + b.

There, however, the relationship of Mochizuki’s work to previous efforts stops. He has developed techniques that very few other mathematicians fully understand and that invoke new mathematical ‘objects’ — abstract entities analogous to more familiar examples such as geometric objects, sets, permutations, topologies and matrices. “At this point, he is probably the only one that knows it all,” says Goldfeld.

Conrad says that the work “uses a huge number of insights that are going to take a long time to be digested by the community”. The proof is spread across four long papers [1–4], each of which rests on earlier long papers. “It can require a huge investment of time to understand a long and sophisticated proof, so the willingness by others to do this rests not only on the importance of the announcement but also on the track record of the authors,” Conrad explains.

Mochizuki’s track record certainly makes the effort worthwhile. “He has proved extremely deep theorems in the past, and is very thorough in his writing, so that provides a lot of confidence,” says Conrad. And he adds that the pay-off would be more than a matter of simply verifying the claim. “The exciting aspect is not just that the conjecture may have now been solved, but that the techniques and insights he must have had to introduce should be very powerful tools for solving future problems in number theory.”

This article is reproduced with permission from the magazine Nature. The article was first published on September 10, 2012.

Posted by Sc13t4 in Mathematics, Space/Time, Theoretical Physics, 0 comments
What is SpaceTime?

Physicists believe that at the tiniest scales, space emerges from quanta.
What might these building blocks look like?

People have always taken space for granted. It is just emptiness, after all—a backdrop to everything else. Time, likewise, simply ticks on incessantly. But if physicists have learned anything from the long slog to unify their theories, it is that space and time form a system of such staggering complexity that it may defy our most ardent efforts to understand.

Albert Einstein saw what was coming as early as November 1916. A year earlier he had formulated his general theory of relativity, which postulates that gravity is not a force that propagates through space but a feature of spacetime itself. When you throw a ball high into the air, it arcs back to the ground because Earth distorts the spacetime around it, so that the paths of the ball and the ground intersect again. In a letter to a friend, Einstein contemplated the challenge of merging general relativity with his other brainchild, the nascent theory of quantum mechanics. That would not merely distort space but dismantle it. Mathematically, he hardly knew where to begin. “How much have I already plagued myself in this way!” he wrote.

Einstein never got very far. Even today there are almost as many contending ideas for a quantum theory of gravity as scientists working on the topic. The disputes obscure an important truth: the competing approaches all say space is derived from something deeper—an idea that breaks with 2,500 years of scientific and philosophical understanding.

[SCIET Dynamics Note] This article is posted here because it beautifully presents some core issues in the controversy over how to describe reality at the very smallest scales of space. We need to find a General Theory of Spacetime.

SCIET Dynamics seeks to unite the components of SpaceTime into an interdependent set that grows in complexity as it develops. It views the Void (Awareness), Space, Matter and Consciousness as sequences of creation built one upon the other. The Void, called “Awareness” in SD, exists as a sea of extremely small and fast fluctuations, which then gives rise to a burst of energy, labeled the “First Action,” which converts the burst into ever smaller increments, or “points of Awareness,” that have the effect of “formatting” the area defined by the original burst of energy. The “formatting” is the byproduct of a self-measuring algorithm which reduces uniformly within the original radius of the burst. When the increments reach the size of the original center point, they begin to interact, or resonate, with that value. The resonance gives rise to a new quality that allows information about the change created by movement to bounce off the center point and be stored in the area around the “point of Awareness,” a phenomenon responsible for the formation of the spheres that surround every “point of Awareness.” All nucleons (protons and neutrons) and electrons are created by this effect. The same effect is responsible for spherical forms in space of all sizes.

DOWN THE BLACK HOLE

A kitchen magnet neatly demonstrates the problem that physicists face. It can grip a paper clip against the gravity of the entire Earth. Gravity is weaker than magnetism or than electric or nuclear forces. Whatever quantum effects it has are weaker still. The only tangible evidence that these processes occur at all is the mottled pattern of matter in the very early universe—thought to be caused, in part, by quantum fluctuations of the gravitational field.

Black holes are the best test case for quantum gravity. “It’s the closest thing we have to experiments,” says Ted Jacobson of the University of Maryland, College Park. He and other theorists study black holes as theoretical fulcrums. What happens when you take equations that work perfectly well under laboratory conditions and extrapolate them to the most extreme conceivable situation? Will some subtle flaw manifest itself?

General relativity predicts that matter falling into a black hole becomes compressed without limit as it approaches the center—a mathematical cul-de-sac called a singularity. Theorists cannot extrapolate the trajectory of an object beyond the singularity; its time line ends there. Even to speak of “there” is problematic because the very spacetime that would define the location of the singularity ceases to exist. Researchers hope that quantum theory could focus a microscope on that point and track what becomes of the material that falls in.

Out at the boundary of the hole, matter is not so compressed, gravity is weaker and, by all rights, the known laws of physics should still hold. Thus, it is all the more perplexing that they do not. The black hole is demarcated by an event horizon, a point of no return: matter that falls in cannot get back out. The descent is irreversible. That is a problem because all known laws of fundamental physics, including those of quantum mechanics as generally understood, are reversible. At least in principle, you should be able to reverse the motion of all the particles and recover what you had.

A very similar conundrum confronted physicists in the late 1800s, when they contemplated the mathematics of a “black body,” idealized as a cavity full of electromagnetic radiation. James Clerk Maxwell’s theory of electromagnetism predicted that such an object would absorb all the radiation that impinges on it and that it could never come to equilibrium with surrounding matter. “It would absorb an infinite amount of heat from a reservoir maintained at a fixed temperature,” explains Rafael Sorkin of the Perimeter Institute for Theoretical Physics in Ontario. In thermal terms, it would effectively have a temperature of absolute zero. This conclusion contradicted observations of real-life black bodies (such as an oven). Following up on work by Max Planck, Einstein showed that a black body can reach thermal equilibrium if radiative energy comes in discrete units, or quanta.

Theoretical physicists have been trying for nearly half a century to achieve an equivalent resolution for black holes. The late Stephen Hawking of the University of Cambridge took a huge step in the mid-1970s, when he applied quantum theory to the radiation field around black holes and showed they have a nonzero temperature. As such, they can not only absorb but also emit energy. Although his analysis brought black holes within the fold of thermodynamics, it deepened the problem of irreversibility. The outgoing radiation emerges from just outside the boundary of the hole and carries no information about the interior. It is random heat energy. If you reversed the process and fed the energy back in, the stuff that had fallen in would not pop out; you would just get more heat. And you cannot imagine that the original stuff is still there, merely trapped inside the hole, because as the hole emits radiation, it shrinks and, according to Hawking’s analysis, ultimately disappears.

This problem is called the information paradox because the black hole destroys the information about the infalling particles that would let you rewind their motion. If black hole physics really is reversible, something must carry information back out, and our conception of spacetime may need to change to allow for that.

ATOMS OF SPACETIME

Heat is the random motion of microscopic parts, such as the molecules of a gas. Because black holes can warm up and cool down, it stands to reason that they have parts—or, more generally, a microscopic structure. And because a black hole is just empty space (according to general relativity, infalling matter passes through the horizon but cannot linger), the parts of the black hole must be the parts of space itself. As plain as an expanse of empty space may look, it has enormous latent complexity.

Even theories that set out to preserve a conventional notion of spacetime end up concluding that something lurks behind the featureless facade. For instance, in the late 1970s Steven Weinberg, now at the University of Texas at Austin, sought to describe gravity in much the same way as the other forces of nature. He still found that spacetime is radically modified on its finest scales.

Physicists initially visualized microscopic space as a mosaic of little chunks of space. If you zoomed in to the Planck scale, an almost inconceivably small size of 10^-35 meter, they thought you would see something like a chessboard. But that cannot be quite right. For one thing, the grid lines of a chessboard space would privilege some directions over others, creating asymmetries that contradict the special theory of relativity. For example, light of different colors might travel at different speeds—just as in a glass prism, which refracts light into its constituent colors. Whereas effects on small scales are usually hard to see, violations of relativity would actually be fairly obvious.

In SCIET Dynamics the “atoms” of spacetime are perceived to be quantum-scale fluctuations that leave tetrahedral tracks as they appear and disappear. The tracks are related to the Event Horizons of Black Holes because they bound the area between the void and space. In this sense, the tiny “tetrons” are an artifact of the creation of space.

The thermodynamics of black holes casts further doubt on picturing space as a simple mosaic. By measuring the thermal behavior of any system, you can count its parts, at least in principle. Dump in energy and watch the thermometer. If it shoots up, that energy must be spread out over comparatively few molecules. In effect, you are measuring the entropy of the system, which represents its microscopic complexity.

If you go through this exercise for an ordinary substance, the number of molecules increases with the volume of material. That is as it should be: If you increase the radius of a beach ball by a factor of 10, you will have 1,000 times as many molecules inside it. But if you increase the radius of a black hole by a factor of 10, the inferred number of molecules goes up by only a factor of 100. The number of “molecules” that it is made up of must be proportional not to its volume but to its surface area. The black hole may look three-dimensional, but it behaves as if it were two-dimensional.
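A rough numerical sketch of that counting argument, using the standard Bekenstein-Hawking formula S = A / (4 l_P^2) in units of Boltzmann’s constant; the radii and the water-like molecular density are arbitrary illustration values, not drawn from the article:

```python
import math

L_PLANCK = 1.616e-35           # Planck length in meters (approximate)

def bh_entropy(radius_m):
    """Bekenstein-Hawking entropy, S = A / (4 * l_P**2), in units of
    Boltzmann's constant, for a horizon of the given radius."""
    area = 4 * math.pi * radius_m ** 2
    return area / (4 * L_PLANCK ** 2)

def molecules_in_ball(radius_m, number_density=3.3e28):
    """Rough molecule count in an ordinary ball of water-like density."""
    volume = (4 / 3) * math.pi * radius_m ** 3
    return number_density * volume

# Scale the radius up by a factor of 10:
print(molecules_in_ball(1.0) / molecules_in_ball(0.1))   # ~1000 (volume scaling)
print(bh_entropy(1.0) / bh_entropy(0.1))                 # 100   (area scaling)
```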

This weird effect goes under the name of the holographic principle because it is reminiscent of a hologram, which presents itself to us as a three-dimensional object. On closer examination, however, it turns out to be an image produced by a two-dimensional sheet of film. If the holographic principle counts the microscopic constituents of space and its contents—as physicists widely, though not universally, accept—it must take more to build space than splicing together little pieces of it.

The relation of part to whole is seldom so straightforward, anyway. An H2O molecule is not just a little piece of water. Consider what liquid water does: it flows, forms droplets, carries ripples and waves, and freezes and boils. An individual H2O molecule does none of that: those are collective behaviors. Likewise, the building blocks of space need not be spatial. “The atoms of space are not the smallest portions of space,” says Daniele Oriti of the Max Planck Institute for Gravitational Physics in Potsdam, Germany. “They are the constituents of space. The geometric properties of space are new, collective, approximate properties of a system made of many such atoms.”

What exactly those building blocks are depends on the theory. In loop quantum gravity, they are quanta of volume aggregated by applying quantum principles. In string theory, they are fields akin to those of electromagnetism that live on the surface traced out by a moving strand or loop of energy—the namesake string. In M-theory, which is related to string theory and may underlie it, they are a special type of particle: a membrane shrunk to a point. In causal set theory, they are events related by a web of cause and effect. In the amplituhedron theory and some other approaches, there are no building blocks at all—at least not in any conventional sense.

Although the organizing principles of these theories vary, all strive to uphold some version of the so-called relationalism of 17th- and 18th-century German philosopher Gottfried Leibniz. In broad terms, relationalism holds that space arises from a certain pattern of correlations among objects. In this view, space is a jigsaw puzzle. You start with a big pile of pieces, see how they connect and place them accordingly. If two pieces have similar properties, such as color, they are likely to be nearby; if they differ strongly, you tentatively put them far apart. Physicists commonly express these relations as a network with a certain pattern of connectivity. The relations are dictated by quantum theory or other principles, and the spatial arrangement follows.

Phase transitions are another common theme. If space is assembled, it might be disassembled, too; then its building blocks could organize into something that looks nothing like space. “Just like you have different phases of matter, like ice, water and water vapor, the atoms of space can also reconfigure themselves in different phases,” says Thanu Padmanabhan of the Inter-University Center for Astronomy and Astrophysics in India. In this view, black holes may be places where space melts. Known theories break down, but a more general theory would describe what happens in the new phase. Even when space reaches its end, physics carries on.

ENTANGLED WEBS

The big realization of recent years—and one that has crossed old disciplinary boundaries—is that the relevant relations involve quantum entanglement. An extrapowerful type of correlation, intrinsic to quantum mechanics, entanglement seems to be more primitive than space. For instance, an experimentalist might create two particles that fly off in opposing directions. If they are entangled, they remain coordinated no matter how far apart they may be.

Traditionally when people talked about “quantum” gravity, they were referring to quantum discreteness, quantum fluctuations and almost every other quantum effect in the book—but never quantum entanglement. That changed when black holes forced the issue. Over the lifetime of a black hole, entangled particles fall in, but after the hole evaporates fully, their partners on the outside are left entangled with—nothing. “Hawking should have called it the entanglement problem,” says Samir Mathur of Ohio State University.

Even in a vacuum, with no particles around, the electromagnetic and other fields are internally entangled. If you measure a field at two different spots, your readings will jiggle in a random but coordinated way. And if you divide a region in two, the pieces will be correlated, with the degree of correlation depending on the only geometric quantity they have in common: the area of their interface. In 1995 Jacobson argued that entanglement provides a link between the presence of matter and the geometry of spacetime—which is to say, it might explain the law of gravity. “More entanglement implies weaker gravity—that is, stiffer spacetime,” he says.

Several approaches to quantum gravity—most of all, string theory—now see entanglement as crucial. String theory applies the holographic principle not just to black holes but also to the universe at large, providing a recipe for how to create space—or at least some of it. For instance, a two-dimensional space could be threaded by fields that, when structured in the right way, generate an additional dimension of space. The original two-dimensional space would serve as the boundary of a more expansive realm, known as the bulk space. And entanglement is what knits the bulk space into a contiguous whole.

In 2009 Mark Van Raamsdonk of the University of British Columbia gave an elegant argument for this process. Suppose the fields at the boundary are not entangled—they form a pair of uncorrelated systems. They correspond to two separate universes, with no way to travel between them. When the systems become entangled, it is as if a tunnel, or wormhole, opens up between those universes, and a spaceship can go from one to the other. As the degree of entanglement increases, the wormhole shrinks in length, drawing the universes together until you would not even speak of them as two universes anymore. “The emergence of a big spacetime is directly tied into the entangling of these field theory degrees of freedom,” Van Raamsdonk says. When we observe correlations in the electromagnetic and other fields, they are a residue of the entanglement that binds space together.

Many other features of space, besides its contiguity, may also reflect entanglement. Van Raamsdonk and Brian Swingle, now at the University of Maryland, College Park, argue that the ubiquity of entanglement explains the universality of gravity—that it affects all objects and cannot be screened out. As for black holes, Leonard Susskind of Stanford University and Juan Maldacena of the Institute for Advanced Study in Princeton, N.J., suggest that entanglement between a black hole and the radiation it has emitted creates a wormhole—a back-door entrance into the hole. That may help preserve information and ensure that black hole physics is reversible.

Whereas these string theory ideas work only for specific geometries and reconstruct only a single dimension of space, some researchers have sought to explain how all of space can emerge from scratch. For instance, ChunJun Cao, Spyridon Michalakis and Sean M. Carroll, all at the California Institute of Technology, begin with a minimalist quantum description of a system, formulated with no direct reference to spacetime or even to matter. If it has the right pattern of correlations, the system can be cleaved into component parts that can be identified as different regions of spacetime. In this model, the degree of entanglement defines a notion of spatial distance.

In physics and, more generally, in the natural sciences, space and time are the foundation of all theories. Yet we never see spacetime directly. Rather we infer its existence from our everyday experience. We assume that the most economical account of the phenomena we see is some mechanism that operates within spacetime. But the bottom-line lesson of quantum gravity is that not all phenomena neatly fit within spacetime. Physicists will need to find some new foundational structure, and when they do, they will have completed the revolution that began just more than a century ago with Einstein.

This article was originally published with the title “What Is Spacetime?”
Posted by Sc13t4 in Astrophysics, Cosmology, Design, Mathematics, Space/Time, Theoretical Physics, 0 comments
The End of Theoretical Physics As We Know It

Theoretical physics has a reputation for being complicated. I beg to differ. That we are able to write down natural laws in mathematical form at all means that the laws we deal with are simple — much simpler than those of other scientific disciplines.

Unfortunately, actually solving those equations is often not so simple. For example, we have a perfectly fine theory that describes the elementary particles called quarks and gluons, but no one can calculate how they come together to make a proton. The equations just can’t be solved by any known methods. Similarly, a merger of black holes or even the flow of a mountain stream can be described in deceptively simple terms, but it’s hideously difficult to say what’s going to happen in any particular case.

By Sabine Hossenfelder, Quanta Magazine Contributing Columnist

Of course, we are relentlessly pushing the limits, searching for new mathematical strategies. But in recent years much of the pushing has come not from more sophisticated math but from more computing power.

This article first appeared on QuantaMagazine.org by Contributing Columnist Sabine Hossenfelder
August 27, 2018
Quantized Columns: A regular column in which top researchers explore the process of discovery. This month’s columnist, Sabine Hossenfelder, is a theoretical physicist based at the Frankfurt Institute for Advanced Studies in Frankfurt, Germany. She is the author of Lost in Math: How Beauty Leads Physics Astray.

When the first math software became available in the 1980s, it didn’t do much more than save someone a search through enormous printed lists of solved integrals. But once physicists had computers at their fingertips, they realized they no longer had to solve the integrals in the first place; they could just plot the solution.
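To give a minimal, hypothetical example of that shift (not from the original column): instead of consulting a printed table, one can evaluate an integral numerically with off-the-shelf software and compare against the closed form when one happens to exist.

```python
import math
import numpy as np
from scipy.integrate import quad

# Evaluate an integral numerically instead of looking it up in a printed table.
numeric, abs_err = quad(lambda x: np.exp(-x ** 2), 0.0, 1.0)

# Closed-form check: the integral of exp(-x^2) from 0 to 1 equals
# (sqrt(pi) / 2) * erf(1).
analytic = 0.5 * math.sqrt(math.pi) * math.erf(1.0)

print(numeric, analytic)   # both ~0.7468
```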

In the 1990s, many physicists opposed this “just plot it” approach. Many were not trained in computer analysis, and sometimes they couldn’t tell physical effects from coding artifacts. Maybe this is why I recall many seminars in which a result was dismissed as “merely numerical.” But over the past two decades, this attitude has markedly shifted, not least thanks to a new generation of physicists for whom coding is a natural extension of their mathematical skill.

Accordingly, theoretical physics now has many subdisciplines dedicated to computer simulations of real-world systems, studies that would just not be possible any other way. Computer simulations are what we now use to study the formation of galaxies and supergalactic structures, to calculate the masses of particles that are composed of several quarks, to find out what goes on in the collision of large atomic nuclei, and to understand solar cycles, to name but a few areas of research that are mainly computer based.

The next step of this shift away from purely mathematical modeling is already on the way: Physicists now custom design laboratory systems that stand in for other systems which they want to better understand. They observe the simulated system in the lab to draw conclusions about, and make predictions for, the system it represents.

The best example may be the research area that goes by the name “quantum simulations.” These are systems composed of interacting, composite objects, like clouds of atoms. Physicists manipulate the interactions among these objects so the system resembles an interaction among more fundamental particles. For example, in circuit quantum electrodynamics, researchers use tiny superconducting circuits to simulate atoms, and then study how these artificial atoms interact with photons. Or in a lab in Munich, physicists use a superfluid of ultra-cold atoms to settle the debate over whether Higgs-like particles can exist in two dimensions of space (the answer is yes).

[SCIET Dynamics Note: Rather than using quantum rules to simulate interactions, the rules of the SCIET will be used to generate ongoing, evolving particles, giving the simulation a full feature set that includes postulates for space and time related to the formation of all particles.]

These simulations are not only useful to overcome mathematical hurdles in theories we already know. We can also use them to explore consequences of new theories that haven’t been studied before and whose relevance we don’t yet know.

This is particularly interesting when it comes to the quantum behavior of space and time itself — an area where we still don’t have a good theory. In a recent experiment, for example, Raymond Laflamme, a physicist at the Institute for Quantum Computing at the University of Waterloo in Ontario, Canada, and his group used a quantum simulation to study so-called spin networks, structures that, in some theories, constitute the fundamental fabric of space-time. And Gia Dvali, a physicist at the University of Munich, has proposed a way to simulate the information processing of black holes with ultracold atom gases.

A similar idea is being pursued in the field of analogue gravity, where physicists use fluids to mimic the behavior of particles in gravitational fields. Black hole space-times have attracted the bulk of attention, as with Jeff Steinhauer’s (still somewhat controversial) claim of having measured Hawking radiation in a black-hole analogue. But researchers have also studied the rapid expansion of the early universe, called “inflation,” with fluid analogues for gravity.

In addition, physicists have studied hypothetical fundamental particles by observing stand-ins called quasiparticles. These quasiparticles behave like fundamental particles, but they emerge from the collective movement of many other particles. Understanding their properties allows us to learn more about their behavior, and thereby might also help us find ways of observing the real thing.

This line of research raises some big questions. First of all, if we can simulate what we now believe to be fundamental by using composite quasiparticles, then maybe what we currently think of as fundamental — space and time and the 25 particles that make up the Standard Model of particle physics — is made up of an underlying structure, too. Quantum simulations also make us wonder what it means to explain the behavior of a system to begin with. Does observing, measuring, and making a prediction by use of a simplified version of a system amount to an explanation?

But for me, the most interesting aspect of this development is that it ultimately changes how we do physics. With quantum simulations, the mathematical model is of secondary relevance. We currently use the math to identify a suitable system because the math tells us what properties we should look for. But that’s not, strictly speaking, necessary. Maybe, over the course of time, experimentalists will just learn which system maps to which other system, as they have learned which system maps to which math. Perhaps one day, rather than doing calculations, we will just use observations of simplified systems to make predictions.

At present, I am sure, most of my colleagues would be appalled by this future vision. But in my mind, building a simplified model of a system in the laboratory is conceptually not so different from what physicists have been doing for centuries: writing down simplified models of physical systems in the language of mathematics.

 

Posted by Sc13t4 in Design, Mathematics, Theoretical Physics, 1 comment