
The Soul

Experiencing the Soul

Our experiences of seeing, remembering and visualizing all hinge on the ability of the mind to place our awareness at the center of a spherical reality, one that converts incoming sensory information from the nervous system, particularly the eyes, together with memory and imagination into a manipulable visual construct.

The visual aspect of our being is only a part of our total sensory input, but it is pivotal to understanding how we survive death and return again and again to complete our goals for living. We call it the soul, and it has two important qualities that seem very different under analysis. First, there is the ability to contain the experience of living and review it, whether a moment later or many lifetimes in the past. Second, and most important, we remain at the center of the experience at all times; somehow we are able to observe those experiences regardless of the events that created them.

Clearly our memories are not who we are; they are only the record that tells where we have been and what we have done. Who is it that does the observing? How is it that this capability remains constant throughout our experience of being conscious?

The Soul contains all of our memories from all our existences, both material and immaterial. It includes the yearnings and hopes from previous lives, and it remembers those we loved so deeply that losing them meant our lives no longer had meaning. Your soul gives you your purpose for living.

SCIET Dynamics explains the soul simply

The Soul is based on the same principles that created atoms at the beginning of the Creation. Point-to-point resonance returns to each point and goes into orbit around the center, forming a continuous spherical layer of information. When this happens at the atomic level it creates the Atomic Shell, and the same effect is used to store memories around the point of Awareness. This means that your sensory system is interactive with all the points that surround you, and that your consciousness uses this to process your path forward moment by moment. Simply stated, you are behind your eyes, using a modeling system to see what is presently around you, to remember what you have seen and experienced, and to imagine anything else you want to visualize. These spherical layers are also reduced to molecular size as they are processed by the mind, until they reach their smallest possible size, which is then broadcast throughout the body’s DNA in true holographic fashion.

At this size the memories are passed to an even smaller dimension, where they enter long-term memory to form what I have called a Memory Tunnel. The Awareness is able to pass from waking consciousness at the center of your brain to deep ancestral awareness at the center of your Memory Tunnel during any past life. You are able to traverse from one moment to the previous, or into deep antiquity, simply by imagining that past moment.

This new math allows us to understand how we can be the observer at the center of a spherical field layer of information that exists as a result of the brain’s energetic field. The layout of the brain’s parts is consistent with this idea, with the internal reality modeled upside down and backwards, so that the visual field is projected toward the back of the head while the feet are modeled in the top regions of the brain. The sky is modeled from the point where the spinal cord connects to the brain, and is the vanishing point above our heads. All the parts of the body and its sensory system are modeled by the brain, using synaptic firing to provoke the different spherical memories needed to manage the body.

The new math, the SCIET, reduces all incoming information toward the center until it reaches SOURCE. Therefore all SCIETs are points of Awareness and are, in effect, holes in the fabric of SpaceTime that connect to Source. All Relationships in SpaceTime are Point-to-Point, meaning that the Soul is in continuous relationship with everything around it. According to these concepts, the Soul is a recording of all the experiences a being has had since it separated from the Awareness that began the Creation. The SCIET provides a mathematical tool to describe this process, using a single cycle integrative effect topology to identify how a single point-to-point fractional reduction becomes a record of that momentary state, both while alive and in the transitional state between lives. The center of this recording is where the fragment of God exists, and it is this center of Awareness that is charged with evolving, or learning how to operate within the constraints of the accumulated information. So the Soul is a vehicle for Awareness that is descended from the original Awareness that began the Creation. In this sense, the Soul is the combination of that Awareness and all of the experiences it has accumulated.

If you want to imagine what this would look like, and how it would serve as a vehicle during the afterlife, consider what is written about Orbs that appear when researchers investigate reports of ghosts or visit spiritual sites. Orbs of light appear consistently in some places, and witnesses sometimes report seeing faces or people within them. Additionally, Orbs are reported as UFOs and are always reactive to what people are thinking or saying about them. Recently, a witness from the Secret Space Program has reported being in contact with a race of beings from the Sixth dimension who manifest themselves as Blue Spheres, but who can also manifest as humanoid beings in order to meet with him.

The Soul is a point of Awareness surrounded by the energy of its accumulated existences, and it appears to us as an orb of light.

Dane Arr
April 22, 2019

The Math That Tells Cells What They Are

Website Editor’s Note: This website is about a Single Cycle Integrative Effect Topology, the SCIET. It does what this article describes for all of nature, providing a means to identify each point in space uniquely and in full relationship to all others. This is why this article is included on this website.


In 1891, when the German biologist Hans Driesch split two-cell sea urchin embryos in half, he found that each of the separated cells then gave rise to its own complete, albeit smaller, larva. Somehow, the halves “knew” to change their entire developmental program: At that stage, the blueprint for what they would become had apparently not yet been drawn out, at least not in ink.

Since then, scientists have been trying to understand what goes into making this blueprint, and how instructive it is. (Driesch himself, frustrated at his inability to come up with a solution, threw up his hands and left the field entirely.) It’s now known that some form of positional information makes genes variously switch on and off throughout the embryo, giving cells distinct identities based on their location. But the signals carrying that information seem to fluctuate wildly and chaotically — the opposite of what you might expect for an important guiding influence.

“The [embryo] is a noisy environment,” said Robert Brewster, a systems biologist at the University of Massachusetts Medical School. “But somehow it comes together to give you a reproducible, crisp body plan.”

The same precision and reproducibility emerge from a sea of noise again and again in a range of cellular processes. That mounting evidence is leading some biologists to a bold hypothesis: that where information is concerned, cells might often find solutions to life’s challenges that are not just good but optimal — that cells extract as much useful information from their complex surroundings as is theoretically possible. Questions about optimal decoding, according to Aleksandra Walczak, a biophysicist at the École Normale Supérieure in Paris, “are everywhere in biology.”

Biologists haven’t traditionally cast analyses of living systems as optimization problems because the complexity of those systems makes them hard to quantify, and because it can be difficult to discern what would be getting optimized. Moreover, while evolutionary theory suggests that evolving systems can improve over time, nothing guarantees that they should be driven to an optimal level.

Yet when researchers have been able to appropriately determine what cells are doing, many have been surprised to see clear indications of optimization. Hints have turned up in how the brain responds to external stimuli and how microbes respond to chemicals in their environments. Now some of the best evidence has emerged from a new study of fly larva development, reported recently in Cell.

Cells That Understand Statistics

For decades, scientists have been studying fruit fly larvae for clues about how development unfolds. Some details became apparent early on: A cascade of genetic signals establishes a pattern along the larva’s head-to-tail axis. Signaling molecules called morphogens then diffuse through the embryonic tissues, eventually defining the formation of body parts.

Particularly important in the fly are four “gap” genes, which are expressed separately in broad, overlapping domains along the axis. The proteins they make in turn help regulate the expression of “pair-rule” genes, which create an extremely precise, periodic striped pattern along the embryo. The stripes establish the groundwork for the later division of the body into segments.

FIGURE: Gap gene expression compared to “pair-rule” gene expression

Early in the development of fruit flies, four “gap” genes are expressed at different levels along the long axis of the larval body. That pattern lays the foundation for the expression of “pair-rule” genes in periodic bands later, which give rise to specific body segments. The purple stain in the embryo at left shows the expression of one gap protein; the staining in the later larva at right reveals one pair-rule protein.

Development 2002 129:4399-4409

How cells make sense of these diffusion gradients has always been a mystery. The widespread assumption was that after being pointed in roughly the right direction (so to speak) by the protein levels, cells would continuously monitor their changing surroundings and make small corrective adjustments as development proceeded, locking in on their planned identity relatively late. That model harks back to the “developmental landscape” proposed by Conrad Waddington in 1956. He likened the process of a cell homing in on its fate to a ball rolling down a series of ever-steepening valleys and forked paths. Cells had to acquire more and more information to refine their positional knowledge over time — as if zeroing in on where and what they were through “the 20 questions game,” according to Jané Kondev, a physicist at Brandeis University.
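
The “20 questions” framing can be made quantitative (the arithmetic here is ours, not the article’s): distinguishing one position among N with yes/no questions takes about log2(N) of them, so picking out a single row of cells from, say, 100 rows along the axis requires log2(100) ≈ 6.6 bits of positional information. On Waddington’s picture, cells would accumulate those bits gradually as development proceeded.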

Such a system could be accident prone, however: Some cells would inevitably take the wrong paths and be unable to get back on track. In contrast, comparisons of fly embryos revealed that the placement of pair-rule stripes was incredibly precise, to within 1 percent of the embryo’s length — that is, to single-cell accuracy.

That prompted a group at Princeton University, led by the biophysicists Thomas Gregor and William Bialek, to suspect something else: that the cells could instead get all the information they needed to define the positions of pair-rule stripes from the expression levels of the gap genes alone, even though those are not periodic and therefore not an obvious source for such precise instructions.

And that’s just what they found.

Over the course of 12 years, they measured morphogen and gap-gene protein concentrations, cell by cell, from one embryo to the next, to determine how all four gap genes were most likely to be expressed at every position along the head-to-tail axis. From those probability distributions, they built a “dictionary,” or decoder — an explicit map that could spit out a probabilistic estimate of a cell’s position based on its gap-gene protein concentration levels.

Around five years ago, the researchers — including Mariela Petkova, who started the measurement work as an undergraduate at Princeton (and is currently pursuing a doctorate in biophysics at Harvard University), and Gašper Tkačik, now at the Institute of Science and Technology Austria — determined this mapping by assuming it worked like what’s known as an optimal Bayesian decoder (that is, the decoder used Bayes’ rule for inferring the likelihood of an event from prior conditional probabilities). The Bayesian framework allowed them to flip the “unknowns,” the conditions of probability: Their measurements of gap gene expression, given position, could be used to generate a “best guess” of position, given only gap gene expression.

The team found that the fluctuations of the four gap genes could indeed be used to predict the locations of cells with single-cell precision. No less than maximal information about all four would do, however: When the activity of only two or three gap genes was provided, the decoder’s location predictions were not nearly so accurate. Versions of the decoder that used less of the information from all four gap genes — that, for instance, responded only to whether each gene was on or off — made worse predictions, too.
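
To make the decoding idea concrete, here is a minimal sketch of a Bayesian position decoder of the general kind described above. It is illustrative only, not the study’s code: the Gaussian noise model, the flat prior over position, and the bell-shaped expression profiles standing in for measured gap-gene averages are all assumptions of the sketch.

# A toy Bayesian decoder: infer a cell's position from noisy
# "gap gene" expression levels. Profiles and numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

n_positions = 100                       # candidate cell rows along the axis
positions = np.linspace(0.0, 1.0, n_positions)

# Hypothetical mean expression g_i(x) of four gap genes along the axis.
means = np.stack([
    np.exp(-((positions - c) ** 2) / 0.02) for c in (0.2, 0.4, 0.6, 0.8)
])                                      # shape (4, n_positions)
sigma = 0.05                            # assumed expression noise level

def decode(expression, genes=(0, 1, 2, 3)):
    """Posterior P(x | g) over position via Bayes' rule: with a flat
    prior, P(x | g) is proportional to the likelihood P(g | x)."""
    idx = list(genes)
    residuals = expression[idx, None] - means[idx, :]
    log_like = -0.5 * np.sum((residuals / sigma) ** 2, axis=0)
    post = np.exp(log_like - log_like.max())   # stabilize, then normalize
    return post / post.sum()

# One noisy "cell" at true position x = 0.85, decoded two ways.
true_i = np.argmin(np.abs(positions - 0.85))
observed = means[:, true_i] + sigma * rng.normal(size=4)

post4 = decode(observed)                 # all four gap genes
post2 = decode(observed, genes=(0, 1))   # only two genes

print("MAP position, 4 genes:", positions[np.argmax(post4)])
print("MAP position, 2 genes:", positions[np.argmax(post2)])

In this toy model, as in the study, handing the decoder only a subset of the genes leaves stretches of the axis where the likelihood is nearly flat, so the posterior broadens and the position estimate degrades.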

According to Walczak, “No one has ever measured or shown how well reading out the concentration of these molecular gradients … actually pinpoints a specific position along the axis.”

Now they had: Even given the limited number of molecules and underlying noise of the system, the varying concentrations of the gap genes were sufficient to differentiate two neighboring cells along the head-to-tail axis — and the rest of the gene network seemed to be transmitting that information optimally.

“But the question always remained open: Does the biology actually care?” Gregor said. “Or is this just something that we measure?” Could the regulatory regions of DNA that responded to the gap genes really be wired up in such a way that they could decode the positional information those genes contained?

The biophysicists teamed up with the Nobel Prize-winning biologist Eric Wieschaus to test whether the cells were actually making use of the information potentially at their disposal. They created mutant embryos by modifying the gradients of morphogens in the very young fly embryos, which in turn altered the expression patterns of the gap genes and ultimately caused pair-rule stripes to shift, disappear, get duplicated or have fuzzy edges. Even so, the researchers found that their decoder could predict the changes in mutated pair-rule expression with surprising accuracy. “They show that the map is broken in mutants, but in a way that the decoder predicts,” Walczak said.

GRAPHIC: Showing the role of gap genes and pair-rule genes in embryonic development

Lucy Reading-Ikkanda/Quanta Magazine

“You could imagine that if it was getting information from other sources, you couldn’t trick [the cells] like that,” Brewster added. “Your decoder would fail.”

These findings represent “a signpost,” according to Kondev, who was not involved with the study. They suggest that there’s “some physical reality” to the inferred decoder, he said. “Through evolution, these cells have figured out how to implement Bayes’ trick using regulatory DNA.”

How the cells do it remains a mystery. Right now, “the whole thing is kind of wonderful and magical,” said John Reinitz, a systems biologist at the University of Chicago.

Even so, the work provides a new way of thinking about early development, gene regulation and, perhaps, evolution in general.

A Steeper Landscape

The findings provide a fresh perspective on Waddington’s idea of a developmental landscape. According to Gregor, their work indicates that there’s no need for 20 questions or a gradual refinement of knowledge after all. The landscape “is steep from the beginning,” he said. All the information is already there.

“Natural selection [seems to be] pushing the system hard enough so that it … reaches a point where the cells are performing at the limit of what physics allows,” said Manuel Razo-Mejia, a graduate student at the California Institute of Technology.

It’s possible that the high performance in this case is a fluke: Since fruit fly embryos develop very quickly, perhaps in their case “evolution has found this optimal solution because of that pressure to do everything very rapidly,” said James Briscoe, a biologist at the Francis Crick Institute in London who did not participate in this study. To really cement whether this is something more general, then, researchers will have to test the decoder in other species, including those that develop more slowly.

Even so, these results set up intriguing new questions to ask about the often-enigmatic regulatory elements. Scientists don’t have a solid grasp of how regulatory DNA codes for the control of other genes’ activities. The team’s findings suggest that this involves an optimal Bayesian decoder, which allows the regulatory elements to respond to very subtle changes in combined gap gene expression. “We can ask the question, what is it about regulatory DNA that encodes the decoder?” Kondev said.

And “what about it makes it do this optimal decoding?” he added. “That’s a question we could not have asked before this study.”

“That’s really what this work sets up as the next challenge in the field,” Briscoe said. Besides, there may be many ways of implementing such a decoder at the molecular level, meaning that this idea could apply to other systems as well. In fact, hints of it have been uncovered in the development of the neural tube in vertebrates, the precursor of their central nervous system — which would call for a very different underlying mechanism.

Moreover, if these regulatory regions need to perform an optimal decoding function, that potentially limits how they can evolve — and in turn, how an entire organism can evolve. “We have this one example … which is the life that evolved on this planet,” Kondev said, and because of that, the important constraints on what life can be are unknown. Finding that cells show Bayesian behavior could be a hint that processing information effectively may be “a general principle that makes a bunch of atoms stuck together loosely behave like the thing that we think is life.”

But right now, it is still only a hint. Although it would be “kind of a physicist’s dream,” Gregor said, “we are far from really having proof for this.”

From Wires Under Oceans to Neurons in the Brain

The concept of information optimization is rooted in electrical engineering: Experts originally wanted to understand how best to encode and then decode sound to allow people to talk on the telephone via transoceanic cables. That goal later turned into a broader consideration of how to transmit information optimally through a channel. It wasn’t much of a leap to apply this framework to the brain’s sensory systems and how they measured, encoded and decoded inputs to produce a response.
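
The engineering goal mentioned above has a standard formalization worth stating (a textbook information-theory result, not something from this article): Shannon’s channel capacity. A channel of bandwidth B and signal-to-noise ratio S/N can reliably carry at most

C = B log2(1 + S/N)

bits per second. Asking whether a cell or a neuron decodes “optimally” is, in effect, asking how close its noisy molecular channel comes to a ceiling of this kind.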

Now some experts are trying to think about all kinds of “sensory systems” in this way: Razo-Mejia, for instance, has studied how optimally bacteria sense and process chemicals in their environment, and how that might affect their fitness. Meanwhile, Walczak and her colleagues have been asking what a “good decoding strategy” might look like in the adaptive immune system, which has to recognize and respond to a massive repertoire of intruders.

“I don’t think optimization is an aesthetic or philosophical idea. It’s a very concrete idea,” Bialek said. “Optimization principles have time and again pointed to interesting things to measure.” Whether or not they are correct, he considers them productive to think about.

“Of course, the difficulty is that in many other systems, the property being decoded is more difficult than one-dimensional position [along the embryo’s axis],” Walczak said. “The problem is harder to define.”

That’s what made the system Bialek and his colleagues studied so tantalizing. “There aren’t many examples in biology where a high-level idea, like information in this case, leads to a mathematical formula” that is then testable in experiments on living cells, Kondev said.

It’s this marriage of theory and experiment that excites Bialek. He hopes to see the approach continue to guide work in other contexts. “What’s not clear,” he said, “is whether the observation [of optimization] is a curiosity that arises in a few corners, or whether there’s something general about it.”

If the latter does prove to be the case, “then that’s very striking,” Briscoe said. “The ability for evolution to find these really efficient ways of doing things would be an incredible finding.”

Kondev agreed. “As a physicist, you hope that the phenomenon of life is not just about the specific chemistry and DNA and molecules that make living things on planet Earth — that it’s broader,” he said. “What is that broader thing? I don’t know. But maybe this is lifting a little bit of the veil off that mystery.”

Correction added on March 15: The text was updated to acknowledge the contributions of Mariela Petkova and Gašper Tkačik.


SCIET Math Design

Any definable location in space is a SCIET

- and all intersections are expressible as SCIETs



“All points are SCIETs”

SCIET Dynamics Math Design:

“Nature is the realization of the
simplest conceivable mathematical ideas.”

Albert Einstein, Quantum Questions, page 146

Introduction to SCIET Dynamics

As its logical starting place, the SCIET revisits the concept of the point with the ambition that it should describe both the Universe and its smallest part. Communicating the range of ideas required to teach an understanding of how this is true has led to SCIET Dynamics and the variety of specialized ideas and concepts shown on this site.

The SCIET derives from a single measure, the distance between the center and the edge, and maps all the space surrounding the center through equidistant angularities and subdivisions, while retaining that definition of their existence together as a permanent record. The measure is as unlimited as the Awareness from which it came. There is no limit on smallness or largeness except the speed of change itself.

Geometry Rules First

Any definable location in space is a SCIET, and all intersections are expressible as SCIETs. Thus a SCIET continually reduces into smaller SCIETs until the original measure seems to vanish into the space of itself. It is the notion that all points in space are definable as SCIETs that makes this possible.

SCIET in Relationship is point-to-point. Receptive harmonics manifest resonant maps of geometric shapes during the Relationship reduction cycle, and these forms of Relationship continue to underlie all of creation. In the accompanying figure, A is the SCIETangle; B and C show the relationship to Tetrons; E and F show the dodecahedron inside the icosahedron shape of the SCIET, creating a map of the Unitary Value phase of Harmonic Receptive Reduction.

The SCIET is the source of the geometry of space. Relationship is based on the line, which is composed entirely of two-dimensional relationships.

Extending this idea into a three-dimensional framework requires a careful analysis of the nature of the Void and how it limits the application of generalities to the stages of creation. It is understood that only one measure is possible within the Void, that being the line between two points. Building a multidimensional space from this must follow these limitations faithfully, and this is the basis of the SCIET, which uses a single measure fractionally to establish all parts of the form.

The requirement for an omnidirectional form follows from the hole-in-space, the Tetron, which exists at the boundary of the Void when a difference exists. The difference is a creation that begins with a First Action, defined as an expression of awareness of difference within the Void, which is better described as the Limitless Awareness, an undifferentiated potential underlying existence itself.

SCIET Frequency Potential Levels are the basis for Awareness and for the Creation Substance with Limitation, thus enabling reduction and the beginning of Relationship, and setting the stage for Culmination (see Fractional Harmonic Receptive Reduction below). An example of this idea in nature is the fovea of the eye, which has evolved specifically to receive the most focused part of the incoming image.

Frequency Potential Levels can be visualized as a sea of tiny SCIETs that will interact only based on Harmonic Fractional Receptive Reduction values. A fractional harmonic of the Frequency Potential Levels is accessible at the center of a SCIET at that fractional harmonic value. The Frequency Potential Level of the Creation Substance is the speed of light, or C.

SCIET Magnitude is the relationship between the Unitary Value and the SCIET’s Point Value. The Unitary Value subdivides through Harmonic Fractional Receptive Reduction to its smallest measure, 1/1,048,512th.

A SCIET Magnitude is 1,048,512 in 1. Every SCIET Harmonic Receptive Reduction fractionally subdivides the Unitary Value twenty times to reach its smallest segment, the SCIET Point of Magnitude, where all angles of the SCIET originate.

We can combine SCIET Frequency Potential Levels with SCIET Magnitude and infer a method for the Awareness to establish a relationship with the Creation Substance. The Awareness is a Magnitude faster and smaller than the Creation Substance, allowing the Awareness to inhabit the Substance with an ability to establish Unitary Values within it.

The SCIET Unitary Value is undivided in its length, while the SCIET Point of Magnitude is the smallest segment after twenty subdivisions of the Unitary Value.

Harmonic Receptive Reduction fractionally subdivides the Unitary Value twenty times to reach its smallest segment, a new Unitary Value: the SCIET Point of Magnitude, where all angles of the SCIET originate.
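
A minimal numerical sketch of the twenty subdivisions follows (the halving rule is our assumption; the text does not specify the fraction used at each step):

# Twenty successive halvings of a Unitary Value of 1.
# Binary halving yields 1/2**20 = 1/1,048,576, close to the
# 1,048,512-to-1 Magnitude quoted in the text.
unitary_value = 1.0
for _ in range(20):
    unitary_value /= 2.0

print(unitary_value)   # 9.5367431640625e-07, i.e. 1/1,048,576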

So the Frequency Potential Level of the Awareness pulses 1,048,512 times faster than that of the Creation Substance, and it is aware of all frequencies within the Substance, whose Frequency Potential Level’s smallest segment is a Magnitude larger than the Awareness.

The limitless Awareness defines the Creation Substance as a Frequency Potential Level at a Magnitude slower rate, and then expresses a Unitary Value from its center to the edge of the Substance, which then reduces (Harmonic Fractional Receptive Reduction) until it reaches the Creation Substance Frequency Potential Level, where it Culminates as a unitary value, meaning that for a moment all the reduced SCIETs react as one before the next pulse begins resonance.

The Culmination is a single pulse event marking the boundary between change in a system and its integration of that change.

The importance of the SCIET Magnitude is related to the Culmination, and thus to holograms, neurological integration and change in all natural systems.

Culmination in a system is related to the idea of limitation in the Creation itself. A created system has a beginning, and the Culmination occurs when the initial defined value subdivides to a value equal to the smallest, fastest value possible in that SCIET range. The resulting value is individuated and becomes resonant within the frequency values defined by the prior subdivisions. At the creation this resulted in the formation of protons based on the smallest and fastest resonant values, which remain in continuous resonant relationship.

Culmination is the basis of the holographic effect. Interestingly, the frequency required to achieve Magnitude Culmination may be the same frequency that will shatter an object and stimulate the effect. The holographic effect was discovered when a researcher broke a photographic plate and saw that the tiny pieces had each developed the whole image. Another example: when a soap bubble bursts, it immediately culminates into its receptive reduction values, the droplets in the mist created by the burst being tiny bubbles rather than droplets of water.

Culmination occurs in all created systems at the end of the SCIET Cycle. Culmination in Relationship creates tiny versions of the original, which also culminate. It is particularly noteworthy in the nervous system, where it distributes new information to all cells and governs all the transitions between value ranges, its actual form adapting to the needs of the newly defined system.

All of the above concepts and ideas set the stage for Agreement, the creation of matter, and then for Memory, the stage of life, and the frequency rules for evolutionary processes. The table graphic below shows the increase of complexity over the duration of the creation stages.

The Domain Frequency Rate has become more complex with the growth of the Universe. The illustration above shows the sequence of stages and their relationship to complexity over time. The identification of the life frequencies with superconducting cellular resonance and the Phi growth ratio is pivotal, as is the recognition that higher frequencies are attainable through both reduction and radiant processes. It is the combination of these that enables Awareness to move smoothly from the deep frequencies within the DNA, from the Infinitesimal Substrate (Creation Substance), to express the patterns of the DNA into a living being by increasing complexity over time.

SCIETspheres are memories of every change since the Culmination, so the universe is filled with the shells, layers and lattices formed by SCIETspheres since the beginning of Agreement, and all of them contain records accumulated since their origin.

Lattice Domains are the result of the long term evolution of SCIETspheres after the culmination. The high energy domain of the stage of Agreement provides the basis of shell formation and molecular Consolidation.

Next is the cellular consciousness field, which compounds into the somatic body, the living but unanimated form. The third is the sense domain, created by the nervous system’s processing of input from the environment. The fourth is the focal consciousness field, created by the merging of the two sensate fields into a frequency realm that exists independent of materiality; it is the “door” through which we enter the vibration body’s aetheric quanta layer field.

The Lattice Domains generated by the dual inputs of the nervous system react to one another and create a Third SCIET Lattice Domain.

Molecular Resonance

The molecular lattice, made of protons, neutrons and electrons, is the matter in our bodies that resonates with the earth’s dense surface layers (gravity). We experience this as weight and inertial mass. These collective frequencies are used to manage molecular bonds and compounding, such as amino acid formation. In the mass of the body, these frequencies enable the autonomic nervous system to regulate the functions of the body in the absence of the self or animating being.

Phi Resonance

A Phi-resonant frequency cycle rooted in the organic amino acids is used by the next frequency realm to extend the Phi Cycle into the evolution of free-roaming life forms. Using the extremely dense dumbbell-shaped molecules of the extended platinum group, the early cells incorporated their natural quality of generating a C-squared resonance when mechanically stimulated to self-bond with their own opposite-end valences.

Superconducting Cellular Resonance

The cellular field is derived from the molecular field. The amino acid chains that form the DNA strands generate specific fields that complement and amplify one another. This effect continues within the cell as each of the structures works with the others to establish the appropriate field types and strengths to build and maintain the cell. Each specific field generates a pulse field resonance which connects it to all other fields of the same type. (see frequency bodies, capacitance lattice)

Movement Through the Sea of Planetary Consciousness:
The Senses and Brain Bi-lateralization

The evolution of the central nervous system is the consequence of the sensate fields’ interaction with the frequency bodies of their environment.

Forward movement “parts” the frequency body of the cellular organism, “brushing” the field reactions to each side. From a molecular perspective the cell is immense, and the complex of fields that react to gravity takes time to adjust to the new position on the earth. This “parting” and “brushing” is the fundamental source of hemispheric bi-lateralization and duality in moving organisms, or animals. In this sense, the central nervous system is the evolved consequence of the same phenomenon that causes momentum.

The sense organs evolved from the skin, each focusing on a particular range of body field values. Forward motion divided the field inputs and set the stage for the pattern of side-to-side charge exchange that we observe in the electrical activity of the brain.

So the nervous system is actually two nervous systems that are united by the corpus callosum. At birth only a small portion of these connections are made, and it is not until about 10 years of age that enough are in place for the individual to gain the consistent mental function associated with adolescence.

There are a number of theories for this, but the SCIET theory is that the brain is a “tuner”: the electrical activity that arcs across the hemispheres stimulates the growth of axons and dendrites at the matching hemispheric sites, where simultaneous firing will generate or “tune” a capacitance lattice in which information is stored as charged space relationships.

Hemispheric bi-lateralization and hemispheric reversal are natural consequences of space processing. The connection of the body to the focal consciousness domain involves a lifelong developmental process whose early foundations are pivotal to spiritual as well as mental potential. Each brain hemisphere receives a flow of nerve energy (much more than electricity) from the senses on one side of the body, resulting in a simultaneous opposing charge. The shifting of this potential from one side to the other drives the processes of thought.

Focal Consciousness and the Mid-line

We mirror the world around us in our brains. The internal modeling process is part of a feedback loop that uses two inputs of the same information to provoke a spark of electrical discharge between the hemispheres. That discharge gives rise to a third field, the vehicle of our focus, which exists at the level of aetheric quanta layers, a realm where the self is without boundary.

The ability to move the focal consciousness field is not limited to the physical body, and the projection of focus or “attention” externally is as normal as reading, watching a basketball game or driving a car, all examples of external focus.

The division between the auditory and visual systems that is evidenced by brain research derives from the distinct differences between light and sound. Sound travels about one-quarter mile per second, while light travels 186,000 miles per second, about 750,000 times faster (186,000 ÷ 0.25 = 744,000).

Even though all of our nerves function at the same speed, the dramatic difference in the sources of input necessitates quite different processing systems. The fact that most people “hear” a voice in their heads and can “see” with their imagination indicates that our minds have synthesized a remarkably effective system of fields to deal with this incredible disparity in source speeds, and the subsequent realistic modeling.

For more information related to these math design ideas please see these pages:

The SCIET Functional Cosmology integrates the ideas of SCIET Dynamics into an explanation that progresses in stages related to the evolution of the Universe.

A Physicist’s Physicist Ponders the Nature of Reality

Edward Witten reflects on the meaning of dualities in physics and math, emergent space-time, and the pursuit of a complete description of nature.

Among the brilliant theorists cloistered in the quiet woodside campus of the Institute for Advanced Study in Princeton, New Jersey, Edward Witten stands out as a kind of high priest. The sole physicist ever to win the Fields Medal, mathematics’ premier prize, Witten is also known for discovering M-theory, the leading candidate for a unified physical “theory of everything.” A genius’s genius, Witten is tall and rectangular, with hazy eyes and an air of being only one-quarter tuned in to reality until someone draws him back from more abstract thoughts.

During a visit this fall, I spotted Witten on the Institute’s central lawn and requested an interview; in his quick, alto voice, he said he couldn’t promise to be able to answer my questions but would try. Later, when I passed him on the stone paths, he often didn’t seem to see me.

Physics luminaries since Albert Einstein, who lived out his days in the same intellectual haven, have sought to unify gravity with the other forces of nature by finding a more fundamental quantum theory to replace Einstein’s approximate picture of gravity as curves in the geometry of space-time. M-theory, which Witten proposed in 1995, could conceivably offer this deeper description, but only some aspects of the theory are known. M-theory incorporates within a single mathematical structure all five versions of string theory, which renders the elements of nature as minuscule vibrating strings. These five string theories connect to each other through “dualities,” or mathematical equivalences. Over the past 30 years, Witten and others have learned that the string theories are also mathematically dual to quantum field theories — descriptions of particles moving through electromagnetic and other fields that serve as the language of the reigning “Standard Model” of particle physics. While he’s best known as a string theorist, Witten has discovered many new quantum field theories and explored how all these different descriptions are connected. His physical insights have led time and again to deep mathematical discoveries.

Researchers pore over his work and hope he’ll take an interest in theirs. But for all his scholarly influence, Witten, who is 66, does not often broadcast his views on the implications of modern theoretical discoveries. Even his close colleagues eagerly suggested questions they wanted me to ask him.

When I arrived at his office at the appointed hour on a summery Thursday last month, Witten wasn’t there. His door was ajar. Papers covered his coffee table and desk — not stacks, but floods: text oriented every which way, some pages close to spilling onto the floor. (Research papers get lost in the maelstrom as he finishes with them, he later explained, and every so often he throws the heaps away.) Two girls smiled out from a framed photo on a shelf; children’s artwork decorated the walls, one celebrating Grandparents’ Day. When Witten arrived minutes later, we spoke for an hour and a half about the meaning of dualities in physics and math, the current prospects of M-theory, what he’s reading, what he’s looking for, and the nature of reality. The interview has been condensed and edited for clarity.

Kids’ drawings in Edward Witten’s office.
Jean Sweep for Quanta Magazine

Physicists are talking more than ever lately about dualities, but you’ve been studying them for decades. Why does the subject interest you?

People keep finding new facets of dualities. Dualities are interesting because they frequently answer questions that are otherwise out of reach. For example, you might have spent years pondering a quantum theory and you understand what happens when the quantum effects are small, but textbooks don’t tell you what you do if the quantum effects are big; you’re generally in trouble if you want to know that. Frequently dualities answer such questions. They give you another description, and the questions you can answer in one description are different than the questions you can answer in a different description.

What are some of these newfound facets of dualities?

It’s open-ended because there are so many different kinds of dualities. There are dualities between a gauge theory [a theory, such as a quantum field theory, that respects certain symmetries] and another gauge theory, or between a string theory for weak coupling [describing strings that move almost independently from one another] and a string theory for strong coupling. Then there’s AdS/CFT duality, between a gauge theory and a gravitational description. That duality was discovered 20 years ago, and it’s amazing to what extent it’s still fruitful. And that’s largely because around 10 years ago, new ideas were introduced that rejuvenated it. People had new insights about entropy in quantum field theory — the whole story about “it from qubit.”

That’s the idea that space-time and everything in it emerges like a hologram out of information stored in the entangled quantum states of particles.

Yes. Then there are dualities in math, which can sometimes be interpreted physically as consequences of dualities between two quantum field theories. There are so many ways these things are interconnected that any simple statement I try to make on the fly, as soon as I’ve said it I realize it didn’t capture the whole reality. You have to imagine a web of different relationships, where the same physics has different descriptions, revealing different properties. In the simplest case, there are only two important descriptions, and that might be enough. If you ask me about a more complicated example, there might be many, many different ones.

Given this web of relationships and the issue of how hard it is to characterize all the dualities, do you feel that this reflects a lack of understanding of the structure, or is it that we’re seeing the structure, only it’s very complicated?

I’m not certain what we should hope for. Traditionally, quantum field theory was constructed by starting with the classical picture [of a smooth field] and then quantizing it. Now we’ve learned that there are a lot of things that happen that that description doesn’t do justice to. And the same quantum theory can come from different classical theories. Now, Nati Seiberg [a theoretical physicist who works down the hall] would possibly tell you that he has faith that there’s a better formulation of quantum field theory that we don’t know about that would make everything clearer. I’m not sure how much you should expect that to exist. That would be a dream, but it might be too much to hope for; I really don’t know.

There’s another curious fact that you might want to consider, which is that quantum field theory is very central to physics, and it’s actually also clearly very important for math. But it’s extremely difficult for mathematicians to study; the way physicists define it is very hard for mathematicians to follow with a rigorous theory. That’s extremely strange, that the world is based so much on a mathematical structure that’s so difficult.

Ed Witten sitting in a chair by the window.
Jean Sweep for Quanta Magazine

What do you see as the relationship between math and physics?

I prefer not to give you a cosmic answer but to comment on where we are now. Physics in quantum field theory and string theory somehow has a lot of mathematical secrets in it, which we don’t know how to extract in a systematic way. Physicists are able to come up with things that surprise the mathematicians. Because it’s hard to describe mathematically in the known formulation, the things you learn about quantum field theory you have to learn from physics.

I find it hard to believe there’s a new formulation that’s universal. I think it’s too much to hope for. I could point to theories where the standard approach really seems inadequate, so at least for those classes of quantum field theories, you could hope for a new formulation. But I really can’t imagine what it would be.

You can’t imagine it at all? 

No, I can’t. Traditionally it was thought that interacting quantum field theory couldn’t exist above four dimensions, and there was the interesting fact that that’s the dimension we live in. But one of the offshoots of the string dualities of the 1990s was that it was discovered that quantum field theories actually exist in five and six dimensions. And it’s amazing how much is known about their properties. 

I’ve heard about the mysterious (2,0) theory, a quantum field theory describing particles in six dimensions, which is dual to M-theory describing strings and gravity in seven-dimensional AdS space. Does this (2,0) theory play an important role in the web of dualities?

Yes, that’s the pinnacle. In terms of conventional quantum field theory without gravity, there is nothing quite like it above six dimensions. From the (2,0) theory’s existence and main properties, you can deduce an incredible amount about what happens in lower dimensions. An awful lot of important dualities in four and fewer dimensions follow from this six-dimensional theory and its properties. However, whereas what we know about quantum field theory is normally from quantizing a classical field theory, there’s no reasonable classical starting point of the (2,0) theory. The (2,0) theory has properties [such as combinations of symmetries] that sound impossible when you first hear about them. So you can ask why dualities exist, but you can also ask why is there a 6-D theory with such and such properties? This seems to me a more fundamental restatement. 

Dualities sometimes make it hard to maintain a sense of what’s real in the world, given that there are radically different ways you can describe a single system. How would you describe what’s real or fundamental?

What aspect of what’s real are you interested in? What does it mean that we exist? Or how do we fit into our mathematical descriptions?

The latter.

Well, one thing I’ll tell you is that in general, when you have dualities, things that are easy to see in one description can be hard to see in the other description. So you and I, for example, are fairly simple to describe in the usual approach to physics as developed by Newton and his successors. But if there’s a radically different dual description of the real world, maybe some things physicists worry about would be clearer, but the dual description might be one in which everyday life would be hard to describe.

What would you say about the prospect of an even more optimistic idea that there could be one single quantum gravity description that really does help you in every case in the real world?

Well, unfortunately, even if it’s correct I can’t guarantee it would help. Part of what makes it difficult to help is that the description we have now, even though it’s not complete, does explain an awful lot. And so it’s a little hard to say, even if you had a truly better description or a more complete description, whether it would help in practice.

Are you speaking of M-theory?

M-theory is the candidate for the better description.

You proposed M-theory 22 years ago. What are its prospects today?

Personally, I thought it was extremely clear it existed 22 years ago, but the level of confidence has got to be much higher today because AdS/CFT has given us precise definitions, at least in AdS space-time geometries. I think our understanding of what it is, though, is still very hazy. AdS/CFT and whatever’s come from it is the main new perspective compared to 22 years ago, but I think it’s perfectly possible that AdS/CFT is only one side of a multifaceted story. There might be other equally important facets.

Ed Witten drawing on the chalkboard
Jean Sweep for Quanta Magazine

What’s an example of something else we might need?

Maybe a bulk description of the quantum properties of space-time itself, rather than a holographic boundary description. There hasn’t been much progress in a long time in getting a better bulk description. And I think that might be because the answer is of a different kind than anything we’re used to. That would be my guess.

Are you willing to speculate about how it would be different?

I really doubt I can say anything useful. I guess I suspect that there’s an extra layer of abstractness compared to what we’re used to. I tend to think that there isn’t a precise quantum description of space-time — except in the types of situations where we know that there is, such as in AdS space. I tend to think, otherwise, things are a little bit murkier than an exact quantum description. But I can’t say anything useful.

The other night I was reading an old essay by the 20th-century Princeton physicist John Wheeler. He was a visionary, certainly. If you take what he says literally, it’s hopelessly vague. And therefore, if I had read this essay when it came out 30 years ago, which I may have done, I would have rejected it as being so vague that you couldn’t work on it, even if he was on the right track.

You’re referring to “Information, Physics, Quantum,” Wheeler’s 1989 essay propounding the idea that the physical universe arises from information, which he dubbed “it from bit.” Why were you reading it?

I’m trying to learn about what people are trying to say with the phrase “it from qubit.” Wheeler talked about “it from bit,” but you have to remember that this essay was written probably before the term “qubit” was coined and certainly before it was in wide currency. Reading it, I really think he was talking about qubits, not bits, so “it from qubit” is actually just a modern translation.

Don’t expect me to be able to tell you anything useful about it — about whether he was right. When I was a beginning grad student, they had a series of lectures by faculty members to the new students about theoretical research, and one of the people who gave such a lecture was Wheeler. He drew a picture on the blackboard of the universe visualized as an eye looking at itself. I had no idea what he was talking about. It’s obvious to me in hindsight that he was explaining what it meant to talk about quantum mechanics when the observer is part of the quantum system. I imagine there is something we don’t understand about that.

Observing a quantum system irreversibly changes it, creating a distinction between past and future. So the observer issue seems possibly related to the question of time, which we also don’t understand. With the AdS/CFT duality, we’ve learned that new spatial dimensions can pop up like a hologram from quantum information on the boundary. Do you think time is also emergent — that it arises from a timeless complete description?

I tend to assume that space-time and everything in it are in some sense emergent. By the way, you’ll certainly find that that’s what Wheeler expected in his essay. As you’ll read, he thought the continuum was wrong in both physics and math. He did not think one’s microscopic description of space-time should use a continuum of any kind — neither a continuum of space nor a continuum of time, nor even a continuum of real numbers. On the space and time, I’m sympathetic to that. On the real numbers, I’ve got to plead ignorance or agnosticism. It is something I wonder about, but I’ve tried to imagine what it could mean to not use the continuum of real numbers, and the one logician I tried discussing it with didn’t help me.

Do you consider Wheeler a hero?

I wouldn’t call him a hero, necessarily, no. Really I just became curious what he meant by “it from bit,” and what he was saying. He definitely had visionary ideas, but they were too far ahead of their time. I think I was more patient in reading a vague but inspirational essay than I might have been 20 years ago. He’s also got roughly 100 interesting-sounding references in that essay. If you decided to read them all, you’d have to spend weeks doing it. I might decide to look at a few of them.


Why do you have more patience for such things now?

I think when I was younger I always thought the next thing I did might be the best thing in my life. But at this point in life I’m less persuaded of that. If I waste a little time reading somebody’s essay, it doesn’t seem that bad.

Do you ever take your mind off physics and math?

My favorite pastime is tennis. I am a very average but enthusiastic tennis player.

In contrast to Wheeler, it seems like your working style is to come to the insights through the calculations, rather than chasing a vague vision. 

In my career I’ve only been able to take small jumps. Relatively small jumps. What Wheeler was talking about was an enormous jump. And he does say at the beginning of the essay that he has no idea if this will take 10, 100 or 1,000 years.

And he was talking about explaining how physics arises from information.

Yes. The way he phrases it is broader: He wants to explain the meaning of existence. That was actually why I thought you were asking if I wanted to explain the meaning of existence.

I see. Does he have any hypotheses?

No. He only talks about things you shouldn’t do and things you should do in trying to arrive at a more fundamental description of physics.

Do you have any ideas about the meaning of existence?

No. [Laughs.]

Correction: This article was updated on Nov. 29, 2017, to clarify that M-theory is the leading candidate for a unified theory of everything. Other ideas have been proposed that also claim to unify the fundamental forces.

This article was reprinted on Wired.com.

How Holography Could Help Solve Quantum Gravity

In the latest campaign to reconcile Einstein’s theory of gravity with quantum mechanics, many physicists are studying how a higher dimensional space that includes gravity arises like a hologram from a lower dimensional particle theory.

How does gravity work at the particle level? The question has stumped physicists since the two bedrock theories of general relativity (Albert Einstein’s equations envisioning gravity as curves in the geometry of space-time) and quantum mechanics (equations that describe particle interactions) revolutionized the discipline about a century ago.

One challenge to solving the problem lies in the relative weakness of gravity compared with the strong, weak and electromagnetic forces that govern the subatomic realm. Though gravity exerts an unmistakable influence on macroscopic objects like orbiting planets, leaping sharks and everything else we physically experience, it produces a negligible effect at the particle level, so physicists can’t test or study how it works at that scale.

Confounding matters, the two sets of equations don’t play well together. General relativity paints a continuous picture of space-time while in quantum mechanics everything is quantized in discrete chunks. Their incompatibility leads physicists to suspect that a more fundamental theory is needed to unify all four forces of nature and describe them at all scales.

One relatively recent approach to understanding quantum gravity makes use of a “holographic duality” from string theory called the AdS-CFT correspondence. Our latest In Theory video explains how this correspondence connects a lower dimensional particle theory to a higher dimensional space that includes gravity.

A Jewel at the Heart of Quantum Physics

Physicists have discovered a jewel-shaped geometric object that challenges the notion that space and time are fundamental constituents of nature.

The new object dramatically simplifies calculations of particle interactions and challenges the idea that space and time are fundamental components of reality.

“This is completely new and very much simpler than anything that has been done before,” said Andrew Hodges, a mathematical physicist at Oxford University who has been following the work.

The revelation that particle interactions, the most basic events in nature, may be consequences of geometry significantly advances a decades-long effort to reformulate quantum field theory, the body of laws describing elementary particles and their interactions. Interactions that were previously calculated with mathematical formulas thousands of terms long can now be described by computing the volume of the corresponding jewel-like “amplituhedron,” which yields an equivalent one-term expression.

“The degree of efficiency is mind-boggling,” said Jacob Bourjaily, a theoretical physicist at Harvard University and one of the researchers who developed the new idea. “You can easily do, on paper, computations that were infeasible even with a computer before.”

The new geometric version of quantum field theory could also facilitate the search for a theory of quantum gravity that would seamlessly connect the large- and small-scale pictures of the universe. Attempts thus far to incorporate gravity into the laws of physics at the quantum scale have run up against nonsensical infinities and deep paradoxes. The amplituhedron, or a similar geometric object, could help by removing two deeply rooted principles of physics: locality and unitarity.

“Both are hard-wired in the usual way we think about things,” said Nima Arkani-Hamed, a professor of physics at the Institute for Advanced Study in Princeton, N.J., and the lead author of the new work, which he is presenting in talks and in a forthcoming paper. “Both are suspect.”

Locality is the notion that particles can interact only from adjoining positions in space and time. And unitarity holds that the probabilities of all possible outcomes of a quantum mechanical interaction must add up to one. The concepts are the central pillars of quantum field theory in its original form, but in certain situations involving gravity, both break down, suggesting neither is a fundamental aspect of nature.
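
In textbook form (standard quantum field theory conventions, nothing specific to this work), locality means the interaction terms of a theory are built from fields multiplied at a single space-time point, while unitarity is the statement that the scattering matrix S preserves total probability:

$$ S^\dagger S = \mathbb{1}, \qquad \text{equivalently} \qquad \sum_f \big|\langle f | S | i \rangle\big|^2 = 1 \ \text{ for every initial state } i. $$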

In keeping with this idea, the new geometric approach to particle interactions removes locality and unitarity from its starting assumptions. The amplituhedron is not built out of space-time and probabilities; these properties merely arise as consequences of the jewel’s geometry. The usual picture of space and time, and particles moving around in them, is a construct.

“It’s a better formulation that makes you think about everything in a completely different way,” said David Skinner, a theoretical physicist at Cambridge University.

The amplituhedron itself does not describe gravity. But Arkani-Hamed and his collaborators think there might be a related geometric object that does. Its properties would make it clear why particles appear to exist, and why they appear to move in three dimensions of space and to change over time.

Because “we know that ultimately, we need to find a theory that doesn’t have” unitarity and locality, Bourjaily said, “it’s a starting point to ultimately describing a quantum theory of gravity.”

Clunky Machinery

The amplituhedron looks like an intricate, multifaceted jewel in higher dimensions. Encoded in its volume are the most basic features of reality that can be calculated, “scattering amplitudes,” which represent the likelihood that a certain set of particles will turn into certain other particles upon colliding. These numbers are what particle physicists calculate and test to high precision at particle accelerators like the Large Hadron Collider in Switzerland.

The 60-year-old method for calculating scattering amplitudes — a major innovation at the time — was pioneered by the Nobel Prize-winning physicist Richard Feynman. He sketched line drawings of all the ways a scattering process could occur and then summed the likelihoods of the different drawings. The simplest Feynman diagrams look like trees: The particles involved in a collision come together like roots, and the particles that result shoot out like branches. More complicated diagrams have loops, where colliding particles turn into unobservable “virtual particles” that interact with each other before branching out as real final products. There are diagrams with one loop, two loops, three loops and so on — increasingly baroque iterations of the scattering process that contribute progressively less to its total amplitude. Virtual particles are never observed in nature, but they were considered mathematically necessary for unitarity — the requirement that probabilities sum to one.

“The number of Feynman diagrams is so explosively large that even computations of really simple processes weren’t done until the age of computers,” Bourjaily said. A seemingly simple event, such as two subatomic particles called gluons colliding to produce four less energetic gluons (which happens billions of times a second during collisions at the Large Hadron Collider), involves 220 diagrams, which collectively contribute thousands of terms to the calculation of the scattering amplitude.

In 1986, it became apparent that Feynman’s apparatus was a Rube Goldberg machine.

To prepare for the construction of the Superconducting Super Collider in Texas (a project that was later canceled), theorists wanted to calculate the scattering amplitudes of known particle interactions to establish a background against which interesting or exotic signals would stand out. But even 2-gluon to 4-gluon processes were so complex, a group of physicists had written two years earlier, “that they may not be evaluated in the foreseeable future.”

Stephen Parke and Tomasz Taylor, theorists at Fermi National Accelerator Laboratory in Illinois, took that statement as a challenge. Using a few mathematical tricks, they managed to simplify the 2-gluon to 4-gluon amplitude calculation from several billion terms to a 9-page-long formula, which a 1980s supercomputer could handle. Then, based on a pattern they observed in the scattering amplitudes of other gluon interactions, Parke and Taylor guessed a simple one-term expression for the amplitude. It was, the computer verified, equivalent to the 9-page formula. In other words, the traditional machinery of quantum field theory, involving hundreds of Feynman diagrams worth thousands of mathematical terms, was obfuscating something much simpler. As Bourjaily put it: “Why are you summing up millions of things when the answer is just one function?”
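
The one-term expression Parke and Taylor guessed is now textbook material. In modern spinor-helicity notation (a standard rendering, with coupling factors and the overall momentum-conserving delta function stripped off), the tree-level amplitude for n gluons of which exactly two, i and j, carry negative helicity is

$$ A_n\!\left(i^-, j^-\right) \;=\; \frac{\langle i\,j\rangle^4}{\langle 1\,2\rangle\,\langle 2\,3\rangle \cdots \langle n\,1\rangle}, $$

where each $\langle a\,b\rangle$ is a spinor inner product built from the momenta of gluons a and b.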

“We knew at the time that we had an important result,” Parke said. “We knew it instantly. But what to do with it?”

The Amplituhedron

The message of Parke and Taylor’s single-term result took decades to interpret. “That one-term, beautiful little function was like a beacon for the next 30 years,” Bourjaily said. It “really started this revolution.”

In the mid-2000s, more patterns emerged in the scattering amplitudes of particle interactions, repeatedly hinting at an underlying, coherent mathematical structure behind quantum field theory. Most important was a set of formulas called the BCFW recursion relations, named for Ruth Britto, Freddy Cachazo, Bo Feng and Edward Witten. Instead of describing scattering processes in terms of familiar variables like position and time and depicting them in thousands of Feynman diagrams, the BCFW relations are best couched in terms of strange variables called “twistors,” and particle interactions can be captured in a handful of associated twistor diagrams. The relations gained rapid adoption as tools for computing scattering amplitudes relevant to experiments, such as collisions at the Large Hadron Collider. But their simplicity was mysterious.
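
Schematically (a textbook rendering, not the notation of the original papers), BCFW builds an n-particle amplitude out of smaller on-shell amplitudes sewn together across an internal propagator:

$$ A_n \;=\; \sum_{I} \hat{A}_L(z_I)\, \frac{1}{P_I^2}\, \hat{A}_R(z_I), $$

where the sum runs over ways of splitting the external particles into two groups, $P_I$ is the momentum flowing between them, and the hats indicate two external momenta shifted by a complex amount $z_I$ chosen to put the intermediate particle on shell.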

“The terms in these BCFW relations were coming from a different world, and we wanted to understand what that world was,” Arkani-Hamed said. “That’s what drew me into the subject five years ago.”

With the help of leading mathematicians such as Pierre Deligne, Arkani-Hamed and his collaborators discovered that the recursion relations and associated twistor diagrams corresponded to a well-known geometric object. In fact, as detailed in a paper posted to arXiv.org in December by Arkani-Hamed, Bourjaily, Cachazo, Alexander Goncharov, Alexander Postnikov and Jaroslav Trnka, the twistor diagrams gave instructions for calculating the volume of pieces of this object, called the positive Grassmannian.

A sketch of the amplituhedron representing an 8-gluon particle interaction. Using Feynman diagrams, the same calculation would take roughly 500 pages of algebra. (Illustration: Nima Arkani-Hamed)

Named for Hermann Grassmann, a 19th-century German linguist and mathematician who studied its properties, “the positive Grassmannian is the slightly more grown-up cousin of the inside of a triangle,” Arkani-Hamed explained. Just as the inside of a triangle is a region in a two-dimensional space bounded by intersecting lines, the simplest case of the positive Grassmannian is a region in an N-dimensional space bounded by intersecting planes. (N is the number of particles involved in a scattering process.)
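
In more formal terms (standard definitions, not the paper’s own notation), a point of the Grassmannian Gr(k, n) is a k-dimensional plane through the origin of n-dimensional space, which can be written as a k × n matrix C considered up to row operations; the positive part is cut out by demanding that every maximal minor be positive:

$$ \mathrm{Gr}_{+}(k,n) \;=\; \big\{\, C \in \mathbb{R}^{k \times n} : \text{all } k \times k \text{ minors of } C > 0 \,\big\} \,\big/\, \mathrm{GL}(k). $$

For k = 1 the minors are just the entries of a row vector, and “all entries positive,” taken up to overall rescaling, is exactly the interior of a triangle when n = 3.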

It was a geometric representation of real particle data, such as the likelihood that two colliding gluons will turn into four gluons. But something was still missing.

The physicists hoped that the amplitude of a scattering process would emerge purely and inevitably from geometry, but locality and unitarity were dictating which pieces of the positive Grassmannian to add together to get it. They wondered whether the amplitude was “the answer to some particular mathematical question,” said Trnka, a post-doctoral researcher at the California Institute of Technology. “And it is,” he said.

Arkani-Hamed and Trnka discovered that the scattering amplitude equals the volume of a brand-new mathematical object — the amplituhedron. The details of a particular scattering process dictate the dimensionality and facets of the corresponding amplituhedron. The pieces of the positive Grassmannian that were being calculated with twistor diagrams and then added together by hand were building blocks that fit together inside this jewel, just as triangles fit together to form a polygon.

Like the twistor diagrams, the Feynman diagrams are another way of computing the volume of the amplituhedron piece by piece, but they are much less efficient. “They are local and unitary in space-time, but they are not necessarily very convenient or well-adapted to the shape of this jewel itself,” Skinner said. “Using Feynman diagrams is like taking a Ming vase and smashing it on the floor.”

Arkani-Hamed and Trnka have been able to calculate the volume of the amplituhedron directly in some cases, without using twistor diagrams to compute the volumes of its pieces. They have also found a “master amplituhedron” with an infinite number of facets, analogous to a circle in 2-D, which has an infinite number of sides. Its volume represents, in theory, the total amplitude of all physical processes. Lower-dimensional amplituhedra, which correspond to interactions between finite numbers of particles, live on the faces of this master structure.

“They are very powerful calculational techniques, but they are also incredibly suggestive,” Skinner said. “They suggest that thinking in terms of space-time was not the right way of going about this.”

Quest for Quantum Gravity

The seemingly irreconcilable conflict between gravity and quantum field theory enters crisis mode in black holes. Black holes pack a huge amount of mass into an extremely small space, making gravity a major player at the quantum scale, where it can usually be ignored. Inevitably, either locality or unitarity is the source of the conflict.

“We have indications that both ideas have got to go,” Arkani-Hamed said. “They can’t be fundamental features of the next description,” such as a theory of quantum gravity.

String theory, a framework that treats particles as invisibly small, vibrating strings, is one candidate for a theory of quantum gravity that seems to hold up in black hole situations, but its relationship to reality is unproven — or at least confusing. Recently, a strange duality has been found between string theory and quantum field theory, indicating that the former (which includes gravity) is mathematically equivalent to the latter (which does not) when the two theories describe the same event as if it is taking place in different numbers of dimensions. No one knows quite what to make of this discovery. But the new amplituhedron research suggests space-time, and therefore dimensions, may be illusory anyway.

“We can’t rely on the usual familiar quantum mechanical space-time pictures of describing physics,” Arkani-Hamed said. “We have to learn new ways of talking about it. This work is a baby step in that direction.”

Even without unitarity and locality, the amplituhedron formulation of quantum field theory does not yet incorporate gravity. But researchers are working on it. They say scattering processes that include gravity particles may be possible to describe with the amplituhedron, or with a similar geometric object. “It might be closely related but slightly different and harder to find,” Skinner said.

Physicists must also prove that the new geometric formulation applies to the exact particles that are known to exist in the universe, rather than to the idealized quantum field theory they used to develop it, called maximally supersymmetric Yang-Mills theory. This model, which includes a “superpartner” particle for every known particle and treats space-time as flat, “just happens to be the simplest test case for these new tools,” Bourjaily said. “The way to generalize these new tools to [other] theories is understood.”

Beyond making calculations easier or possibly leading the way to quantum gravity, the discovery of the amplituhedron could cause an even more profound shift, Arkani-Hamed said. That is, giving up space and time as fundamental constituents of nature and figuring out how the Big Bang and cosmological evolution of the universe arose out of pure geometry.

“In a sense, we would see that change arises from the structure of the object,” he said. “But it’s not from the object changing. The object is basically timeless.”

While more work is needed, many theoretical physicists are paying close attention to the new ideas.

The work is “very unexpected from several points of view,” said Witten, a theoretical physicist at the Institute for Advanced Study. “The field is still developing very fast, and it is difficult to guess what will happen or what the lessons will turn out to be.”

Note: This article was updated on December 10, 2013, to include a link to the first in a series of papers on the amplituhedron.

This article was reprinted on Wired.com.

Locality and unitarity are the central pillars of quantum field theory, but as the following thought experiments show, both break down in certain situations involving gravity. This suggests physics should be formulated without either principle.

Locality says that particles interact at points in space-time. But suppose you want to inspect space-time very closely. Probing smaller and smaller distance scales requires ever higher energies, but at a certain scale, called the Planck length, the picture gets blurry: So much energy must be concentrated into such a small region that the energy collapses the region into a black hole, making it impossible to inspect. “There’s no way of measuring space and time separations once they are smaller than the Planck length,” said Arkani-Hamed. “So we imagine space-time is a continuous thing, but because it’s impossible to talk sharply about that thing, then that suggests it must not be fundamental — it must be emergent.”

Unitarity says the quantum mechanical probabilities of all possible outcomes of a particle interaction must sum to one. To prove it, one would have to observe the same interaction over and over and count the frequencies of the different outcomes. Doing this to perfect accuracy would require an infinite number of observations using an infinitely large measuring apparatus, but the latter would again cause gravitational collapse into a black hole. In finite regions of the universe, unitarity can therefore only be approximately known.
Posted by Sc13t4, 0 comments
Physicists Hunt for the Big Bang’s Triangles

Once upon a time, about 13.8 billion years ago, our universe sprang from a quantum speck, ballooning to one million trillion trillion trillion trillion trillion trillion (about 10^78) times its initial volume (by some estimates) in less than a billionth of a trillionth of a trillionth of a second. It then continued to expand at a mellower rate, in accordance with the known laws of physics.

So goes the story of cosmic inflation, the modern version of the Big Bang theory. That single short, outrageous growth spurt fits all existing cosmological data well and accounts for the universe’s largeness, smoothness, flatness and lack of preferred direction. But as an explanation of how and why the universe began, inflation falls short. The questions it raises — why the growth spurt happened, how it happened, what (if anything) occurred beforehand — have confounded cosmologists since the theory emerged in the 1980s. “We have very strong evidence that there was this period of inflation,” said Matthew Kleban, a cosmologist at New York University. “But we have no idea — or we have many, many ideas — too many ideas — what inflation was, fundamentally.”

To understand the origin of the universe, today’s cosmologists seek to identify the unknown driver of inflation, dubbed the “inflaton.” Often envisioned as a field of energy permeating space and driving it apart, the inflaton worked, experts say, like a clock. With each tick, it doubled the size of the universe, keeping nearly perfect time — until it stopped. Theorists like Kleban, then, are the clocksmiths, devising altogether hundreds of different models that might replicate the clockwork of the Big Bang.
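
The clock metaphor has a standard mathematical form (generic inflationary cosmology, not any one model): during inflation the scale factor a grows exponentially, so the universe doubles in size at fixed intervals of time,

$$ a(t) \propto e^{Ht}, \qquad t_{\text{double}} = \frac{\ln 2}{H}, $$

with the Hubble rate H nearly constant. Matching the observed universe requires some 50 to 60 “e-folds” of this growth, meaning a increases by a factor of roughly $e^{60}$.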

Like many cosmological clocksmiths, Kleban is an expert in string theory — the dominant candidate for a “theory of everything” that attempts to describe nature across all distances, times and energies. The known equations of physics falter when applied to the tiny, fleeting and frenzied environment of the Big Bang, in which they struggle to cram an enormous amount of energy into infinitesimal space and time. But string theory flourishes in this milieu, positing extra spatial dimensions that diffuse the energy. Familiar point particles become, at this highest energy and zoom level, one-dimensional “strings” and higher-dimensional, membranous “branes,” all of which traverse a 10-dimensional landscape. These vibrating, undulating gears may have powered the Big Bang’s clock.

At his office on a recent afternoon, Kleban sketched his latest inflaton design on the blackboard. First, he drew a skinny cylinder to depict the string landscape. Its length represented the three spatial dimensions of macroscopic reality, and its circumference signified the six other spatial dimensions that string theory says exist, but which are too small to see. On the side of the cylinder, he drew a circle. This is Kleban’s timepiece: a membrane that bubbles into being and naturally expands. As its inflating interior forms a new universe, its energy incrementally ticks down in clocklike fashion each time the expanding circle winds around the cylinder’s circumference and overlaps itself. When the energy of the “brane” dilutes, the clock stops ticking, and inflation ends. It’s a scheme that some string cosmologists have hailed for its economy. “I think it’s pretty plausible that some version of this happens,” he said.

Though Kleban acknowledges that it’s too soon to tell whether he or anyone else is on to something, plans are under way to find out.

The record of the inflaton’s breakneck ticking can be read in the distribution of galaxies, galaxy clusters and superclusters that span the cosmos. These structures (and everything in them, including you) are artifacts of “mistakes in the clock,” as Matias Zaldarriaga, a cosmologist at the Institute for Advanced Study (IAS) in Princeton, N.J., put it. That is, time is intrinsically uncertain, and so the universe inflated at slightly different rates in different places and moments, producing density variations throughout. The jitter in time can also be thought of as a jitter in energy that occurred as pairs of particles spontaneously surfaced all over an “inflaton field” and stretched apart like two points on an inflating balloon. These particles were the seeds that gravity grew into galactic structures over the course of eons. The pairs of structures spanning the largest distances in the sky today came from the earliest quantum fluctuations during inflation, while structures that are closer together were produced later. This nested distribution across all cosmic distance scales “is telling you in detail that the clock was ticking,” said Nima Arkani-Hamed, a theoretical physicist at IAS. “But it doesn’t tell you anything about what it was made of.”

To reverse-engineer the clockwork, cosmologists are seeking a new kind of data. Their calculations indicate that galaxies and other structures are not merely randomly spread out in pairs across the sky; instead, they have a slight tendency to be arranged in more complex configurations: triangles, rectangles, pentagons and all manner of other shapes, which trace back not just to quantum jitter in the Big Bang’s clock, but to a much more meaningful turning of the gears.

Teasing out the cosmological triangles and other shapes — which have been named “non-Gaussianities” to contrast them with the Gaussian bell curve of randomly distributed pairs of structures — will require more precise observations of the cosmos than have been made to date. And so plans are being laid for a timeline of increasingly sensitive experiments. “We’re going to have far more information than we have now, and sensitivity to far subtler effects than we can probe now,” said Marc Kamionkowski, a cosmologist at Johns Hopkins University. In the meantime, theorists are making significant progress in determining what shapes to look for and how to look for them. “There’s been a great renaissance of understanding,” said Eva Silverstein, a string cosmologist at Stanford University who devised the dimensional-winding mechanism used by Kleban, as well as many clock designs of her own.

The rigorous study of non-Gaussianities took off in 2002, when Juan Maldacena, a revered, monklike theorist at IAS, calculated what’s known as the “gravitational floor”: the minimum number of triangles and other shapes that are guaranteed to exist in the sky, due to the unavoidable effect of gravity during cosmic inflation. Cosmologists had been struggling to calculate the gravitational floor for more than a decade, since it would provide a concrete goal for experimenters. If the floor is reached, and still no triangles are detected, Maldacena explained, “then inflation is wrong.”
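
Maldacena’s floor can be stated quantitatively. For inflation driven by a single field, he derived a consistency relation fixing the three-point signal in the “squeezed” limit in terms of the measured tilt $n_s$ of the power spectrum; in the usual $f_{NL}$ parametrization (a well-known result quoted here from the literature, not from this article),

$$ f_{NL} \;=\; \tfrac{5}{12}\,(1 - n_s) \;\approx\; 0.015 \quad \text{for the observed } n_s \approx 0.965, $$

a signal of a few parts in a hundred, which sets the sensitivity experimenters must ultimately reach.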

When Maldacena first calculated the gravitational floor, actually detecting it seemed a distant goal indeed. At the time, all precise knowledge of the universe’s birth came from observations of the “cosmic microwave background” — the oldest light in the sky, which illuminates a two-dimensional slice of the infant universe as it appeared 380,000 years after the Big Bang. Based on the limited number of nascent structures that appear in this 2-D snapshot, it seemed impossible that their slight propensity to be configured in triangles and other shapes could ever be detected with statistical certainty. But Maldacena’s work gave theorists the tools to calculate other, more pronounced forms of non-Gaussianity that might exist in the sky, due to stronger effects than gravity. And it motivated researchers to devise better ways to search for the signals.

A year after Maldacena made his calculation, Zaldarriaga and collaborators showed that measuring the distribution of galaxies and groupings of galaxies that make up the universe’s “large-scale structure” would yield many more shapes than observing the cosmic microwave background. “It’s a 3-D versus 2-D argument,” said Olivier Doré, a cosmologist at NASA’s Jet Propulsion Laboratory who is working on a proposed search for non-Gaussianities in the large-scale structure. “If you start counting triangles in 3-D like you can do with galaxy surveys, there are really many more you can count.”

The notion that counting more shapes in the sky will reveal more details of the Big Bang is implied in a central principle of quantum physics known as “unitarity.” Unitarity dictates that the probabilities of all possible quantum states of the universe must add up to one, now and forever; thus, information, which is stored in quantum states, can never be lost — only scrambled. This means that all information about the birth of the cosmos remains encoded in its present state, and the more precisely cosmologists know the latter, the more they can learn about the former.

But how did details of the Big Bang get encoded in triangles and other shapes? According to Zaldarriaga, Maldacena’s calculation “opened up the understanding of how it comes about.” In a universe governed by quantum mechanics, all of nature’s constituents are cross-wired, morphing into and interacting with one another with varying degrees of probability. This includes the inflaton field, the gravitational field, and whatever else existed in the primordial universe: Particles arising in these fields would have morphed into and scattered with each other to produce triangles and other geometric configurations, like billiard balls scattering on a table.


These dynamical events would be mixed in with the more mundane quantum jitter from those particle pairs that popped up in the inflaton field and engendered so-called “two-point correlations” throughout the sky. A pair of particles might, for instance, have surfaced in some other primordial field, and one member of this pair might then have decayed into two inflaton particles while the other decayed into just a single inflaton particle, yielding a three-point correlation, or triangle, in the sky. Or, two mystery particles might have collided and split into four inflaton particles, producing a four-point correlation. Rarer events would have yielded five-point, six-point and even higher-point correlations, with their numbers, sizes and interior angles encoding the types and relationships of the particles that produced them. The unitarity principle promises that by tallying the shapes ever more precisely, cosmologists will achieve an increasingly detailed account of the primordial universe, just as physicists at the Large Hadron Collider in Europe hone their theory of the known particles and look for evidence of new ones by collecting statistics on how particles morph and scatter during collisions.
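
The shape language has a precise statistical meaning. A perfectly Gaussian field is fully characterized by its two-point correlation (the power spectrum); the leading non-Gaussian signal is the three-point correlation, or bispectrum, whose three wave vectors are forced by momentum conservation to close into a triangle, which is why cosmologists literally speak of triangles in the sky. Schematically (standard definitions):

$$ \langle \zeta_{\mathbf{k}_1} \zeta_{\mathbf{k}_2} \rangle \propto \delta^3(\mathbf{k}_1 + \mathbf{k}_2)\, P(k_1), \qquad \langle \zeta_{\mathbf{k}_1} \zeta_{\mathbf{k}_2} \zeta_{\mathbf{k}_3} \rangle \propto \delta^3(\mathbf{k}_1 + \mathbf{k}_2 + \mathbf{k}_3)\, B(k_1, k_2, k_3), $$

where ζ is the primordial curvature perturbation and P and B are the power spectrum and bispectrum. Four-point correlations define quadrilaterals in the same way, and so on up the hierarchy.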

Following Maldacena’s calculation of the gravitational floor, other researchers demonstrated that even many simple inflationary models generate much more pronounced non-Gaussianity than the bare minimum. Clocksmiths like Silverstein and Kleban have since been busy working out the distinct set of triangles that their models would produce — predictions that will become increasingly testable in the coming years. Progress accelerated in 2014, when a small experiment based at the South Pole appeared to make a momentous discovery about the universe’s birth. The announcement drummed up interest in cosmological triangles, even though the supposed discovery ultimately proved a grave disappointment.

As news began to spread on March 17, 2014, that the “smoking gun” of cosmic inflation had been detected, Stanford University’s press office posted a celebratory video on YouTube. In the footage, the cosmologist Andrei Linde, one of the decorated pioneers of inflationary cosmology, and his wife, the string and supergravity theorist and cosmologist Renata Kallosh, answer their door to find their Stanford colleague Chao-Lin Kuo on the doorstep, accompanied by a camera crew.

“It’s five sigma, at point two,” Kuo says in the video.

“Discovery?” Kallosh asks, after a beat. She hugs Kuo, almost melting, as Linde exclaims, “What?”

Viewers learn that BICEP2, an experiment co-led by Kuo, has detected a swirl pattern in the cosmic microwave background that would have been imprinted by ripples in space-time known as “primordial gravitational waves.” And these could only have arisen during cosmic inflation, as corkscrew-like particles popped up in the gravitational field and then became stretched and permanently frozen into the shape of the universe.

In the next scene, Linde sips champagne with his wife and their guest. In the early 1980s, Linde, Alexei Starobinsky, Alan Guth and other young cosmologists devised the theory of cosmic inflation as a patch for the broken 1930s-era Big Bang theory, which described the universe as expanding outward from a “singularity” — a nonsensical point of infinite density — and couldn’t explain why the universe hadn’t become mottled and contorted as it grew. Cosmic inflation provided a clever fix for these problems, and BICEP2’s finding suggested that the theory was conclusively proved. “If this is true,” Linde says to the camera, “this is a moment of understanding of nature of such a magnitude that it just overwhelms. Let’s see. Let’s just hope that this is not a trick.”

To many researchers, the most exciting thing about the alleged discovery was the strength of the swirl signal, measured as r = 0.2. The measurement indicated that inflation occurred at an extremely high energy scale and at the earliest moments in time, near the time-energy domain where gravity, as well as the effects of strings, branes or other exotica, would have been strong. The higher the energy scale of inflation, the more cross-wiring there would be between the inflaton and these other primordial ingredients. The result would be pronounced triangles and other non-Gaussianities in the sky.
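
The connection between r and the energy scale is direct. A commonly quoted relation from the inflation literature (not stated in the article) ties the height of the inflaton potential V to the tensor-to-scalar ratio:

$$ V^{1/4} \;\approx\; 1.06 \times 10^{16}\ \mathrm{GeV} \left(\frac{r}{0.01}\right)^{1/4}, $$

so r = 0.2 would have placed inflation at roughly $2 \times 10^{16}$ GeV, near the scales where string effects are expected to appear.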

“After BICEP, we all stopped what we were doing and started thinking about inflation,” Arkani-Hamed said. “Inflation is like having a gigantic particle accelerator at much higher energy scales than you can get to on Earth.” The question became how such an accelerator would operate, he said, “and if there really was exotic stuff up there [near the inflation scale], how you could go about looking for it.”

As these investigations took off, more details of BICEP2’s analysis emerged. It became clear that the discovery was indeed a trick of nature: The team’s telescope at the South Pole had picked up the swirly glow of galactic dust rather than the effect of primordial gravitational waves. A mix of anguish and anger swept through the field. Two years on, primordial gravitational waves still haven’t been detected. In January, BICEP2’s successor, the BICEP/Keck Array, reported that the value of r can be no more than 0.07, which lowers the ceiling on the energy scale of inflation and moves it further below the scale of strings or other exotic physics.

Nonetheless, many researchers were now aware of the potential gold mine of information contained in triangles and other non-Gaussianities. It had become apparent that these fossils from inflation were worth digging for, even if they were buried deeper than BICEP2 had briefly promised. “Yeah, r went down a little bit,” Maldacena said. But it’s not so bad, in his opinion: A relatively high scale is still possible.

In a paper last spring that drew on previous work by other researchers, Maldacena and Arkani-Hamed used symmetry arguments to show that a key feature of string theory could manifest itself in triangles. String theory predicts an infinite tower of “higher-spin states” — essentially, strings vibrating at an infinitely rising sequence of pitches. So far, no fundamental particles with a “spin” value greater than two have been discovered. Maldacena and Arkani-Hamed showed that the existence of such a higher-spin state would result in alternating peaks and troughs in the strength of the signal produced by triangles in the sky as they grow more elongated. For string theorists, this is exciting. “You can’t build a consistent interacting theory of such a particle except if you have an infinite tower of them” like the tower in string theory, explained Daniel Baumann, a theoretical cosmologist at the University of Amsterdam. Finding the oscillatory pattern in the triangles in the sky would confirm that this tower exists. “Just seeing one particle of spin greater than two would be indicative of string theory being present.”

Other researchers are pursuing similarly general predictions. In February, Kamionkowski and collaborators reported detailed information about primordial particles that is encoded in the geometry of four-point correlations, which “get interesting,” he said, because four points can lie flat or sweep into the third dimension. Observing the signals predicted by Arkani-Hamed, Maldacena and Kamionkowski would be like striking gold, but the gold is buried deep: Their strength is probably near the gravitational floor and will require at least 1,000 times the sensitivity of current equipment to detect. Other researchers prefer to tinker with bespoke string models that predict more pronounced triangles and other shapes. “So far we’ve explored only, I think, a very small fraction of the possibilities for non-Gaussianity,” Kamionkowski said.

Meanwhile, Linde and Kallosh are pushing in a totally different direction. Over the past three years, they’ve become enamored with a class of models called “cosmological alpha-attractors” that do not predict any non-Gaussianities above the gravitational floor at all. According to these models, cosmic inflation was completely pure, driven by a solitary inflaton field. The field is described by a Kähler manifold, which maps onto the geometric disk seen in Escher’s drawing of angels and devils. The Escherian geometry provides a continuum of possible values for the energy scale of inflation, including values so low that the inflaton’s cross-wiring to the gravitational field and other primordial fields would be extremely weak. If such a model does describe the universe, then swirls, triangles and other shapes might never be detected.

Linde isn’t bothered by this. In supporting the alpha-attractor models, he and Kallosh are staking a position in favor of simplicity and theoretical beauty, at the expense of ever knowing for sure whether their cosmological origin story is correct. An alpha-attractor universe, Linde said, is like one of the happy families in the famous opening line of Anna Karenina. As he paraphrased Tolstoy: “Any happy family, well, they look in a sense alike. But all unhappy families — they’re unhappy for different reasons.”

▽▷△

Will our universe turn out to be “happy” and completely free of distinguishing features? Baumann, who co-authored a book last year on string cosmology, argues that models like Linde’s and Kallosh’s are too simple to be plausible. “They are building these models from the bottom up,” he said. “Introducing a single field, trying to be very minimal — it would have been a beautiful model of the world.” But, he said, when you try to embed inflation into a fundamental theory of nature, it’s very hard to engineer a single field acting by itself, immune to the effects of everything else. “String theory has many of these effects; you can’t ignore them.”

And so the search for triangles and other non-Gaussianities is under way. Between 2009 and 2013, the Planck space telescope mapped the cosmic microwave background at the highest resolution yet, and scientists have since been scouring the map for statistical excesses of triangles and other shapes. As of their most recent analysis, they haven’t found any; given the sensitivity of their instruments and their 2-D searching ground, they only ever had an outside chance of doing so. But the scientists are continuing to parse the data in new ways, with another non-Gaussianity analysis expected this year.

Hiranya Peiris, an astrophysicist at University College London who searches for non-Gaussianities in the Planck data, said that she and her collaborators are taking cues from string cosmologists in determining which signals to look for. Peiris is keen to test a string-inflationary mechanism called axion monodromy, including variants recently developed by Silverstein and collaborators Raphael Flauger, Mehrdad Mirbabayi, and Leonardo Senatore that generate an oscillatory pattern in triangles as a function of their size that can be much more pronounced than the pattern studied by Arkani-Hamed and Maldacena. To find such a signal, Peiris and her team must construct templates of the pattern and match them with the data “in a very numerically intensive and demanding analysis,” she said. “Then we have to do careful statistical tests to make sure we are not being fooled by random fluctuations in the data.”

Some string models have already been ruled out by this data analysis. Regarding the public debate about whether string theory is too divorced from empirical testing to count as science, Silverstein said, “I find it surreal, because we are currently doing some traditional science with string theory.”

Now under construction, the Large Synoptic Survey Telescope in Chile will be used to map 20 billion cosmological objects starting in 2023.
LSST Project/NSF/AURA

Moving forward, cosmologists plan to scour ever larger volumes of the universe’s large-scale structure. Starting in 2020, the proposed SPHEREx mission could measure non-Gaussianity sensitively enough in the distribution of 300 million galaxies to determine whether inflation was driven by one clock or two cross-wired clocks (according to models of the theory known as single- and multi-field inflation, respectively). “Just to reach this level would dramatically reduce the number of possible inflation theories,” said Doré, who is working on the SPHEREx project. A few years further out, the Large Synoptic Survey Telescope will map 20 billion cosmological structures. If the statistical presence of triangles is not detected in the universe’s large-scale structure, there is yet another, perhaps final, approach. By mapping an ultra-faint radio signal called the 21-centimeter line, which is emitted by hydrogen atoms and traces back to the creation of the first stars, cosmologists would be able to measure even more “modes,” or arrangements of structures. “It’s going to have information about the whole volume of the universe,” Maldacena said.

If or when triangles show up, they will, one by one, reveal the nature of the inflaton clock and why it ticked. But will enough clues be gathered before we run out of sky in which to gather them?

The promise of unitarity — that information can be scrambled but never lost — comes with a caveat.

“If we assume we can make perfect measurements and we have an infinite sky and so on,” Maldacena said, “then in principle all the interactions and information about particles during inflation is contained in these correlators” — that is, three-point correlations, four-point correlations and so on. But perfect measurements are impossible. And worse, the sky is finite. There is a cosmic horizon: the farthest distance from which light has had time to reach us, and thus, beyond which we cannot see. During inflation, and over the entire history of the accelerating expansion of the universe since then, swirls, triangles, quadrilaterals and other shapes have been flying past this horizon and out of sight. And with them, the subtlest of signals, associated with the rarest, highest-energy processes during inflation, are lost: Cosmologists will never be able to gather enough statistics in our finite patch of sky to tease them out, precluding a complete accounting of nature’s fundamental constituents.

In his paper with Maldacena, Arkani-Hamed initially included a discussion of this issue, but he removed most of it. He finds the possibility of a limit to knowledge “tremendously disturbing” and sees it as evidence that quantum mechanics must be extended. One possible way to do this is suggested by his work on the amplituhedron, which casts quantum mechanical probabilities (and with them, unitarity) as emergent consequences of an underlying geometry. He plans to discuss this possibility in a forthcoming paper that will relate an analogue of the amplituhedron to non-Gaussianities in the sky.

People vary in the extent to which they are bothered by a limit to knowledge. “I’m more practical,” Zaldarriaga said. “There are, like, tens or many tens of orders of magnitude more modes that in principle we could see, that we have not been able to measure just because of technological or theoretical inability. So, these ‘in principle’ questions are interesting, but we are way before this point.”

Kleban also feels hopeful. “Yeah, it’s a finite amount of information,” he said. “But you could say the same thing about evolution, right? There’s a limited number of fossils, and yet we have a pretty good idea of what happened, and it’s getting better and better.”

If all goes well, enough fossils will turn up in the sky to tell a more complete story. A vast searching ground awaits.

Correction: This article was revised on April 19, 2016, to reflect that the Keck Array is BICEP2’s successor, not its predecessor, and on April 21, 2016, to correct the spelling of Mehrdad Mirbabayi’s surname.

This article was reprinted on Wired.com.

Posted by Sc13t4, 0 comments
A Chemist Shines Light on a Surprising Prime Number Pattern

About a year ago, the theoretical chemist Salvatore Torquato met with the number theorist Matthew de Courcy-Ireland to explain that he had done something highly unorthodox with prime numbers, those positive integers that are divisible only by 1 and themselves.

A professor of chemistry at Princeton University, Torquato normally studies patterns in the structure of physical systems, such as the arrangement of particles in crystals, colloids and even, in one of his better-known results, a pack of M&Ms. In his field, a standard way to deduce structure is to diffract X-rays off things. When hit with X-rays, disorderly molecules in liquids or glass scatter them every which way, creating no discernible pattern. But the symmetrically arranged atoms in a crystal reflect light waves in sync, producing periodic bright spots where reflected waves constructively interfere. The spacing of these bright spots, known as “Bragg peaks” after the father-and-son crystallographers who pioneered diffraction in the 1910s, reveals the organization of the scattering objects.
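
The geometry behind those peaks is Bragg’s law: reflections off parallel planes of atoms spaced a distance d apart interfere constructively only when the extra path length is a whole number of wavelengths,

$$ n\lambda = 2d\sin\theta, $$

so the angles θ of the bright spots directly encode the spacings d inside the crystal.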

By Natalie Wolchover at Quanta Magazine

Torquato told de Courcy-Ireland, a final-year graduate student at Princeton who had been recommended by another mathematician, that a year before, on a hunch, he had performed diffraction on sequences of prime numbers. Hoping to highlight the elusive order in the distribution of the primes, he and his student Ge Zhang had modeled them as a one-dimensional sequence of particles — essentially, little spheres that can scatter light. In computer experiments, they bounced light off long prime sequences, such as the million-or-so primes starting from 10,000,000,019. (They found that this “Goldilocks interval” contains enough primes to produce a strong signal without their getting too sparse to reveal an interference pattern.)

It wasn’t clear what kind of pattern would emerge or if there would be one at all. Primes, the indivisible building blocks of all natural numbers, skitter erratically up the number line like the bounces of a skipping rock, stirring up deep questions in their wake. “They are in many ways pretty hard to tell apart from a random sequence of numbers,” de Courcy-Ireland said. Although mathematicians have uncovered many rules over the centuries about the primes’ spacings, “it’s very difficult to find any clear pattern, so we just think of them as ‘something like random.’”

But in three new papers — one by Torquato, Zhang and the computational chemist Fausto Martelli that was published in the Journal of Physics A in February, and two others co-authored with de Courcy-Ireland that have not yet been peer-reviewed — the researchers report that the primes, like crystals and unlike liquids, produce a diffraction pattern.

“What’s beautiful about this is it gives us a crystallographer’s view of what the primes look like,” said Henry Cohn, a mathematician at Microsoft Research New England and the Massachusetts Institute of Technology.

The resulting pattern of Bragg peaks is not quite like anything seen before, implying that the primes, as a physical system, “are a completely new category of structures,” Torquato said. The Princeton researchers have dubbed the fractal-like pattern “effective limit-periodicity.”

It consists of a periodic sequence of bright peaks, which reflect the most common spacings of primes: All of them (except 2) are at odd-integer positions on the number line, multiples of two apart. Those brightest peaks are interspersed at regular intervals with less bright peaks, reflecting primes that are separated by multiples of six on the number line. These have dimmer peaks between them corresponding to farther-apart pairs of primes, and so on in an infinitely dense nesting of Bragg peaks.
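
A minimal numerical sketch of the idea (my own toy reconstruction, not the Princeton group’s code, and over a far smaller window than their “Goldilocks” interval): treat each prime as a point scatterer and evaluate the structure factor at selected wavenumbers. The brightest peak sits at k = π, reflecting the period-2 spacing of the odd numbers, with dimmer peaks at rational fractions of it:

```python
# Toy reconstruction (illustrative assumptions throughout): treat each
# prime p_j in a window as a point scatterer and evaluate the structure
# factor S(k) = |sum_j exp(i k p_j)|^2 / N at selected wavenumbers.
import numpy as np

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = np.ones(n + 1, dtype=bool)
    sieve[:2] = False
    for q in range(2, int(n**0.5) + 1):
        if sieve[q]:
            sieve[q * q :: q] = False
    return np.nonzero(sieve)[0]

# Primes in a modest window (the paper used ~1e6 primes above 1e10).
lo, hi = 1_000_000, 1_100_000
p = primes_up_to(hi)
p = p[p >= lo].astype(np.float64)
N = len(p)
print(f"N = {N} primes in [{lo}, {hi}]")

# Peaks sit at rational multiples of pi; a generic k gives only noise.
# Expected heights (roughly): S(pi) ~ N, S(pi/3) ~ N/4, S(pi/5) ~ N/16.
for label, kval in [("pi   (spacing 2)", np.pi),
                    ("pi/3 (spacing 6)", np.pi / 3),
                    ("pi/5", np.pi / 5),
                    ("generic k", 1.2345)]:
    S = np.abs(np.exp(1j * kval * p).sum()) ** 2 / N
    print(f"k = {label:18s}  S(k) = {S:10.1f}")
```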

Dense Bragg peaks have been seen before, in the diffraction patterns of quasicrystals, those strange materials discovered in the 1980s with symmetric but nonrepeating atomic arrangements. In the primes’ case, though, distances between peaks are fractions of one another, unlike quasicrystals’ irrationally spaced Bragg peaks. “The primes are actually suggesting a completely different state of particle positions that are like quasicrystals but are not like quasicrystals,” Torquato said.

In computer experiments, theoretical chemists have diffracted light off long sequences of prime numbers to reveal the hidden order underlying their seemingly erratic distribution. The primes produce a fractal-like diffraction pattern that is similar to, yet different from, that of quasicrystals.

Lucy Reading-Ikkanda/Quanta Magazine; Crystal diffraction pattern by Sven.hovmoeller; Quasicrystal diffraction pattern by Materialscientist

According to numerous number theorists interviewed, there’s no reason to expect the Princeton team’s findings to trigger advances in number theory. Most of the relevant mathematics has been seen before in other guises. Indeed, when Torquato showed his plots and formulas to de Courcy-Ireland last spring (at the suggestion of Cohn), the young mathematician quickly saw that the prime diffraction pattern “can be explained in terms of almost universally accepted conjectures in number theory.”

It was the first of many meetings between the two at the Institute for Advanced Study in Princeton, N.J., where Torquato was spending a sabbatical. The chemist told de Courcy-Ireland that he could use his formula to predict the frequency of “twin primes,” which are pairs of primes separated by two, like 17 and 19. The mathematician replied that Torquato could in fact predict all other separations as well. The formula for the Bragg peaks was mathematically equivalent to the Hardy-Littlewood k-tuple conjecture, a powerful statement made by the English mathematicians Godfrey Hardy and John Littlewood in 1923 about which “constellations” of primes can exist. One rule forbids three consecutive odd-numbered primes after {3, 5, 7}, since one in the set will always be divisible by three, as in {7, 9, 11}. This rule illustrates why the second-brightest peaks in the primes’ diffraction pattern come from pairs of primes separated by six, rather than four.

Hardy and Littlewood’s conjecture further specified how often all the allowed prime constellations will occur along the number line. Even the simplest case of Hardy-Littlewood, the “twin primes conjecture,” although it has seen a burst of modern progress, remains unproved. Because prime diffraction essentially reformulates it, experts say it’s highly unlikely to lead to a proof of Hardy-Littlewood, or for that matter the famous Riemann hypothesis, an 1859 formula linking the primes’ distribution to the “critical zeros” of the Riemann zeta function.
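
That simplest case predicts a precise density of twin primes (the standard statement, quoted for concreteness): the number of twin prime pairs up to x should grow as

$$ \pi_2(x) \;\sim\; 2C_2 \int_2^x \frac{dt}{(\ln t)^2}, \qquad C_2 = \prod_{p \ge 3}\left(1 - \frac{1}{(p-1)^2}\right) \approx 0.6602. $$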

The findings resonate, however, in a relatively young research area called “aperiodic order,” essentially the study of nonrepeating patterns, which lies at the intersection of crystallography, dynamical systems, harmonic analysis and discrete geometry, and grew after the discovery of quasicrystals. “Techniques that were originally developed for understanding crystals … became vastly diversified with the discovery of quasicrystals,” said Marjorie Senechal, a mathematical crystallographer at Smith College. “People began to realize they suddenly had to understand much, much more than just the simple straightforward periodic diffraction,” she said, “and this has become a whole field, aperiodic order. Uniting this with number theory is just extremely exciting.”

Graphic Illustration: Tiles lock together to form an aperiodic tessellation, in which the sequence of tile orientations never repeats. Markings on the tiles match up to generate an infinite hierarchy of larger and larger triangles — an example of a “limit periodic” pattern.

Adapted from Parcly Taxel

The primes’ pattern resembles a kind of aperiodic order known since at least the 1950s called limit periodicity, “while adding a surprising twist,” Cohn said. In true limit-periodic systems, periodic spacings are nested in an infinite hierarchy, so that within any interval, the system contains parts of patterns that repeat only in a larger interval. An example is the tessellation of a strange, multipronged shape called the Taylor-Socolar tile, discovered by the Australian amateur mathematician Joan Taylor in the 1990s, and analyzed in detail with Joshua Socolar of Duke University in 2010. According to Socolar, computer experiments indicate that limit-periodic phases of matter should be able to form in nature, and calculations suggest such systems might have unusual properties. No one guessed a connection to the primes. They are “effectively” limit periodic — a new kind of order — because the synchronicities in their spacings only hold statistically across the whole system.

For his part, de Courcy-Ireland wants to better understand the “Goldilocks” scale at which effective limit-periodicity emerges in the primes. In 1976, Patrick Gallagher of Columbia University showed that the primes’ spacings look random over short intervals; longer strips are needed for their pattern to emerge. In the new diffraction studies, de Courcy-Ireland and his chemist collaborators analyzed a quantity called an “order metric” that controls the presence of the limit-periodic pattern. “You can identify how long the interval has to be before you start seeing this quantity grow,” he said. He is intrigued that this same interval length also shows up in a different prime number rule called Maier’s theorem. But it’s too soon to tell whether this thread will lead anywhere.

The main advantage of the prime diffraction pattern, said Jonathan Keating of the University of Bristol, is that “it is evocative” and “makes a connection with different ways of thinking.” But the esteemed number theorist Andrew Granville of the University of Montreal called Torquato and company’s work “pretentious” and “just a regurgitation of known ideas.”

Torquato isn’t especially concerned about how his work will be perceived by number theorists. He has found a way to glimpse the pattern of the primes. “I actually think it’s stunning,” he said. “It’s a shock.”

Posted by Sc13t4, 0 comments
Quantum Physics May Be Even Spookier Than You Think

A new experiment hints at surprising hidden mechanics of quantum superpositions

It is the central question in quantum mechanics, and no one knows the answer: What really happens in a superposition—the peculiar circumstance in which particles seem to be in two or more places or states at once? Now, in a forthcoming paper, a team of researchers in Israel and Japan has proposed an experiment that could finally let us say something for sure about the nature of this puzzling phenomenon.

Their experiment, which the researchers say could be carried out within a few months, should enable scientists to sneak a glance at where an object—in this case a particle of light, called a photon—actually resides when it is placed in a superposition. And the researchers predict the answer will be even stranger and more shocking than “two places at once.”

The classic example of a superposition involves firing photons at two parallel slits in a barrier. One fundamental aspect of quantum mechanics is that tiny particles can behave like waves, so that those passing through one slit “interfere” with those going through the other, their wavy ripples either boosting or canceling one another to create a characteristic pattern on a detector screen. The odd thing, though, is this interference occurs even if only one particle is fired at a time. The particle seems somehow to pass through both slits at once, interfering with itself. That’s a superposition.
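
In the usual formalism (standard quantum mechanics, nothing specific to the new proposal), the particle’s state is an equal superposition of the left- and right-slit alternatives, and the detection probability contains a cross term:

$$ |\psi\rangle = \frac{|L\rangle + |R\rangle}{\sqrt{2}}, \qquad P(x) \;\propto\; |\psi_L(x)|^2 + |\psi_R(x)|^2 + 2\,\mathrm{Re}\!\left[\psi_L^*(x)\,\psi_R(x)\right]. $$

The cross term is the interference; without it, the fringes disappear.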

And it gets weirder: Measuring which slit such a particle goes through will invariably indicate it only goes through one—but then the wavelike interference (the “quantumness,” if you will) vanishes. The very act of measurement seems to “collapse” the superposition. “We know something fishy is going on in a superposition,” says physicist Avshalom Elitzur of the Israeli Institute for Advanced Research. “But you’re not allowed to measure it. This is what makes quantum mechanics so diabolical.”

For decades researchers have stalled at this apparent impasse. They cannot say exactly what a superposition is without looking at it; but if they try to look at it, it disappears. One potential solution—developed by Elitzur’s former mentor, Israeli physicist Yakir Aharonov, now at Chapman University, and his collaborators—suggests a way to deduce something about quantum particles before measuring them. Aharonov’s approach is called the two-state-vector formalism (TSVF) of quantum mechanics, and postulates that quantum events are in some sense determined by quantum states not just in the past—but also in the future. That is, the TSVF assumes quantum mechanics works the same way both forward and backward in time. From this perspective, causes can seem to propagate backward in time, occurring after their effects.

But one needn’t take this strange notion literally. Rather, in the TSVF one can gain retrospective knowledge of what happened in a quantum system by selecting the outcome: Instead of simply measuring where a particle ends up, a researcher chooses a particular location in which to look for it. This is called post-selection, and it supplies more information than any unconditional peek at outcomes ever could. This is because the particle’s state at any instant is being evaluated retrospectively in light of its entire history, up to and including measurement. The oddness comes in because it looks as if the researcher—simply by choosing to look for a particular outcome—then causes that outcome to happen. But this is a bit like concluding that if you turn on your television when your favorite program is scheduled, your action causes that program to be broadcast at that very moment. “It’s generally accepted that the TSVF is mathematically equivalent to standard quantum mechanics,” says David Wallace, a philosopher of science at the University of Southern California who specializes in interpretations of quantum mechanics. “But it does lead to seeing certain things one wouldn’t otherwise have seen.”
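
Concretely, the TSVF assigns the system two state vectors at once: the usual $|\psi\rangle$ evolving forward from preparation, and a second state $\langle\phi|$ evolving backward from the post-selected outcome. What a gentle (“weak”) measurement of an observable A then reads out is the weak value, in the standard Aharonov-Albert-Vaidman form:

$$ A_w = \frac{\langle \phi | A | \psi \rangle}{\langle \phi | \psi \rangle}, $$

which can land far outside A’s range of eigenvalues when the post-selected state is nearly orthogonal to the prepared one.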

Take, for instance, a version of the double-slit experiment devised by Aharonov and co-worker Lev Vaidman in 2003, which they interpreted with the TSVF. The pair described (but did not build) an optical system in which a single photon acts as a “shutter” that closes a slit by causing another “probe” photon approaching the slit to be reflected back the way it came. By applying post-selection to the measurements of the probe photon, Aharonov and Vaidman showed, one could discern a shutter photon in a superposition closing both (or indeed arbitrarily many) slits simultaneously. In other words, this thought experiment would in theory allow one to say with confidence the shutter photon is both “here” and “there” at once. Although this situation seems paradoxical from our everyday experience, it is one well-studied aspect of the so-called “nonlocal” properties of quantum particles, where the whole notion of a well-defined location in space dissolves.

In 2016 physicists Ryo Okamoto and Shigeki Takeuchi of Kyoto University verified Aharonov and Vaidman’s predictions experimentally using a light-carrying circuit in which the shutter photon is created using a quantum router, a device that lets one photon control the route taken by another. “This was a pioneering experiment that allowed one to infer the simultaneous position of a particle in two places,” says Elitzur’s colleague Eliahu Cohen of the University of Ottawa in Ontario.

Now Elitzur and Cohen have teamed up with Okamoto and Takeuchi to concoct an even more mind-boggling experiment. They believe it will enable researchers to say with certainty something about the location of a particle in a superposition at a series of different points in time—before any actual measurement has been made.

This time the probe photon’s route would be split into three by partial mirrors. Along each of those paths it may interact with a shutter photon in a superposition. These interactions can be considered to take place within boxes labeled A, B and C, one of which is situated along each of the photon’s three possible routes. By looking at the self-interference of the probe photon, one can retrospectively conclude with certainty the shutter particle was in a given box at a specific time.


The experiment is designed so the probe photon can only show interference if it interacted with the shutter photon in a particular sequence of places and times: Namely, if the shutter photon was in both boxes A and C at some time (t1), then at a later time (t2) only in C, and at a still later time (t3) in both B and C. So interference in the probe photon would be a definitive sign the shutter photon made this bizarre, logic-defying sequence of disjointed appearances among the boxes at different times—an idea Elitzur, Cohen and Aharonov proposed as a possibility last year for a single particle spread across three boxes. “I like the way this paper frames questions about what is happening in terms of entire histories rather than instantaneous states,” says physicist Ken Wharton of San Jose State University, who is not involved in the new project. “Talking about ‘states’ is an old pervasive bias whereas full histories are generally far more rich and interesting.”
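The “single particle spread across three boxes” mentioned above has a well-known pencil-and-paper ancestor, Aharonov and Vaidman’s three-box paradox, whose arithmetic is short enough to verify directly. The sketch below uses the textbook pre- and post-selected states of that paradox; it is not the state preparation of the planned photonic experiment, which the article does not spell out.

```python
import numpy as np

# Three-box paradox (Aharonov & Vaidman): one particle, three boxes A, B, C.
A, B, C = np.eye(3, dtype=complex)   # basis states |A>, |B>, |C>

pre = (A + B + C) / np.sqrt(3)       # pre-selected state
post = (A + B - C) / np.sqrt(3)      # post-selected state

def projector(v):
    return np.outer(v, v.conj())     # "is the particle in this box?"

def weak_value(Op, pre, post):
    return (post.conj() @ Op @ pre) / (post.conj() @ pre)

for name, box in zip("ABC", (A, B, C)):
    print(name, round(weak_value(projector(box), pre, post).real, 3))
# Output: A 1.0, B 1.0, C -1.0
# Between pre- and post-selection the particle is found with certainty in A
# if you look in A, and with certainty in B if you look in B; box C carries
# occupation -1, and the three values still sum to one particle.
```

The negative occupation of box C is the formal hook for the “counterparticle” language that follows: an anomalous negative weak value acts in the equations like a presence that cancels another.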

That richness, Elitzur and colleagues argue, is what the TSVF gives access to. The apparent vanishing of particles in one place at one time—and their reappearance in other times and places—suggests a new and extraordinary vision of the underlying processes involved in the nonlocal existence of quantum particles. Through the lens of the TSVF, Elitzur says, this flickering, ever-changing existence can be understood as a series of events in which a particle’s presence in one place is somehow “canceled” by its own “counterparticle” in the same location. He compares this with a notion introduced by British physicist Paul Dirac in the 1920s: that particles possess antiparticles, and that a particle and its antiparticle can annihilate each other if brought together. That picture at first seemed just a manner of speaking but soon led to the discovery of antimatter. The disappearance of quantum particles is not “annihilation” in this same sense, but it is somewhat analogous—these putative counterparticles, Elitzur posits, should possess negative energy and negative mass, allowing them to cancel their counterparts.

So although the traditional “two places at once” view of superposition might seem odd enough, “it’s possible a superposition is a collection of states that are even crazier,” Elitzur says. “Quantum mechanics just tells you about their average.” Post-selection then allows one to isolate and inspect just some of those states at greater resolution, he suggests. Such an interpretation of quantum behavior would be, he says, “revolutionary”—because it would entail a hitherto unguessed menagerie of real (but very odd) states underlying counterintuitive quantum phenomena.

The researchers say conducting the actual experiment will require fine-tuning the performance of their quantum routers, but they hope to have their system ready to roll in three to five months. For now some outside observers are not exactly waiting with bated breath. “The experiment is bound to work,” says Wharton—but he adds it “won’t convince anyone of anything, since the results are predicted by standard quantum mechanics.” In other words, there would be no compelling reason to interpret the outcome in terms of the TSVF rather than one of the many other ways that researchers interpret quantum behavior.

Elitzur agrees their experiment could have been conceived using the conventional view of quantum mechanics that prevailed decades ago—but it never was. “Isn’t that a good indication of the soundness of the TSVF?” he asks. And if someone thinks they can formulate a different picture of “what is really going on” in this experiment using standard quantum mechanics, he adds, “well, let them go ahead!”

Stephen Hawking Reveals What Existed Before the Big Bang

In an interview with astrophysicist Neil deGrasse Tyson, iconic physicist Stephen Hawking recently revealed what he believes existed prior to the Big Bang.

“Nothing was around,” said Hawking, who fortunately elaborated on this point. “The Euclidean space-time is a closed surface without end, like the surface of the Earth,” he continued, referring to the four-dimensional conceptual model that incorporates the three dimensions of space with time. “One can regard imaginary and real time as beginning at the South Pole, which is a smooth point of space-time where the normal laws of physics hold. There is nothing south of the South Pole, so there was nothing around before the Big Bang.” At least, there was nothing around that humans can currently experience or conceptualize.

Hawking has offered some pessimistic assessments of the near future of our planet. He has predicted that Earth will become a ball of fire within the next 600 years and warned that humanity has less than a century to leave the planet before it becomes uninhabitable. He also warned about the existential dangers of artificial intelligence. “Computers can, in theory, emulate human intelligence, and exceed it,” he said in 2017. “Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.”

Via USA Today

