Month: December 2018

Where Do We Go When We Die?

Where do we go when we die? Signs that consciousness remains after death are increasing.

What happens when we die? Who really knows? But to deem these questions completely unanswerable is absurd in light of all the evidence that’s emerged over the past few decades. Sure, contemplating what happens after death can be a little too ‘out there’ for some people; it can even contradict belief systems we’ve held on to for so long, and with such a tight grip, that it becomes hard to entertain an alternate perspective backed by some type of credible evidence. It’s called cognitive dissonance.

There is nothing wrong with discovery, and throughout human history new discoveries have been denounced and ridiculed before eventually making their way into the mainstream. This is exactly what we are seeing with non-material science. The birth of quantum physics showed a strong relationship between consciousness and what we perceive as our physical material world. This is why founding fathers of quantum theory like Max Planck regarded “consciousness as fundamental” and matter as “derivative from consciousness,” and it’s why Nikola Tesla believed that humanity would not make giant leaps forward until it studies “non-physical” phenomena – subjects such as telepathy, clairvoyance, psychokinesis, remote viewing, near-death experiences (NDEs) and more.

Today, new discoveries like these can have huge implications; they can shake the foundations of the collective worldview and change global perception forever.

In the mainstream scientific community there is still harsh resistance to these phenomena, despite hundreds of peer-reviewed studies showing, in some cases, stronger statistically significant results than those of the ‘hard sciences,’ under the same controlled laboratory conditions. It goes to show that science today is in large part not about remaining neutral, but about pulling the curtain over results that challenge what we think we know about the nature of reality. As a result, we have a tremendous amount of scientific dogma rather than scientific truth.

In 1999, a statistics professor at UC Irvine published a paper showing that parapsychological experiments have produced much stronger results than those showing that a daily dose of aspirin helps prevent a heart attack. There are multiple other examples, most of them coming from the Department of Defense.

Below is a great quote that I’ve used multiple times – so my apologies if you’ve already seen it – but it really gets my point across:

“Despite the unrivalled empirical success of quantum theory, the very suggestion that it may be literally true as a description of nature is still greeted with cynicism, incomprehension and even anger.” – T. Folger, “Quantum Shmantum,” Discover 22:37-43, 2001

Dr. Gary Schwartz (University of Arizona) is one of hundreds of scientists who have gathered to emphasize that matter is not the only reality. You can read more about that in this article:

Distinguished Scientists Gather To Emphasize That Matter Is NOT The Only Reality

He sums up the problem quite well:

“Some materialistically inclined scientists and philosophers refuse to acknowledge these phenomena because they are not consistent with their exclusive conception of the world. Rejection of post-materialist investigation of nature or refusal to publish strong science findings supporting a post-materialist framework are antithetical to the true spirit of scientific inquiry, which is that empirical data must always be adequately dealt with. Data which do not fit favoured theories and beliefs cannot be dismissed a priori. Such dismissal is the realm of ideology, not science.” – Dr. Gary Schwartz

So, What Happens After We Die?

Contemplating where we come from, and where we are before and after death, has occupied people for thousands of years. The stories of our creation span the literature of all cultures throughout human history, from a variety of time periods, and if we look at the creation stories that predate religion, they all seem to be very similar, and very spiritual in nature. But what does modern-day research show us?

From a medical standpoint, death means that the heart stops, all brain activity and blood circulation come to a halt, and breathing ceases. It’s important to note that, on numerous occasions, individuals have been pronounced clinically dead, only to be revived and brought back to life via CPR and other mechanisms.

An article written for Newsweek explains:

“Modern resuscitation was a game-changer for emergency care, but it also blew apart our understanding of what it means to be dead. Without many people returning from the dead to show us otherwise, it was natural to assume, from a scientific perspective, that our consciousness dies at the same time as our bodies.”

Today, it’s a different story. Large studies have shown that a significant number of people who have been clinically dead experience some type of ‘awareness’ during that time. For example, one patient – a 57-year-old man at the time – despite being pronounced “dead” and completely unconscious, with no detectable biological activity, recalled watching the entire process of his resuscitation.

Scientists have also discovered something strange: it is “only after you die that the cells inside our bodies start to gradually go toward their own process of death,” Dr. Sam Parnia, director of critical care and resuscitation research at New York University Langone Medical Center, told Newsweek. “I’m not saying the brain still works or any part of you still works once you’ve died. But the cells don’t instantly switch from alive to dead. Actually, the cells are much more resilient to the heart stopping – to the person dying – than we used to understand.”

His published research in this area provides a number of examples as well.

Researchers in this field have found that the experiences patients report while dead are difficult to explain, and given how many people have had them, brushing them off as mere hallucinations is not completely valid. These experiences, as mentioned above, have also been verified by the doctors involved with the patients themselves.

“How these patients were able to describe objective events that took place while they were dead, we’re not exactly sure… But it does seem to suggest that when our brains and bodies die, our consciousness may not, or at least not right away… I don’t mean that people have their eyes open or that their brains are working after they die… that petrifies people. I’m saying we have a consciousness that makes up who we are – our selves, thoughts, feelings, emotions – and that entity, it seems, does not become annihilated just because we’ve crossed the threshold of death; it appears to keep functioning and not dissipate. How long it lingers, we can’t say.” – Dr. Sam Parnia

Below, you can watch a lecture in which one of the leading scientists in this field summarizes 50 years of research in this area.

First seen on: http://www.collective-evolution.com/2018/02/22/where-do-we-go-when-we-die-signs-that-consciousness-remains-after-death-are-increasing/

Related CE Articles:

Beyond Space & Time: Quantum Theory Suggests Consciousness Moves On After Death

Quantum Theory Sheds Light on Life After Death

Is Consciousness A Product of the Brain or a Receiver of It? 

 

Posted by Sc13t4, 0 comments
Synchronicities

Carl Jung’s ‘Synchronicities’ – is there meaning to this experience that makes us question the universe?

“Synchronicity is an ever-present reality for those that have eyes to see.” ~ Carl Jung

We’ve all had them – those moments when something happens that makes you ponder the role of design in the universe, and your own place within it. These moments can occur frequently when we are falling in love, engaging in artistic endeavours, or struggling with tragedy. Are things indeed “meant to be” at some deeper level? Or is the universe just an unfolding series of random events, occurring one after another, while our limited human minds desperately try to find the thread that links them together?

Synchronicity is the technical name given to the events I’m referring to. Carl Jung, the Swiss psychologist, coined the term in his 1951 essay on this topic. A synchronicity is, essentially, a meaningful coincidence. Something happens in the world around us that seems to defy probability and “normal” explanations.

The classic example is Jung’s own vignette in treating a particularly stubborn patient. He describes his talking sessions with her that delved into themes of her excessive rationality and rejection of any deeper meanings in the universe. As his patient was describing her feelings and a recent dream in which she was given a golden scarab, Jung heard a light tapping on the window behind him. The tapping persisted and Jung opened the window to find a large scarab beetle flying against the window. He caught it and handed it to her, saying, “here is your scarab.”

The scarab beetle is, according to Jung, a classic symbol of rebirth. So the dream scarab and the real world scarab beetle coincided to create a moment of transformation for the patient, who was able to overcome her problems.

I’ve been keeping a list of synchronicities from my own life for a few years now. Many are fairly trivial events that may best be explained as mere coincidence. One example: I bought a game on Amazon as a gift for my nephew. The game had 354 reviews. Right after this I bought Nelly’s song, “Just a Dream” (a great song), on iTunes. It also had 354 reviews. Is there any deeper meaning in these events? I doubt it! But one could stretch to find something if you wanted to.

A second example is a bit harder to dismiss as coincidence. I studied biology in college and have continued to read widely in evolutionary theory since finishing college in 1998. I’ve also published a few papers in this field since that time. I was reading a book on evolutionary theory and the strange but fascinating topic of bedbug sex came up. Female bedbugs don’t have vaginas — I know, it’s weird! Male bedbugs instead stab their penis into the female’s body, break through the carapace, and deposit sperm directly into the body cavity. I shook my head in wonder and went home shortly thereafter. When I got home from the coffee shop where I had been reading, I turned on a recording of “The Daily Show” with Jon Stewart and, lo and behold, the topic of bedbug sex came up! He showed a very funny and exquisitely weird skit by Isabella Rossellini demonstrating bedbug sex. I had never before heard about bedbug sex and here it came up twice in one day, in entirely unrelated contexts.

So what do these two episodes of bedbug sex offer in terms of deeper meaning? To be honest, I have no idea, but I can certainly speculate. I have been thinking and writing about sexual selection and other mechanisms of evolution for many years, and have developed a published theory that expands Darwin’s ideas on sexual selection. So perhaps I was somehow being encouraged to keep going on this path by my possibly synchronistic experience. It’s kind of a stretch, I know, but not entirely unreasonable.

Ok, one last example from my life, as an example of a strong synchronicity: I’ve been to Hawaii a number of times since late 2013, my primary motivation being to buy property there (I’m writing this essay in Hilo, Hawaii). I almost never talk to people next to me on the plane because I really enjoy the quiet time to read or work on writing projects, and because I’m afraid of being held captive in a boring conversation for many hours. The first trip to Hawaii, however, was with a woman I was dating at the time, so there was less risk of having to talk to the person next to us for the whole flight. I struck up a conversation on a whim with a woman seated by herself beside us, and it turned out that she lived on the Big Island and we learned a lot about it in our conversation. We all became friends after she invited us to her birthday party that week, and to this day we’re still friends and see each other often.

The second trip to Hawaii was a month later and I was traveling by myself this time. Another woman traveling solo was in the seat next to me, I again chose to strike up a conversation, and she was also quite interesting and friendly. She was visiting a good friend of hers who lived in Hilo. The same day we arrived in Hilo I was having dinner with the woman I met on my first trip and we ran into the second woman, who I’d just met on the plane that day, at the same restaurant, which is one of many in Hilo! I ended up hanging out with the second woman a couple of days later and we’re also still friends.

My third trip was a month later. I was again traveling alone and was going for three months this time. I was hoping to finally buy some property after scouting a lot on the first two trips, and also to research a novel I’m working on that is set on the Big Island. This time I was seated next to a guy traveling by himself who seemed to be in his late twenties or early thirties. Again, I struck up conversation; again, this was strange because I almost never speak to people on the plane. Again, we had great conversation and it turned out that he was a traveling nurse going to Hawaii for a three-month contract. We became great friends and had many adventures during my stay.

Anyway, to wrap up: three of three trips to Hawaii yielded good new friends and opportunities to learn a ton about the Big Island. Coincidence may still be a good explanation, but despite my hard-nosed scientific outlook on most things, I can’t help but wonder if mere coincidence may not be the best explanation here.

If we’re looking, instead, at these events from the point of view of synchronicity, the deeper meaning is fairly obvious to me: in some manner the universe seemed to be helping me to make a home in Hawaii. This is the correlation between external events and inner mental states that is the hallmark of synchronicity.

We could also look at these events as simply resulting from my excitement about going to Hawaii and a place that I was thinking about making a serious part of my life (I still live in Santa Barbara, but I split my time between Santa Barbara and my place near Hilo; paradise to paradise…). My excitement made me more talkative and more interested in people around me. Possibly. But it’s also quite unusual that people traveling solo, youngish, and interesting, would be seated next to me three times in a row.

I took a fourth trip to Hawaii in mid-2014 and I did not meet anyone interesting on the plane and didn’t even talk to the person next to me. But three out of four instances is still enough to make me scratch my head.

Explaining Synchronicity

So what’s going on with synchronistic experiences? First, let’s define our term carefully. Jung defined a synchronicity as a meaningful but acausal (not causally related) correlation between outer (physical) and inner (mental) events. A good shorthand is meaningful coincidence. The coincidence is between external events and inner meaning that matches those events in some way or was inspired by them.

Jung attempted to explain synchronicity through an appeal to the “collective unconscious.” This collective unconscious is described by Jung as either the sum of our unconscious minds held in common by all people or, more intriguingly, as a deeper level of reality that undergirds our physical world. Synchronicities bubble up from the collective unconscious, and are a goad to “individuation,” a key part of Jung’s teachings.

Jung suggested that the two halves of a synchronistic event share a similar root cause. So while the correlation itself is not causal – it is “acausal” – there is a deeper causal explanation for each half of the synchronistic event. Jung seemed to believe that the universe itself was attempting to teach some lesson or insight by offering up these meaningful coincidences.

Another intriguing possibility is that synchronistic experiences are suggestive of the idea that we — you, I, and everything around us — are part of a much larger mind. Just as in our own dreams events can happen that skirt the laws of physics or logic, if we are indeed part of a much larger mind, a much larger dream, then synchronistic experiences are the clues. This idea was sketched by the German writer Wilhelm von Scholz and mentioned by Jung in Synchronicity.

So What Does It All Mean?

Looking at the bigger picture, and not only my own candidates for synchronistic experiences, synchronicity is perhaps the most compelling reason for me personally to remain agnostic about a higher-level intelligence in our universe. I’m not a religious person. I’m not a Christian and I was a militant atheist for many years. I’ve shifted, however, in the last ten years to a softer stance on the big questions about God, spirituality and meaning.

I’ve written previously on the “anatomy of God,” describing how I find the evidence and rationale for a “God as Source” quite convincing. God as Source is the ground of being, apeiron, Akasha, the One, etc., that is the soil from which all things grow. The Source is not conscious. It is beyond the dichotomy of conscious/unconscious. It is pure Spirit.

God as Summit, a conscious being that may or may not take an interest in our lives or even our planet, is a different matter. The metaphysical system that I find most reasonable — a system known as process philosophy, with Alfred North Whitehead as its primary modern expositor — certainly has room for God as Summit. Whether God as Summit really exists, however, is a separate debate. If I had to bet on it, I’d bet that there is no God as Summit at this point. But I remain agnostic.

The synchronicities that have happened in my life are numerous and strange. They don’t add up necessarily to any compelling evidence for God as Summit, but they certainly do make me wonder.

Turning back to Jung’s famous scarab beetle example of synchronicity we must, to be fair and scientific, acknowledge that the beetle he caught wasn’t technically a scarab beetle; it was, instead, a scarabaeid beetle (common rose-chafer) whose “gold-green colour most nearly resembles that of a golden scarab” beetle, in Jung’s own words. It seems, then, that Jung was exerting some poetic license at the moment he gave the beetle to his patient and in his later description of the episode.

Does it matter that it wasn’t technically a scarab beetle? Clearly it didn’t matter to the patient, of whom Jung claims “this experience punctured the desired hole in her rationalism…” Would this have happened without Jung’s poetic license? We have no way of knowing. These details demonstrate that there is a large gray area with respect to synchronicities that each of us must navigate when assigning meaning to particular events.

This criticism aside, we all have surely had numerous synchronicities happen to us that demonstrate my broader points above: there are deep mysteries inherent in reality and we cannot, if we are to be scientific, ignore these mysteries and the dimly-perceived world of deeper meanings that synchronicities sometimes highlight in each of our lives.

——


First seen on: http://www.collective-evolution.com/2018/03/21/carl-jungs-synchronicities-is-there-meaning-to-this-experience-that-makes-us-question-the-universe/

Check out my book Eco. Ego. Eros.

Posted by Sc13t4, 0 comments
The Quantum Vacuum: How ‘Empty’ Space is actually the Seat of the Most Violent Physics

“A century from now, it will be well known that: the vacuum of space which fills the universe is itself the real substratum of the universe; vacuum in a circulating state becomes matter; the electron is the fundamental particle of matter and is a vortex of vacuum with a vacuum-less void at the center and it is dynamically stable; the speed of light relative to vacuum is the maximum speed that nature has provided and is an inherent property of the vacuum; vacuum is a subtle fluid unknown in material media; vacuum is mass-less, continuous, non-viscous, and incompressible and is responsible for all the properties of matter; and that vacuum has always existed and will exist forever… Then scientists, engineers and philosophers will bend their heads in shame knowing that modern science ignored the vacuum in our chase to discover reality for more than a century.”

The quote above comes from Paramahamsa Tewari, Inventor of what’s called the Reactionless AC Synchronous Generator (RLG).

What he says above has been the subject of discussion within the realms of physics and astronomy for decades. At the turn of the twentieth century, physicists started to explore the relationship between energy and the structure of matter. In doing so, the belief in a physical, Newtonian material universe that had been at the very heart of scientific knowing was dropped, and the realization that matter is nothing but an illusion replaced it. Scientists began to recognize that everything in the Universe is made out of energy.

Quantum physicists discovered that physical atoms are made up of vortices of energy that are constantly spinning and vibrating, each one radiating its own unique energy signature. This is also known as “the Vacuum” or “The Zero-Point Field.”

What’s even more fascinating is that the “stuff” within this space can be accessed and used. This was experimentally confirmed by the Casimir effect, which demonstrates zero-point or vacuum-state energy: it predicts that two metal plates placed very close together attract each other due to an imbalance in the quantum fluctuations between and around them. You can see a visual demonstration of this concept here. Before Casimir, these pockets of “nothing” were thought to be voids.
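
For readers who want the standard expression (a textbook result, not something from the article above): for two ideal, uncharged, parallel conducting plates separated by a distance d, the attractive pressure is

\[ \frac{F}{A} = -\frac{\pi^{2}\hbar c}{240\, d^{4}}, \]

where \hbar is the reduced Planck constant and c is the speed of light. The minus sign indicates attraction, and the steep 1/d^4 dependence is why the effect only becomes measurable when the plates are less than about a micron apart.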

Unfortunately, when contemplating the nature of our reality and what we perceive to be our physical world, the existence of the vacuum and what lies within what we call “space” is very much overlooked. I find it amusing that we’re still searching for the ‘God’ particle when a large amount of evidence points to the idea that most of what we refer to as “reality” is actually something we can’t perceive with our physical senses!

No point is more central than this, that space is not empty, it is the seat of the most violent physics – John Wheeler

It’s quite confusing, which is why I am posting the video below from one of the many people who spend their lives researching and experimenting with these concepts.

Below is a video of Nassim Haramein giving a TEDx talk at UCSD. Nassim currently leads teams of physicists, electrical engineers, mathematicians and other scientists to explore the frontier of unification principles and their implications. Haramein’s lifelong vision of applying unified physics to create positive change in the world today is reflected in the mission of The Resonance Project Foundation. He shares the developments of his research through scientific publications and educational offerings through the Resonance Academy.

Currently Nassim is focused on his most recent developments in quantum gravity and their applications to technology, new energy research, applied resonance, life sciences, permaculture, and consciousness studies. Nassim currently resides in Kauai compassionately raising his two young sons, and surfing the sunlit swells on the shores of the magnificent Hawaiian islands.

Here is an example of some of his published research, with co-authors, one of whom is Elizabeth A. Rauscher, an American physicist. She is a former researcher with the Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, the Stanford Research Institute, and NASA.

“Space is actually not empty and it’s full of energy…The energy in space is not trivial there’s a lot of it and we can actually calculate how much energy there is in that space and that reality might actually come out of it. Everything we see is actually emerging from that space.”

From: http://www.collective-evolution.com/2018/03/16/the-quantum-vacuum-how-empty-space-is-actually-the-seat-of-the-most-violent-physics/

Posted by Sc13t4, 0 comments
Why the Tiny Weight of Empty Space Is Such a Huge Mystery

The amount of energy infusing empty space seems too small to explain without a multiverse. But physicists have at least one alternative left to explore.

The controversial idea that our universe is just a random bubble in an endless, frothing multiverse arises logically from nature’s most innocuous-seeming feature: empty space. Specifically, the seed of the multiverse hypothesis is the inexplicably tiny amount of energy infused in empty space — energy known as the vacuum energy, dark energy or the cosmological constant. Each cubic meter of empty space contains only enough of this energy to light a lightbulb for 11-trillionths of a second. “The bone in our throat,” as the Nobel laureate Steven Weinberg once put it, is that the vacuum ought to be at least a trillion trillion trillion trillion trillion times more energetic, because of all the matter and force fields coursing through it. Somehow the effects of all these fields on the vacuum almost equalize, producing placid stillness. Why is empty space so empty?
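
As a rough sanity check on that figure (my own round numbers, not the article’s): the measured dark energy density is on the order of 6 × 10⁻¹⁰ joules per cubic meter, and a 60-watt bulb running for 11 trillionths of a second uses

\[ 60\ \mathrm{W} \times 1.1\times10^{-11}\ \mathrm{s} \approx 6.6\times10^{-10}\ \mathrm{J}, \]

which is indeed roughly the energy contained in a cubic meter of empty space.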

While we don’t know the answer to this question — the infamous “cosmological constant problem” — the extreme vacuity of our vacuum appears necessary for our existence. In a universe imbued with even slightly more of this gravitationally repulsive energy, space would expand too quickly for structures like galaxies, planets or people to form. This fine-tuned situation suggests that there might be a huge number of universes, all with different doses of vacuum energy, and that we happen to inhabit an extraordinarily low-energy universe because we couldn’t possibly find ourselves anywhere else.

Some scientists bristle at the tautology of “anthropic reasoning” and dislike the multiverse for being untestable. Even those open to the multiverse idea would love to have alternative solutions to the cosmological constant problem to explore. But so far it has proved nearly impossible to solve without a multiverse. “The problem of dark energy [is] so thorny, so difficult, that people have not got one or two solutions,” said Raman Sundrum, a theoretical physicist at the University of Maryland.

To understand why, consider what the vacuum energy actually is. Albert Einstein’s general theory of relativity says that matter and energy tell space-time how to curve, and space-time curvature tells matter and energy how to move. An automatic feature of the equations is that space-time can possess its own energy — the constant amount that remains when nothing else is there, which Einstein dubbed the cosmological constant. For decades, cosmologists assumed its value was exactly zero, given the universe’s reasonably steady rate of expansion, and they wondered why. But then, in 1998, astronomers discovered that the expansion of the cosmos is in fact gradually accelerating, implying the presence of a repulsive energy permeating space. Dubbed dark energy by the astronomers, it’s almost certainly equivalent to Einstein’s cosmological constant. Its presence causes the cosmos to expand ever more quickly, since, as it expands, new space forms, and the total amount of repulsive energy in the cosmos increases.
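
In equation form, the relationship described here is Einstein’s field equation with the cosmological constant term included:

\[ G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}} T_{\mu\nu}. \]

The left-hand side encodes the curvature of space-time, the right-hand side the matter and energy filling it; \Lambda is the constant vacuum contribution that remains even when the stress-energy tensor T_{\mu\nu} is zero.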

However, the inferred density of this vacuum energy contradicts what quantum field theory, the language of particle physics, has to say about empty space. A quantum field is empty when there are no particle excitations rippling through it. But because of the uncertainty principle in quantum physics, the state of a quantum field is never certain, so its energy can never be exactly zero. Think of a quantum field as consisting of little springs at each point in space. The springs are always wiggling, because they’re only ever within some uncertain range of their most relaxed length. They’re always a bit too compressed or stretched, and therefore always in motion, possessing energy. This is called the zero-point energy of the field. Force fields have positive zero-point energies while matter fields have negative ones, and these energies add to and subtract from the total energy of the vacuum.
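
The “little springs” picture corresponds to the textbook result that a quantum harmonic oscillator of frequency \omega can never have zero energy; its lowest possible energy is

\[ E_{0} = \tfrac{1}{2}\hbar\omega, \]

and adding up this residual half-quantum over every mode of every field is what produces the enormous formal vacuum energy discussed next.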

The total vacuum energy should roughly equal the largest of these contributing factors. (Say you receive a gift of $10,000; even after spending $100, or finding $3 in the couch, you’ll still have about $10,000.) Yet the observed rate of cosmic expansion indicates that its value is between 60 and 120 orders of magnitude smaller than some of the zero-point energy contributions to it, as if all the different positive and negative terms have somehow canceled out. Coming up with a physical mechanism for this equalization is extremely difficult for two main reasons.
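
To put rough numbers on that mismatch (a commonly quoted estimate, not figures from this article): cutting off the mode sum at the Planck scale gives a theoretical vacuum energy density of order 10¹¹³ J/m³, while the observed value is of order 10⁻⁹ J/m³, so

\[ \frac{\rho_{\mathrm{theory}}}{\rho_{\mathrm{obs}}} \sim \frac{10^{113}\ \mathrm{J/m^{3}}}{10^{-9}\ \mathrm{J/m^{3}}} \sim 10^{122}, \]

consistent with the upper end of the 60-to-120-orders-of-magnitude range quoted above.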

First, the vacuum energy’s only effect is gravitational, and so dialing it down would seem to require a gravitational mechanism. But in the universe’s first few moments, when such a mechanism might have operated, the universe was so physically small that its total vacuum energy was negligible compared to the amount of matter and radiation. The gravitational effect of the vacuum energy would have been completely dwarfed by the gravity of everything else. “This is one of the greatest difficulties in solving the cosmological constant problem,” the physicist Raphael Bousso wrote in 2007. A gravitational feedback mechanism precisely adjusting the vacuum energy amid the conditions of the early universe, he said, “can be roughly compared to an airplane following a prescribed flight path to atomic precision, in a storm.”

Compounding the difficulty, quantum field theory calculations indicate that the vacuum energy would have shifted in value in response to phase changes in the cooling universe shortly after the Big Bang. This raises the question of whether the hypothetical mechanism that equalized the vacuum energy kicked in before or after these shifts took place. And how could the mechanism know how big their effects would be, to compensate for them?

So far, these obstacles have thwarted attempts to explain the tiny weight of empty space without resorting to a multiverse lottery. But recently, some researchers have been exploring one possible avenue: If the universe did not bang into existence, but bounced instead, following an earlier contraction phase, then the contracting universe in the distant past would have been huge and dominated by vacuum energy. Perhaps some gravitational mechanism could have acted on the plentiful vacuum energy then, diluting it in a natural way over time. This idea motivated the physicists Peter Graham, David Kaplan and Surjeet Rajendran to discover a new cosmic bounce model, though they’ve yet to show how the vacuum dilution in the contracting universe might have worked.

In an email, Bousso called their approach “a very worthy attempt” and “an informed and honest struggle with a significant problem.” But he added that huge gaps in the model remain, and “the technical obstacles to filling in these gaps and making it work are significant. The construction is already a Rube Goldberg machine, and it will at best get even more convoluted by the time these gaps are filled.” He and other multiverse adherents see their answer as simpler by comparison.

Posted by Sc13t4, 0 comments
A Private View of Quantum Reality

Christopher Fuchs is the developer and main proponent of QBism, an alternative interpretation of quantum mechanics that treats the quantum wave function as a reflection of ignorance.


Christopher Fuchs describes physics as “a dynamic interplay between storytelling and equation writing. Neither one stands alone, not even at the end of the day.” And indeed Fuchs, a physicist at the University of Massachusetts, Boston, has a radical story to tell. The story is called QBism, and it goes something like this.

Once upon a time there was a wave function, which was said to completely describe the state of a physical system out in the world. The shape of the wave function encodes the probabilities for the outcomes of any measurements an observer might perform on it, but the wave function belonged to nature itself, an objective description of an objective reality.

Then Fuchs came along. Along with the researchers Carlton Caves and Rüdiger Schack, he interpreted the wave function’s probabilities as Bayesian probabilities — that is, as subjective degrees of belief about the system. Bayesian probabilities could be thought of as gambling attitudes for placing bets on measurement outcomes, attitudes that are updated as new data come to light. In other words, Fuchs argued, the wave function does not describe the world — it describes the observer. “Quantum mechanics,” he says, “is a law of thought.”

Quantum Bayesianism, or QBism as Fuchs now calls it, solves many of quantum theory’s deepest mysteries. Take, for instance, the infamous “collapse of the wave function,” wherein the quantum system inexplicably transitions from multiple simultaneous states to a single actuality. According to QBism, the wave function’s “collapse” is simply the observer updating his or her beliefs after making a measurement. Spooky action at a distance, wherein one observer’s measurement of a particle right here collapses the wave function of a particle way over there, turns out not to be so spooky — the measurement here simply provides information that the observer can use to bet on the state of the distant particle, should she come into contact with it. But how, we might ask, does her measurement here affect the outcome of a measurement a second observer will make over there? In fact, it doesn’t. Since the wavefunction doesn’t belong to the system itself, each observer has her own. My wavefunction doesn’t have to align with yours.
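
The updating rule being invoked here is just Bayes’ theorem: given a prior degree of belief P(H) in a hypothesis H, new data D changes it to the posterior

\[ P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}. \]

On the QBist reading, a measurement outcome plays the role of D, and the “collapse” of the wave function is, roughly speaking, this kind of update applied to the agent’s own probability assignments rather than a physical change in the system.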

A quantum particle can be in a range of possible states. When an observer makes a measurement, she instantaneously “collapses” the wave function into one possible state. QBism argues that this collapse isn’t mysterious. It just reflects the updated knowledge of the observer. She didn’t know where the particle was before the measurement. Now she does.


In a sea of interpretations of quantum weirdness, QBism swims alone. The traditional “Copenhagen interpretation” treats the observer as somehow standing outside of nature, imbued with mysterious wave-function-collapsing powers, governed by laws of physics that are different from those that govern what’s being observed. That’s all well and good until a second observer comes along to observe the first observer. The “many worlds” interpretation claims that the universe and all of its observers are described by a single, giant wave function that never collapses. Of course, to make that work, one must insist that at every fork in the road — every coin toss, every decision, every moment — the wave function branches and so do we, splitting into countless versions of ourselves who have collectively done and not done everything we’ll ever do or not do. For those to whom a set of infinite parallel realities is too high a price to pay to avoid wave-function collapse, there’s always the Bohmian interpretation, which seeks to restore a more concrete reality to the world by postulating the existence of a guiding force that permeates the universe and deterministically governs everything in it. Unfortunately, this new reality lies forever out of reach of scientific probing.

Those interpretations all have something in common: They treat the wave function as a description of an objective reality shared by multiple observers. QBism, on the other hand, treats the wave function as a description of a single observer’s subjective knowledge. It resolves all of the quantum paradoxes, but at the not insignificant cost of anything we might call “reality.” Then again, maybe that’s what quantum mechanics has been trying to tell us all along — that a single objective reality is an illusion.

QBism also raises a host of new and equally mysterious questions. If the wave function describes an observer, does the observer have to be human? Does that observer have to have consciousness? Could it be a dog? (“Dogs don’t use wave functions,” Fuchs said. “Heck, I didn’t collapse a wave function until I was 34.”) If my wave function doesn’t have to align with yours, do we live in the same universe? And if quantum mechanics doesn’t describe an external reality, what does?

Fuchs struggles with these questions, often working through his thoughts in the form of emails. His missives have become legendary. For two decades Fuchs has compiled them into huge documents — he calls them his samizdats — which have made the rounds among quantum physicists and philosophers as a kind of underground manuscript. After Fuchs lost his Los Alamos home to a fire in May 2000, he decided to back them up by posting them on the scientific preprint site arxiv.org as a massive paper, which was later published by Cambridge University Press as a 500-page book. A second samizdat was released 13 years later with an additional 2,300 pages. The emails reveal both Fuchs’ searching mind and his colorful character. As the physicist David Mermin puts it, “If Chris Fuchs did not exist then God would have been remiss in not inventing him.”

So how will the QBism story end? Ultimately, Fuchs wants to answer a single question, one famously asked by the eminent physicist John Archibald Wheeler, who was Fuchs’ mentor: Why the quantum? That is, why should the world be built in such a way that it can only be described by the strange rules of quantum mechanics?

In the meantime, Quanta caught up with Fuchs at a coffee shop in Cambridge, Massachusetts, to ask him some questions of our own. An edited and condensed version of our conversation follows.

QUANTA MAGAZINE: You’ve said, “I knew I had to become a physicist, not for the love of physics, but for the distrust of it.”

CHRISTOPHER FUCHS: I was a big science fiction fan when I was a kid. I grew up in a small town in Texas and I really enjoyed the idea of space flight. It seemed inevitable — we were going to the moon, that was just the first step, science is limitless and eventually we’d be doing the things they do in Star Trek: go to planets, find new creatures, have adventures. So I started reading books about physics and space travel, and it was there that I first learned that space travel would be difficult because of the great distances between stars. How do you get around this? I learned about John Wheeler and black holes and wormholes, and that possibly wormholes could be a way to get around the speed-limit problem, or we could get past the speed limit using exotic particles called tachyons. I ate the stuff up. Most of it turned out to be pretty improbable; wormholes had proven to be unstable and nobody really believed in tachyons. Overall, the message to me was that physics wouldn’t allow us to get to the stars. As a bit of a joke, I would tell my friends, if the laws of physics won’t allow us to go to the stars, the laws of physics must be wrong!

Probability does not exist! It will go the way of phlogiston, witches, elves and fairies.

You ended up studying with John Wheeler.

The first time I went to the University of Texas, it dawned on me that the guy I’d read about years before, John Wheeler, was actually a professor there. So I went and read some of his newer papers, in which he was talking about “law without law.” He’d say things like, “In the end, the only law is that there is no law.” There’s no ultimate law of physics. All the laws of physics are mutable and that mutability itself is a principle of physics. He’d say, there’s no law of physics that hasn’t been transcended. I saw this, and I remembered my joke about how the laws of physics must be wrong, and I was immensely attracted to this idea that maybe ultimately there actually are no laws of physics. What there is in place of laws, I didn’t know. But if the laws weren’t 100 percent trustworthy, maybe there was a back door to the stars. It was all youthful romanticism; I hadn’t even had a physics course yet.

In one of your papers, you mention that Erwin Schrödinger wrote about the Greek influence on our concept of reality, and that it’s a historical contingency that we speak about reality without including the subject — the person doing the speaking. Are you trying to break the spell of Greek thinking?

Schrödinger thought that the Greeks had a kind of hold over us — they saw that the only way to make progress in thinking about the world was to talk about it without the “knowing subject” in it. QBism goes against that strain by saying that quantum mechanics is not about how the world is without us; instead it’s precisely about us in the world. The subject matter of the theory is not the world or us but us-within-the-world, the interface between the two.

It’s so ingrained in us to think about the world without thinking of ourselves in it. It reminds me of Einstein questioning space and time — these features of the world that seemed so absolute that no one even thought to question them.

It’s said that in earlier civilizations, people didn’t quite know how to distinguish between objective and subjective. But once the idea of separating the two gained a toehold, we were told that we have to do this, and that science is about the objective. And now that it’s done, it’s hard to turn back. I think the biggest fear people have of QBism is precisely this: that it’s anthropocentric. The feeling is, we got over that with Copernicus, and this has got to be a step backwards. But I think if we really want a universe that’s rife with possibility with no ultimate limits on it, this is exactly where you’ve got to go.

How does QBism get you around those limits?

One way to look at it is that the laws of physics aren’t about the stuff “out there.” Rather, they are our best expressions, our most inclusive statements, of what our own limitations are. When we say the speed of light is the ultimate speed limit, we’re saying that we can’t go beyond the speed of light. But just as our brains have gotten bigger through Darwinian evolution, one can imagine that eventually we’ll have evolved to a stage where we can take advantage of things that we can’t now. We might call those things “changes in the laws of physics.” Usually we think of the universe as this rigid thing that can’t be changed. Instead, methodologically we should assume just the opposite: that the universe is before us so that we can shape it, that it can be changed, and that it will push back on us. We’ll understand our limits by noticing how much it pushes back on us.

Let’s talk about probability.

Probability does not exist! Bruno de Finetti, in the intro to his two-volume set on probability theory, writes in all-capital letters, “PROBABILITY DOES NOT EXIST.” He says it will go the way of phlogiston, witches, elves and fairies.

When the founders of quantum mechanics realized that the theory describes the world in terms of probabilities, they took that to mean that the world itself is probabilistic.

In Pierre-Simon Laplace’s day, probability was thought of as a subjective statement — you don’t know everything, but you can manage by quantifying your knowledge. But sometime in the late 1800s and early 1900s, probabilities started cropping up in ways that appeared objective. People were using statistical methods to derive things that could be measured in the laboratory — things like heat. So people figured, if this quantity arises because of probabilistic considerations, and it’s objective, it must be that the probabilities are objective as well. Then quantum mechanics came along. The Copenhagen crowd was arguing that quantum mechanics is a complete theory, finished, closed, which was often taken to mean that all of its features should be objective features of nature. If quantum states give probabilities, those probabilities should also be objective features of nature. On the other side of the fence was Albert Einstein, who said quantum mechanics is not complete. When he described probabilities in quantum mechanics, he seemed to interpret them as statements of incomplete knowledge, subjective states.

So when you say that probability doesn’t exist, you mean that objective probability doesn’t exist.

Right, it doesn’t exist as something out in the world without a gambling agent. But suppose you’ve convinced yourself that the right way to understand probability is as a description of uncertainty and ignorance. Now there’s a spectrum of positions you could take. According to the Bayesian statistician I.J. Good, there are 46,656 varieties. When we started working on quantum Bayesianism, we tried to take a stance on probability that was like E.T. Jaynes’ stance: We’ll admit that probabilities are in our heads — my probabilities are in my head, your probabilities are in your head — but if I base my probabilities on the same information that you base yours on, our two probability assignments should be the same. Conditioned on the information, they should be objective. In the spectrum of 46,656 varieties, this stance is called “objective Bayesianism.”

At the other end of the spectrum is Bruno de Finetti. He says there’s no reason whatsoever for my probabilities and yours to match, because mine are based on my experience and yours are based on your experience. The best we can do, in that case, if we think of probabilities as gambling attitudes, is try to make all of our personal gambling attitudes internally consistent. I should do that with mine, and you with yours, but that’s the best we can do. That’s what de Finetti meant when he said probability does not exist. He meant, let’s take the extreme stance. Instead of saying probabilities are mostly in my head but there are some extra rules that still anchor them to the world, he got rid of the anchor.

Eventually my colleague Rüdiger Schack and I felt that to be consistent we had to break the ties with Jaynes and move more in the direction of de Finetti. Where Jaynes made fun of de Finetti, we thought, actually, that’s where the real solution lies.

Is that when the name changed from quantum Bayesianism to QBism?

Quantum Bayesianism was too much of a mouthful, so I started calling it QBism. As soon as I started calling it QBism, people paid more attention to it! But my colleague David Mermin started complaining that QBism really shouldn’t be short for quantum Bayesianism because there are a lot of Bayesians out there who wouldn’t accept our conclusions. So he wanted to call it quantum Brunoism, for Bruno de Finetti. The trouble with that is that there are parts of the metaphysics of QBism that even de Finetti wouldn’t accept!

But then I found the perfect B. The trouble is, it’s so ugly you wouldn’t want to show it off in public. It’s a term that comes from Supreme Court Justice Oliver Wendell Holmes Jr. He described his own philosophy as “bettabilitarianism.” It’s the philosophy that, as Louis Menand said, “the world is loose at the joints.” The best you can do is gamble on the consequences of your actions. [The portmanteau comes from bet and ability.] I think this fits it perfectly, but I don’t want to say that QBism stands for quantum bettabilitarianism, so I think it’s best to do what KFC did. It used to be Kentucky Fried Chicken; now it’s just KFC.

If quantum mechanics is a user’s manual, as you’ve called it, who’s the user? Einstein talked about observers, but an observer in quantum mechanics is different from an observer in relativity.

The other day I was talking to the philosopher Rob DiSalle. He said that the observer is not so problematic in relativity because one observer can, so to speak, “look over the shoulder of another observer.” I like that phrasing. In other words, you can take what one observer sees and use transformation laws to see what the other observer will see. Bohr really played that up. He played up the similarities between quantum mechanics and relativity, and he couldn’t understand why Einstein wouldn’t accept quantum theory. But I think the problems are different. As QBism understands a quantum measurement outcome, it’s personal. No one else can see it. I see it or you see it. There’s no transformation that takes the one personal experience to the other personal experience. William James was just wrong when he tried to argue that “two minds can know one thing.”

Does that mean that, as Arthur Eddington put it, the stuff of the world is mind stuff?

QBism would say, it’s not that the world is built up from stuff on “the outside” as the Greeks would have had it. Nor is it built up from stuff on “the inside” as the idealists, like George Berkeley and Eddington, would have it. Rather, the stuff of the world is in the character of what each of us encounters every living moment — stuff that is neither inside nor outside, but prior to the very notion of a cut between the two at all.

So eventually objectivity comes in?

I hope it does. Ultimately I view QBism as a quest to point to something in the world and say, that’s intrinsic to the world. But I don’t have a conclusive answer yet. Quantum mechanics is a single-user theory, but by dissecting it, you can learn something about the world that all of us are immersed in.

Treating quantum mechanics as a single-user theory resolves a lot of the paradoxes, like spooky action at a distance.

Yes, but in a way that a lot of people find troubling. The usual story of Bell’s theorem is that it tells us the world must be nonlocal. That there really is spooky action at a distance. So they solved one mystery by adding a pretty damn big mystery! What is this nonlocality? Give me a full theory of it. My fellow QBists and I instead think that what Bell’s theorem really indicates is that the outcomes of measurements are experiences, not revelations of something that’s already there. Of course others think that we gave up on science as a discipline, because we talk about subjective degrees of belief. But we think it solves all of the foundational conundrums. The only thing it doesn’t solve is Wheeler’s question, why the quantum?

Why the quantum?

I wish I had more of a sense. I’ve become fascinated by these beautiful mathematical structures called SICs, symmetric informationally complete measurements — horrible name, almost as bad as bettabilitarianism. They can be used to rewrite the Born rule [the mathematical procedure that generates probabilities in quantum mechanics] in a different language, in which it appears that the Born rule is somehow deeply about analyzing the real in terms of hypotheticals. If you have it in your heart — and not everyone does — that the real message of quantum mechanics is that the world is loose at the joints, that there really is contingency in the world, that there really can be novelty in the world, then the world is about possibilities all the time, and quantum mechanics ties them together. It might take us 25 years to get the mathematics right, but in 25 years let’s have this conversation again!

This article was reprinted on Wired.com.

Posted by Sc13t4, 0 comments
The Case Against Dark Matter

A proposed theory of gravity does away with dark matter, even as new astrophysical findings challenge the need for galaxies full of the invisible mystery particles.

 For 80 years, scientists have puzzled over the way galaxies and other cosmic structures appear to gravitate toward something they cannot see. This hypothetical “dark matter” seems to outweigh all visible matter by a startling ratio of five to one, suggesting that we barely know our own universe. Thousands of physicists are doggedly searching for these invisible particles.

But the dark matter hypothesis assumes scientists know how matter in the sky ought to move in the first place. This month, a series of developments has revived a long-disfavored argument that dark matter doesn’t exist after all. In this view, no missing matter is needed to explain the errant motions of the heavenly bodies; rather, on cosmic scales, gravity itself works in a different way than either Isaac Newton or Albert Einstein predicted.

Instead of hordes of invisible particles, “dark matter is an interplay between ordinary matter and dark energy,” said Erik Verlinde, a theoretical physicist at the University of Amsterdam.

To make his case, Verlinde has adopted a radical perspective on the origin of gravity that is currently in vogue among leading theoretical physicists. Einstein defined gravity as the effect of curves in space-time created by the presence of matter. According to the new approach, gravity is an emergent phenomenon. Space-time and the matter within it are treated as a hologram that arises from an underlying network of quantum bits (called “qubits”), much as the three-dimensional environment of a computer game is encoded in classical bits on a silicon chip. Working within this framework, Verlinde traces dark energy to a property of these underlying qubits that supposedly encode the universe. On large scales in the hologram, he argues, dark energy interacts with matter in just the right way to create the illusion of dark matter.

In his calculations, Verlinde rediscovered the equations of “modified Newtonian dynamics,” or MOND. This 30-year-old theory makes an ad hoc tweak to the famous “inverse-square” law of gravity in Newton’s and Einstein’s theories in order to explain some of the phenomena attributed to dark matter. That this ugly fix works at all has long puzzled physicists. “I have a way of understanding the MOND success from a more fundamental perspective,” Verlinde said.

Many experts have called Verlinde’s paper compelling but hard to follow. While it remains to be seen whether his arguments will hold up to scrutiny, the timing is fortuitous. In a new analysis of galaxies published on Nov. 9 in Physical Review Letters, three astrophysicists led by Stacy McGaugh of Case Western Reserve University in Cleveland, Ohio, have strengthened MOND’s case against dark matter.

The researchers analyzed a diverse set of 153 galaxies, and for each one they compared the rotation speed of visible matter at any given distance from the galaxy’s center with the amount of visible matter contained within that galactic radius. Remarkably, these two variables were tightly linked in all the galaxies by a universal law, dubbed the “radial acceleration relation.” This makes perfect sense in the MOND paradigm, since visible matter is the exclusive source of the gravity driving the galaxy’s rotation (even if that gravity does not take the form prescribed by Newton or Einstein). With such a tight relationship between gravity felt by visible matter and gravity given by visible matter, there would seem to be no room, or need, for dark matter.
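
For reference, the relation reported by McGaugh and his collaborators is usually written as a one-parameter fit between the observed centripetal acceleration g_obs and the acceleration g_bar expected from the visible (baryonic) matter alone:

\[ g_{\mathrm{obs}} = \frac{g_{\mathrm{bar}}}{1 - e^{-\sqrt{g_{\mathrm{bar}}/g_{\dagger}}}}, \]

with a single fitted acceleration scale g_† ≈ 1.2 × 10⁻¹⁰ m/s². At high accelerations this reduces to ordinary Newtonian behaviour (g_obs ≈ g_bar), while at very low accelerations it approaches g_obs ≈ √(g_bar g_†), the MOND-like regime described below.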

Even as dark matter proponents rise to its defense, a third challenge has materialized. In new research that has been presented at seminars and is under review by the Monthly Notices of the Royal Astronomical Society, a team of Dutch astronomers have conducted what they call the first test of Verlinde’s theory: In comparing his formulas to data from more than 30,000 galaxies, Margot Brouwer of Leiden University in the Netherlands and her colleagues found that Verlinde correctly predicts the gravitational distortion or “lensing” of light from the galaxies — another phenomenon that is normally attributed to dark matter. This is somewhat to be expected, as MOND’s original developer, the Israeli astrophysicist Mordehai Milgrom, showed years ago that MOND accounts for gravitational lensing data. Verlinde’s theory will need to succeed at reproducing dark matter phenomena in cases where the old MOND failed.

Kathryn Zurek, a dark matter theorist at Lawrence Berkeley National Laboratory, said Verlinde’s proposal at least demonstrates how something like MOND might be right after all. “One of the challenges with modified gravity is that there was no sensible theory that gives rise to this behavior,” she said. “If [Verlinde’s] paper ends up giving that framework, then that by itself could be enough to breathe more life into looking at [MOND] more seriously.”

The New MOND

In Newton’s and Einstein’s theories, the gravitational attraction of a massive object drops in proportion to the square of the distance away from it. This means stars orbiting around a galaxy should feel less gravitational pull — and orbit more slowly — the farther they are from the galactic center. Stars’ velocities do drop as predicted by the inverse-square law in the inner galaxy, but instead of continuing to drop as they get farther away, their velocities level off beyond a certain point. The “flattening” of galaxy rotation speeds, discovered by the astronomer Vera Rubin in the 1970s, is widely considered to be Exhibit A in the case for dark matter — explained, in that paradigm, by dark matter clouds or “halos” that surround galaxies and give an extra gravitational acceleration to their outlying stars.
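To see why the flattening is so striking, here is a minimal illustrative calculation (not from the article): for a star orbiting outside most of a galaxy’s visible mass, Newtonian gravity predicts a circular speed v ≈ √(GM/r), which falls off as 1/√r, whereas the observed speeds stay roughly constant. The enclosed mass used below is a hypothetical placeholder.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
KPC = 3.086e19         # metres in a kiloparsec

M_visible = 6e10 * M_SUN   # hypothetical enclosed visible mass (placeholder value)

def newtonian_speed(r_kpc):
    """Circular speed (km/s) if only the enclosed visible mass pulled on the star."""
    r = r_kpc * KPC
    return math.sqrt(G * M_visible / r) / 1e3

for r in (5, 10, 20, 40):
    print(f"r = {r:2d} kpc  ->  predicted v = {newtonian_speed(r):5.0f} km/s "
          "(observed rotation curves stay roughly flat instead)")
```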

 

Lucy Reading-Ikkanda for Quanta Magazine

Searches for dark matter particles have proliferated — with hypothetical “weakly interacting massive particles” (WIMPs) and lighter-weight “axions” serving as prime candidates — but so far, experiments have found nothing.

Meanwhile, in the 1970s and 1980s, some researchers, including Milgrom, took a different tack. Many early attempts at tweaking gravity were easy to rule out, but Milgrom found a winning formula: When the gravitational acceleration felt by a star drops below a certain level — precisely 0.00000000012 meters per second per second, or 100 billion times weaker than we feel on the surface of the Earth — he postulated that gravity somehow switches from an inverse-square law to something close to an inverse-distance law. “There’s this magic scale,” McGaugh said. “Above this scale, everything is normal and Newtonian. Below this scale is where things get strange. But the theory does not really specify how you get from one regime to the other.”
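In Milgrom’s low-acceleration regime the pull of gravity falls off with distance rather than distance squared, g ≈ √(GMa₀)/r. Balancing that against the centripetal acceleration v²/r gives a radius-independent rotation speed, v⁴ = GMa₀, which is why MOND produces flat rotation curves. A rough sketch of that consequence, using illustrative baryonic masses, is below.

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
A0 = 1.2e-10         # Milgrom's acceleration scale, m/s^2

def mond_flat_speed(m_baryonic_solar):
    """Asymptotic flat rotation speed (km/s) implied by v^4 = G * M * a0."""
    m = m_baryonic_solar * M_SUN
    return (G * m * A0) ** 0.25 / 1e3

# Illustrative baryonic masses (placeholders), in solar masses:
for m in (1e9, 1e10, 1e11):
    print(f"M_baryon = {m:.0e} M_sun  ->  v_flat ~ {mond_flat_speed(m):.0f} km/s")
```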

Physicists do not like magic; when other cosmological observations seemed far easier to explain with dark matter than with MOND, they left the approach for dead. Verlinde’s theory revitalizes MOND by attempting to reveal the method behind the magic.

Verlinde, ruddy and fluffy-haired at 54 and lauded for highly technical string theory calculations, first jotted down a back-of-the-envelope version of his idea in 2010. It built on a famous paper he had written months earlier, in which he boldly declared that gravity does not really exist. By weaving together numerous concepts and conjectures at the vanguard of physics, he had concluded that gravity is an emergent thermodynamic effect, related to increasing entropy (or disorder). Then, as now, experts were uncertain what to make of the paper, though it inspired fruitful discussions.

The particular brand of emergent gravity in Verlinde’s paper turned out not to be quite right, but he was tapping into the same intuition that led other theorists to develop the modern holographic description of emergent gravity and space-time — an approach that Verlinde has now absorbed into his new work.

In this framework, bendy, curvy space-time and everything in it is a geometric representation of pure quantum information — that is, data stored in qubits. Unlike classical bits, qubits can exist simultaneously in two states (0 and 1) with varying degrees of probability, and they become “entangled” with each other, such that the state of one qubit determines the state of the other, and vice versa, no matter how far apart they are. Physicists have begun to work out the rules by which the entanglement structure of qubits mathematically translates into an associated space-time geometry. An array of qubits entangled with their nearest neighbors might encode flat space, for instance, while more complicated patterns of entanglement give rise to matter particles such as quarks and electrons, whose mass causes the space-time to be curved, producing gravity. “The best way we understand quantum gravity currently is this holographic approach,” said Mark Van Raamsdonk, a physicist at the University of British Columbia in Vancouver who has done influential work on the subject.

The mathematical translations are rapidly being worked out for holographic universes with an Escher-esque space-time geometry known as anti-de Sitter (AdS) space, but universes like ours, which have de Sitter geometries, have proved far more difficult. In his new paper, Verlinde speculates that it’s exactly the de Sitter property of our native space-time that leads to the dark matter illusion.

De Sitter space-times like ours stretch as you look far into the distance. For this to happen, space-time must be infused with a tiny amount of background energy — often called dark energy — which drives space-time apart from itself. Verlinde models dark energy as a thermal energy, as if our universe has been heated to an excited state. (AdS space, by contrast, is like a system in its ground state.) Verlinde associates this thermal energy with long-range entanglement between the underlying qubits, as if they have been shaken up, driving entangled pairs far apart. He argues that this long-range entanglement is disrupted by the presence of matter, which essentially removes dark energy from the region of space-time that it occupied. The dark energy then tries to move back into this space, exerting a kind of elastic response on the matter that is equivalent to a gravitational attraction.

Because of the long-range nature of the entanglement, the elastic response becomes increasingly important in larger volumes of space-time. Verlinde calculates that it will cause galaxy rotation curves to start deviating from Newton’s inverse-square law at exactly the magic acceleration scale pinpointed by Milgrom in his original MOND theory.

Van Raamsdonk calls Verlinde’s idea “definitely an important direction.” But he says it’s too soon to tell whether everything in the paper — which draws from quantum information theory, thermodynamics, condensed matter physics, holography and astrophysics — hangs together. Either way, Van Raamsdonk said, “I do find the premise interesting, and feel like the effort to understand whether something like that could be right could be enlightening.”

One problem, said Brian Swingle of Harvard and Brandeis universities, who also works in holography, is that Verlinde lacks a concrete model universe like the ones researchers can construct in AdS space, giving him more wiggle room for making unproven speculations. “To be fair, we’ve gotten further by working in a more limited context, one which is less relevant for our own gravitational universe,” Swingle said, referring to work in AdS space. “We do need to address universes more like our own, so I hold out some hope that his new paper will provide some additional clues or ideas going forward.”

 Video: Erik Verlinde describes how emergent gravity and dark energy can explain away dark matter. Ilvy Njiokiktjien for Quanta Magazine

The Case for Dark Matter

Verlinde could be capturing the zeitgeist the way his 2010 entropic-gravity paper did. Or he could be flat-out wrong. The question is whether his new and improved MOND can reproduce phenomena that foiled the old MOND and bolstered belief in dark matter.

One such phenomenon is the Bullet cluster, a galaxy cluster in the process of colliding with another. The visible matter in the two clusters crashes together, but gravitational lensing suggests that a large amount of dark matter, which does not interact with visible matter, has passed right through the crash site. Some physicists consider this indisputable proof of dark matter. However, Verlinde thinks his theory will be able to handle the Bullet cluster observations just fine. He says dark energy’s gravitational effect is embedded in space-time and is less deformable than matter itself, which would have allowed the two to separate during the cluster collision.

But the crowning achievement for Verlinde’s theory would be to account for the suspected imprints of dark matter in the cosmic microwave background (CMB), ancient light that offers a snapshot of the infant universe. The snapshot reveals the way matter at the time repeatedly contracted due to its gravitational attraction and then expanded due to self-collisions, producing a series of peaks and troughs in the CMB data. Because dark matter does not interact, it would only have contracted without ever expanding, and this would modulate the amplitudes of the CMB peaks in exactly the way that scientists observe. One of the biggest strikes against the old MOND was its failure to predict this modulation and match the peaks’ amplitudes. Verlinde expects that his version will work — once again, because matter and the gravitational effect of dark energy can separate from each other and exhibit different behaviors. “Having said this,” he said, “I have not calculated this all through.”

While Verlinde confronts these and a handful of other challenges, proponents of the dark matter hypothesis have some explaining of their own to do when it comes to McGaugh and his colleagues’ recent findings about the universal relationship between galaxy rotation speeds and their visible matter content.

In October, responding to a preprint of the paper by McGaugh and his colleagues, two teams of astrophysicists independently argued that the dark matter hypothesis can account for the observations. They say the amount of dark matter in a galaxy’s halo would have precisely determined the amount of visible matter the galaxy ended up with when it formed. In that case, galaxies’ rotation speeds, even though they’re set by dark matter and visible matter combined, will exactly correlate with either their dark matter content or their visible matter content (since the two are not independent). However, computer simulations of galaxy formation do not currently indicate that galaxies’ dark and visible matter contents will always track each other. Experts are busy tweaking the simulations, but Arthur Kosowsky of the University of Pittsburgh, one of the researchers working on them, says it’s too early to tell if the simulations will be able to match all 153 examples of the universal law in McGaugh and his colleagues’ galaxy data set. If not, then the standard dark matter paradigm is in big trouble. “Obviously this is something that the community needs to look at more carefully,” Zurek said.

Even if the simulations can be made to match the data, McGaugh, for one, considers it an implausible coincidence that dark matter and visible matter would conspire to exactly mimic the predictions of MOND at every location in every galaxy. “If somebody were to come to you and say, ‘The solar system doesn’t work on an inverse-square law, really it’s an inverse-cube law, but there’s dark matter that’s arranged just so that it always looks inverse-square,’ you would say that person is insane,” he said. “But that’s basically what we’re asking to be the case with dark matter here.”

Given the considerable indirect evidence and near consensus among physicists that dark matter exists, it still probably does, Zurek said. “That said, you should always check that you’re not on a bandwagon,” she added. “Even though this paradigm explains everything, you should always check that there isn’t something else going on.”

This article was reprinted on TheAtlantic.com.

Posted by Sc13t4, 0 comments
Astrophysics gets turned on its head: black holes come first


Supermassive black holes observed for the first time at the earliest epoch of star and galaxy formation indicate that black holes form first and guide the later accretion and structuring of stars and galaxies.

By: William Brown and Amira Val Baker; RSF research scientists

For decades physicist Nassim Haramein has been expounding a controversial idea in astrophysics—that structures from elementary particles to galaxies and the universe itself are the result of infinitely curved spacetime geometries, popularly known as black holes. In essence, this means that all the stuff we think of as material, physical objects in fact only appear substantive because of the geometry and torque of spacetime in these regions. As Charles Misner and John Wheeler stated it:

There is nothing in the world except empty curved space. Matter, charge, electromagnetism, and other fields are only manifestations of the bending of space. Physics is geometry – classical physics as geometry.

Haramein’s theory is contrary to the conventional model of galactic, stellar, and black hole formation. Look up any source and it will invariably describe how black holes form from the core collapse of massive stars (greater than 20 solar masses).  In short, the conventional model states that once a massive star has reached its limit for continued thermonuclear fusion—which for even the most massive stars stops at the element iron—then there is no longer sufficient energy radiating outward to counter-balance the inward gravitational force of the star. The star thus undergoes gravitational collapse forming a stellar remnant in the form of a white dwarf, neutron star or black hole.

Incidentally, there is another crisis brewing in astrophysics as it has become clear that the conventional model cannot explain where elements heavier than iron come from: it used to be assumed that all elements heavier than iron are formed during the supernova explosion resulting from core collapse of the massive stars, but calculations have shown this not to be a viable scenario. Interestingly, black holes (specifically primordial black holes—not resulting from stellar gravitational collapse) have now been implicated in the formation of elements heavier than iron (see RSF science news post small primordial black holes implicated in formation of heavy elements).

Returning to the terminal processes “ending” the life of our massive star, once the outward radiative pressure is gone, the star begins to collapse. If the star exceeds the Tolman-Oppenheimer-Volkoff limit (TOV limit), its mass will be so great that the core collapses into a singularity, infinitely curving spacetime, forming a black hole while the outer layers of the star compress into a final thermonuclear fusion event that releases the energy equivalent of billions of stars, known as a supernova. The supernova sends shockwaves of plasma and “star stuff” out, which may trigger gravitational condensation in nebulae, birthing more stars, while the core that has collapsed to a singularity is masked behind a light-like boundary known as the event horizon.

Singularities and Einstein-Rosen bridges

That is the conventional model in a nutshell—black holes, neutron stars, and white dwarfs are the corpses of dead stars.

There are numerous problems with this theory, but none perhaps has been as unsettling to astrophysicists as the recent observation of supermassive black holes that reside at the edge of the visible universe (Discovery of an Ancient Supermassive Black Hole is Upending Conventional Theory of Star and Galaxy Formation), and hence are some of the oldest structures in the universe. This is a problem, because if black holes are formed from stellar collapse, then how can supermassive black holes be present when the first stars were just beginning to form? Looking at Haramein’s model, the answer is simple—black holes form first, during the early epochs of the universe when energy densities were extremely large, and they then act as the nucleating centers guiding star and galaxy formation.

The idea may seem remarkable, but from the standard models of cosmology it is known that immediately following the so-called Big Bang, energy densities would have been so great that black holes would have been produced in vast quantities. What’s more, calculations show that the size of the black hole is determined by the time of its formation following the Big Bang, which is to say that black holes smaller than a stellar mass could have formed in the earliest stages; black holes formed this way are known as primordial black holes (PBHs). So, at a Planck time after the Big Bang, which is ~10⁻⁴³ s, black holes of the Planck mass (~10⁻⁵ g) would form (see Bernard Carr, Quantum Black Holes as the Link Between Microphysics and Macrophysics, 2017).

Haramein has utilized these Planck-sized black holes, referred to as Planck Spherical Oscillators in his paper Quantum Gravity and the Holographic Mass, to calculate the exact mass of objects from elementary particles to stars and astronomical black holes using spacetime quanta, discovering a scale-invariant quantum gravitational solution.

At one second after the Big Bang, PBHs of about 100 thousand solar masses would form. Accordingly, in the span between the Planck time and 1 second, an enormous range of black hole masses would have formed. Note that it is commonly held that black holes of a proton’s size or smaller (~10¹⁵ g) would almost immediately “evaporate” due to Hawking radiation; however, there is good reason to believe that Hawking radiation is not a purely evaporative process, and that quantum mass fluctuations around the event horizon can in fact feed black holes, keeping their mass constant or even increasing it (see Marco Spaans, On Quantum Contributions to Black Hole Growth, 2013).
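The scaling quoted above – Planck-mass holes at the Planck time, roughly 10⁵ solar masses at one second – follows from the rule of thumb that a primordial black hole’s mass is of the order of the mass contained within the cosmological horizon at its formation time, M ≈ c³t/G. The short sketch below simply checks that arithmetic; it is an order-of-magnitude illustration, not a calculation taken from the papers cited.

```python
# Rough horizon-mass estimate for primordial black holes: M ~ c^3 * t / G
# (order-of-magnitude rule of thumb, as discussed in Carr's review).
C = 3.0e8          # speed of light, m/s
G = 6.674e-11      # gravitational constant
M_SUN = 1.989e30   # kg
T_PLANCK = 5.4e-44 # Planck time, s

def pbh_mass_kg(t_seconds):
    return C**3 * t_seconds / G

for label, t in (("Planck time", T_PLANCK), ("1 microsecond", 1e-6), ("1 second", 1.0)):
    m = pbh_mass_kg(t)
    print(f"{label:>13}: M ~ {m:.1e} kg  (~{m / M_SUN:.1e} solar masses, ~{m * 1e3:.1e} g)")
```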

Even if Hawking radiation is considered in its bare form, which stipulates that the evaporation rate is inversely correlated with the mass of the black hole, researchers like Rovelli and Vidotto have described how proton-sized black holes will appear to “freeze” due to time dilation, and will therefore appear stable, for periods longer than the current age of the universe, to outside frames of reference (all those that do not sit at the event horizon or travel at light speed) (see our article Planck Stars: quantum gravity research ventures beyond the event horizon).

This early formation raises an intriguing question: could protons be primordial black holes? As well, could supermassive black holes have formed in a short period following the Big Bang, so that they were already present during the formation of the first stars (known as population III stars)? Cosmologists generally give a cut-off value for PBH formation of a few hundred solar masses, but recent observations suggest that this model is not entirely accurate. These observations include the detection, by the Laser Interferometer Gravitational-Wave Observatory (LIGO), of black holes outside the mass range expected from the conventional stellar-collapse model. These “anomalous” black holes were above the expected range of 10 to 20 solar masses, which has raised the possibility that so-called dark matter could be primordial black holes. A prominent recent observation throwing the current model into question is the detection of quasars at the edge of the visible universe, one of which resides 13.04 billion light years from Earth (meaning we see it as it was less than 690 million years after the Big Bang) and houses a black hole of close to a billion solar masses.

Feeding black holes?

An interesting observation is made when astronomers peer back to the earliest epochs of the universe: there are extraordinarily luminous objects, quasars, at the edge of the visible universe – which is to say that we are receiving the light they emitted close to 13 billion years ago.

Distance and Time in Cosmology

Since their discovery, scientists have come to understand the nature of these enigmatic luminous objects: they are young galaxies that are extremely luminous due to the activity of a supermassive black hole at their center, what is referred to as an active galactic nucleus (AGN). This was an important corroborative finding for Haramein’s model of galaxy and star formation, because a key concept of that model is that black holes are at the center of all galaxies, where they act as the nucleating centers for galaxy accretion, determine the number of stars that are formed, and exert an overall considerable influence on the architecture of galactic systems.

Recent SDSS (Sloan Digital Sky Survey) studies have found a quasar that existed 690 million years after the Big Bang. It is estimated that for a quasar to be visible at such great distances, the mass of the central supermassive black hole should be around 1 billion solar masses. Based on the conventional theories of black hole formation and growth – via stellar death – this far exceeds the expected mass: conventional calculations put black holes at this stage of cosmic evolution at only a few hundred solar masses. Note that this predicted mass for the first black holes is based on the assumption that these “seed” black holes are remnants of the first stars – known as Pop III stars – which formed as the primordial gas cooled when the Universe was approximately 200 million years old.

The conventional cosmological model suggests that because these original black holes would have formed in close proximity, they would eventually merge into more massive black holes of several thousand solar masses. However, albeit more massive, these are still not massive enough to account for the inferred masses of the supermassive black holes powering the quasars we observe.

So then, how did these behemoths arise so early?

Direct-collapse black holes and Super-Eddington Feeding Rates

One possibility is that the first black holes underwent some hitherto unpredicted, extraordinary period of growth. The maximum feeding rate of a black hole is set by the Eddington limit, at which the outward pressure of the radiation released by infalling matter balances the inward pull of gravity. Feeding at the Eddington limit, with exponential growth, a 10-solar-mass black hole could grow into a billion-solar-mass black hole in about one billion years.
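As a rough check on that “about one billion years” figure (a back-of-the-envelope sketch, not a calculation from the article): Eddington-limited growth is exponential with an e-folding time, often called the Salpeter time, of roughly 45 million years if one assumes the standard ~10% radiative efficiency, and growing from 10 to a billion solar masses requires a factor of 10⁸, or about 18 e-foldings.

```python
import math

# Eddington-limited growth: M(t) = M_seed * exp(t / t_salpeter),
# with t_salpeter ~ 45 Myr assuming ~10% radiative efficiency (standard assumption).
T_SALPETER_MYR = 45.0

def eddington_growth_time_myr(m_seed_solar, m_final_solar):
    """Time (Myr) to grow from a seed mass to a final mass at the Eddington limit."""
    return T_SALPETER_MYR * math.log(m_final_solar / m_seed_solar)

t = eddington_growth_time_myr(10, 1e9)
print(f"10 M_sun -> 1e9 M_sun takes ~{t:.0f} Myr (~{t/1e3:.1f} Gyr) at the Eddington limit")
```

That works out to roughly 800 million years, consistent with the figure quoted above.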

If it were to be maintained that the first black holes come from Population III stars, then they would have to feed at a rate higher than the Eddington rate. This is theoretically possible in the dense, gas-rich environments typical of the early universe. However, such an occurrence would only be possible for short durations, and it would also be self-limiting, because the radiation emitted during super-Eddington periods would in effect halt the growth of the black hole. Such a scenario would therefore be a rarity.

Another scenario, suggested by astrophysicist Priyamvada Natarajan and her colleagues, is that the first black hole seeds could have formed without stellar deaths. Instead, Natarajan et al. suggest they formed directly from gas, and are therefore referred to as Direct-Collapse Black Holes (DCBHs). Such objects would have formed within a few hundred million years after the Big Bang with masses of 10–100 thousand solar masses.

 

Large gas disks would usually cool and fragment, instigating star formation and galaxy growth. However, under Natarajan’s model, large gas agglomerates are posited to collapse into dense clumps that directly form seed black holes of 10,000 to 1 million solar masses. She concludes that this could happen if the normal cooling processes were halted – that is, if the formation of molecular hydrogen, which aids disk cooling, were stopped so that the disk remains hot. The disk would then be too hot to form stars and would also be dynamically unstable, contracting until it eventually collapses into a black hole – a DCBH, to be specific.

As these seed DCBHs grow, they would briefly reach a point where their mass is greater than that of all the stars in their parent galaxy. For this brief period the parent galaxy is referred to as an obese black hole galaxy (OBG). The mass of all the stars in a galaxy is typically 1,000 times greater than that of the central black hole, so an OBG would have a unique spectral signature, particularly in the infrared part of the spectrum. Natarajan hopes that with the launch of the James Webb Space Telescope in 2019 she will be able to find evidence of this unique spectral signature and thus prove the existence of DCBHs.

Significance to the unified physics of Nassim Haramein

As mentioned, Haramein’s model of the early formation of black holes and of their importance to the evolution and development of the first stars and galaxies is now seeing corroborating evidence, as empirical data comes in that strongly indicates it is a more accurate theory than the conventional model. As other researchers work to make sense of the new observations and their contradictions with existing theory, their new models are coming to closely resemble the cosmological portion of Haramein’s unified physics model. This is a good sign, because progress in this direction may bring a majority of the scientific community to Haramein’s unified physics theory, which solves dark matter, the formation and evolution of stars and galaxies, universal expansion, and other unresolved topics in cosmology and astrophysics.

References:

  1. M. Mezcua, J. Hlavacek-Larrondo, J. R. Lucey, M. T. Hogan, A. C. Edge, B. R. McNamara. The most massive black holes on the Fundamental Plane of black hole accretion. Monthly Notices of the Royal Astronomical Society, Volume 474, Issue 1, 11 February 2018, Pages 1342–1360.
  2. Fabio Pacucci, Priyamvada Natarajan, Marta Volonteri, Nico Cappelluti, C. Megan Urry. Conditions for Optimal Growth of Black Hole Seeds. The Astrophysical Journal Letters, Volume 850, Number 2, 1 December 2017.
  3. Priyamvada Natarajan. The Puzzle of the First Black Holes. Scientific American, 1 February 2018.

Posted by Sc13t4, 0 comments

The End of the Aether

One example representation of the five classical elements, in which the dodecahedron represents the Universe, Spirit, or Aether.

For thousands of years, the Aether (ether, æther, aither), a field which connects and permeates all things, was an essential facet of both the philosophy and science of reality in cultures around the world. Also known as “quintessence,” the Aether is the fifth element in the series of classical elements thought to make up our experience of the universe. Greek Stoicheion, Japanese Godai, Tibetan Bön, European Medieval Alchemy, as well as Druidry, Paganism, Wicca, Native American and Tribal Shamanism, and many other spiritual and philosophical lineages all describe the fundamental elements to be:

  • Fire
  • Air
  • Earth
  • Water
  • Aether

Although the Aether goes by as many names as there are cultures that have referenced it, the general meaning always transcends and includes the same four “material” elements.1 It is sometimes more generally translated simply as “Spirit” when referring to an incorporeal living force behind all things.2 In Japanese, it is considered to be the void through which all other elements come into existence. In Hinduism, it is known as Akasha, which simply means “space” in Sanskrit.3


There are also many terms for the movement of energy through the Aether, or the movement of the Aether itself. These include qi (also written as chi, ch’i, or ki), a traditional Chinese and Taoist concept for the natural energy or “life force” of any living thing. In Hinduism, a similar idea is known as prana, the life force which connects all the elements of the universe. For Hawaiian and some Polynesian cultures, this field of living force is known as mana. The same concept is known as ruah in Hebrew, as lüng in Tibetan Buddhism, as pneuma in ancient Greece, and as vital energy in Western philosophy. It has also been popularized as “The Force” in Star Wars.

AetherWind, Wikipedia – CC BY-SA 3.0

The Aether played a central role in our framework of understanding natural phenomena in the universe all the way up until the turn of the 20th century. As we discussed previously, gravity (or the source of mass) was thought by both Rene Descartes and Isaac Newton (as well as Nicolas Fatio de Duillier in 1690, Georges-Louis Le Sage in 1748, and others) to originate from the mechanics of the Aether. During the time of Isaac Newton, this was a somewhat dangerous proposition, as postulation of such “invisible forces working at a distance” was considered to be an introduction of “occult agencies” into science.4

Yet even until the late 1800s, a “luminiferous” (light-bearing) Aether continued to be thought of as the medium through which electromagnetic waves (light) traveled. Proponents of both the particle and wave nature of light (see Section 4 – A Brief Modern History of Light) agreed that light must travel through some sort of medium.5 However, while the theories of the time required the Aether in order to remain consistent, many experiments done to detect this medium failed. Its properties seemed to be too dynamic and ephemeral to test, and many of its qualities had already been explained by alternative theories:

Aethers were invented for the planets to swim in, to constitute electric atmospheres and magnetic effluvia, to convey sensations from one part of our bodies to another, and so on, until all space had been filled three or four times over with aethers…. The only aether which has survived is that which was invented by Huygens to explain the propagation of light.6

James Clerk Maxwell – Encyclopædia Britannica 1878

Several theories were produced to reconcile the observed properties of light with the movement of other objects that must be travelling through the Aether, including the Earth. These “Aether drag” theories were developed in consideration of whether the Aether was completely stationary (or moved uniformly) while objects moved through it, or if the Aether was partially “dragged” along with massive objects like the Earth.7 In either case, both theories assumed that movement of the Aether should be detectable from the Earth’s surface, since the Earth both spins and moves around the Sun. Many experiments were concocted to prove (or refute) the existence of the Aether using these simple principles.

One of these experiments, done by Hippolyte Fizeau in 1851, measured the speed of light in water, in order to detect whether a moving medium or fluid would influence the movement of light traveling through it. Although the effect was much smaller than expected, the experiment showed that a moving medium could in fact adjust the speed of light, supporting the “partial Aether drag theory” of Augustin-Jean Fresnel.8 Although this was disturbing to most physicists at the time, it is worth noting that Einstein emphasized the significance of this experiment in his work on Special Relativity, over half a century later.9
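What Fizeau measured is captured by Fresnel’s partial-drag coefficient, a standard textbook result summarized here as an illustrative calculation rather than anything taken from this article: light traveling with a medium of refractive index n that flows at speed v moves at roughly c/n + v(1 − 1/n²), so the medium drags the light along only partially.

```python
# Fresnel's partial-drag prediction for light in a moving medium:
# u ~ c/n + v * (1 - 1/n^2); illustrative numbers for water in a fast flow.
C = 3.0e8       # speed of light in vacuum, m/s
N_WATER = 1.33  # refractive index of water
V_FLOW = 7.0    # water flow speed, m/s (roughly the scale Fizeau used)

drag_coefficient = 1 - 1 / N_WATER**2
u_with_flow = C / N_WATER + V_FLOW * drag_coefficient
u_against_flow = C / N_WATER - V_FLOW * drag_coefficient

print(f"Drag coefficient: {drag_coefficient:.3f} (full drag would be 1, none would be 0)")
print(f"Speed with the flow:    {u_with_flow:.1f} m/s")
print(f"Speed against the flow: {u_against_flow:.1f} m/s")
```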

The most famous of these experiments were performed by Albert A. Michelson and Edward W. Morley, and the final test is popularly known as the Michelson-Morley experiment. They set up an extremely sensitive interferometer to measure the “Aetheric wind,” the presumably detectable motion of the Aether relative to the spinning Earth as it travelled through space. Their experiments were designed to detect a change in the speed of light as small as 0.01%, mostly by viewing subtle changes in the interference patterns of light waves that were split, sent along short perpendicular paths, and recombined in the interferometer. Their results, which showed a negligible change in the light interference regardless of the time of day or the season, suggested that the Aether must be dragged nearly completely by the Earth:10

The Experiments on the relative motion of the earth and ether have been completed and the result decidedly negative. … As displacement is proportional to squares of the relative velocities it follows that if the ether does slip past the relative velocity is less than one sixth of the earth’s velocity.

Albert Abraham Michelson, 1887
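For a sense of the sensitivity involved, the expected fringe shift in such an interferometer is roughly ΔN ≈ 2Lv²/(λc²), where L is the optical path length of an arm, λ the wavelength of the light, and v the Earth’s speed through the presumed Aether. The numbers below are a rough sketch using commonly quoted figures for the 1887 apparatus, not values taken from this article.

```python
# Expected Michelson-Morley fringe shift: dN ~ 2 * L * v^2 / (lambda * c^2)
# All numbers are illustrative approximations of the 1887 apparatus.
L = 11.0             # effective optical path length per arm, metres
WAVELENGTH = 500e-9  # visible light, metres
V_EARTH = 3.0e4      # Earth's orbital speed, m/s
C = 3.0e8            # speed of light, m/s

expected_shift = 2 * L * V_EARTH**2 / (WAVELENGTH * C**2)
print(f"Expected shift: ~{expected_shift:.2f} of a fringe")
# Michelson and Morley reported at most a small fraction of this value.
```

The displacement Michelson and Morley actually measured was only a small fraction of this expectation, which is what led to the conclusion quoted above that the Aether, if it existed, must be almost entirely dragged along with the Earth.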

However, this conflicted with the commonly observed properties of stellar aberration, as an Aether being dragged along by the Earth would be expected to distort our view of the stars in various ways.11 Stellar aberration is an astronomical phenomenon which produces an apparent motion of celestial objects due to the change of the astronomer’s inertial frame of reference. This conflict, and the failure of so many other experiments attempting to verify the “Aetheric wind” resulted in continued rejection of the Aether among the general scientific community.

Michelson-Morley experiment, Stigmatella aurantiaca CC BY-SA 3.0

When Einstein published his Special Theory of Relativity, it solved many of the remaining theoretical problems that had required an Aether for their explanation, replacing it with a conceptual framework for spacetime itself. He suggested that there was a simpler way of looking at things which did not require a luminiferous Aether, an idea which appealed greatly to the anti-Aether popular science of the early 1900s. The null result of the Michelson-Morley experiment also helped to accelerate the widespread acceptance of Einstein’s proposed constancy of the speed of light.12

Following Einstein’s publication of General Relativity in 1916, his layman’s book on Special and General Relativity explained:

According to this theory there is no such thing as a “specially favoured” (unique) co-ordinate system to occasion the introduction of the æther-idea, and hence there can be no æther-drift, nor any experiment with which to demonstrate it. … Thus for a co-ordinate system moving with the earth the mirror system of Michelson and Morley is not shortened, but it is shortened for a co-ordinate system which is at rest relatively to the sun.13

This put the final nail in the coffin of the Aether for the scientific community at large. Since that time, the Aether has generally been considered, for all practical purposes, to be non-existent.

Yet shortly after this was published, one of Einstein’s mentors, Hendrik Lorentz, wrote him a letter suggesting that General Relativity reintroduced the Aether, rather than discarding it. While reintroducing the Aether would have doomed any young emerging scientist, Einstein had by this time already earned his fame through Special Relativity and Mass-Energy equivalence (E=mc2). He was still cautious at first, only offering a few public responses that brought up the subject, and his apparent contradictions did not make a “new Aether” appealing to the scientific community.14

Einstein eventually published a complete explanation that reconciled his apparently contradictory perspectives on the Aether, first by asserting that Special Relativity does not negate the existence of an Aether, only that it does not need an Aether to explain the relative properties of electromagnetism and movement. He then clarified and explained the essential importance of the Aether (ether) to General Relativity:

To deny the ether is ultimately to assume that empty space has no physical qualities whatever. The fundamental facts of mechanics do not harmonize with this view. For the mechanical behaviour of a corporeal system hovering freely in empty space depends not only on relative positions (distances) and relative velocities, but also on its state of rotation, which physically may be taken as a characteristic not appertaining to the system in itself. In order to be able to look upon the rotation of the system, at least formally, as something real, Newton objectivises space. Since he classes his absolute space together with real things, for him rotation relative to an absolute space is also something real. Newton might no less well have called his absolute space “Ether”; what is essential is merely that besides observable objects, another thing, which is not perceptible, must be looked upon as real, to enable acceleration or rotation to be looked upon as something real.15

This description, and the rest of his paper published in 1920, suggests that Einstein saw spacetime itself as the “new Aether.” However, this perspective was never popularized and the Aether was slowly forgotten as a “metaphysical” artifact of a previous scientific era.

To clarify Einstein’s meaning further, he is basically suggesting that if an object rotates, there must be a source for that rotation. If the source propelling an object to spin is not the object itself, then it must be the space around the object.

In reviewing this brief history of the Aether, we might inquire as to whether it was appropriate to completely remove it from science and all subsequent science education. Some valuable questions we might ask include:

  • Why did so many different cultures and civilizations share a nearly identical concept that is in some cases called an Aether?
  • If something is reduced and explained through more specific terms and concepts, is the original inclusive term still valuable? In other words, we know that there are many kinds of salt, and that the actual chemical composition may be sodium chloride or a related compound. Should we remove the term salt, and instead only use molecular names that are more specific?
  • Do you think that Einstein’s view of spacetime as a continuous background fabric that connects everything in the universe could be appropriately defined as the Aether?

Article by Resonance Academy Faculty:
Adam Apollo
2013

  1. Wikipedia – Aether (classical element) http://en.wikipedia.org/wiki/Aether_(classical_element)
  2. Wikipedia – Spirit http://en.wikipedia.org/wiki/Spirit
  3. Dictionary of World Philosophy by A. Pablo Iannone, Taylor & Francis, 2001, p. 30. ISBN 0-415-17995-5
  4. Edelglass et al., Matter and Mind, ISBN 0-940262-45-2. p. 54
  5. Wikipedia – Luminiferous Ether – http://en.wikipedia.org/wiki/Luminiferous_ether
  6. Maxwell, James Clerk (1878), “Ether“, Encyclopædia Britannica Ninth Edition 8: 568–572
  7. Whittaker, Edmund Taylor (1910), A History of the Theories of Aether and Electricity (1. ed.), Dublin: Longman, Green and Co.
  8. Lahaye, Thierry; Labastie, Pierre; Mathevet, Renaud (2012). “Fizeau’s “aether-drag” experiment in the undergraduate laboratory”. American Journal of Physics 80 (6): 497. arXiv:1201.0501
  9. Miller, A.I. (1981). Albert Einstein’s special theory of relativity. Emergence (1905) and early interpretation (1905–1911). Reading: Addison–Wesley. ISBN 0-201-04679-2
  10. Shankland, R.S. (1964). “Michelson–Morley experiment”. American Journal of Physics 32 (1): 16–35. Bibcode:1964AmJPh..32...16S. doi:10.1119/1.1970063
  11. Janssen, Michel & Stachel, John (2010), “The Optics and Electrodynamics of Moving Bodies”, in John Stachel, Going Critical, Springer, ISBN 1-4020-1308-6
  12. Stachel, John (1982), “Einstein and Michelson: the Context of Discovery and Context of Justification”, Astronomische Nachrichten 303 (1): 47–53, Bibcode:1982AN....303...47S, doi:10.1002/asna.2103030110
  13. Einstein A. (1916 (translation 1920)), Relativity: The Special and General Theory, New York: H. Holt and Company
  14. A. Einstein (1918), “Dialog about Objections against the Theory of Relativity”, Naturwissenschaften 6 (48): 697–702, Bibcode:1918NW......6..697E, doi:10.1007/BF01495132
  15. Einstein, Albert: “Ether and the Theory of Relativity” (1920), republished in Sidelights on Relativity (Methuen, London, 1922)
Posted by Sc13t4, 0 comments
All Are Prime


In the Universe of Equations, Virtually All Are Prime

Equations, like numbers, cannot always be split into simpler elements. Researchers have now proved that such “prime” equations become ubiquitous as equations grow larger.

Prime numbers get all the love. They’re the stars of countless popular stories, and they feature in the most celebrated open questions in mathematics. But there’s another mathematical phenomenon that’s almost as foundational, yet receives far less attention: prime equations.

These are equations — polynomial equations in particular — that can’t be divided by any other equations. Like prime numbers, they’re at the heart of a wide range of research areas in mathematics. For many particular problems, if you can understand something about the prime equations, you’ll find you’ve answered the question you actually set out to solve.

“When we have a question, we can reduce it to some knowledge about prime numbers,” said Lior Bary-Soroker of Tel Aviv University. “Exactly the same thing happens with polynomials.”

Just as with prime numbers, the most basic thing to know about prime equations is: How often do they occur? Over the last year mathematicians have made considerable progress on answering that question. In a paper posted at the end of October, Emmanuel Breuillard and Péter Varjú of the University of Cambridge proved that virtually all equations of a certain type are prime.

This means that unlike prime numbers, which are scarce, prime equations are abundant. The new paper solves a 25-year-old conjecture and has implications everywhere from online encryption to the mathematics of randomness.

More Ways to Fail

Many questions in mathematics boil down to questions about polynomial equations. These are the kinds of equations — like y = 2x − 3 and y = x² + 5x + 6 — that consist of variables raised to some power with coefficients in front.

These equations behave just like ordinary numbers in many respects: You can add, subtract, multiply and divide them. And as with numbers, it’s natural to ask which equations can be expressed as a product of two smaller equations.

When an equation cannot be divided into two smaller equations, mathematicians say that it’s irreducible. Mathematicians would like to know how often irreducible polynomial equations occur.

Trying to make statements about the frequency of irreducible polynomials among all possible polynomials — equations with any number of variables, raised to any power, with any coefficients — is hard. So mathematicians have attacked narrower versions of the question, by restricting the exponents (looking at polynomials with no variables raised higher than the fifth power, for example) or limiting the coefficients to a narrow range. In October 2017 Bary-Soroker and Gady Kozma, a mathematician at the Weizmann Institute of Science in Israel, proved that virtually all polynomials with a certain restricted range of coefficients are irreducible.

Breuillard and Varjú solved a slightly different problem. They considered polynomials of any length, with any exponents, and with any coefficients (the only restriction being that the list of possible coefficients is finite).

Breuillard and Varjú’s method gave them access to a much simpler problem. In 1993, Andrew Odlyzko, a mathematician now at the University of Minnesota, and Bjorn Poonen, now at the Massachusetts Institute of Technology, conjectured that as you consider increasingly complicated polynomials with the constraint that their coefficients must be either 0 or 1, equations that can be factored become vanishingly rare in the sea of “prime” polynomials. Odlyzko and Poonen’s conjecture, by restricting polynomials to just two coefficients, was an effort to gain a foothold in an overwhelming question.

“If you want to study something and you can’t prove a lot of things, it’s good to start with something simple,” Bary-Soroker said.

Their conjecture was also motivated by basic arithmetic. Prime numbers are common among the first 10 numbers but grow ever rarer after that. To be prime, a number needs to avoid being divisible by any whole number smaller than itself (save the number 1). As numbers get bigger, the list of numbers that could divide them grows longer — there are more ways primality can fail in big numbers than in small numbers.

With polynomials, a different dynamic is at work. In order for a polynomial to be factorable, its coefficients have to stand in just the right relationship with one another. The polynomial y = x² + 5x + 6 can be factored into (x + 3) × (x + 2) only because there happen to be two numbers (2 and 3) that you can add to make the second coefficient (5) and multiply to make the third coefficient (6). Polynomials with more terms have a more complicated set of demands that the coefficients must fulfill. Finding factors that satisfy all the coefficients becomes less likely as the number of coefficients grows.
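The same bookkeeping can be checked directly with a computer algebra system. The sketch below, which uses SymPy and is purely illustrative rather than part of the research described here, factors the article’s example and then samples random polynomials with 0/1 coefficients to show how quickly irreducible ones come to dominate.

```python
import random
from sympy import Poly, symbols, factor

x = symbols('x')

# The article's example: its coefficients happen to satisfy the "coincidence".
print(factor(x**2 + 5*x + 6))                 # (x + 2)*(x + 3)
print(Poly(x**2 + x + 1, x).is_irreducible)   # True: no such coincidence exists

# Sample random degree-d polynomials with coefficients in {0, 1}
# (leading and constant coefficients fixed to 1 so the degree is exact
# and x itself is never a trivial factor).
def random_01_poly(degree):
    coeffs = [1] + [random.randint(0, 1) for _ in range(degree - 1)] + [1]
    return Poly(coeffs, x)

for degree in (5, 10, 20):
    trials = 200
    irreducible = sum(random_01_poly(degree).is_irreducible for _ in range(trials))
    print(f"degree {degree}: ~{irreducible / trials:.0%} irreducible")
```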

“For a polynomial to be reducible you have to have a coincidence, some special relations among the coefficients,” Odlyzko said. “With a high-degree polynomial you have more relations that have to be satisfied.”

Random Walks

Breuillard and Varjú did not set out to study polynomial irreducibility. Instead, they were interested in the mathematics of a random walk. In this random walk, imagine yourself standing on a clock face, with the numbers 1 through 11 marked out at regular intervals. You start at the spot corresponding to 1 and flip a coin: Tails, you multiply the number you’re on by some other number you’ve chosen ahead of time, then advance to the corresponding spot on the circle. (In such clock or “modular” number systems, if the outcome is a number greater than 11, you just keep going around the clock until you’ve advanced the required number of spaces.) If the coin flip comes up heads, you multiply the number you’re on by your preselected number, add one, and advance to the corresponding spot.

Lucy Reading-Ikkanda/Quanta Magazine

Given these conditions, Breuillard and Varjú wanted to understand two things: How long will it take for you to visit every point on the circle? And how long will it take for you to visit every point approximately the same number of times?

These questions are known to mathematicians as the “mixing problem,” and they turn out to have something to do with polynomial irreducibility. Breuillard and Varjú recognized that the paths of a random walk can be described by a polynomial equation with 0 and 1 as coefficients. The “mixing time” of the random walk is closely related to whether or not most of the polynomials describing that random walk are irreducible.
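A small simulation makes the connection concrete (an illustrative sketch; the modulus, multiplier and number of steps below are arbitrary choices): after n steps, your position on the clock is exactly a polynomial with 0-or-1 coefficients, evaluated at the preselected multiplier and reduced modulo the clock size.

```python
import random

MOD = 11     # size of the "clock" in the article's example
MULT = 3     # the preselected multiplier (arbitrary illustrative choice)
STEPS = 8

random.seed(0)
flips = [random.randint(0, 1) for _ in range(STEPS)]   # 1 = heads (add one), 0 = tails

# Run the walk as described: multiply, optionally add one, reduce mod 11.
position = 1
for b in flips:
    position = (position * MULT + b) % MOD

# The same position, written as a 0/1-coefficient polynomial evaluated at MULT:
# x^n + b_1*x^(n-1) + ... + b_n, with x = MULT, reduced mod 11.
coeffs = [1] + flips
poly_value = sum(c * MULT**(len(coeffs) - 1 - i) for i, c in enumerate(coeffs)) % MOD

print(position, poly_value, position == poly_value)    # the two always agree
```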

“We observed that we could say something about the kinds of questions we wanted to understand if we knew whether these polynomials were irreducible,” Varjú said.

To test for irreducibility, Breuillard and Varjú adapted a technique developed in the 1980s that links irreducibility to number theory. They wanted to know how many solutions a given polynomial has in a given modular number system. Previous work had shown that the number of solutions a polynomial has reflects the number of factors. So if it has three solutions on average across modular number systems, it has three factors. Just one solution? Then one factor. And if a polynomial has just one factor, that means it’s irreducible.
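That counting argument can be illustrated with a short, purely numerical sketch (not the technique used in the proof itself, which requires far more machinery): average the number of roots a polynomial has modulo many primes, and the result approaches its number of irreducible factors, so an average near one signals a prime polynomial.

```python
from sympy import primerange

def avg_roots_mod_primes(coeffs, prime_limit=2000):
    """Average number of roots of the polynomial (coefficients, highest degree first)
    across modular number systems based on primes up to prime_limit."""
    def roots_mod_p(p):
        return sum(
            1 for r in range(p)
            if sum(c * pow(r, len(coeffs) - 1 - i, p) for i, c in enumerate(coeffs)) % p == 0
        )
    primes = list(primerange(2, prime_limit))
    return sum(roots_mod_p(p) for p in primes) / len(primes)

# x^2 + 5x + 6 = (x + 2)(x + 3): two factors, so the average is close to 2.
print(avg_roots_mod_primes([1, 5, 6]))
# x^2 + x + 1 is irreducible: the average is close to 1.
print(avg_roots_mod_primes([1, 1, 1]))
```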

Using this method, applied to modular number systems based on prime numbers, Breuillard and Varjú proved that as you consider larger and larger polynomials (with coefficients of 0 or 1), the proportion of polynomials that is irreducible gets closer and closer to 100 percent.

Their proof has a caveat. It depends on the truth of another conjecture: the Riemann hypothesis, the most important and daunting unsolved problem in mathematics. But the Riemann hypothesis is widely accepted, which buoys Breuillard and Varjú’s work.

Their result has wide-ranging implications. On a practical level, it’s good if not unexpected news for online encryption, since factorable polynomials could undermine a commonly used digital encryption scheme. Maybe more importantly, it’s a big step toward understanding the nature of these equations, which abound in life and math but are hard to characterize in totality.

“Previous estimates for the fraction of these polynomials [that are irreducible] were much weaker,” Odlyzko said. “Now these guys say practically all of them are irreducible.”

Posted by Sc13t4, 0 comments