## Archive for **November 14th, 2021**

Posted November 14, 2021

# What could change in our future conditions if we simply re-write our Human History?

# What if Everything You Learned About Human History Is Wrong?

**Note: A short abstract or summary of the book would have been far more informative than this article, which sheds little light on the new perspective the book offers on the political evolution of our societies.**

Published Oct. 31, 2021

The anthropologist **David Graeber** at a 2012 debate about the Occupy movement. His new book with **David Wengrow**, “The Dawn of Everything,” takes on the standard narrative of the origins of human societies. They aim to rewrite the story of our shared past — and future.

One August night in 2020, David Graeber — the anthropologist and anarchist activist who became famous as an early organizer of Occupy Wall Street — took to Twitter to make a modest announcement.

“My brain feels bruised with numb surprise,” he wrote, riffing on a Doors lyric. “It’s finished?”

He was referring to the book he’d been working on for nearly a decade with the archaeologist David Wengrow, which took as its immodest goal nothing less than upending everything we think we know about the origins and evolution of human societies.

Even before the Occupy movement made him famous, Graeber had been hailed as one of the most brilliant minds in his field.

But his most ambitious book also turned out to be his last. A month after his Twitter announcement, Graeber, 59, died suddenly of necrotizing pancreatitis, prompting a shocked outpouring of tributes from scholars, activists and friends around the world.

“The Dawn of Everything: A New History of Humanity,” out Nov. 9 from Farrar, Straus and Giroux, may or may not dislodge the standard narrative popularized in mega-sellers like Yuval Noah Harari’s “Sapiens” and Jared Diamond’s “Guns, Germs and Steel.”

But it has already gathered a string of superlative-studded (if not entirely uncritical) reviews. Three weeks before publication, after it suddenly shot to #2 on Amazon, the publisher ordered another 75,000 copies on top of the 50,000-copy first printing.

In a video interview last month, Wengrow, a professor at University College London, slipped into a mock-grandiose tone to recite one of Graeber’s favorite catchphrases: “We are going to change the course of human history — starting with the past.”

More seriously, Wengrow said, “The Dawn of Everything” — which weighs in at a whopping 704 pages, including a 63-page bibliography — aims to synthesize new archaeological discoveries of recent decades that haven’t made it out of specialist journals and into public consciousness.

“There’s a whole new picture of the human past and human possibility that seems to be coming into view,” he said. “And it really doesn’t resemble in the slightest these very entrenched stories going around and around.”

The Big History best-sellers by Harari, Diamond and others have their differences. But they rest, Graeber and Wengrow argue, on a **similar narrative of linear progress** (or, depending on your point of view, decline).

According to this story, for the first 300,000 years or so after Homo sapiens appeared, pretty much nothing happened.

People everywhere lived in small, egalitarian hunter-gatherer groups, until the sudden invention of agriculture around 9,000 B.C. gave rise to sedentary societies and states based on inequality, hierarchy and bureaucracy.

But all of this, Graeber and Wengrow argue, is wrong.

Recent archaeological discoveries, they write, show that early humans, far from being automatons blindly moving in evolutionary lock step in response to material pressures, self-consciously experimented with **“a carnival parade of political forms.”**

It’s a more accurate story, they argue, but also “a more hopeful and more interesting” one.

“We are all projects of collective self-creation,” they write. “What if, instead of telling the story about how our society fell from some idyllic state of equality, we ask how we came to be trapped in such tight conceptual shackles that we can no longer even imagine the possibility of reinventing ourselves?”

The book’s own origins go back to around 2011, when Wengrow, whose archaeological fieldwork has focused on Africa and the Middle East, was working at New York University.

The two had met several years earlier, when Graeber was in Britain looking for a job after Yale declined to renew his contract, for unstated reasons that he and others saw as related to his **anarchist politics**.

In New York, the two men sometimes met for expansive conversation over dinner. After Wengrow went back to London, Graeber “started sending me notes on things I’d written,” Wengrow recalled. **“The exchanges ballooned, until we realized we were almost writing a book over email.”**

At first, they thought it might be a short book on the origins of social inequality. But soon they started to feel like that question — a chestnut going back to the Enlightenment — was all wrong.

“The more we thought, we wondered why should you frame human history in terms of that question?” Wengrow said. **“It presupposes that once upon a time, there was something else.”**

Wengrow, 49, an Oxford-educated scholar whose manner is more standard-issue professorial than the generally rumpled Graeber, said the relationship was a true partnership. He, like many, spoke with awe of Graeber’s brilliance (as a teenager, a much-repeated story goes, **his hobby of deciphering Mayan hieroglyphics** caught the eye of professional archaeologists), as well as what he described as his extraordinary generosity.

“David was like one of those Amazonian village chiefs who were always the poorest guy in the village, since their whole function was to give things away,” Wengrow said. “He just had that ability to look at your work and sprinkle magic dust over the whole thing.”

Most recent big histories are by geographers, economists, psychologists and political scientists, **many writing under the guiding framework of biological evolution**.

(In a cheeky footnote assessing rival Big Historians’ expertise, they describe Diamond, a professor of geography at the University of California, Los Angeles, as the holder of “a Ph.D. on the physiology of the gall bladder.”)

Graeber and Wengrow, by contrast, write in the **grand tradition of social theory descended from Weber, Durkheim and Lévi-Strauss**.

In a 2011 blog post, Graeber recalled how a friend, after reading his similarly sweeping “Debt: The First 5,000 Years” said he wasn’t sure anyone had written a book like that in 100 years. “I’m still not sure it was a compliment,” Graeber quipped.

“The Dawn of Everything” includes discussions of princely burials in Europe during the ice age, contrasting attitudes toward slavery among the Indigenous societies of Northern California and the Pacific Northwest, the political implications of dry-land versus riverbed farming, and the complexity of preagricultural settlements in Japan, among many, many other subjects.

But the dazzling range of references raises a question: Who is qualified to judge whether it’s true?

Reviewing the book in The Nation, the historian Daniel Immerwahr called Graeber “a wildly creative thinker” who was “better known for being interesting than right” and asked if the book’s confident leaps and hypotheses “can be trusted.”

And Immerwahr deemed at least one claim — that colonial American settlers captured by Indigenous people “almost invariably” chose to stay with them — “ballistically false,” claiming that the authors’ single cited source (a 1977 dissertation) “actually argues the opposite.”

Wengrow countered that it was Immerwahr who was reading the source wrong. And he noted that he and Graeber had taken care to publish the book’s core arguments in leading peer-reviewed scholarly journals or deliver them as some of the most prestigious invited lectures in the field.

“I remember thinking at the time, why do we have to put ourselves through this?” Wengrow said of the process. “We’re reasonably established in our fields. But it was David who was adamant that it was terribly important.”

James C. Scott, an eminent political scientist at Yale whose 2017 book “Against the Grain: A Deep History of the Earliest States” also ranged across fields to challenge the standard narrative, said some of Graeber and Wengrow’s arguments, like his own, would inevitably be “thrown out” as other scholars engaged with them.

But he said the two men had **delivered a “fatal blow” to the already-weakened idea that settling down in agricultural states was what humans “had been waiting to do all along.”**

But the most striking part of “The Dawn of Everything,” Scott said, is an early chapter on what the authors call the “Indigenous critique.” The European Enlightenment, they argue, rather than being a gift of wisdom bestowed on the rest of the world, grew out of a dialogue with Indigenous people of the New World, whose trenchant assessments of the shortcomings of European society influenced emerging ideas of freedom.

“I’ll bet it has a huge significance in our understanding of the relationship between the West and the rest,” Scott said.

“The Dawn of Everything” sees pervasive evidence for large complex societies that thrived without the existence of the state, and defines freedom chiefly as “**freedom to disobey**.”

It’s easy to see how such arguments dovetail with Graeber’s anarchist beliefs, but Wengrow pushed back against a question about the book’s politics.

“I’m not particularly interested in debates that begin with slapping a label on a piece of research,” he said. “It almost never happens with scholars who lean right.”

But if the book helps convince people, in the words of the Occupy slogan, that “another world is possible,” that’s not unintentional.

“We’ve reached the **stage of history where we have scientists and activists agreeing that our prevailing system is putting us and our planet on a course of real catastrophe**,” Wengrow said.

“To find yourself paralyzed, with your horizons closed off by false perspectives on human possibilities, based on a mythological conception of history, is not a great place to be.”


### Why this monolithic tendency for a unified theory of the Universe and its forces? What is this arrow of time?

Posted November 14, 2021

# On decoherence in quantum gravity

Dmitriy Podolskiy and Robert Lanza

First published: 26 September 2016

https://doi.org/10.1002/andp.201600011

**Note: I usually refrain from editing articles that are not political in nature.**

## Abstract

It was previously argued that the phenomenon of quantum gravitational **decoherence** described by the **Wheeler-DeWitt equation** is responsible for the emergence of the **arrow of time**.

Here we show that the characteristic spatio-temporal scales of quantum gravitational decoherence are typically **logarithmically larger than a characteristic curvature radius** of the background space-time.

This largeness is a direct consequence of the fact that gravity is a **non-renormalizable theory**, and the corresponding effective field theory is nearly decoupled from **matter degrees of freedom** in the physical limit.

Therefore, quantum gravitational decoherence is too ineffective to guarantee the emergence of the arrow of time and the **“quantum-to-classical” transition** to happen at scales of physical interest.

We argue that the emergence of the **arrow of time is directly related to the nature and properties of the physical observer**.

## 1 Introduction

Quantum mechanical decoherence is one of the cornerstones of quantum theory 1, 2. Macroscopic physical systems are known to decohere within vanishingly tiny fractions of a second, which, as is generally accepted, effectively leads to the emergence of the **deterministic quasi-classical world** which we experience.

The theory of decoherence has passed extensive experimental tests, and the dynamics of the decoherence process itself has been observed many times in the laboratory 3–15. The analysis of decoherence in non-relativistic quantum mechanical systems is apparently based on the *notion of time*, the latter itself believed to emerge due to decoherence between different WKB branches of the solutions of the Wheeler-DeWitt equation describing quantum gravity 2, 16–19.

Thus, to claim understanding of decoherence “at large”, one has to first understand decoherence in quantum gravity. The latter is clearly problematic, as no consistent and complete theory of quantum gravity has emerged yet.
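For orientation, the Wheeler-DeWitt equation referred to here can be written schematically as follows (a textbook sketch; factor-ordering and regularization subtleties are suppressed, and they are precisely where the incompleteness of quantum gravity shows up):

```latex
% Schematic Wheeler-DeWitt equation: the Hamiltonian constraint
% annihilates the wave functional of the 3-metric h_{ij} and matter \phi.
\hat{\mathcal{H}}\,\Psi[h_{ij}, \phi] = 0
% No external time parameter appears in this equation, which is why
% "time" must emerge from decoherence between WKB branches of \Psi.
```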

Although it is generally believed that describing the dynamics of decoherence in relativistic field theories and gravity presents no fundamental difficulties, and that gravity decoheres quickly due to interaction with matter 20–23, we shall demonstrate here by simple estimates that decoherence of quantum gravitational degrees of freedom might in some relevant cases (in particular, in the physical situation realized in the very early Universe) actually be **rather ineffective**.

The nature of this ineffectiveness is to a large degree related to the non-renormalizability of gravity. To understand how the latter influences the dynamics of decoherence, one can consider theories with a **Landau pole**, such as the λϕ^{4} scalar field theory in *d* ≥ 4 dimensions.

This theory is believed to be trivial 24, since the physical coupling λ_{phys} vanishes in the continuum limit.
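As a reminder of what a Landau pole means here, the standard one-loop result for the four-dimensional ϕ⁴ theory (quoted for orientation; this is textbook material, not an equation from the paper) reads:

```latex
% One-loop running of the quartic coupling in d = 4:
\mu \frac{d\lambda}{d\mu} = \frac{3\lambda^2}{16\pi^2} + O(\lambda^3),
\qquad
\lambda(\mu) = \frac{\lambda_0}{\,1 - \frac{3\lambda_0}{16\pi^2}\ln(\mu/\mu_0)\,}.
% The running coupling blows up at the Landau pole
% \mu_L = \mu_0 \exp\!\big(16\pi^2 / 3\lambda_0\big);
% demanding a finite continuum limit forces \lambda_0 \to 0 (triviality).
```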

When *d* > 4, where the triviality is certain 25, 26, critical exponents of the λϕ^{4} theory and other theories from the same universality class coincide with the ones predicted by **mean field theory**.

Thus, such theories are effectively free in the continuum limit, i.e., when the UV cutoff Λ → ∞. Quantum mechanical decoherence of the field states in such QFTs can only proceed through interaction with other degrees of freedom. If such degrees of freedom are not on the menu, decoherence is not simply slow, it is essentially absent.

In the effective **field theory** formulation of gravity, dimensionless couplings are suppressed by negative powers of the **Planck mass** *M*_{P}, which plays the role of the UV cutoff and becomes infinite in the decoupling limit *M*_{P} → ∞.

Decoherence times for arbitrary configurations of quantum gravitational degrees of freedom also grow with growing *M*_{P}, although, as we shall see below, only logarithmically slowly, and become infinite at complete decoupling. If we recall that gravity is *almost* decoupled from physical matter in the real physical world, the ineffectiveness of quantum gravitational decoherence no longer seems so surprising.

While matter degrees of freedom propagating on a fixed or slightly perturbed background space-time corresponding to a fixed solution branch of the WdW equation decohere very rapidly, decoherence of different WKB solution branches remains a question from the realm of quantum gravity.

Thus, we would like to argue that in order to reconcile the ineffectiveness of quantum gravitational decoherence with the nearly perfectly decohered world which we experience in experiments, some additional physical arguments are necessary, based on the properties of the observer, in particular her/his ability to process and remember information.

**This paper is organized as follows**.

We discuss decoherence in non-renormalizable quantum field theories and the relation between non-renormalizable QFTs and classical statistical systems with a first order phase transition in Section 2.

We discuss decoherence in non-renormalizable field theories in Section 3 using both first- and second-quantized formalisms. Section 4 is devoted to the discussion of decoherence in dS space-time.

We also argue that meta-observers in dS space-time should not be expected to experience effects of decoherence. Standard approaches to quantum gravitational decoherence based on analysis of WdW solutions and master equation for the density matrix of quantum gravitational degrees of freedom are reviewed in Section 5.

Finally, we argue in Section 6 that one of the mechanisms responsible for the emergence of the arrow of time is related to the ability of observers to preserve information about experienced events.

## 2 Preliminary notes on non-renormalizable field theories

To develop a quantitative approach for studying decoherence in non-renormalizable field theories, it is instructive to use the duality between quantum field theories in *d* space-time dimensions and statistical physics models in *d* spatial dimensions. In other words, to gain some intuition about the behavior of non-renormalizable quantum field theories, one can first analyze their statistical physics counterparts, which describe the behavior of classical systems with appropriate symmetries near a phase transition.

Consider for example a large class of non-renormalizable QFTs, which includes theories with *global* discrete and continuous symmetries in a number of space-time dimensions higher than the upper critical dimension *d*_{up}: *d* > *d*_{up}. Euclidean versions of such theories are known to describe a vicinity of the 1st order phase transition on the lattice 27, and their continuum limits do not formally exist: even in close proximity of the critical temperature the physical correlation length of the theory never becomes infinite.
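The upper critical dimension invoked here follows from a standard power-counting check (textbook material, added for orientation): for a scalar λϕ⁴ interaction in *d* space-time dimensions,

```latex
% Mass dimensions in d space-time dimensions:
[\phi] = \frac{d-2}{2}, \qquad
[\lambda] = d - 4\cdot\frac{d-2}{2} = 4 - d,
% so the dimensionless coupling probed at the correlation length \xi is
g \sim \lambda\,\xi^{\,4-d} \;\longrightarrow\; 0
\quad \text{as } \xi \to \infty \text{ for } d > d_{\mathrm{up}} = 4 .
```

In other words, for *d* > 4 the quartic coupling is irrelevant (the theory is non-renormalizable), and its dimensionless strength dies off as the continuum limit is approached.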

One notable example of such a theory is the λϕ^{4} scalar statistical field theory, describing the behavior of the order parameter ϕ in a nearly critical system with discrete *Z*_{2} symmetry. This theory is trivial 25, 26 in *d* > 4. Triviality roughly follows from the observation that the effective dimensionless coupling falls off as λξ^{4−*d*} when the continuum limit is approached.

What does this mean physically? First, the behavior of the theory in *d* > 4 is well approximated by mean field. This can be readily seen by applying the Ginzburg criterion for the applicability of the mean field approximation 28: at *d* > 4 the mean field theory description is applicable arbitrarily close to the critical temperature. This is also easy to check at the diagrammatic level: the two-point function of the field ϕ has the following form in momentum representation

where the self-energy is computed at one loop level (see Fig. 1) in Eq. 1, and *g* ∼ λξ^{4−*d*} is the dimensionless coupling. The first term in the r.h.s. of 1 represents the mean field correction leading to a renormalization/redefinition of the mass. The second term is strongly suppressed at *d* > 4 in comparison to the first one. The same applies to any higher order corrections in powers of λ, as well as corrections from any other local terms in the effective Lagrangian of the theory.
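The one-loop structure being appealed to has the following standard form (a generic sketch with conventional combinatorial factors, not necessarily the paper's exact Eq. 1):

```latex
% Inverse two-point function of \phi at one loop:
G^{-1}(k) = k^2 + m^2
  + \frac{\lambda}{2}\int \frac{d^d q}{(2\pi)^d}\,\frac{1}{q^2 + m^2}
  + O(\lambda^2).
% The momentum-independent tadpole term only renormalizes m^2 (a mean
% field effect); genuinely momentum-dependent corrections first arise at
% O(\lambda^2) and are strongly suppressed for d > 4.
```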

As we see, the behavior of the theory is in fact simple despite its non-renormalizability; naively, since the coupling constant λ has a negative mass dimension 4 − *d*, one expects uncontrollable power-law corrections to observables and coupling constants of the theory. Nevertheless, as 1 implies, the perturbation theory series can be re-summed in such a way that only mean field terms survive. Physics-wise, it is also clear why one comes to this conclusion. In this regime *Z*_{2}-invariant statistical physics models do not possess a second order phase transition, but of course do possess a first order one. Behavior of the theory in the vicinity of the first order phase transition can always be described in the mean field approximation, in terms of the homogeneous order parameter.

Our argument is not entirely complete, as there is a minor caveat. Assume that an effective field theory with the EFT cutoff Λ coinciding with the physical cutoff is considered. Near the point of the 1st order phase transition, when very small spatial scales (much smaller than the correlation length ξ of the theory) are probed, it is almost guaranteed that the probed physics is that of the broken phase. The first order phase transition proceeds through the nucleation of bubbles of a critical size *R*, thus very small scales correspond to physics inside a bubble of the true vacuum, and the EFT of the field is a good description of the behavior of the theory at such scales. As the spatial probe scale increases, such a description will inevitably break down at the IR scale *R*_{IR} given by the expression 2, which grows large in the pre-critical limit. This scale is directly related to the nucleation rate of bubbles: at scales much larger than the bubble size *R* one has to take into account the stochastic background of the ensemble of bubbles of true vacuum on top of the false vacuum, and its deviation from the single-bubble background leads to the breakdown of the effective field theory description, see Fig. 2. Spatial homogeneity is also broken at scales larger than *R*_{IR} by this stochastic background, and this large-scale spatial inhomogeneity is one of the reasons for the EFT description breakdown.

Finally, if the probe scale is much larger than *R*_{IR} (say, roughly, of the order of *R*_{IR} or larger), the observer probes a false vacuum phase with ⟨ϕ⟩ = 0. *Z*_{2} symmetry dictates the existence of two true minima ±ϕ_{0}, and different bubbles have different vacua among the two realized inside them. If one waits long enough, the process of constant bubble nucleation will lead to self-averaging of the observed ⟨ϕ⟩. As a result, the “true” ⟨ϕ⟩ measured over very long spatial scales is always zero.

The main conclusion of this Section is that despite the EFT breakdown at both UV (momenta of order Λ) and IR (momenta of order 1/*R*_{IR}) scales, the non-renormalizable statistical theory remains perfectly under control: one can effectively use a description in terms of the EFT at small scales and a mean field at large scales. In all cases, the physical system remains almost completely described in terms of the homogeneous order parameter or “master field”, as its fluctuations are almost decoupled. Let us now see what this conclusion means for the quantum counterparts of the discussed statistical physics systems.
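The scaling behind this conclusion can be made concrete with a toy numeric check (illustrative numbers only; `effective_coupling` and its inputs are assumptions made for this sketch, not quantities from the paper):

```python
# Toy check: for d > d_up = 4 the dimensionless coupling probed at the
# correlation length, g ~ lambda * xi**(4 - d), is driven to zero as the
# continuum limit xi -> infinity is approached, so fluctuations around
# the homogeneous "master field" effectively decouple.

def effective_coupling(lam: float, xi: float, d: int) -> float:
    """Dimensionless coupling at the scale of the correlation length xi."""
    return lam * xi ** (4 - d)

lam = 1.0  # bare coupling in cutoff units (illustrative)
for d in (5, 6):
    couplings = [effective_coupling(lam, xi, d) for xi in (10.0, 100.0, 1000.0)]
    # each decade of xi suppresses g by a further factor of 10**(d - 4)
    assert couplings[0] > couplings[1] > couplings[2]
print(effective_coupling(1.0, 100.0, 5))  # -> 0.01
```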

## 3 Decoherence in relativistic non-renormalizable field theories

We first focus on the quantum field theory with global *Z*_{2}-symmetry. All of the above (the possibility of EFT descriptions at both small and large scales, and the breakdown of the EFT at the scales Λ^{−1} and *R*_{IR}, with *R*_{IR} given by the expression 2) can be applied to the quantum theory, but there is an important addition concerning decoherence, which we shall now discuss in more detail.

### 3.1 Master field and fluctuations

As we discussed above, for the partition function of the *Z*_{2} invariant statistical field theory describing a vicinity of a first order phase transition one approximately has the expressions 3, 4, where *V* is the volume of the system. Physically, the spatial fluctuations of the order parameter ϕ are suppressed, and the system is well described by the statistical properties of the homogeneous order parameter.

The Wick rotated quantum counterpart of the statistical physics model 3 is determined by the expression for the quantum mechanical “amplitude”

5, written entirely in terms of the “master field” Φ (as usual, *V* is the volume of the (*d* − 1)-dimensional space). In other words, in the first approximation the non-renormalizable theory in *d* dimensions can be described in terms of a master field Φ, roughly homogeneous in space-time. As usual, the wave function of the field can be described as

where Φ_{0} and *t*_{0} are fixed, while Φ and *t* are varied, and the density matrix is given by 6, where the trace is taken over the degrees of freedom not included in Φ, namely the fluctuations δϕ of the field above the master field configuration Φ. The contribution of the latter can be described using the prescription

7. In the “mean field” approximation (corresponding to the continuum limit) the fluctuations δϕ are completely decoupled from the master field Φ, making 5 a good approximation of the theory. To conclude, one physical consequence of the triviality of statistical physics models describing the vicinity of a first order phase transition is that in their quantum counterparts decoherence of entangled states of the master field Φ does not proceed.

### 3.2 Decoherence in the EFT picture

When the correlation length is large but finite, decoherence takes a finite but large amount of time, essentially, as we shall see, determined by the magnitude of ξ. This time scale will now be estimated by two different methods.

As non-renormalizable QFTs admit an EFT description (which eventually breaks down), the dynamics of decoherence in such theories strongly depends on the probe scale, the coarse-graining effectively performed by the observer. Consider a spatio-temporal coarse-graining scale *L* and assume that all modes of the field ϕ with energies/momenta above 1/*L* represent the “environment”, and interaction with them leads to the decoherence of the observed modes with momenta below 1/*L*. If also *L* ≪ ξ, the EFT expansion is applicable. In practice, similar to Kenneth Wilson’s prescription for renormalization group analysis, we separate the field ϕ into the fast, ϕ_{>}, and slow, ϕ_{<}, components, considering ϕ_{>} as an environment; since translational invariance holds “at large”, ϕ_{>} and ϕ_{<} are linearly separable.

The density matrix of the “slow” field or master field configurations is related to the Feynman-Vernon influence functional of the theory 21 according to

8, where the relevant quantities are given by the expressions 9 and 10,

where ϕ_{1, 2} are the Schwinger-Keldysh components of the slow field ϕ_{<}, and the kernels involve the Feynman, negative frequency Wightman and Dyson propagators of the “fast” field ϕ_{>}, respectively. It is easy to see that the expression 9 is essentially the same as 7, which is no surprise, since an observer with an IR cutoff cannot distinguish between Φ and ϕ_{<}. The part of the Feynman-Vernon functional 10 that is interesting for us can be rewritten as 11

(note that non-trivial effects, including that of decoherence, appear at the earliest only at the second order in λ).

An important observation to make is that since the considered non-renormalizable theory becomes trivial in the continuum limit, see 5, the kernels μ and ν can be approximated as local. This is due to the fact that the fluctuations are (almost) decoupled from the master field in the continuum limit, so their contribution to 9 is described by an (almost) *Gaussian* functional. Correspondingly, if one assumes factorization and Gaussianity of the initial conditions for the modes of the “fast” field ϕ_{>}, the Markovian approximation is valid for the functionals 9, 10. A rather involved calculation (see 21) then shows that the density matrix 8 is subject to the master equation 12,

where only the terms of the Hamiltonian density which lead to the exponential decay of non-diagonal matrix elements of ρ are kept explicitly, while … denote oscillatory terms.

The decoherence time can easily be estimated as follows. If only the “quasi”-homogeneous master field is kept in 12, the density matrix is subject to the equation 13. We expect that the master field is close to (but does not necessarily coincide with) the minimum of the potential, which will be denoted Φ_{0} in what follows. For Φ_{1} = Φ_{2}, i.e., the diagonal matrix elements of the density matrix, the decoherence effects are strongly suppressed. For the matrix elements with Φ_{1} ≠ Φ_{2} the decoherence rate is determined by 14. Thus, the decoherence time scale in this regime is given by 15.

It is possible to further simplify this expression. First of all, one notes that λ_{renorm} will enter the final answer instead of the bare coupling λ. As was discussed above (and shown in detail in 25, 26), the dimensionless renormalized coupling *g*_{renorm} is suppressed in the continuum limit as ξ^{4−*d*}, where ξ is the physical correlation length. Second, the physical volume *V* scales with ξ (amounting to the statement that the continuum limit corresponds to the correlation length being of the order of the system size). Finally, the remaining parameters of 15 also scale with ξ, i.e., every quantity in 15 can be presented in terms of the physical correlation length ξ only.

This should not be surprising. As was argued in the previous Sections, the mean field theory description holds effectively in the limit ξ → ∞, which is characterized by the decoupling of fluctuations from the mean field Φ. Self-coupling of fluctuations is also suppressed in the same limit, thus the physical correlation length ξ becomes the single parameter defining the theory. The only effect of taking into account next orders in powers of λ (or other interactions!) in the effective action 9 and the Feynman-Vernon functional 10 is the redefinition of ξ, which ultimately has to be determined from observations. In this sense, 15 holds to all orders in λ, and it can be expected that the scaling 16 holds universally for all Φ of physical interest.
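To put a number on the claim that the decoherence time is controlled by ξ, here is a trivial unit-conversion sketch (the value of ξ is an arbitrary illustrative assumption, not one taken from the paper):

```python
# Toy estimate of the lower bound t_D >~ xi / c discussed in the text:
# a large correlation length directly translates into a long decoherence
# time for superpositions of the master field.

C_LIGHT = 2.998e8  # speed of light, m/s

def decoherence_time_bound(xi_meters: float) -> float:
    """Lower bound (in seconds) on the decoherence time scale."""
    return xi_meters / C_LIGHT

# a kilometer-scale correlation length already gives microsecond-scale
# coherence, enormous by laboratory decoherence standards
t_d = decoherence_time_bound(1.0e3)
assert 3.0e-6 < t_d < 4.0e-6
print(t_d)
```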

According to the expressions 15, 16, the decay of non-diagonal elements of the density matrix takes much longer than ξ/*c* (where *c* is the speed of light). It still takes a time of this order for matrix elements with Φ_{1} ≠ Φ_{2} to decay, a very long time in the limit ξ → ∞.

Finally, if the “vacuum” is excited, i.e., the master field is displaced from its minimum, it returns to the minimum after a certain time and fluctuates near it. It was shown in 21 that the field Φ is subject to the Langevin equation 17,

where the random force is due to the interaction between the master field Φ and the fast modes ϕ_{>}, determined by the corresponding coupling term in the effective action. (The Eq. 17 was derived by application of a Hubbard-Stratonovich transformation to the effective action for the fields Φ and ϕ_{>}, assuming that Φ is close to Φ_{0}.) The average

relaxes towards the minimum, so the master field rolling towards the minimum of its potential plays the role of “time” in the theory. The roll towards the minimum Φ_{0} is very slow, as the rolling time is large in the continuum limit ξ → ∞. Once the field reaches the minimum, there is no “time”, as the master field Φ providing the function of a clock no longer evolves. Decoherence would naively be completely absent for a superposition state of vacua, as follows from 14. However, the physical vacuum as seen by a coarse-grained observer is subject to the Langevin equation 17 even in the closest vicinity of Φ_{0}, and the fluctuations are never zero; the resulting nonzero spread should be substituted into the estimate 15 for matrix elements with Φ_{1, 2} near Φ_{0}.
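The qualitative behavior described here, slow relaxation towards Φ₀ with fluctuations that never switch off, can be illustrated with a toy Euler-Maruyama integration of an overdamped Langevin equation (the double-well potential, damping and noise strength are illustrative assumptions, not the paper's Eq. 17):

```python
# Toy overdamped Langevin dynamics dPhi = -V'(Phi) dt + eta dW for a
# "master field": the drift rolls Phi towards a minimum at +-PHI0 while
# the noise term keeps fluctuations around it nonzero forever.
import random

PHI0, LAM, NOISE, DT = 1.0, 0.5, 0.05, 0.01

def v_prime(phi: float) -> float:
    """Derivative of the double-well V(phi) = (LAM/4) * (phi^2 - PHI0^2)^2."""
    return LAM * phi * (phi * phi - PHI0 * PHI0)

def evolve(phi: float, steps: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    for _ in range(steps):
        # deterministic drift towards a minimum + white-noise kick
        phi += -v_prime(phi) * DT + NOISE * rng.gauss(0.0, 1.0) * DT ** 0.5
    return phi

phi_final = evolve(phi=0.3, steps=20_000)
# the field has rolled to the vicinity of a minimum but still fluctuates
assert abs(abs(phi_final) - PHI0) < 0.3
print(phi_final)
```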

What was discussed above holds for coarse-graining scales below *R*_{IR}, where *R*_{IR} is given by the expression 2. If the coarse-graining scale is of the order of *R*_{IR} or larger, the EFT description breaks down, since at this scale the effective dimensionless coupling between different modes becomes of order 1, and the modes can no longer be considered weakly interacting. However, we recall that at probe scales much larger than *R*_{IR} the unbroken-phase mean field description is perfectly applicable (see above). This again implies extremely long decoherence time scales.

The emergent physical picture is one of entangled states whose coherence survives for a very long time (at least ξ/*c*) on spatial scales of the order of at least ξ. The largeness of the correlation length ξ in statistical physics models describing the vicinity of a first order phase transition implies large scale correlations at spatial scales of order ξ. As was suggested above, decoherence is indeed very ineffective in such theories. We shall see below that the physical picture presented here has a very large number of analogies in the case of decoherence in quantum gravity.

### 3.3 Decoherence in functional Schrödinger picture

Let us now perform a first quantized analysis of the theory and see how decoherence emerges in it. As the master field Φ is constant in space-time, the field state approximately satisfies the Schrödinger equation

where the form of the Hamiltonian follows straightforwardly from 5:

The physical meaning of *E*_{0} is the vacuum energy of the scalar field, which one can safely choose to be 0.

Next, one looks for a quasi-classical solution of the Schrödinger equation. The wave function of the fluctuations δϕ (in the terminology of the previous Subsection) in turn satisfies the Schrödinger equation 18, where the Hamiltonian of the fluctuations is

and the full state of the field is a product of the quasi-classical and fluctuation factors (again, we naturally assume that the initial state was a factorized Gaussian). It was previously shown (see 17 and references therein) that the “time”-like affine parameter τ in 18 in fact coincides with the physical time *t*.

Writing down the expression for the density matrix of the master field Φ

(19), where

one can then repeat the analysis of 17. Namely, one takes a Gaussian ansatz for (again, this is justified by the triviality of the theory)

where *N* and Ω satisfy the equations (20), (21)

and the trace denotes integration over modes with different momenta:

The expression for can immediately be found using Eq. 20 and the normalization condition

(if , the former completely determines the absolute value , while the latter determines the phase ). Then, after performing the Gaussian functional integration in 19, the density matrix can be rewritten in terms of the real part of as

Assuming that Φ_{1} and Φ_{2} are close and following 17, we expand

where again , , and we keep terms proportional to Δ^{2} only. A straightforward but lengthy calculation shows that the exponentially decaying term in the density matrix has the form (22), where *D* is the decoherence factor; the decoherence time can be directly extracted from this expression.

To do so, we note that obeys Eq. 21. When , one has , and Ω has no dynamics according to 21. However, if , . As the dynamics of Φ is slow (see Eq. 17), one can consider ω as a function of the constant field Φ and integrate Eq. 21 directly. As the time *t* enters the solution of this equation only in the combination , one immediately sees that the factor 22 contains a term in the exponent, defining the decoherence time. The latter coincides with the expression 15 derived in the previous Section, as should be expected.
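The adiabatic argument above can be illustrated numerically. A minimal sketch, assuming Eq. 21 takes the standard single-mode Riccati form i dΩ/dt = Ω² − ω(t)² for a Gaussian ansatz ψ ∝ exp(−Ω χ²/2) (this form and all parameter values are our illustrative assumptions, not the paper's exact conventions):

```python
# Minimal sketch: RK4 integration of the assumed Riccati equation
# i dOmega/dt = Omega^2 - omega(t)^2 for the Gaussian covariance Omega.
import math

def evolve_omega(omega, Omega0, t_max=10.0, dt=1e-4):
    """Integrate i dOmega/dt = Omega^2 - omega(t)^2 with a 4th-order Runge-Kutta step."""
    Omega = complex(Omega0)
    f = lambda t, O: -1j * (O * O - omega(t) ** 2)
    t = 0.0
    while t < t_max:
        k1 = f(t, Omega)
        k2 = f(t + dt / 2, Omega + dt * k1 / 2)
        k3 = f(t + dt / 2, Omega + dt * k2 / 2)
        k4 = f(t + dt, Omega + dt * k3)
        Omega += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return Omega

# Constant omega: Omega = omega is a fixed point, i.e. Omega has no dynamics.
print(evolve_omega(lambda t: 1.0, 1.0))
# Slowly varying omega(Phi(t)): Omega adiabatically tracks omega(t).
print(evolve_omega(lambda t: 1.0 + 0.05 * math.tanh(0.1 * t), 1.0))
```

For constant ω the covariance stays frozen, matching the statement that Ω has no dynamics; for a slowly drifting ω(Φ(t)) it tracks the instantaneous frequency, which is the regime used above to integrate Eq. 21 with ω treated as a function of the constant field Φ.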

Thus, the main conclusion of this Section is that the characteristic decoherence time scale in non-renormalizable field theories, akin to the theory in a number of dimensions higher than 4, is at least of the order of the physical correlation length ξ of the theory, which is taken to be large in the continuum limit. In other words, decoherence near the continuum limit is very ineffective for such theories.

## 4 Decoherence of QFTs on curved space-times

Before proceeding to the discussion of the case of gravity, it is instructive to consider how the dynamics of decoherence of a QFT changes once the theory is set on a curved space-time. As we shall see in a moment, even when the theory is renormalizable (the number of space-time dimensions ), the setup features many similarities with the case of a non-renormalizable field theory in flat space-time discussed in the previous Section.

Consider a scalar QFT with potential in 4 curved space-time dimensions. Again, we assume the nearly critical case, which is why the renormalized quadratic term determining the correlation length of the theory is set to vanish (compared to the cutoff scale Λ, again for definiteness ). The scale ξ is no longer the only relevant one in the theory. The structure of the Riemann tensor of the space-time (the latter is assumed to be not too curved) introduces new infrared scales into the theory, and the dynamics of decoherence depends on the relation between these scales and the mass scale *m*. Without much loss of generality and for the sake of simplicity, one can consider a space-time characterized by a single such scale (a cosmological constant) related to the Ricci curvature of the background space-time. It is convenient to write

assuming that the *V*_{0} term dominates the energy density. At spatio-temporal probe scales much smaller than the horizon size one can choose the state of the field to be the Bunch-Davies (or Allen-Mottola) vacuum or an arbitrary state from the same Fock space. The procedures of renormalization, of constructing the effective action of the theory and of constructing its Feynman-Vernon influence functional are similar to those for QFT in Minkowski space-time. Thus, so is the dynamics of decoherence due to tracing out unobservable UV modes; the decoherence time scale is again of the order of the physical correlation length of the theory:

in complete analogy with the estimate 16. This standard answer is replaced by (23) when the mass of the field becomes smaller than the Hubble scale, , and the naive correlation length ξ exceeds the horizon size of . (The answer 23 is correct up to a logarithmic prefactor .) It is interesting to analyze the case in more detail. The answer (4) is only applicable for a physical observer living inside a single Hubble patch. What does the decoherence of the field ϕ look like from the point of view of *a meta-observer*, who is able to probe the super-horizon large scale structure of the field ϕ?7 It is well known 29, 30 that the field ϕ in the planar patch of , coarse-grained at the spatio-temporal scale of the cosmological horizon, is (approximately) subject to the Langevin equation (24)

where the average is taken over the Bunch-Davies vacuum, very similar to 17, but with the difference that the amplitude of the white noise and the dissipation coefficient are correlated with each other. The corresponding Fokker-Planck equation (25) describes the behavior of the probability to measure a given value of the field ϕ at a given moment of time at a given point of the coarse-grained space. Its solution is normalizable and has the asymptotic behavior (26). As correlation functions of the coarse-grained field ϕ are calculated according to the prescription

(note that two-, three-, etc. point functions of ϕ are zero, and only one-point correlation functions are non-trivial), what we are dealing with in the case 26 is nothing but *a mean field theory*, with a free energy calculated as an integral of the mean field ϕ over the 4-volume of a single Hubble patch. As we have discussed in the previous Section, decoherence is not experienced as a physical phenomenon by the meta-observer at all. In fact, the coarse-graining comoving scale separating the two distinctly different regimes, a weakly coupled theory with relatively slow decoherence and a mean field theory with entirely absent decoherence, is of the order (27), where is the de Sitter entropy (compare this expression with 2). Overall, the physical picture which emerges for the scalar quantum field theory on the background is not very different from the one realized for the non-renormalizable field theory in Minkowski space-time, see Fig. 3:

- for observers with a small coarse-graining (comoving) scale the decoherence time scale is at most , which is rather large physically (of the order of the cosmological horizon size for a given Hubble patch),
- for a meta-observer with a coarse-graining (comoving) scale , where *R*_{IR} is given by 27, decoherence is absent entirely, and the underlying theory is experienced as a mean field by such meta-observers.

Another feature of the present setup that is consistent with the behavior of a non-renormalizable field theory in flat space-time is the breakdown of the effective field theory for the curvature perturbation in the IR 31 (as well as the IR breakdown of perturbation theory on a fixed background 32); compare with the discussion in Section 3. Control over the theory can be recovered if the behavior of observables in the EFT regime is glued to the IR mean field regime of eternal inflation 33.
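Returning to the stochastic description of Eqs. 24, 25: the coarse-grained dynamics can be simulated directly. A minimal sketch, assuming the standard Starobinsky-Yokoyama form dφ/dt = −V′(φ)/(3H) + ξ(t) with ⟨ξ(t)ξ(t′)⟩ = H³/(4π²) δ(t − t′), whose normalizable equilibrium is P(φ) ∝ exp[−8π²V(φ)/(3H⁴)] (the concrete form and all parameter values are our illustrative assumptions):

```python
# Euler-Maruyama simulation of the stochastic-inflation Langevin equation
# (assumed Starobinsky-Yokoyama form; H and m values are illustrative).
import numpy as np

def simulate_variance(H=1.0, m=0.5, n_traj=4000, t_max=120.0, dt=0.01, seed=0):
    """Ensemble variance of phi for V(phi) = m^2 phi^2 / 2 after relaxation."""
    rng = np.random.default_rng(seed)
    phi = np.zeros(n_traj)
    gamma = m ** 2 / (3.0 * H)          # drift: dphi/dt = -V'(phi)/(3H) + xi
    sigma = H ** 1.5 / (2.0 * np.pi)    # noise: <xi xi> = H^3/(4 pi^2) delta
    for _ in range(int(t_max / dt)):
        phi += -gamma * phi * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_traj)
    return phi.var()

# The Fokker-Planck equilibrium P ~ exp(-8 pi^2 V / (3 H^4)) predicts
# <phi^2> = 3 H^4 / (8 pi^2 m^2), about 0.152 for H = 1, m = 0.5.
print(simulate_variance())
```

The simulated variance relaxes to the Fokker-Planck equilibrium prediction, i.e., to the one-point mean field statistics experienced by the meta-observer.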

## 5 Decoherence in quantum gravity

Given the discussions of the previous two Sections, we are finally ready to muse on the subject of decoherence in quantum gravity, the emergence of time and the cosmological arrow of time, focusing on the case of dimensions. The key observation for us is that the *critical number of dimensions for gravity is* ; thus it is tempting to hypothesize that the case of gravity might have some similarities with the non-renormalizable theories discussed in Section 3. One can perform the analysis of decoherence in quantum gravity following the strategy presented in Section 3.2, i.e., studying the EFT of the second-quantized gravitational degrees of freedom, constructing the Feynman-Vernon functional for them and extracting the characteristic decoherence scales from it (see for example 34). However, it is more convenient to follow the strategy outlined in Section 3.3. Namely, we would like to apply the Born-Oppenheimer approximation 17 to the Wheeler-de Witt equation (28), describing the behavior of the relevant degrees of freedom (gravity + a free massive scalar field with mass *m* and Hamiltonian ). As usual, the gravitational degrees of freedom include the functional variables of the ADM split: the scale factor *a*, the shift and lapse functions and the transverse traceless tensor perturbations . The WdW equation 28 does not contain time at all; similarly to the case of the Fokker-Planck equation 25 for inflation 29, the scale factor *a* replaces it. Time emerges only after a particular WKB branch of the solution Ψ is picked, and the WKB piece of the wave function Ψ is explicitly separated from the wave functions of the multipoles 17, so that the full state is factorized: . Similarly to the case discussed in Section 3.3, the latter then satisfy the functional Schrödinger equations (29) (compare to 18). In other words, as gravity propagates in space-time dimensions, we assume an almost complete decoupling of the multipoles from each other.
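The WKB factorization just described can be sketched schematically (a standard Born-Oppenheimer treatment along the lines of 17; the normalization conventions and symbols below are ours):

```latex
% Born-Oppenheimer/WKB ansatz for the WdW wave function: the quasi-classical
% gravitational piece factorizes from the multipole wave functions.
\hat{H}_{\rm WdW}\,\Psi = 0 ,
\qquad
\Psi[a,\{x_n\}] \;\simeq\; e^{\,i M_P^2 S_0[a]} \prod_n \psi_n(a;\,x_n) .
% At leading order in M_P^2 the phase S_0 obeys the Hamilton-Jacobi equation
% of the background; at the next order each multipole satisfies a functional
% Schrodinger equation along the chosen WKB branch,
i\,\partial_\tau \psi_n = \hat{H}_n\,\psi_n ,
\qquad
\partial_\tau \equiv \nabla S_0 \cdot \nabla ,
% with the affine parameter tau playing the role of physical time.
```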
Their Hamiltonian is expected to be Gaussian, with possible dependence on *a*: the ‘s are analogous to the states ψ described by 18 in the case of a non-renormalizable field theory in flat space-time. (We note, though, that this assumption of decoupling might, generally speaking, break down in the vicinity of horizons such as black hole horizons, where the effective dimensionality of space-time approaches 2, the critical number of dimensions for gravity.) The affine parameter τ along the WKB trajectory is again defined according to the prescription

and starts to play the role of the physical time 17. One is led to conclude that the emergence of time is related to the decoherence between different WKB branches of the WdW wave function Ψ, and such emergence can be analyzed quantitatively. It was found in 17 by explicit calculation that the density matrix for the scale factor *a* behaves as

where the decoherence factor for a single WKB branch of the WdW solution is given by (30). We note the analogy of this expression with the expression 22 derived in Section 3.3: decoherence vanishes in the limit (or ) and is suppressed by powers of the cutoff ( can roughly be considered as a dimensionless effective coupling between matter and gravity). In particular, decoherence is completely absent in the decoupling limit . To estimate the time scales involved, let us consider for definiteness the planar patch of with . It immediately follows from 30 that single WKB branch decoherence only becomes effective after (31) Hubble times, a logarithmically large number of efoldings in the regime of physical interest, when (see also the discussion of the decoherence of cosmic fluctuations in 35, where a similar logarithmic amplification with respect to a single Hubble time is found). Similarly, the decoherence scale between the two WKB branches of the WdW solution (corresponding to expansion and contraction of the inflating space-time)

can be shown to be somewhat smaller 17, 34: one finds for the decoherence factor

and the decoherence time (derived from the bound ) is given by (32), still representing a logarithmically large number of efoldings. Taking for example GeV and GeV, one finds . Even for an inflationary energy scale GeV the decoherence time scale is given by inflationary efoldings, still a noticeable number. Interestingly, it also takes a few efoldings for the modes leaving the horizon to freeze and become quasi-classical. Note that (a) *H*_{0} does not enter the expression 31 at all, and it can be expected to hold for other (relatively spatially homogeneous) backgrounds beyond ; (b) 31 is proportional to powers of the effective dimensionless coupling between matter and gravity, which gets suppressed in the “continuum”/decoupling limit by powers of the cutoff; (c) decoherence is absent for the elements of the density matrix with . These analogies allow us to expect that a set of conclusions similar to the ones presented in Sections 3 and 4 would hold for gravity on other backgrounds as well:

- we expect the effective field theory description of gravity to break down in the IR at scales 8; the latter is exponentially larger than the characteristic curvature radius of the background; we roughly expect (33)
- at very large probe scales gravitational decoherence is absent; a meta-observer testing the theory at such scales is dealing with the “full” solution of the Wheeler-de Witt equation, which does not contain time, in analogy with the eternal inflation scale 27 in a space-time filled with a light scalar field,
- at probe scales purely gravitational decoherence is slow, as it typically takes for the WdW wave function to decohere, if time is measured by the clock associated with the matter degrees of freedom.
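The "logarithmically large number of efoldings" quoted above can be illustrated with hypothetical numbers, using the schematic scaling N_dec ~ ln(M_P/H) as a stand-in for the exact prefactors of Eqs. 31, 32 (both the scaling and the values of H are our illustrative assumptions, not the paper's quoted figures):

```python
# Hypothetical illustration: N_dec ~ ln(M_P / H) efoldings (schematic
# stand-in for Eqs. (31)-(32); all numbers are illustrative).
import math

M_P = 2.4e18  # reduced Planck mass in GeV

for H in (1e16, 1e13, 1e10):  # hypothetical inflationary Hubble scales, GeV
    print(f"H = {H:.0e} GeV -> N_dec ~ {math.log(M_P / H):.1f} efoldings")
```

Even for the highest scales the result is several efoldings, while lower scales give tens: always a noticeable but logarithmically modest number, as stated above.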

Finally, it should be noted that gravity differs from the non-renormalizable field theories described in Sections 2, 3 in several respects, two of which might be relevant for our analysis: (a) gravity couples to *all* matter degrees of freedom, a fact which might lead to a suppression of the corresponding effective coupling entering the decoherence factor 30, and (b) it effectively couples to macroscopic configurations of matter fields without any screening effects (this fact is responsible for the rapid decoherence rate calculated in the classic paper 20). Regarding point (a), it has been previously argued that the actual scale at which the effective field theory for gravity breaks down and gravity becomes strongly coupled is suppressed by the effective number of matter fields *N* (see for example 36, where the strong coupling scale is estimated to be of the order , rather than the Planck mass ). It is in fact rather straightforward to extend the arguments presented above to the case of *N* scalar fields with *Z*_{2} symmetry. One immediately finds that the time scale of decoherence between the expanding and contracting branches of the WdW solution is given by

(to be compared with Eq. 32), while the single branch decoherence proceeds at time scales of the order

For the decoherence between the expanding and contracting WdW branches discussed in this Section, and for the emergence of the cosmological arrow of time, it is important that most of the matter fields are in their corresponding vacuum states (with the exception of light scalars, which are not redshifted away), so the effective *N* remains rather low, and our estimates are only extremely weakly affected by the *N* dependence. As for point (b), macroscopic configurations of matter (again, with the exception of light scalars with ) do not yet exist at the time scales of interest.
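Returning to point (a): the N-species suppression of the gravitational strong-coupling scale can be made explicit. The commonly quoted form Λ ~ M_P/√N is our reading of the estimate cited from 36 and should be treated as an assumption:

```python
# N-species suppression of the gravitational strong-coupling scale,
# Lambda ~ M_P / sqrt(N) (assumed form; values are illustrative).
import math

M_P = 2.4e18  # reduced Planck mass in GeV

for N in (1, 10 ** 2, 10 ** 4):
    print(f"N = {N:>5} -> Lambda ~ {M_P / math.sqrt(N):.1e} GeV")
```

Since the suppression is only a square root, the strong-coupling scale stays close to the Planck scale unless *N* is enormous, consistent with the estimates above remaining only weakly affected by the *N* dependence.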

## 6 Discussion

We concluded the previous Section with the observation that the quantum gravitational decoherence responsible for the emergence of the arrow of time is in fact rather ineffective. If the typical curvature scale of the space-time is , it takes at least (34) efoldings for the quasi-classical WdW wave function describing a superposition of expanding and contracting regions to decohere into separate WKB branches. Whichever matter degrees of freedom we are dealing with, we expect the estimate 34 to hold and remain robust.

Once decoherence has happened, the direction of the arrow of time is given by the vector ; at spatio-temporal scales smaller than 34 the decoherence factor remains small, and the state of the system represents a quantum foam, with the amplitudes *c*_{1, 2} determining the probabilities of picking an expanding/contracting WKB branch, respectively. Interestingly, the same picture is expected to be reproduced once the probe scale of an observer becomes larger than the characteristic curvature scale *R*. As we explained above, the ineffectiveness of gravitational decoherence is directly related to the fact that gravity is a non-renormalizable theory, which is nearly completely decoupled from the quantum dynamics of the matter degrees of freedom. If so, a natural question emerges: why do we then experience reality as quasi-classical, with the arrow of time strictly directed from the past to the future and with quantum mechanical matter degrees of freedom decohered at macroscopic scales? Given an answer to the first part of the question, so that the quantum gravitational degrees of freedom can be considered quasi-classical, albeit perhaps stochastic, the second part is very easy to answer. A quasi-classical stochastic gravitational background radiation leads to decoherence of matter degrees of freedom at a time scale of the order

where *E*_{1, 2} are the rest energies of the two quantum states of the considered configuration of matter (see for example 37, 38). This decoherence process happens extremely quickly for macroscopic configurations of total mass much larger than the Planck mass kg. Thus, the problem, as was mentioned earlier, is with the first part of the question. As there seems to be no physical mechanism in quantized general relativity leading to quantum gravitational decoherence at spatio-temporal scales smaller than 34, an alternative idea would be to put the burden of fixing the arrow of time on the observer. In particular, it is tempting to use the idea of 39, 40, where it was argued that quasi-classical trajectories are associated with an increase of the quantum mutual information between the observer and the observed system and the corresponding increase of the mutual entanglement entropy. Vice versa, time-reversed trajectories should be expected to be associated with a decrease of the quantum mutual information. Indeed, consider an observer *A*, an observed system *B* and a reservoir *R* such that the state of the combined system is pure, i.e., *R* is a purification space of the system . It was shown in 39 that (35), where is the difference of the von Neumann entropies of the observer subsystem described by the density matrix , estimated at times *t* and 0, while is the quantum mutual information difference, trivially related to the difference in quantum mutual entropy for the subsystems *A* and *B*. It immediately follows from 35 that an apparent decrease of the von Neumann entropy is associated with a decrease in the quantum mutual information , very roughly, with an erasure of the quantum correlations between *A* and *B* (encoded in the memory of the observer *A* while observing the evolution of the system *B*).

As the direction of the arrow of time is associated with the increase of the von Neumann entropy, the observer *A* is simply unable to recall any behavior of the subsystem *A* associated with a decrease of its von Neumann entropy in time. In other words, if physical processes representing “probing the future” can physically happen, and our observer is capable of detecting them, she will not be able to store a memory of such processes. Once the quantum trajectory returns to the starting point (the “present”), any memory of the observer’s excursion to the future is erased.
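The bookkeeping in the expression 35 can be checked in a toy model (our construction, not the setup of the cited work): take *A* and *B* to be single qubits with the joint state pure, so that S_AB = 0 and I(A:B) = S_A + S_B = 2 S_A; evolving a product state into a Bell pair then gives ΔS_A = ln 2 and ΔI(A:B) = 2 ln 2, consistent with the bound.

```python
# Toy two-qubit check of Delta S_A <= Delta I(A:B) for a pure joint state
# (illustrative construction; not the setup of the cited work).
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy S = -Tr(rho ln rho), natural logarithm."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]                      # drop numerically zero eigenvalues
    return float(-(w * np.log(w)).sum())

def reduced(rho_AB, keep):
    """Reduced 2x2 density matrix of qubit 'A' or 'B' of a 4x4 state."""
    r = rho_AB.reshape(2, 2, 2, 2)        # indices (a, b, a', b')
    return r.trace(axis1=1, axis2=3) if keep == "A" else r.trace(axis1=0, axis2=2)

def mutual_info(rho_AB):
    """I(A:B) = S_A + S_B - S_AB."""
    return (vn_entropy(reduced(rho_AB, "A"))
            + vn_entropy(reduced(rho_AB, "B"))
            - vn_entropy(rho_AB))

product = np.zeros(4); product[0] = 1.0                  # |00>
bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

for psi in (product, bell):
    rho = np.outer(psi, psi.conj())
    print(vn_entropy(reduced(rho, "A")), mutual_info(rho))
```

The observer's entropy grows only by accumulating mutual information with the system, so an apparent entropy decrease requires erasing I(A:B), i.e., the stored correlations.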

It thus becomes clear that the discussion of the emergence of time (and the physics of decoherence in general) demands a somewhat stronger involvement of the observer than is usually accepted in the literature. In particular, one has to ascribe to the observer not only the infrared and ultraviolet “cutoff” scales defining which modes of the probed fields should be regarded as environmental degrees of freedom to be traced out in the density matrix, but also a quantum memory capacity. Specifically, if the observer does not possess any quantum memory capacity at all, the accumulation of mutual information between the observer and the observed physical system is impossible, and the theorem of 39, 40 does not apply: in a sense, a “brainless” observer does not experience time and/or decoherence of any degrees of freedom (as was earlier suggested in 41).
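The notion of memory capacity invoked here can be quantified with standard estimates (a sketch; the ≈0.138 N figure for Hopfield networks is the classic Amit-Gutfreund-Sompolinsky result, and identifying quantum capacity with the Hilbert-space dimension 2^n is our simplification):

```python
# Classical associative memory vs quantum memory capacity (standard estimates;
# equating quantum capacity with Hilbert-space dimension is a simplification).
def hopfield_capacity(n_neurons):
    """Classic estimate: ~0.138 * N reliably storable binary patterns."""
    return 0.138 * n_neurons

def hilbert_dim(n_qubits):
    """Number of mutually orthogonal states of an n-qubit memory: 2^n."""
    return 2 ** n_qubits

print(hopfield_capacity(1000))   # polynomial in the number of units
print(hilbert_dim(1000))         # exponential in the number of qubits
```

The gap between polynomial and exponential capacity is what separates the classical and quantum observers discussed below.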

It should be emphasized that the argument of 39 applies only to quantum mutual information; processes are possible in which the classical mutual information increases, whereas the quantum mutual information decreases: recall that the quantum mutual information is the upper bound of . Thus, the logic of the expression 35 applies to observers with a “quantum memory” of exponential capacity in the number of qubits9 rather than with a classical memory of polynomial capacity, such as the ones described by Hopfield networks.

## Footnotes

1. There exist counter-arguments in favor of the existence of a genuine strong coupling limit for 42.
2. Similarly, Euclidean *Z*_{2}, *O*(2) and gauge field theories are all known to possess a first order phase transition on the lattice at .
3. Most probably, it is trivial even in *d* = 4, where it features a Landau pole (although there exist arguments in favor of a non-trivial behavior at strong coupling, see for example 42).
4. This is equivalent to the statement that trivial theories do not admit a continuum limit.
5. A note should be taken at this point regarding the momentum representation of the modes. As usual, is defined as an integral over Fourier modes of the field with small momenta. As explained above, the quantum theory with an existing continuum limit is a Wick-rotated counterpart of the statistical physics model describing a second order phase transition. In the vicinity of a second order phase transition, broken and unbroken symmetry phases are continuously intermixed, which leads to the translational invariance of correlation functions of the order parameter ϕ. In the case of a first order phase transition, such invariance is, strictly speaking, broken in the presence of a stochastic background of nucleating bubbles of the broken symmetry phase, see the discussion in the previous Section. Therefore, the problem “at large” rewritten in terms of and becomes of Caldeira-Leggett type 43. If we focus our attention on the physics at scales smaller than the bubble size, translational invariance does approximately hold, and we can consider and as linearly separable (if they are not, we simply diagonalize the part of the Hamiltonian quadratic in ϕ).
6. Here, we kept only the leading terms in , as higher loops as well as other non-renormalizable interactions provide contributions to the FV functional which are subdominant (and vanishing!) in the continuum limit .
7. This question is not completely meaningless, since a setup is possible in which the value of *V*_{0} suddenly jumps to zero, so that the background space-time becomes Minkowski in the limit , and the field structure inside a single Minkowski lightcone becomes accessible to an observer. If her probe/coarse-graining scale is , this is the question which we are trying to address.
8. A space-like interval connecting two causally unconnected events.
9. The number of possible stored patterns is , where is the number of qubits in the memory device.