Thursday, February 15, 2018

What does it mean for string theory that the LHC has not seen supersymmetric particles?



The LHC data so far have not revealed any evidence for supersymmetric particles, or any other new particles. For all we know at present, the standard model of particle physics suffices to explain observations.

There is some chance that better statistics which come with more data will reveal some less obvious signal, so the game isn’t yet over. But it’s not looking good for susy and her friends.
Simulated signal of black hole
production and decay at the LHC.
[Credits: CERN/ATLAS]

What are the consequences? The consequences for supersymmetry itself are few. The reason is that supersymmetry by itself is not a very predictive theory.

To begin with, there are various versions of supersymmetry. But more importantly, the theory doesn’t tell us what the masses of the supersymmetric particles are. We know they must be heavy enough that we would not already have observed them, but that’s it. There is nothing in supersymmetric extensions of the standard model which prevents theorists from raising the masses of the supersymmetric partners until they are out of the reach of the LHC.

This is also the reason why the no-show of supersymmetry has no consequences for string theory. String theory requires supersymmetry, but it makes no requirements about the masses of supersymmetric particles either.

Yes, I know the headlines said the LHC would probe string theory, and the LHC would probe supersymmetry. The headlines were wrong. I am sorry they lied to you.

But the LHC, despite not finding supersymmetry or extra dimensions or black holes or unparticles or what have you, has taught us an important lesson. That’s because it is clear now that the Higgs mass is not “natural”, in contrast to all the other particle masses in the standard model. That a mass is natural means, roughly speaking, that calculating it should not require the input of finely tuned numbers.
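
For illustration, here is a toy numerical sketch in Python. It is not the actual standard-model calculation, just an analogy: if a small observed number arises as the difference of two huge contributions, the inputs must agree to many digits for the output to come out small. That required agreement is what “finely tuned” refers to.

```python
# Toy illustration of fine-tuning, not the actual standard-model calculation:
# a small observed value obtained as the difference of two huge contributions
# requires the inputs to agree to many digits.
bare_term = 1.000000000000e32
correction = -0.999999999999e32        # chosen to cancel the bare term to 12 digits

observed = bare_term + correction
print(f"observed value:  {observed:.3e}")              # ~1e+20, tiny compared to the inputs
print(f"required tuning: {observed / bare_term:.1e}")  # ~1e-12
```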

The idea that the Higgs-mass should be natural is why many particle physicists were confident the LHC would see something beyond the Higgs. This didn’t happen, so the present state of affairs forces them to rethink their methods. There are those who cling to naturalness, hoping it might still be correct, just in a more difficult form. Some are willing to throw it out and replace it instead with appealing to random chance in a multiverse. But most just don’t know what to do.

Personally I hope they’ll finally come around and see that they have tried for several decades to solve a problem that doesn’t exist. There is nothing wrong with the mass of the Higgs. What’s wrong with the standard model is the missing connection to gravity and a Landau pole.

Be that as it may, the community of theoretical particle physicists is currently in a phase of rethinking. There are of course those who already argue a next larger collider is needed because supersymmetry is just around the corner. But the main impression that I get when looking at recent publications is a state of confusion.

Fresh ideas are needed. The next years, I am sure, will be interesting.



I explain all about supersymmetry, string theory, the problem with the Higgs-mass, naturalness, the multiverse, and what they have to do with each other in my upcoming book “Lost in Math.”

Monday, February 12, 2018

Book Update: First Review!

The final proofs are done and review copies were sent out. One of the happy recipients, Emmanuel Rayner, read the book within two days, and so we now have a first review on Goodreads. That’s not counting the two-star review by someone who I am very sure hasn’t read the book, because he “reviewed” it before there were review copies. Tells you all you need to know about online ratings.

The German publisher, Fischer, is still waiting for the final manuscript which has not yet left the US publisher’s rear end. Fischer wants to get started on the translation so that the German edition appears in early fall, only a few months later than the US edition.

Since I get this question a lot: no, I will not translate the book myself. To begin with, agreeing to translate an 80k-word manuscript when someone else can do it instead seemed like a rather stupid thing to do. Maybe more importantly, my German writing is miserable, owing to a grammar reform which struck the country the year after I had moved overseas and which therefore entirely passed me by. Add to this that the German spell-check on my laptop isn’t working (it’s complicated), that I have an English keyboard, hence no umlauts, and also, did I mention I didn’t have to do it in the first place?

Problems start with the title. “Lost in Math” doesn’t translate well to German, so the Fischer people are searching for a new title. They have been searching for two months, for all I can tell. I imagine them randomly opening pages of a dictionary, looking for inspiration.

Meanwhile, they have recruited a photographer and scheduled an appointment for me to have headshots taken. Because in Germany you leave nothing to chance. So next week I’ll be photographed.

In other news, at the end of February I will give a talk at a workshop on “Naturalness, Hierarchy, and Fine Tuning” in Aachen, and I have agreed to give a seminar in Heidelberg at the end of April, both of which will be more or less about the topic of the book. So stop by if you are interested and in the area.

And do not forget to preorder a copy if you haven’t yet done so!

Wednesday, February 07, 2018

Which problems make good research problems?

mini-problem [answer here]
Scientists solve problems; that’s their job. But which problems are promising topics of research? This is the question I set out to answer in Lost in Math at least concerning the foundations of physics.

A first, rough, classification of research problems can be made using Thomas Kuhn’s cycle of scientific theories. Kuhn’s cycle consists of a phase of “normal science” followed by “crisis” leading to a paradigm change, after which a new phase of “normal science” begins. This grossly oversimplifies reality, but it will be good enough for what follows.

Normal Problems

During the phase of normal science, research questions usually can be phrased as “How do we measure this?” (for the experimentalists) or “How do we calculate this?” (for the theorists).

The Kuhn Cycle.
[Img Src: thwink.org]
In the foundations of physics, we have a lot of these “normal problems.” For the experimentalists it’s because the low-hanging fruits have been picked and measuring anything new becomes increasingly challenging. For the theorists it’s because in physics predictions don’t just fall out of hypotheses. We often need many steps of argumentation and lengthy calculations to derive quantitative consequences from a theory’s premises.

A good example for a normal problem in the foundations of physics is cold dark matter. The hypothesis is easy enough: There’s some cold, dark, stuff in the cosmos that behaves like a fluid and interacts weakly both with itself and other matter. But that by itself isn’t a useful prediction. A concrete research problem would instead be: “What is the effect of cold dark matter on the temperature fluctuations of the cosmic microwave background?” And then the experimental question “How can we measure this?”

Other problems of this type in the foundations of physics are “What is the gravitational contribution to the magnetic moment of the muon?,” or “What is the photon background for proton scattering at the Large Hadron Collider?”

Answering such normal problems expands our understanding of existing theories. These are calculations that can be done within the frameworks we have, but the calculations can be challenging.

The examples in the previous paragraphs are solved problems, or at least problems that we know how to solve, though you can always ask for higher precision. But we also have unsolved problems in this category.

The quantum theory of the strong nuclear force, for example, should largely predict the masses of particles that are composed of several quarks, like neutrons, protons, and other similar (but unstable) composites. Such calculations, however, are hideously difficult. They are today made by use of sophisticated computer code – “lattice calculations” – and even so the predictions aren’t all that great. A related question is how nuclear matter behaves in the core of neutron stars.

These are but some randomly picked examples for the many open questions in physics that are “normal problems,” believed to be answerable with the theories we know already, but I think they serve to illustrate the case.

Looking beyond the foundations, we have normal problems like predicting the solar cycle and solar weather – difficult because the system is highly nonlinear and partly turbulent, but nothing that we expect to be in conflict with existing theories. Then there is high-temperature superconductivity, a well-studied but theoretically not well-understood phenomenon, due to the lack of quasi-particles in such materials. And so on.

So these are the problems we study when business goes as normal. But then there are problems that can potentially change paradigms, problems that signal a “crisis” in the Kuhnian terminology.

Crisis Problems

The obvious crisis problems are observations that cannot be explained with the known theories.

I do not count most of the observations attributed to dark matter and dark energy as crisis problems. That’s because most of this data can be explained well enough by just adding two new contributions to the universe’s energy budget. You will undoubtedly complain that this does not give us a microscopic description, but there’s no data for the microscopic structure either, so no problem to pinpoint.

But some dark matter observations really are “crisis problems.” These are unexplained correlations, regularities in galaxies that are hard to account for with cold dark matter, such as the Tully-Fisher relation or the strange ability of dark matter to seemingly track the distribution of matter. There is as yet no satisfactory explanation for these observations using the known theories. Modifying gravity successfully explains some of it, but that brings other problems. So here is a crisis! And it’s a good crisis, I dare say, because we have data and that data is getting better by the day.

This isn’t the only good observational crisis problem we presently have in the foundations of physics. One of the oldest ones, but still alive and kicking, is the magnetic moment of the muon. Here we have a long-standing mismatch between theoretical prediction and measurement that has still not been resolved. Many theorists take this as an indication that this cannot be explained with the standard model and a new, better, theory is needed.

A couple more such problems exist, or maybe I should say persist. The DAMA measurements for example. DAMA is an experiment that searches for dark matter. They have been getting a signal of unknown origin with an annual modulation, and have kept track of it for more than a decade. The signal is clearly there, but if it was dark matter that would conflict with other experimental results. So DAMA sees something, but no one knows what it is.

There is also the still-perplexing LSND data on neutrino oscillation that doesn’t want to agree with any other global parameter fit. Then there is the strange discrepancy in the measurement results for the proton radius using two different methods, and a similar story for the lifetime of the neutron. And there are the recent tensions in the measurement of the Hubble rate using different methods, which may or may not be something to worry about.

Of course each of these data anomalies might have a “normal” explanation in the end. It could be a systematic measurement error or a mistake in a calculation or an overlooked additional contribution. But maybe, just maybe, there’s more to it.

So that’s one type of “crisis problem” – a conflict between theory and observations. But besides these there is an utterly different type of crisis problem, which is entirely on the side of theory-development. These are problems of internal consistency.

A problem of internal consistency occurs if you have a theory that predicts conflicting, ambiguous, or just nonsense observations. A typical example for this would be probabilities that become larger than one, which is inconsistent with a probabilistic interpretation. Indeed, this problem was the reason physicists were very certain the LHC would see some new physics. They couldn’t know it would be the Higgs, and it could have been something else – like an unexpected change to the weak nuclear force – but the Higgs it was. It was restoring internal consistency that led to this successful prediction.

Historically, studying problems of consistency has led to many stunning breakthroughs.

The “UV catastrophe,” in which a thermal source emits an infinite amount of light at small wavelengths, is such a problem. Clearly that’s not consistent with a meaningful physical theory in which observable quantities should be finite. (Note, though, that this is a conflict with an assumption. Mathematically there is nothing wrong with infinity.) Planck solved this problem, and the solution eventually led to the development of quantum mechanics.
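
For a quick illustration of what goes wrong, here is a small Python sketch (with an arbitrarily chosen temperature of 5000 K) comparing the classical Rayleigh-Jeans expression for the spectral radiance with Planck’s law: the classical value grows without bound at short wavelengths, while Planck’s stays finite.

```python
import numpy as np

# Classical Rayleigh-Jeans law vs Planck's law for a thermal source at T = 5000 K.
# The classical expression diverges at short wavelengths (the "UV catastrophe").
h, c, k = 6.626e-34, 2.998e8, 1.381e-23            # SI constants
T = 5000.0
wavelengths = np.array([1e-5, 1e-6, 1e-7, 1e-8])   # meters

rayleigh_jeans = 2 * c * k * T / wavelengths**4
planck = (2 * h * c**2 / wavelengths**5) / np.expm1(h * c / (wavelengths * k * T))

for lam, rj, pl in zip(wavelengths, rayleigh_jeans, planck):
    print(f"lambda = {lam:.0e} m:  Rayleigh-Jeans = {rj:.2e}   Planck = {pl:.2e}")
```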

Another famous problem of consistency is that Newtonian mechanics was not compatible with the space-time symmetries of electrodynamics. Einstein resolved this disagreement, and got special relativity. Dirac later resolved the contradiction between quantum mechanics and special relativity which, eventually, gave rise to quantum field theory. Einstein further removed contradictions between special relativity and Newtonian gravity, getting general relativity.

All these have been well-defined, concrete, problems.

But most theoretical problems in the foundations of physics today are not of this sort. Yes, it would be nice if the three forces of the standard model could be unified to one. It would be nice, but it’s not necessary for consistency. Yes, it would be nice if the universe was supersymmetric. But it’s not necessary for consistency. Yes, it would be nice if we could explain why the Higgs mass is not technically natural. But it’s not inconsistent if the Higgs mass is just what it is.

It is well documented that Einstein and even more so Dirac were guided by the beauty of their theories. Dirac in particular was fond of praising the use of mathematical elegance in theory-development. Their personal motivation, however, is only of secondary interest. In hindsight, the reason they succeeded was that they were working on good problems to begin with.

There are only a few real theory problems in the foundations of physics today, but they do exist. One is the lacking quantization of gravity. Just lumping the standard model together with general relativity doesn’t work mathematically, and we don’t know how to do it properly.

Another serious problem with the standard model alone is the Landau pole in one of the coupling constants. That means that the strength of one of the forces becomes infinitely large. This is non-physical for the same reason the UV catastrophe was, so something must happen there. This problem has received little attention because most theorists presently believe that the standard model becomes unified long before the Landau pole is reached, making the extrapolation redundant.

And then there are some cases in which it’s not clear what type of problem we’re dealing with. The non-convergence of the perturbative expansion is one of these. Maybe it’s just a question of developing better math, or maybe there’s something we get really wrong about quantum field theory. The case is similar for Haag’s theorem. Also the measurement problem in quantum mechanics I find hard to classify. Appealing to a macroscopic process in the theory’s axioms isn’t compatible with the reductionist ideal, but then again that is not a fundamental problem, but a conceptual worry. So I’m torn about this one.

But as far as crisis problems in theory development are concerned, the lesson from the history of physics is clear: Problems are promising research topics if they really are problems, which means you must be able to formulate a mathematical disagreement. If, in contrast, the supposed problem is that you simply do not like a particular aspect of a theory, chances are you will just waste your time.



Homework assignment: Convince yourself that the mini-problem shown in the top image is mathematically ill-posed unless you appeal to Occam’s razor.

Wednesday, January 31, 2018

Physics Facts and Figures

Physics is old. Together with astronomy, it’s the oldest scientific discipline. And the age shows. Compared to other scientific areas, physics is a slowly growing field. I learned this from a 2010 paper by Larsen and von Ins. The authors counted the number of publications per scientific area. In physics, the number of publications grows at an annual rate of 3.8%. This means it currently takes 18 years for the body of physics literature to double. For comparison, publications in electrical engineering and in technology grow at annual rates of 9% and 7.5%, with doubling times of 8 years and 9.6 years, respectively.
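
If you want to check the arithmetic: assuming simple exponential growth, the doubling time follows from the growth rate as ln(2)/ln(1+rate). A quick sketch:

```python
import math

# Doubling times implied by the quoted annual growth rates, assuming simple
# exponential growth: T_double = ln(2) / ln(1 + rate).
def doubling_time(annual_rate):
    return math.log(2) / math.log(1 + annual_rate)

for field, rate in [("physics", 0.038), ("electrical engineering", 0.09), ("technology", 0.075)]:
    print(f"{field:>22s}: {doubling_time(rate):4.1f} years")
```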

The total number of scientific papers closely tracks the total number of authors, irrespective of discipline. The relation between the two can be approximately fit by a power law, so that the number of papers is equal to the number of authors to the power of β. But this number, β, turns out to be field-specific, which I learned from a more recent paper: “Allometric Scaling in Scientific Fields” by Dong et al.

In mathematics the exponent β is close to one, which means that the number of papers increases linearly with the number of authors. In physics, the exponent is smaller than one, approximately 0.877. And not only this, it has been decreasing in the last ten years or so. This means we are seeing here diminishing returns: More physicists result in a less than proportional growth of output.
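
In case you wonder how such an exponent is obtained: a common way is a straight-line fit in log-log space. Here is a small sketch with synthetic data (the 0.877 is just plugged in to generate the fake data; the real analysis is of course done on actual publication records):

```python
import numpy as np

# Recover a power-law exponent from synthetic data via a log-log linear fit.
rng = np.random.default_rng(1)
authors = np.logspace(2, 5, 50)                          # fake field sizes
papers = authors**0.877 * rng.lognormal(0.0, 0.1, 50)    # noisy power law

beta, log_prefactor = np.polyfit(np.log(authors), np.log(papers), 1)
print(f"fitted exponent beta = {beta:.3f}")              # close to the input 0.877
```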

Figure 2 from Dong et al, Scientometrics 112, 1 (2017) 583.
β is the exponent by which the number of papers
scales with the number of authors.
The paper also found some fun facts. For example, a few sub-fields of physics are statistical outliers in that their researchers produce more than the average number of papers. Dong et al quantified this by a statistical measure that unfortunately doesn’t have an easy interpretation. Either way, they offer a ranking of the most productive sub-fields in physics, which is (in order):

(1) Physics of black holes, (2) Cosmology, (3) Classical general relativity, (4) Quantum information, (5) Matter waves, (6) Quantum mechanics, (7) Quantum field theory in curved space-time, (8) General theory and models of magnetic ordering, (9) Theories and models of many-electron systems, (10) Quantum gravity.

Isn’t it interesting that this closely matches the fields that tend to attract media attention?

Another interesting piece of information that I found in the Dong et al paper is that in all sub-fields the exponent relating the number of citations to the number of authors is larger than one, approximately 1.1. This means that, on average, the more people work in a sub-field, the more citations they receive. I think this is relevant information for anyone who wants to make sense of citation indices.

A third paper that I found very insightful for understanding the research dynamics in physics is “A Century of Physics” by Sinatra et al. Among other things, they analyzed the frequency with which sub-fields of physics reference their own or other sub-fields. The most self-referential sub-fields, they conclude, are nuclear physics and the physics of elementary particles and fields.

Papers from these two sub-fields also have by far the lowest expected “ultimate impact,” which the authors define as the typical number of citations a paper attracts over its lifetime, where the lifetime is the typical number of years in which the paper attracts citations (see figure below). In nuclear physics (labelled NP in the figure) and particle physics (EPF), interest in papers is short-lived and the overall impact remains low. By this measure, the category with the highest impact is electromagnetism, optics, acoustics, heat transfer, classical mechanics and fluid dynamics (labeled EOAHCF).

Figure 3 e from Sinatra et al, Nature Physics 11, 791–796 (2015).

A final graph from the Sinatra et al paper which I want to draw your attention to is the productivity of physicists. As we saw earlier, the exponent relating the total number of papers to the total number of authors is somewhat below 1 and has been falling in the recent decade. However, if you look at the number of papers per author, you find that it has been sharply rising since the early 1990s, ie, basically ever since there was email.

Figure 1 e from Sinatra et al, Nature Physics 11, 791–796 (2015)

This means that the reason physicists seem so much more productive today than they used to be is that they collaborate more. And maybe it’s not so surprising, because there is a strong incentive for that: If you and I each write a paper, we each have one paper. But if we agree to co-author each other’s paper, we’ll both have two. I don’t mean to accuse scientists of deliberate gaming, but it’s obvious that counting papers by their number puts single authors at a disadvantage.

So this is what physics is, in 2018. An ageing field that doesn’t want to accept its dwindling relevance.

Thursday, January 25, 2018

More Multiverse Madness

The “multiverse” – the idea that our universe is only one of infinitely many – enjoys some credibility, at least in the weirder corners of theoretical physics. But there are good reasons to be skeptical, and I’m here to tell you all of them.

Before we get started, let us be clear what we are talking about because there isn’t only one but multiple multiverses. The most commonly discussed ones are: (a) The many worlds interpretation of quantum mechanics, (b) eternal inflation, and (c) the string theory landscape.

The many worlds interpretation is, guess what, an interpretation. At least to date, it makes no predictions that differ from other interpretations of quantum mechanics. So it’s up to you whether you believe it. And that’s all I have to say about this.

Eternal inflation is an extrapolation of inflation, which is an extrapolation of the concordance model, which is an extrapolation of the present-day universe back in time. Eternal inflation, like inflation, works by inventing a new field (the “inflaton”) that no one has ever seen because we are told it vanished long ago. Eternal inflation is a story about the quantum fluctuations of the now-vanished field and what these fluctuations did to gravity, which no one really knows, but that’s the game.

There is little evidence for inflation, and zero evidence for eternal inflation. But there is a huge number of models for both because the available data don’t constrain the models much. Consequently, theorists theorize the hell out of it. And the more papers they write about it, the more credible the whole thing looks.

And then there’s the string theory landscape, the graveyard of disappointed hopes. It’s what you get if you refuse to accept that string theory does not predict which particles we observe.

String theorists originally hoped that their theory would explain everything. When it became clear that didn’t work, some string theorists declared that if they can’t do it then it’s not possible, hence everything that string theory allows must exist – and there’s your multiverse. But you could do the same thing with any other theory if you don’t draw on sufficient observational input to define a concrete model. The landscape, therefore, isn’t so much a prediction of string theory as a consequence of string theorists’ insistence that theirs is a theory of everything.

Why then, does anyone take the multiverse seriously? Multiverse proponents usually offer the following four arguments in favor of the idea:

1. It’s falsifiable!

Our Bubble Universe.
Img: NASA/WMAP.
There are certain cases in which some version of the multiverse leads to observable predictions. The most commonly named example is that our universe could have collided with another one in the past, which could have left an imprint in the cosmic microwave background. There is no evidence for this, but of course this doesn’t rule out the multiverse. It just means we are unlikely to live in this particular version of the multiverse.

But (as I explained here) just because a theory makes falsifiable predictions doesn’t mean it’s scientific. A scientific theory should at least have a plausible chance of being correct. If there are infinitely many ways to fudge a theory so that the alleged prediction is no more, that’s not scientific. This malleability is a problem already with inflation, and extrapolating it to eternal inflation only makes things worse. Lumping the string landscape and/or many worlds on top of it doesn’t help parsimony either.

So don’t get fooled by this argument, it’s just wrong.

2. Ok, so it’s not falsifiable, but it’s sound logic!

Step two is the claim that the multiverse is a logical consequence of well-established theories. But science isn’t math. And even if you trust the math, no deduction is better than the assumptions you started from and neither string theory nor inflation are well-established. (If you think they are you’ve been reading the wrong blogs.)

I would agree that inflation is a good effective model, but so is approximating the human body as a bag of water, and see how far that gets you making sense of the evening news.

But the problem with the claim that logic suffices to deduce what’s real runs deeper than personal attachment to pretty ideas. The much bigger problem which looms here is that scientists mistake the purpose of science. This can nicely be demonstrated by a phrase in Sean Carroll’s recent paper. In defense of the multiverse he writes “Science is about what is true.” But, no, it’s not. Science is about describing what we observe. Science is about what is useful. Mathematics is about what is true.

Fact is, the multiverse extrapolates known physics by at least 13 orders of magnitude (in energy) beyond what we have tested and then adds unproved assumptions, like strings and inflatons. That’s not science, that’s math fiction.

So don’t buy it. Just because they can calculate something doesn’t mean they describe nature.

3. Ok, then. So it’s neither falsifiable nor sound logic, but it’s still business as usual.

The gist of this argument, also represented in Sean Carroll’s recent paper, is that we can assess the multiverse hypothesis just like any other hypothesis, by using Bayesian inference.

Bayesian inference is a way of probability assessment in which you update your information to arrive at the most likely hypothesis. Eg, suppose you want to know how many people on this planet have curly hair. For starters you would estimate it’s probably less than the total world-population. Next, you might assign equal probability to all possible percentages to quantify your lack of knowledge. This is called a “prior.”

You would then probably think of people you know and give a lower probability for very large or very small percentages. After that, you could go and look at photos of people from different countries and count the curly-haired fraction, scale this up by population, and update your estimate. In the end you would get reasonably accurate numbers.

If you replace words with equations, that’s how Bayesian inference works.
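
If you want to see the equations in action, here is a minimal sketch of the curly-hair estimate as a Beta-Binomial update, with made-up counts. The flat prior corresponds to assigning equal probability to all percentages; the refined prior described above would just be a different choice of the two Beta parameters.

```python
# Minimal Beta-Binomial sketch of the curly-hair estimate (made-up numbers).
alpha, beta = 1.0, 1.0            # Beta(1,1): equal probability for all fractions

curly, total = 38, 200            # hypothetical counts from looking at photos
alpha += curly                    # conjugate update: add the successes ...
beta += total - curly             # ... and the failures to the prior parameters

posterior_mean = alpha / (alpha + beta)
print(f"posterior mean fraction: {posterior_mean:.3f}")

world_population = 7.6e9          # rough 2018 figure
print(f"scaled up to the world population: {posterior_mean * world_population:.2e}")
```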

You can do pretty much the same for the cosmological constant. Make some guess for the prior, take into account observational constraints, and you will get some estimate for a likely value. Indeed, that’s what Steven Weinberg famously did, and he ended up with a result that wasn’t too badly wrong. Awesome.

But just because you can do Bayesian inference doesn’t mean there must be a planet Earth for each fraction of curly-haired people. You don’t need all these different Earths because in a Bayesian assessment the probability represents your state of knowledge, not the distribution of an actual ensemble. Likewise, you don’t need a multiverse to update the likelihood of parameters when taking into account observations.

So to the extent that it’s science as usual you don’t need the multiverse.

4. So what? We’ll do it anyway.

The fourth, and usually final, line of defense is that if we just assume the multiverse exists, we might learn something, and that could lead to new insights. It’s the good, old Gospel of Serendipity.

In practice this means that multiverse proponents insist on interpreting probabilities for parameters as those of an actual ensemble of universes, ie the multiverse. Then they have the problem of where to get the probability distribution from, a thorny issue since the ensemble is infinitely large. This is known as the “measure problem” of the multiverse.

To solve the problem, they have to construct a probability distribution, which means they must invent a meta-theory for the landscape. Of course that’s just another turtle in the tower and will not help in finding a theory of everything. And worse, since there are infinitely many such distributions, you better hope they’ll find one that doesn’t need more assumptions than the standard model already has, because if it does, the multiverse will be shaved off by Occam’s razor.

But let us assume the best possible outcome, that they find a measure for the multiverse according to which the parameters of the standard model are likely, and this measure indeed needs fewer assumptions than just postulating the standard model parameters. That would be pretty cool and I would be duly impressed. But even in this case we don’t need the multiverse! All we need is the equation to calculate what’s presumably a maximum of a probability distribution. Thus, again, Occam’s razor should remove the multiverse.

You could then of course insist that the multiverse is a possible interpretation, so you are allowed to believe in it. And that’s all fine by me. Believe whatever you want, but don’t confuse it with science.


The multiverse and other wild things that physicists believe in are subject of my upcoming book “Lost in Math” which is now available for preorder.

Wednesday, January 17, 2018

Pure Nerd Fun: The Grasshopper Problem

illustration of grasshopper.
[image: awesomedude.com]
It’s a sunny afternoon in July and a grasshopper lands on your lawn. The lawn has an area of a square meter. The grasshopper lands at a random place and then jumps 30 centimeters. Which shape must the lawn have so that the grasshopper is most likely to land on the lawn again after jumping?

I know, sounds like one of these contrived but irrelevant math problems that no one cares about unless you can get famous solving it. But the answer to this question is more interesting than it seems. And it’s more about physics than it is about math or grasshoppers.

It turns out the optimal shape of the lawn greatly depends on how far the grasshopper jumps compared to the square root of the area. In my opening example this ratio would have been 0.3, in which case the optimal lawn-shape looks like an inkblot:

From Figure 3 of arXiv:1705.07621



No, it’s not round! I learned this from a paper by Olga Goulko and Adrian Kent, which was published in the Proceedings of the Royal Society (arXiv version here). You can of course rotate the lawn around its center without changing the probability of the grasshopper landing on it again. So, the space of all solutions has the symmetry of a disk. But the individual solutions don’t – the symmetry is broken.

You might know Adrian Kent from his work on quantum foundations, so how come his sudden interest in landscaping? The reason is that problems similar to this appear in certain types of Bell-inequalities. These inequalities, which are commonly employed to identify truly quantum behavior, often end up being combinatorial problems on the unit sphere. I can just imagine the authors sitting in front of this inequality, thinking, damn, there must be a way to calculate this.

As so often, the problem isn’t mathematically difficult to state but dang hard to solve. Indeed, they haven’t been able to derive a solution. In their paper, the authors offer estimates and bounds, but no full solution. Instead what they did (you will love this) is to map the problem back to a physical system. This physical system they configure so that it will settle on the optimal solution (ie optimal lawn-shape) at zero temperature. Then they simulate this system on the computer.

Concretely, they simulate the lawn of fixed area by randomly scattering squares over a template space that is much larger than the lawn. They allow a certain interaction between the little pieces of lawn, and then they calculate the probability for the pieces to move, depending on whether or not such a move will improve the grasshopper’s chance of staying on the green. The lawn is allowed to temporarily go into a less optimal configuration so that it will not get stuck in a local minimum. In the computer simulation, the temperature is then gradually decreased, which means that the lawn freezes and thereby approaches its optimal configuration.
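
To make this a little more concrete, here is a highly simplified Python sketch of such an annealing procedure. It is not the algorithm from the paper: the grid size, the number of steps, the cooling schedule, and the single-cell moves are all choices made only for illustration.

```python
import numpy as np

# Toy annealing for the grasshopper lawn: the lawn is a set of grid cells of
# fixed total area 1, the objective is the chance that a grasshopper starting
# on the lawn and jumping distance d in a random direction lands on the lawn
# again, and single cells are moved with a Metropolis rule while cooling.
rng = np.random.default_rng(0)
L, n_cells, d, n_dirs = 40, 400, 0.3, 16
cell = 1.0 / np.sqrt(n_cells)                 # cell size so that the lawn area is 1
angles = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
jumps = np.stack([np.cos(angles), np.sin(angles)], axis=1) * d / cell  # jumps in cell units

lawn = np.zeros((L, L), dtype=bool)           # start from a square patch in the middle
s = int(np.sqrt(n_cells))
lawn[(L - s) // 2:(L - s) // 2 + s, (L - s) // 2:(L - s) // 2 + s] = True

def retention(lawn):
    """Fraction of (start cell, jump direction) pairs that land on the lawn again."""
    starts = np.argwhere(lawn)
    hits = 0
    for jump in jumps:
        target = np.rint(starts + jump).astype(int)
        ok = (target[:, 0] >= 0) & (target[:, 0] < L) & (target[:, 1] >= 0) & (target[:, 1] < L)
        hits += lawn[target[ok, 0], target[ok, 1]].sum()
    return hits / (len(starts) * n_dirs)

T, p = 0.01, retention(lawn)
for step in range(5000):                      # toy-sized run
    a = np.argwhere(lawn)[rng.integers(n_cells)]
    b = np.argwhere(~lawn)[rng.integers(L * L - n_cells)]
    lawn[tuple(a)], lawn[tuple(b)] = False, True           # propose moving one lawn cell
    p_new = retention(lawn)
    if p_new >= p or rng.random() < np.exp((p_new - p) / T):
        p = p_new                                           # accept the move
    else:
        lawn[tuple(a)], lawn[tuple(b)] = True, False        # reject and undo
    T *= 0.999                                              # cooling schedule

print(f"retention probability after annealing: {p:.3f}")
```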

In the video below you see examples for different values of d, which is the above mentioned ratio between the distance the grasshopper jumps and the square root of the lawn-area:





For very small d, the optimal lawn is almost a disc (not shown in the video). For increasingly larger d, it becomes a cogwheel, where the number of cogs depends on d. If d increases above approximately 0.56 (the inverse square root of π), the lawn starts falling apart into disconnected pieces. There is a transition range in which the lawn doesn’t seem to settle on any particular shape. Beyond 0.65, there comes a shape which they refer to as a “three-bladed fan”, and after that come stripes of varying lengths.

This is summarized in the figure below, where the red line is the probability of the grasshopper to stay on the lawn for the optimal shape:
Figure 12 of arXiv:1705.07621

The authors did a number of checks to make sure the results aren’t numerical artifacts. For example, they checked that the lawn’s shape doesn’t depend on using a square grid for the simulation. But, no, a hexagonal grid gives the same results. They told me by email they are looking into the question whether the limited resolution might hide that the lawn shapes are actually fractal, but there doesn’t seem to be any indication for that.

I find this a super-cute example of how many surprises seemingly dull and simple math problems can harbor!

As a bonus, you can get a brief explanation of the paper from the authors themselves in this short video.

Tuesday, January 16, 2018

Book Review: “The Dialogues” by Clifford Johnson

Clifford Johnson is a veteran of the science blogosphere, a long-term survivor, around already when I began blogging and one of the few still at it today. He is a professor in the Department of Physics and Astronomy at the University of Southern California (in LA).

I had the pleasure of meeting Clifford in 2007. Who’d have thought back then that 10 years later we would both be in the midst of publishing a popular science book?

Clifford’s book was published by MIT Press just two months ago. It’s titled The Dialogues: Conversations about the Nature of the Universe and it’s not just a book, it’s a graphic novel! Yes, that’s right. Clifford doesn’t only write, he also draws.

His book is a collection of short stories which are mostly physics-themed, but also touch on overarching questions like how does science work or what’s the purpose of basic research to begin with. I would characterize these stories as conversation starters. They are supposed to make you wonder.

But just because it contains a lot of pictures doesn’t mean The Dialogues is a shallow book. On the contrary, a huge amount of physics is packed into it, from electrodynamics to the multiverse, the cosmological constant, theories of everything, and gravitational waves. The reader also finds references for further reading in case they wish to learn more.

I found the drawings are put to good use and often add to the explanation. The Dialogues is also, I must add, a big book. With more than 200 illustrated pages, it seems to me that offering it for less than $30 is a real bargain!

I would recommend this book to everyone who has an interest in the foundations of physics. Even if you don’t read it, it will still look good on your coffee table ;)




Win a copy!

I bought the book when it appeared, but later received a free review copy. Now I have two and I am giving one away for free!

The book will go to the first person who submits a comment to this blogpost (not elsewhere) listing 10 songs that use physics-themed phrases in the lyrics (not just in the title). Overly general words (such as “moon” or “light”) or words that are non-physics terms which just happen to have a technical meaning (such as “force” or “power”) don’t count.

The time-stamp of your comment will decide who was first, so please do not send your list to me per email. Also, please only make a submission if you are willing to provide me with a mailing address.

Good luck!

Update:
The book is gone.

Wednesday, January 10, 2018

Superfluid dark matter gets seriously into business

very dark fluid
Most matter in the universe isn’t like the stuff we are made of. Instead, it’s a thinly distributed, cold, medium which rarely interacts both with itself and with other kinds of matter. It also doesn’t emit light, which is why physicists refer to it as “dark matter.”

A recently proposed idea, according to which dark matter may be superfluid, has now become more concrete, thanks to a new paper by Justin Khoury and collaborators.

Astrophysicists invented dark matter because a whole bunch of observations of the cosmos do not fit with Einstein’s theory of general relativity.

According to general relativity, matter curves space-time and, in return, the curvature dictates the motion of matter. Problem is, if you calculate the response of space-time to all the matter we know, then the observed motion doesn’t fit the prediction from the calculation.

This problem exists for galactic rotation curves, velocity distributions in galaxy clusters, for the properties of the cosmic microwave background, for galactic structure formation, gravitational lensing, and probably some more that I’ve forgotten or never heard about in the first place.

But dark matter is only one way to explain the observation. We measure the amount of matter and we observe its motion, but the two pieces of information don’t match up with the equations of general relativity. One way to fix this mismatch is to invent dark matter. The other way to fix this is to change the equations. This second option has become known as “modified gravity.”

There are many types of modified gravity and most of them work badly. That’s because it’s easy to break general relativity and produce a mess that’s badly inconsistent with the high-precision tests of gravity that we have done within our solar system.

However, it has been known since the 1980s that some types of modified gravity explain observations that dark matter does not explain. For example, the effects of dark matter in galaxies become relevant not at a certain distance from the galactic center, but below a certain acceleration. Even more perplexing, this threshold of acceleration is related to the cosmological constant. Both of these features are difficult to account for with dark matter. Astrophysicists have also established a relation between the brightness of certain galaxies and the velocities of their outermost stars. Named “Baryonic Tully Fisher Relation” after its discoverers, it is also difficult to explain with dark matter.
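
If you want to see the numbers behind that statement about the acceleration threshold, here is a quick order-of-magnitude check with assumed round-number inputs (the empirical scale is usually quoted as about 1.2 × 10^-10 m/s²; up to factors of order one, c times the Hubble rate sets the same scale as the cosmological constant):

```python
# Order-of-magnitude comparison of the empirical acceleration scale with c*H0,
# using assumed round-number inputs.
c = 2.998e8            # speed of light in m/s
H0 = 2.2e-18           # Hubble rate in 1/s (roughly 67 km/s/Mpc)
a0 = 1.2e-10           # empirical acceleration threshold in m/s^2

print(f"c * H0           = {c * H0:.2e} m/s^2")
print(f"c * H0 / (2 pi)  = {c * H0 / (2 * 3.14159265):.2e} m/s^2")
print(f"a0 from galaxies = {a0:.2e} m/s^2")
```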

On the other hand, modified gravity works badly in other cases, notably in the early universe where dark matter is necessary to get the cosmic microwave background right, and to set up structure formation so that the result agrees with what we see.

For a long time I have been rather agnostic about this, because I am more interested in the structure of fundamental laws than in the laws themselves. Dark matter works by adding particles to the standard model of particle physics. Modified gravity works by adding fields to general relativity. But particles are fields and fields are particles. And in both cases, the structure of the laws remains the same. Sure, it would be great to settle just exactly what it is, but so what if there’s one more particle or field.

It was a detour that got me interested in this: Fluid analogies for gravity, a topic I have worked on for a few years now. Turns out that certain kinds of fluids can mimic curved space-time, so that perturbations (say, density fluctuations) in the fluid travel just like they would travel under the influence of gravity.

The fluids under consideration here are usually superfluid condensates with an (almost) vanishing viscosity. The funny thing is now that if you look at the mathematical description of some of these fluids, they look just like the extra fields you need for modified gravity! So maybe, then, modified gravity is really a type of matter in the end?

I learned about this amazing link three years ago from a paper by Lasha Berezhiani and Justin Khoury. They have a type of dark matter which can condense (like vapor on glass, if you want a visual aid) if a gravitational potential is deep enough. This condensation happens within galaxies, but not in intergalactic space, because there the potential isn’t deep enough. The effect that we assign to dark matter, then, comes partly from the gravitational pull of the fluid and partly from the actual interaction with the fluid.

If the dark matter is superfluid, it has long range correlations that give rise to the observed regularities like the Tully-Fisher relation and the trends in rotation curves. In galaxy clusters, on the other hand, the average density of (normal) matter is much lower and most of the dark matter is not in the superfluid phase. It then behaves just like normal dark matter.

The main reason I find this idea convincing is that it explains why some observations are easier to account for with dark matter and others with modified gravity: It’s because dark matter has phase transitions! It behaves differently at different temperatures and densities.

In solar systems, for example, the density of (normal) matter is strongly peaked and the gradient of the gravitational field near a sun is much larger than in a galaxy on the average. In this case, the coherence in the dark matter fluid is destroyed, which is why we do not observe effects of modified gravity in our solar system. And in the early universe, the temperature is too high and dark matter just behaves like a normal fluid.

In 2015, the idea with the superfluid dark matter was still lacking details. But two months ago, Khoury and his collaborators came out with a new paper that fills in some of the missing pieces.

Their new calculations take into account that in general the dark matter will be a mixture of superfluid and normal fluid, and both phases will make a contribution to the gravitational pull. Just what the composition is depends on the gravitational potential (caused by all types of matter) and the equation of state of the superfluid. In the new paper, the authors parameterize the general effects and then constrain the parameters so that they fit observations.

Yes, there are new parameters, but not many. They claim that the model can account for all the achievements of normal particle dark matter, plus the benefits of modified gravity on top.

And while this approach very much looks like modified gravity in the superfluid phase, it is immune to the constraint from the measurement of gravitational waves with an optical counterpart. That is because both gravitational waves and photons couple the same way to the additional stuff and hence should arrive at the same time – as observed.

It seems to me, however, that in the superfluid model one would in general get a different dark matter density if one reconstructs it from gravitational lensing than if one reconstructs it from kinetic measurements. That is because the additional interaction with the superfluid is felt only by the baryons. Indeed, this discrepancy could be used to test whether the idea is correct.

Khoury et al don’t discuss the possible origin of the fluid, but I like the interpretation put forward by Erik Verlinde. According to Verlinde, the extra-fields which give rise to the effects of dark matter are really low-energy relics of the quantum behavior of space-time. I will admit that this link is presently somewhat loose, but I am hopeful that it will become tighter in the next years. If so, this would mean that dark matter might be the key to unlocking the – still secret – quantum nature of gravity.

I consider this one of the most interesting developments in the foundations of physics I have seen in my lifetime. Superfluid dark matter is without doubt a pretty cool idea.

Tuesday, January 09, 2018

Me, elsewhere

Beginning 2018, I will no longer write for Ethan Siegel’s Forbes collection “Starts With a Bang.” Instead, I will write a semi-regular column for Quanta Magazine, the first of which -- about asymptotically safe gravity -- appeared yesterday.

In contrast to Forbes, Quanta Magazine keeps the copyright, which means that the articles I write for them will not be mirrored on this blog. You actually have to go over to their site to read them. But if you are interested in the foundations of physics, take my word that subscribing to Quanta Magazine is well worth your time, not so much because of me, but because their staff writers have so far done an awesome job of covering relevant topics without succumbing to hype.

I also wrote a review of Jim Baggott’s book “Origins: The Scientific Story of Creation” which appeared in the January issue of Physics World. I much enjoyed Baggott’s writing and promptly bought another one of his books. Physics World  doesn’t want me to repost the review in text, but you can read the PDF here.

Finally, I wrote a contribution to the proceedings of a philosophy workshop I attended last year. In this paper, I summarize my misgivings with arguments from finetuning. You can now find it on the arXiv.

If you want to stay up to date on my writing, follow me on Twitter or on Facebook.

Wednesday, January 03, 2018

Sometimes I believe in string theory. Then I wake up.

They talk about me.
Grumpy Rainbow Unicorn.
[Image Source.]

And I can’t blame them. Because nothing else is happening on this planet. There’s just me and my attempt to convince physicists that beauty isn’t truth.

Yes, I know it’s not much of an insight that pretty ideas aren’t always correct. That’s why I objected when my editor suggested I title my book “Why Beauty isn’t Truth.” Because, duh, it’s been said before and if I wanted to be stale I could have written about how we’re all made of stardust, aah-choir, chimes, fade and cut.

Nature has no obligation to be pretty, that much is sure. But the truth seems hard to swallow. “Certainly she doesn’t mean that,” they say. Or “She doesn’t know what she’s doing.” Then they explain things to me. Because surely I didn’t mean to say that much of what goes on in the foundations of physics these days is a waste of time, did I? And even if, could I please not do this publicly, because some people have to earn a living from it.

They are “good friends,” you see? Good friends who want me to believe what they believe. Because believing has bettered their lives.

And certainly I can be fixed! It’s just that I haven’t yet seen the elegance of string theory and supersymmetry. Don’t I know that elegance is a sign of all successful theories? It must be that I haven’t understood how beauty has been such a great guide for physicists in the past. Think of Einstein and Dirac and, erm, there must have been others, right? Or maybe it’s that I haven’t yet grasped that pretty, natural theories are so much better. Except possibly for the cosmological constant, which isn’t pretty. And the Higgs-mass. And, oh yeah, the axion. Almost forgot about that, sorry.

But it’s not that I don’t think unified symmetry is a beautiful idea. It’s a shame, really, that we have these three different symmetries in particle physics. It would be so much nicer if we could merge them to one large symmetry. Too bad that the first theories of unification led to the prediction of proton decay and were ruled out. But there are a lot other beautiful unification ideas left to work on. Not all is lost!

And it’s not that I don’t think supersymmetry is elegant. It combines two different types of particles and how cool is that? It has candidates for dark matter. It alleviates the problem with the cosmological constant. And it aids gauge coupling unification. Or at least it did until LHC data interfered with our plans to prettify the laws of nature. Dang.

And it’s not that I don’t see why string theory is appealing. I once set out to become a string theorist. I do not kid you. I ate my way through textbooks and it was all totally amazing, how much you get out of the rather simple idea that particles shouldn’t be points but strings. Look how much of the theory’s construction is dictated by consistency. And note how neatly it fits with all that we already know.

But then I got distracted by a disturbing question: Do we actually have evidence that elegance is a good guide to the laws of nature?

The brief answer is no, we have no evidence. The long answer is in my book and, yes, I will mention the-damned-book until everyone is sick of it. The summary is: Beautiful ideas sometimes work, sometimes they don’t. It’s just that many physicists prefer to recall the beautiful ideas which did work.

And not only is there no historical evidence that beauty and elegance are good guides to find correct theories, there isn’t even a theory for why that should be so. There’s no reason to think that our sense of beauty has any relevance for discovering new fundamental laws of nature.

Sure, if you ask those who believe in string theory and supersymmetry and in grand unification, they will say that of course they know there is no reason to believe a beautiful theory is more likely to be correct. They still work on them anyway. Because what better could they do with their lives? Or with their grants, respectively. And if you work on it, you better believe in it.

I concede, not all math is equally beautiful and not all math is equally elegant. I have yet to find anyone, for example, who thinks Loop Quantum Gravity is more beautiful than string theory. And isn’t it interesting that we share this sense of what is and isn’t beautiful? Shouldn’t it mean something that so many theoretical physicists agree beautiful math is better? Shouldn’t it mean something that so many people believe in the existence of an omniscient god?

But science isn’t about belief, it’s about facts, so here are the facts: This trust in beauty as a guide, it’s not working. There’s no evidence for grand unification. There’s no evidence for supersymmetry, no evidence for axions, no evidence for moduli, for WIMPs, or for dozens of other particles that were invented to prettify theories which work just fine without them. After decades of search, there’s no evidence for any of these.

It’s not working. I know it hurts. But now please wake up.

Let me assure you I usually mean what I say and know what I do. Could I be wrong? Of course. Maybe tomorrow we’ll discover supersymmetry. Not all is lost.

Monday, December 25, 2017

Merry Christmas!


We wish you all happy holidays! Whether or not you celebrate Christmas, we hope you have a peaceful time to relax and, if necessary, recover.

I want to use the opportunity to thank all of you for reading along, for giving me feedback, and for simply being interested in science in a time when that doesn’t seem to be normal anymore. A special “Thank you” to those who have sent donations. It is reassuring to know that you value this blog. It encourages me to keep my writing available here for free.

I’ll be tied up with family business during the coming week – besides the usual holiday festivities, the twins’ 7th birthday is coming up – so blogging will be sparse for some while.

Monday, December 18, 2017

Get your protons right!

The atomic nucleus consists of protons and neutrons. The protons and neutrons are themselves made of three quarks each, held together by gluons. That much is clear. But just how do the gluons hold the quarks together?

The quarks and gluons interact through the strong nuclear force. The strong nuclear force does not have only one charge – like electromagnetism – but three charges. The charges are called “colors” and often assigned the values red, blue, and green, but this is just a way to give names to mathematical properties. These colors have nothing to do with the colors that we can see.

Colors are a handy terminology because the charges blue, red, and green can combine to neutral (“white”) and so can a color and its anti-color (blue and anti-blue, green and anti-green, and so on). The strong nuclear force is mediated by gluons which each carry two types of colors. That the gluons themselves carry a charge means that, unlike the photon, they also interact among each other.

The strong nuclear force has the peculiar property that it gets stronger the larger the distance between two quarks, while it gets weaker on short distances. A handy metaphor for this is a rubber string – the more you stretch it, the stronger the restoring force. Indeed, this string-like behavior of the strong nuclear force is where string-theory originally came from.

The strings of the strong nuclear force are gluon flux-tubes: connections between two color-charged particles along which the gluons preferentially travel. The energy of the flux-tubes is proportional to their length. If you have a particle (called a “meson”) made of a quark and an anti-quark, then the flux tube is focused on a straight line connecting the quarks. But what if you have three quarks, like inside a neutron or a proton?

According to the BBC, gluon flux-tubes (often depicted as springs, presumably because rubber is hard to illustrate) form a triangle.


This is almost identical to the illustration you find on Wikipedia:
Here is the proton on Science News:


Here is Alan Stonebreaker for the APS:



This is the proton according to Carole Kliger from the University of Maryland:

And then there is Christine Davies from the University of Glasgow who pictured the proton for Science Daily as an artwork resembling a late Kandinsky:


So which one is right?

At first sight it seems plausible that the gluons form a triangle because that requires the least stretching of strings that each connect two quarks. However, this triangular – “Δ-shaped” – configuration cannot connect three quarks and still maintain gauge-invariance. This means it violates the key principle of the strong force, which is bad and probably means this configuration is not physically possible. The Y-shaped flux-tubes on the other hand don’t suffer from that problem.

But we don’t have to guess around because this is physics and one can calculate it. This calculation cannot be done analytically but it is tractable by computer simulations. Bissey et al reported the results in a 2006 paper: “We do not find any evidence for the formation of a Delta-shaped flux-tube (empty triangle) distribution.” The conclusion is clear: The Y-shape is the preferred configuration.
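
As a geometric aside (this is not what decides the issue, which is the gauge-invariance argument and the lattice calculations above), the Y-shape also happens to have the smaller total tube length, and hence the smaller energy if the energy is proportional to length. A quick check for three quarks at the corners of an equilateral triangle:

```python
import math

# Total flux-tube length for three quarks at the corners of an equilateral
# triangle with side a: Delta-shape (a tube along each side) vs Y-shape
# (three tubes meeting at the central junction). Pure geometry, not QCD.
a = 1.0
delta_total = 3 * a                  # the perimeter of the triangle
y_total = 3 * a / math.sqrt(3)       # three spokes to the center, sqrt(3)*a in total

print(f"Delta-shape total length: {delta_total:.3f} a")
print(f"Y-shape total length:     {y_total:.3f} a")
```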

And there’s more to learn! The quarks and gluons in the proton don’t sit still, and when they move, the center of the Y moves around. If you average over all possible positions, you approximate a filled Δ-shape. (Though the temperature dependence is somewhat murky and a subject of ongoing research.)

The flux-tubes also do not always exactly lie in the plane spanned by the three quarks but can move up and down into the perpendicular direction. So you get a filled Δ that’s inflated to the middle.

This distribution of flux tubes has nothing to do with the flavor of the quarks, meaning it’s the same for the proton and the neutron and all other particles composed of three quarks, such as the one containing two charm-quarks that was recently discovered at CERN. How did CERN picture the flux tubes? As a Δ:



Now you can claim you know quarks better than CERN! It’s either a Y or a filled triangle, but not an empty triangle.

I am not a fan of depicting gluons as springs because it makes me think of charged particles in a magnetic field. But I am willing to let this pass as creative freedom. I hope, however, that it is possible to get the flux-tubes right, and so I have summed up the situation in the image below:



Tuesday, December 12, 2017

Research perversions are spreading. You will not like the proposed solution.

The ivory tower from
The Neverending Story
Science has a problem. The present organization of academia discourages research that has tangible outcomes, and this wastes a lot of money. Of course scientific research is not exclusively pursued in academia, but much of basic research is. And if basic research doesn’t move forward, science by and large risks getting stuck.

At the root of the problem is academia’s flawed reward structure. The essence of the scientific method is to test hypotheses by experiment and then keep, revise, or discard the hypotheses. However, using the scientific method is suboptimal for a scientist’s career if they are rewarded for research papers that are cited by as many of their peers as possible.

If the goal is to produce popular papers, the best tactic is to work on what already is popular, and to write papers that allow others to quickly produce further papers on the same topic. This means it is much preferable to work on hypotheses that are vague or difficult to falsify, and to stick to topics that stay inside academia. The ideal situation is an eternal debate with no outcome other than piles of papers.

You see this problem in many areas of science. It’s the origin of the reproducibility crisis in psychology and the life sciences. It’s the reason why bad scientific practices – like p-value hacking – prevail even though they are known to be bad: because they are the tactics that keep researchers in their jobs.

It’s also why in the foundations of physics so many useless papers are written, thousands of guesses about what goes on in the early universe or at energies we can’t test, pointless speculations about an infinitude of fictional universes. It’s why theories that are mathematically “fruitful,” like string theory, thrive while approaches that dare introduce unfamiliar math starve to death (adding vectors to spinors, anyone?). And it is why physicists love “solving” the black hole information loss problem: because there’s no risk any of these “solutions” will ever get tested.

If you believe this is good scientific practice, you would have to find evidence that the ease of writing many papers about an idea is correlated with that idea’s potential to describe observations. Needless to say, there isn’t any such evidence.

What we witness here is a failure of science to self-correct.

It’s a serious problem.

I know it’s obvious. I am by no means the first to point out that academia is infected with perverse incentives. Books have been written about it. Nature and Times Higher Education seem to publish a comment about this nonsense every other week. Sometimes this makes me hopeful that we’ll eventually be able to fix the problem. Because it’s in everybody’s face. And it’s eroding trust in science.

At this point I can’t even blame the public for mistrusting scientists. Because I mistrust them too.

Since it’s so obvious, you would think that funding bodies take measures to limit the waste of money. Yes, sometimes I hope that capitalism will come and rescue us! But then I read that Chinese scientists are paid bonuses for publishing in high-impact journals. Seriously. And what are the consequences? As the MIT Technology Review relays:
    “That has begun to have an impact on the behavior of some scientists. Wei and co report that plagiarism, academic dishonesty, ghost-written papers, and fake peer-review scandals are on the increase in China, as is the number of mistakes. “The number of paper corrections authored by Chinese scholars increased from 2 in 1996 to 1,234 in 2016, a historic high,” they say.”

If you think that’s some nonsense the Chinese are up to, look at what goes on in Hungary. They now have exclusive grants for top-cited scientists. According to a recent report in Nature:
    “The programme is modelled on European Research Council grants, but with a twist: only those who have published a paper in the past five years that counted among the top 10% most-cited papers in their discipline are eligible to apply.”
What would you do to get such a grant?

To begin with, you would sure as hell not work on any topic that is not already pursued by a large number of your colleagues, because you need a large body of people able to cite your work to begin with.

You would also not bother to criticize anything that happens in your chosen research area, because criticism would only serve to decrease the topic’s popularity, and hence work against your own interests.

Instead, you would strive to produce a template for research work that can easily and quickly be reproduced with small modifications by everyone in the field.

What you get with such grants, then, is more of the same. Incremental research, generated with a minimum of effort, with results that meander around the just barely scientifically viable.

Clearly, Hungary and China introduce such measures to excel in national comparisons. They not only hope for international recognition, they also want to recruit top researchers, hoping that, eventually, industry will follow. Because in the end what matters is the Gross Domestic Product.

Surely in some areas of research – those which are closely tied to technological applications – this works. Doing more of what successful people are doing isn’t generally a bad idea. But it’s not an efficient method to discover useful new knowledge.

That this is not a problem exclusive to basic research became clear to me when I read an article by Daniel Sarewitz in The New Atlantis. Sarewitz tells the story of Fran Visco, lawyer, breast cancer survivor, and founder of the National Breast Cancer Coalition:
    “Ultimately, “all the money that was thrown at breast cancer created more problems than success,” Visco says. What seemed to drive many of the scientists was the desire to “get above the fold on the front page of the New York Times,” not to figure out how to end breast cancer. It seemed to her that creativity was being stifled as researchers displayed “a lemming effect,” chasing abundant research dollars as they rushed from one hot but ultimately fruitless topic to another. “We got tired of seeing so many people build their careers around one gene or one protein,” she says.”
So, no, lemmings chasing after fruitless topics are not a problem only in basic research. Also, the above-mentioned overproduction of useless models is by no means specific to high energy physics:
    “Scientists cite one another’s papers because any given research finding needs to be justified and interpreted in terms of other research being done in related areas — one of those “underlying protective mechanisms of science.” But what if much of the science getting cited is, itself, of poor quality?

    Consider, for example, a 2012 report in Science showing that an Alzheimer’s drug called bexarotene would reduce beta-amyloid plaque in mouse brains. Efforts to reproduce that finding have since failed, as Science reported in February 2016. But in the meantime, the paper has been cited in about 500 other papers, many of which may have been cited multiple times in turn. In this way, poor-quality research metastasizes through the published scientific literature, and distinguishing knowledge that is reliable from knowledge that is unreliable or false or simply meaningless becomes impossible.”

Sarewitz concludes that academic science has become “an onanistic enterprise.” His solution? Don’t let scientists decide for themselves what research is interesting, but force them to solve problems defined by others:
    “In the future, the most valuable science institutions […] will link research agendas to the quest for improved solutions — often technological ones — rather than to understanding for its own sake. The science they produce will be of higher quality, because it will have to be.”
As one of the academics who believe that understanding how nature works is valuable for its own sake, I think the cure that Sarewitz proposes is worse than the disease. But if Sarewitz makes one thing clear in his article, it’s that if we in academia don’t fix our problems soon, someone else will. And I don’t think we’ll like it.

Wednesday, December 06, 2017

The cosmological constant is not the worst prediction ever. It’s not even a prediction.

Think fake news and echo chambers are a problem only in political discourse? Think again. You find many examples of myths and falsehoods on popular science pages. Most of them surround the hype of the day, but some of them have been repeated so often they now appear in papers, seminar slides, and textbooks. And many scientists, I have noticed with alarm, actually believe them.

I can’t say much about fields outside my specialty, but it’s obvious this happens in physics. The claim that the bullet cluster rules out modified gravity, for example, is a particularly pervasive myth. Another one is that inflation solves the flatness problem, or that there is a flatness problem to begin with.

I recently found another myth to add to my list: the assertion that the cosmological constant is “the worst prediction in the history of physics.” From RealClearScience I learned the other day that this catchy but wrong statement has even made it into textbooks.

Before I go and make my case, please ask yourself: If the cosmological constant were such a bad prediction, then what theory was ruled out by it? Nothing comes to mind? That’s because there never was such a prediction.

The myth has it that if you calculate the cosmological constant using the standard model of particle physics, the contributions from vacuum fluctuations make the result 120 orders of magnitude larger than what is observed. But this is wrong on at least five levels:

1. The standard model of particle physics doesn’t predict the cosmological constant, never did, and never will.

The cosmological constant is a free parameter in Einstein’s theory of general relativity. This means its value must be fixed by measurement. You can calculate a contribution to this constant from the standard model vacuum fluctuations. But you cannot measure this contribution by itself. So the result of the standard model calculation doesn’t matter because it doesn’t correspond to an observable. Regardless of what it is, there is always a value for the parameter in general relativity that will make the result fit with measurement.
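
Schematically – this is my notation, not a quote from anywhere – the observed value is the sum of the free parameter and the calculated contribution:

    Λ_observed = Λ_bare + Λ_vacuum

So whatever Λ_vacuum comes out of the standard model calculation, one can pick Λ_bare = Λ_observed − Λ_vacuum and reproduce the measurement.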

(And if you still believe in naturalness arguments, buy my book.)

2. The calculation in the standard model cannot be trusted.

Many theoretical physicists think the standard model is not a fundamental theory but must be amended at high energies. If that is so, then any calculation of the contribution to the cosmological constant using the standard model is wrong anyway. If there are further particles, so heavy that we haven’t yet seen them, these will affect the result. And we don’t know if there are such particles.

3. It’s idiotic to quote ratios of energy densities.

The 120 orders of magnitude refer to a ratio of energy densities. But not only is the cosmological constant usually not quoted as an energy density (but as a square thereof); in no other situation do particle physicists quote energy densities either. We usually speak about energies, in which case the ratio goes down to 30 orders of magnitude.
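
The conversion is simple dimensional analysis: in natural units an energy density is the fourth power of an energy, so

    (10^30)^4 = 10^120

and a mismatch of 120 orders of magnitude in energy density corresponds to a mismatch of only 30 orders of magnitude in energy.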

4. The 120 orders of magnitude are wrong to begin with.

The actual result from the standard model scales with the fourth power of the masses of particles, times an energy-dependent logarithm. At least that’s the best calculation I know of. You find the result in equation (515) in this (awesomely thorough) paper. If you put in the numbers, out comes a value that scales with the masses of the heaviest known particles (not with the Planck mass, as you may have been told). That’s currently 13 orders of magnitude larger than the measured value, or 52 orders larger in energy density.
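
(The same fourth-power conversion as in point 3 relates these two numbers: 13 orders of magnitude in energy correspond to 4 × 13 = 52 orders of magnitude in energy density.)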

5. No one in their right mind ever quantifies the goodness of a prediction by taking ratios.

There’s a reason physicists usually talk about uncertainty, statistical significance, and standard deviations. That’s because these are known to be useful for quantifying the match of a theory with data. If you bothered to write down the theoretical uncertainties of the calculation for the cosmological constant, the result would be compatible with the measured value even if you set the additional contribution from general relativity to zero.
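
For example – with deliberately made-up numbers, just to illustrate the point – a calculation that yields 10 ± 20 in some units is perfectly compatible with a measured value of 0.001, even though the naive ratio of the two numbers is four orders of magnitude.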

In summary: No prediction, no problem.

Why does it matter? Because this wrong narrative has prompted physicists to aim at the wrong target.

The real problem with the cosmological constant is not the average value of the standard model contribution but – as Niayesh Afshordi elucidated better than I ever managed to – that the vacuum fluctuations, well, fluctuate. It’s these fluctuations that you should worry about. Because these you cannot get rid of by subtracting a constant.

But of course I know the actual reason you came here is that you want to know what is “the worst prediction in the history of physics” if not the cosmological constant...

I’m not much of a historian, so don’t take my word for it, but I’d guess it’s the prediction you get for the size of the universe if you assume the universe was born by a vacuum fluctuation out of equilibrium.

In this case, you can calculate the likelihood of observing a universe like our own. But the larger and the less noisy the observed universe, the less likely it is to originate from a fluctuation. Hence, the probability that you have a fairly ordered memory of the past and a sense of a reasonably functioning reality would be exceedingly tiny in such a case. So tiny, I’m not interested enough to even put in the numbers. (Maybe ask Sean Carroll.)
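
The schematic form of the argument – my own back-of-the-envelope framing, without putting in any numbers – is that the probability of a random fluctuation that dips below the equilibrium entropy by an amount ΔS is suppressed like

    P ~ exp(−ΔS/k_B)

so the larger and more ordered the fluctuated region, the larger ΔS and the more absurdly small the probability.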

I certainly wish I’d never have to see the cosmological constant myth again. I’m not yet deluded enough to believe it will go away, but at least I now have this blogpost to refer to when I encounter it the next time.

Thursday, November 30, 2017

If science is what scientists do, what happens if scientists stop doing science?

“Is this still science?” has become a recurring question in the foundations of physics. Whether it’s the multiverse, string theory, supersymmetry, or inflation, concerns abound that theoreticians have crossed a line.

Science writer Jim Baggott called the new genre “fairy-tale science.” Historian Helge Kragh coined the term “higher speculations,” and Peter Woit, more recently, suggested the name “fake physics.” But the accused carry on as if nothing’s amiss, arguing that speculation is an essential part of science. And I? I have a problem.

On the one hand, I understand the concerns about breaking with centuries of tradition. We used to follow up each hypothesis with an experimental test, and the longer the delay between hypothesis and test, the easier it is for pseudoscience to gain a foothold. On the other hand, I agree that speculation is a necessary part of science and that new problems sometimes require new methods. Insisting on the ideals of the past might mean getting stuck, maybe forever.

Even more important, I think it’s a grave mistake to let anyone define what we mean by doing science. Because who gets to decide what’s the right thing to do? Should we listen to Helge Kragh? Peter Woit? George Ellis? Or to the other side, to people like Max Tegmark, Sean Carroll, and David Gross, who claim we’re just witnessing a little paradigm change, nothing to worry about? Or should we, heaven forbid, listen to some philosophers and their ideas about post-empirical science?

There have been many previous attempts to define what science is, but the only definition that ever made sense to me is that science is what scientists do, and scientists are people who search for useful descriptions of nature. “Science,” then, is an emergent concept that arises in communities of people with shared work practices. “Communities of practice,” as the sociologists say.

This brings me to my problem. If science is what scientists do, then how can anything that scientists do not be science? For a long time it seemed to me that in the end we would have no choice but to settle on a definition of science and hold on to it, regardless of how much I’d prefer a self-organized solution.

But as I was looking for a fossil photo to illustrate my recent post about what we mean by “explaining” something, I realized that we are witnessing the self-organized solution right now: it’s a lineage split.

If some scientists insist on changing the old-fashioned methodology, the communities will fall apart. Let us call the two sectors “conservatives” and “progressives.” Each of them will insist they are the ones pursuing the more promising approach.

Based on this little theory, let me make a prediction for what will happen next: The split will become more formally entrenched. Members of the community will begin taking sides, if they haven’t already, and will make an effort to state their research philosophy upfront.

In the end, only time will tell which lineage will survive and which one will share the fate of the Neanderthals.

So, if science is what scientists do, what happens if some scientists stop doing science? You see it happening as we speak.