Friday, May 26, 2017

Can we probe the quantization of the black hole horizon with gravitational waves?

Tl;dr: Yes, but the testable cases aren’t the most plausible ones.

It’s the year 2017, but we still don’t know how space and time get along with quantum mechanics. The best clue so far comes from Stephen Hawking and Jacob Bekenstein. They made one of the most surprising finds that theoretical physics saw in the 20th century: Black holes have entropy.

It was a surprise because entropy is a measure for unresolved microscopic details, but in general relativity black holes don’t have such details. They are almost featureless balls. That they nevertheless seem to have an entropy – and a gigantically large one at that – strongly indicates that black holes can be understood only by taking into account quantum effects of gravity. The idea is that the large entropy quantifies all the ways the quantum structure of black holes can differ.

The Bekenstein-Hawking entropy scales with the horizon area of the black hole and is usually interpreted as a measure for the number of elementary areas of size Planck-length squared. A Planck-length is a tiny 10^-35 meters. This area-scaling is also the basis of the holographic principle, which has dominated research in quantum gravity for some decades now. If anything is important in quantum gravity, this is.
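To get a feel for the numbers, here is a short Python sketch (constants in SI units; the function names are my own, not from any paper) that counts the elementary Planck areas on a solar-mass horizon:

```python
import math

# Physical constants in SI units
G = 6.674e-11         # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8           # speed of light, m/s
l_planck = 1.616e-35  # Planck length, m
M_sun = 1.989e30      # solar mass, kg

def schwarzschild_radius(mass):
    """Horizon radius r_s = 2GM/c^2 of a non-rotating black hole."""
    return 2 * G * mass / c**2

def bh_entropy_in_kB(mass):
    """Bekenstein-Hawking entropy in units of Boltzmann's constant:
    S/k_B = A / (4 l_P^2), with horizon area A = 4 pi r_s^2."""
    area = 4 * math.pi * schwarzschild_radius(mass)**2
    return area / (4 * l_planck**2)

# A solar-mass black hole carries roughly 10^77 units of entropy
print(f"S/k_B ~ {bh_entropy_in_kB(M_sun):.1e}")
```

That’s the “gigantically large” entropy mentioned above: about 10^77 for just one solar mass.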

The above interpretation implies that the area of the black hole horizon always has to be a multiple of the elementary Planck area. However, since the Planck area is so small compared to the size of astrophysical black holes – which range from some kilometers to some billion kilometers – you’d never notice the quantization just by looking at a black hole. If you got to look at one to begin with. So it seems like a safely untestable idea.

A few months ago, however, I noticed an interesting short note on the arXiv in which the authors claim that one can probe the black hole quantization with gravitational waves emitted from a black hole, for example in the ringdown after a merger event like the one seen by LIGO:
    Testing Quantum Black Holes with Gravitational Waves
    Valentino F. Foit, Matthew Kleban
    arXiv:1611.07009 [hep-th]

The basic idea is simple. Assume it is correct that the black hole area is always a multiple of the Planck area and that gravity is quantized so that it has a particle – the graviton – associated with it. If the only way for a black hole to emit a graviton is to change its horizon area in multiples of the Planck area, then this dictates the energy that the black hole loses when the area shrinks because the black hole’s area depends on the black hole’s mass. The Planck-area quantization hence sets the frequency of the graviton that is emitted.

A gravitational wave is nothing but a large number of gravitons. According to the area quantization, the wavelengths of the emitted gravitons are of the order of the black hole radius, which is what one expects to dominate the emission during the ringdown. However, so the authors argue, the spectrum of the gravitational wave should be much narrower in the quantum case.
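As a back-of-the-envelope sketch of the argument (my own illustration, not code from the paper; the parameter alpha is the free parameter mentioned below, here set to Bekenstein’s 8*pi): since the horizon area is A = 16*pi*G^2*M^2/c^4, shrinking it by one quantum dA = alpha*l_P^2 fixes the emitted energy, and hence the graviton frequency.

```python
import math

# Physical constants in SI units
G = 6.674e-11      # Newton's constant
c = 2.998e8        # speed of light
hbar = 1.055e-34   # reduced Planck constant
M_sun = 1.989e30   # solar mass, kg

def emitted_frequency(mass, alpha=8 * math.pi):
    """Frequency of a graviton emitted when the horizon area shrinks by one
    quantum dA = alpha * l_P^2.  From A = 16*pi*G^2*M^2/c^4 one has
    dA/dM = 32*pi*G^2*M/c^4, and with l_P^2 = hbar*G/c^3 this gives
    dE = dM * c^2 = alpha * hbar * c^3 / (32*pi*G*M)."""
    dE = alpha * hbar * c**3 / (32 * math.pi * G * mass)
    return dE / (2 * math.pi * hbar)  # f = dE / h

# For a solar-mass black hole this lands in the kHz range, comparable to
# the inverse light-crossing time c/r_s of the horizon.
print(f"f ~ {emitted_frequency(M_sun):.0f} Hz")
```

The frequency scales as 1/M, so for the tens-of-solar-masses black holes LIGO sees, it sits right in the detector’s band – which is what makes the proposal testable in principle.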

Since the model that quantizes the black hole horizon in Planck-area chunks depends on a free parameter, it would take two measurements of black hole ringdowns to rule out the scenario: The first to fix the parameter, the second to check whether the same parameter works for all measurements.

It’s a simple idea but it may be too simple. The authors are careful to list the possible reasons for why their argument might not apply. I think it doesn’t apply for a reason that’s a combination of what is on their list.

A classical perturbation of the horizon leads to a simultaneous emission of a huge number of gravitons, and there is no good reason why every single one of them must have exactly the emission frequency that belongs to a change of one Planck area, as long as the total energy adds up properly.

I am not aware, however, of a good theoretical treatment of this classical limit of the area-quantization. It might indeed not work in some of the more audacious proposals we have recently seen, like Gia Dvali’s idea that black holes are condensates of gravitons. Scenarios like Dvali’s might indeed be testable with the ringdown characteristics. I’m sure we will hear more about this in the coming years as LIGO accumulates data.

What this proposed test would do, therefore, is to probe the failure of reproducing general relativity for large oscillations of the black hole horizon. Clearly, it’s something that we should look for in the data. But I don’t think black holes will release their secrets quite as easily.

Friday, May 19, 2017

Can we use gravitational waves to rule out extra dimensions – and string theory with it?

Gravitational waves, computer simulation.
[Credits: Henze, NASA]
Tl;dr: Probably not.

Last week I learned from New Scientist that “Gravitational waves could show hints of extra dimensions.” The article is about a paper which recently appeared on the arXiv:

The claim in this paper is nothing short of stunning. Authors Andriot and Gómez argue that if our universe has additional dimensions, no matter how small, then we could find out using gravitational waves in the frequency regime accessible to LIGO.

While LIGO alone cannot do it because the measurement requires three independent detectors, soon upcoming experiments could either confirm or forever rule out extra dimensions – and kill string theory along the way. That, ladies and gentlemen, would be the discovery of the millennium. And, almost equally stunning, you heard it first from New Scientist.

Additional dimensions are today primarily associated with string theory, but the idea is much older. In the context of general relativity, it dates back to the work of Kaluza and Klein in the 1920s. I came across their papers as an undergraduate and was fascinated. Kaluza and Klein showed that if you add a fourth space-like coordinate to our universe and curl it up to a tiny circle, you don’t get back general relativity – you get back general relativity plus electrodynamics.

In the presently most widely used variants of string theory one has not one, but six additional dimensions and they can be curled up – or ‘compactified,’ as they say – to complicated shapes. But a key feature of the original idea survives: Waves which extend into the extra dimension must have wavelengths in integer fractions of the extra dimension’s radius. This gives rise to an infinite number of higher harmonics – the “Kaluza-Klein tower” – that appear like massive excitations of any particle that can travel into the extra dimensions.

The mass of these excitations is inversely proportional to the radius (in natural units). This means that if the radius is small, one needs a lot of energy to create an excitation, which explains why we haven’t yet noticed the additional dimensions.
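For orientation, here is a minimal Python sketch (my own illustration, not from the paper) converting the radius of a compactified dimension into the energy of its first Kaluza-Klein excitation:

```python
# hbar*c expressed in eV·m (197.327 eV·nm), handy for converting
# lengths to energies in natural-units arguments
HBAR_C_EV_M = 197.327e-9

def kk_energy_eV(n, radius_m):
    """Energy of the n-th Kaluza-Klein excitation, E_n = n * hbar * c / R.
    The corresponding mass is m_n = E_n / c^2."""
    return n * HBAR_C_EV_M / radius_m

# A micrometer-sized extra dimension puts the first excitation near 0.2 eV:
# far below collider energies, which is why gravity-only extra dimensions
# of that size are constrained by short-distance gravity tests instead.
print(f"E_1 ~ {kk_energy_eV(1, 1e-6):.3f} eV")
```

This inverse scaling is the whole reason small radii are hard to probe: halving the radius doubles the energy needed to excite the tower.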

In the most commonly used model, one further assumes that the only particle that experiences the extra-dimensions is the graviton – the hypothetical quantum of the gravitational interaction. Since we have not measured the gravitational interaction on short distances as precisely as the other interactions, such gravity-only extra-dimensions allow for larger radii than all-particle extra-dimensions (known as “universal extra-dimensions”). In the new paper, the authors deal with gravity-only extra-dimensions.

From the current lack of observation, one can then derive bounds on the size of the extra-dimension. These bounds depend on the number of extra-dimensions and on their intrinsic curvature. For the simplest case – the flat extra-dimensions used in the paper – the bounds range from a few micrometers (for two extra-dimensions) to a few inverse MeV for six extra dimensions (natural units again).

Such extra-dimensions do more, however, than giving rise to a tower of massive graviton excitations. Gravitational waves have spin two regardless of the number of spacelike dimensions, but the number of possible polarizations depends on the number of dimensions. More dimensions, more possible polarizations. And the number of polarizations, importantly, doesn’t depend on the size of the extra-dimensions at all.

In the new paper, the authors point out that the additional polarization of the graviton affects the propagation even of the non-excited gravitational waves, i.e., the ones that we can measure. The modified geometry of general relativity gives rise to a “breathing mode,” that is, a gravitational wave which expands and contracts synchronously in the two (large) dimensions perpendicular to the direction of the wave. Such a breathing mode does not exist in normal general relativity, but it is not specific to extra-dimensions; other modifications of general relativity also have a breathing mode. Still, its non-observation would indicate the absence of extra-dimensions.

But an old problem of Kaluza-Klein theories stands in the way of drawing this conclusion. The radii of the additional dimensions (also known as “moduli”) are unstable. You can assume that they have particular initial values, but there is no reason for the radii to stay at these values. If you shake an extra-dimension, its radius tends to run away. That’s a problem because then it becomes very difficult to explain why we haven’t yet noticed the extra-dimensions.

To deal with the unstable radius of an extra-dimension, theoretical physicists hence introduce a potential with a minimum at which the value of the radius is stuck. This isn’t optional – it’s necessary to prevent conflict with observation. One can debate how well-motivated that is, but it’s certainly possible, and it removes the stability problem.

Fixing the radius of an extra-dimension, however, will also make it more difficult to wiggle it – after all, that’s exactly what the potential was made to do. Unfortunately, in the above mentioned paper the authors don’t have stabilizing potentials.

I do not know for sure what stabilizing the extra-dimensions would do to their analysis. This would depend not only on the type and number of extra-dimensions but also on the potential. Maybe there is a range in parameter-space where the effect they speak of survives. But from the analysis provided so far it’s not clear, and I am – as always – skeptical.

In summary: I don’t think we’ll rule out string theory any time soon.

[Updated to clarify breathing mode also appears in other modifications of general relativity.]

Tuesday, May 16, 2017

“Not a Toy” - New Video about Symmetry Breaking

Here is the third and last of the music videos I produced together with Apostolos Vasilidis and Timo Alho, sponsored by FQXi. The first two are here and here.

In this video, I am be-singing a virtual particle pair that tries to separate, and quite literally reflect on the inevitable imperfection of reality. The lyrics of this song went through an estimated ten thousand iterations until we finally settled on one. After this, none of us was in the mood to fight over a second verse, but I think the first has enough words already.

With that, I have reached the end of what little funding I had. And unfortunately, the Germans haven’t yet figured out that social media benefits science communication. Last month I heard a seminar on public outreach that didn’t so much as mention the internet. I do not kid you. There are foundations here who’d rather spend 100k on an event that reaches 50 people than a tenth of that to reach 100 times as many people. In some regards, Germans are pretty backwards.

This means from here on you’re back to my crappy camcorder and the always same three synthesizers unless I can find other sponsors. So, in your own interest, share the hell out of this!

Also, please let us know which video was your favorite and why because among the three of us, we couldn’t agree.

As previously, the video has captions which you can turn on by clicking on CC in the YouTube bottom bar. For your convenience, here are the lyrics:

Not A Toy

We had the signs for taking off,
The two of us we were on top,
I had never any doubt,
That you’d be there when things got rough.

We had the stuff to do it right,
As long as you were by my side,
We were special, we were whole,
From the GUT down to the TOE.

But all the harmony was wearing off,
It was too much,
We were living in a fiction,
Without any imperfection.

Every symmetry
Has to be broken,
Every harmony
Has to decay.

Leave me alone, I’m
Tired of talking,
I’m not a toy,
I’m not a toy.

Leave alone now,
I’m not a token,
I’m not a toy,
I’m not a toy.

We had the signs for taking off
Harmony was wearing off
We had the signs for taking off
Tired of talking
Harmony was wearing off
I’m tired of talking.

[Repeat Bridge]
[Repeat Chorus]

Thursday, May 11, 2017

A Philosopher Tries to Understand the Black Hole Information Paradox

Is the black hole information loss paradox really a paradox? Tim Maudlin, a philosopher from NYU and occasional reader of this blog, doesn’t think so. Today, he has a paper on the arXiv in which he complains that the so-called paradox isn’t one and that physicists don’t understand what they are talking about.

So is the paradox a paradox? If you mean whether black holes break mathematics, then the answer is clearly no. The problem with black holes is that nobody knows how to combine them with quantum field theory. It should really be called a problem rather than a paradox, but nomenclature rarely follows logical argumentation.

Here is the problem. The dynamics of quantum field theories is always reversible. It also preserves probabilities which, taken together (assuming linearity), means the time-evolution is unitary. That quantum field theories are unitary depends on certain assumptions about space-time, notably that space-like hypersurfaces – a generalized version of moments of ‘equal time’ – are complete. Space-like hypersurfaces after the entire evaporation of black holes violate this assumption. They are, as the terminology has it, not complete Cauchy surfaces. Hence, there is no reason for time-evolution to be unitary in a space-time that contains a black hole. What’s the paradox then, Maudlin asks.

First, let me point out that this is hardly news. As Maudlin himself notes, this is an old story, though I admit it’s often not spelled out very clearly in the literature. In particular the Susskind-Thorlacius paper that Maudlin picks on is wrong in more ways than I can possibly get into here. Everyone in the field who has their marbles together knows that time-evolution is unitary on “nice slices” – which are complete Cauchy-hypersurfaces – at all finite times. The non-unitarity comes from eventually cutting these slices. The slices that Maudlin uses aren’t quite as nice because they’re discontinuous, but they essentially tell the same story.

What Maudlin does not spell out however is that knowing where the non-unitarity comes from doesn’t help much to explain why we observe it to be respected. Physicists are using quantum field theory here on planet Earth to describe, for example, what happens in LHC collisions. For all these Earthlings know, there are lots of black holes throughout the universe and their current hypersurface hence isn’t complete. Worse still, in principle black holes can be created and subsequently annihilated in any particle collision as virtual particles. This would mean then, according to Maudlin’s argument, we’d have no reason to even expect a unitary evolution because the mathematical requirements for the necessary proof aren’t fulfilled. But we do.

So that’s what irks physicists: If black holes violated unitarity all over the place, how come we don’t notice? This issue is usually phrased in terms of the scattering-matrix, which asks a concrete question: If I could create a black hole in a scattering process, how come we never see any violation of unitarity?

Maybe we do, you might say, or maybe it’s just too small an effect. Yes, people have tried that argument, which is the whole discussion about whether unitarity maybe just is violated etc. That’s the place where Hawking came from all these years ago. Does Maudlin want us to go back to the 1980s?

In his paper, he also points out correctly that – from a strictly logical point of view – there’s nothing to worry about because the information that fell into a black hole can be kept in the black hole forever without any contradictions. I am not sure why he doesn’t mention this isn’t a new insight either – it’s what goes in the literature as a remnant solution. Now, physicists normally assume that inside of remnants there is no singularity because nobody really believes the singularity is physical, whereas Maudlin keeps the singularity, but from the outside perspective that’s entirely irrelevant.

It is also correct, as Maudlin writes, that remnant solutions have been discarded on spurious grounds with the result that research on the black hole information loss problem has grown into a huge bubble of nonsense. The most commonly named objection to remnants – the pair production problem – has no justification because – as Maudlin writes – it presumes that the volume inside the remnant is small for which there is no reason. This too is hardly news. Lee and I pointed this out, for example, in our 2009 paper. You can find more details in a recent review by Chen et al.

The other objection against remnants is that this solution would imply that the Bekenstein-Hawking entropy doesn’t count microstates of the black hole. This idea is very unpopular with string theorists who believe that they have shown the Bekenstein-Hawking entropy counts microstates. (Fyi, I think it’s a circular argument because it assumes a bulk-boundary correspondence ab initio.)

Either way, none of this is really new. Maudlin’s paper is just reiterating all the options that physicists have been chewing on forever: Accept unitarity violation, store information in remnants, or finally get it out.

The real problem with black hole information is that nobody knows what happens with it. As time passes, you inevitably come into a regime where quantum effects of gravity are strong and nobody can calculate what happens then. The main argument we are seeing in the literature is whether quantum gravitational effects become noticeable before the black hole has shrunk to a tiny size.

So what’s new about Maudlin’s paper? The condescending tone by which he attempts public ridicule strikes me as bad news for the – already conflict-laden – relation between physicists and philosophers.

Saturday, May 06, 2017

Away Note

I'm in Munich next week, playing with the philosophers. Be good while I'm away, back soon. (Below, the girls, missing a few teeth.)

Thursday, May 04, 2017

In which I sing about Schrödinger’s cat

You have all been waiting for this. The first ever song about quantum entanglement, Boltzmann brains, and the multiverse:

This is the second music video produced in collaboration with Apostolos Vasilidis and Timo Alho, supported by FQXi. (The first is here.) I think these two young artists are awesomely talented! Just by sharing this video you can greatly support them.

In this video too, I’m the one to blame for the lyrics, and if you think this one’s heavy on the nerdism, wait for the next ;)

Here, I sing about entanglement and the ability of a quantum system to be in two different states at the same time. Quantum states don’t have to decide, so the story goes, but humans have to. I have some pessimistic and some optimistic future visions, contrast determinism with many worlds, and sum it up with a chorus that says: Whatever we do, we are all in this together. And since you ask, we are all connected, because ER=EPR.

The video has subtitles; click on the “CC” icon in the YouTube bottom bar to turn them on.


(The cat is dead)

We will all come back
At the end of time
As a brain in a vat
Floating around
And purely mind.

I’m just back from the future and I'm here to report
We’ll be assimilated, we’ll all join the Borg
We’ll be collectively stupid, if you like that or not
Resistance is futile, we might as well get started now

I never asked to be part of your club
So shut up
And leave me alone

But we are all connected
We will never die
Like Schrödinger’s cat
We will all be dead
And still alive

[repeat Chorus]

We will never forget
And we will never lie
All our hope,
Our fear and doubt
Will be far behind.

But I’m not a computer and I'm not a machine
I am not any other, let me be me
If the only pill that you have left
Is the blue and not the red
It might not be so bad to be
Somebody’s pet

[repeat chorus 2x]

Since you ask, the cat is doing fine
Somewhere in the multiverse it’s still alive
Think that is bad? If you trust our math,
The future is as fixed, as is the past.
Since you ask. Since you ask.

[Repeat chorus 2x]

Monday, May 01, 2017

May-day Pope-hope

Pope Francis meets Stephen Hawking.
[Photo: Piximus.]
My husband is a Roman Catholic, so is his whole family. I’m a heathen. We’re both atheists, but dear husband has steadfastly refused to leave the church. That he throws out money with the annual “church tax” (imo a great failure of secularization) has been a recurring point of friction between us. But as of recently I’ve stopped bitching about it – because the current Pope is just so damn awesome.

Pope Francis, born in Argentina, is the 266th leader of the Catholic Church. The man’s 80 years old, but within only two years he has overhauled his religion. He accepts Darwinian evolution as well as the Big Bang theory. He addresses ecological problems – loss of biodiversity, climate change, pollution – and calls for action, while worrying that “international politics has [disregarded] well-founded scientific opinion about the state of our planet.” He also likes exoplanets:
“How wonderful would it be if the growth of scientific and technological innovation would come along with more equality and social inclusion. How wonderful would it be, while we discover faraway planets, to rediscover the needs of the brothers and sisters orbiting around us.”
I find this remarkable, not only because his attitude flies in the face of those who claim religion is incompatible with science. More important, Pope Francis succeeds where the vast majority of politicians fail. He listens to scientists, accepts the facts, and bases calls for actions on evidence. Meanwhile, politicians left and right bend facts to mislead people about what’s in whose interest.

And Pope Francis is a man whose word matters big time. About 1.3 billion people in the world are presently members of his Church. For the Catholics, the Pope is the next best thing to God. The Pope is infallible, and he can keep going until he quite literally drops dead. Compared to Francis, Tweety-Trump is a fly circling a horse’s ass.

Global distribution of Catholics.
[Source: Wikipedia. By Starfunker226, CC BY-SA 3.0, Link.]

This current Pope is demonstrably not afraid of science, and this gives me hope for the future. Most of the tension between science and religion that we witness today is caused by certain aspects of monotheistic religions that are obviously in conflict with science – if taken literally. But it’s an unnecessary tension. It would be easy enough to throw out what are basically thousand years old stories. But this will only happen once the religious understand it will not endanger the core of their beliefs.

Science advocates like to argue that religion is incompatible with science for religion is based on belief, not reason. But this neglects that science, too, is ultimately based on beliefs.

Most scientists, for example, believe in an external reality. They believe, for the biggest part, that knowledge is good. They believe that the world can be understood, and that this is something humans should strive for.

In the foundations of physics I have seen more specific beliefs. Many of my colleagues, for example, believe that the fundamental laws of nature are simple, elegant, even beautiful. They believe that logical deduction can predict observations. They believe in continuous functions and that infinities aren’t real.

None of this has a rational basis, but physicists rarely acknowledge these beliefs as what they are. Often, I have found myself more comfortable with openly religious people, for at least they are consciously aware of their beliefs and make an effort to prevent them from interfering with research. Even my own discipline, I think, would benefit from a better awareness of the bounds of human rationality. Even my own discipline, I think, could learn from the Pope to tell Is from Ought.

You might not subscribe to the Pope’s idea that “tenderness is the path of choice for the strongest, most courageous men and women.” Honestly, to me it doesn’t sound so different from believing that love will quantize gravity. But you don’t have to share the values of the Catholic Church to appreciate that here is a world leader who doesn’t confuse facts with values.

Wednesday, April 26, 2017

Not all publicity is good publicity, not even in science.

“Any publicity is good publicity” is a reaction I frequently get to my complaints about flaky science coverage. I find this attitude disturbing, especially when it comes from scientists.


To begin with, it’s an idiotic stance towards journalism in general – basically a permission for journalists to write nonsense. Just imagine having the same attitude towards articles on any other topic, say, immigration: Simply shrug off whether the news accurately reports survey results or even correctly uses the word “immigrant.” In that case I hope we agree that not all publicity is good publicity, neither in terms of information transfer nor in terms of public engagement.

Besides, as United Airlines and Pepsi recently served to illustrate, sometimes all you want is that they stop talking about you.

But, you may say, science is different. Scientists have little to lose and much to win from an increased interest in their research.

Well, if you think so, you either haven’t had much experience with science communication or you haven’t paid attention. Thanks to this blog, I have a lot of first-hand experience with public engagement due to science writers’ diarrhea. And most of what I witness isn’t beneficial for science at all.

The most serious problem is the awakening after overhype. It’s when people start asking “Whatever happened to this?” Why are we still paying string theorists? Weren’t we supposed to have a theory of quantum gravity by 2015? Why do physicists still not know what dark matter is made of? Why can I still not have a meaningful conversation with my phone, where is my quantum computer, and whatever happened to negative mass particles?

That’s a predictable and widespread backlash from disappointed hope. Once excitement fades, the consequence is a strong headwind of public ridicule and reduced trust. And that’s for good reasons, because people were, in fact, fooled. In IT development, this goes under the (branded but catchy) name Hype Cycle.

[Hype Cycle. Image: Wikipedia]

There isn’t much data on it, but academic research plausibly goes through the same “trough of disillusionment” when it falls short of expectations. The more hype, the more hangover when promises don’t pan out, which is why, e.g., string theory today takes most of the fire while loop quantum gravity – though in many regards even more of a disappointment – flies mostly under the radar. In the valley of disappointment, then, researchers are haunted both by dwindling financial support and by their colleagues’ snark. (If you think that’s not happening, wait for it.)

This overhype backlash, it’s important to emphasize, isn’t a problem journalists worry about. They’ll just drop the topic and move on to the next. We, in science, are the ones who pay for the myth that any publicity is good publicity.

In the long run the consequences are even worse. Too many never-heard-of-again breakthroughs leave even the interested layman with the impression that scientists can no longer be taken seriously. Add to this a lack of knowledge about where to find quality information, and inevitably some fraction of the public will conclude scientific results can’t be trusted, period.

If you have a hard time believing what I say, all you have to do is read comments people leave on such misleading science articles. They almost all fall into two categories. It’s either “this is a crappy piece of science writing” or “mainstream scientists are incompetent impostors.” In both cases the commenters doubt the research in question is as valuable as it was presented.

If you can stomach it, check the I-Fucking-Love-Science facebook comment section every once in a while. It's eye-opening. On recent reports from the latest LHC anomaly, for example, you find gems like “I wish I had a job that dealt with invisible particles, and then make up funny names for them! And then actually get a paycheck for something no one can see! Wow!” and “But have we created a Black Hole yet? That's what I want to know.” Black Holes at the LHC were the worst hype I can recall in my field, and it still haunts us.

Another big concern with science coverage is its impact on the scientific community. I have spoken about this many times with my colleagues, but nobody listens even though it’s not all that complicated: Our attention is influenced by what ideas we are repeatedly exposed to, and all-over-the-news topics therefore bring a high risk of streamlining our interests.

Almost everyone I ever talked to about this simply denied such influence exists because they are experts and know better and they aren’t affected by what they read. Unfortunately, many scientific studies have demonstrated that humans pay more attention to what they hear about repeatedly, and we perceive something as more important the more other people talk about it. That’s human nature.

Other studies have shown that such cognitive biases are neither correlated nor anti-correlated with intelligence. In other words, just because you’re smart doesn’t mean you’re not biased. Some techniques are known to alleviate cognitive biases, but the scientific community does not presently use these techniques. (Ample references, e.g., in “Blind Spot,” by Banaji, Greenwald, and Martin.)

I have seen this happening over and over again. My favorite example is the “OPERA anomaly” that seemed to show neutrinos could travel faster than the speed of light. The data had a high statistical significance, and yet it was pretty clear from the start that the result had to be wrong – it was in conflict with other measurements.

But the OPERA anomaly was all over the news. And of course physicists talked about it. They talked about it on the corridor, and at lunch, and in the coffee break. And they did what scientists do: They thought about it.

The more they talked about it, the more interesting it became. And they began to wonder whether there might not be something to it after all. And whether maybe one could write a paper about it, because, well, we’ve been thinking about it.

Everybody who I spoke to about the OPERA anomaly began their elaboration with a variant of “It’s almost certainly wrong, but...” In the end, it didn’t matter they thought it was wrong – what mattered was merely that it had become socially acceptable to work on it. And every time the media picked it up again, fuel was added to the fire. What was the result? A lot of wasted time.

For physicists, however, sociology isn’t science, and so they don’t want to believe social dynamics is something they should pay attention to. And as long as they don’t pay attention to how media coverage affects their objectivity, publicity skews judgement and promotes a rich-get-richer trend.

Ah, then, you might argue, at least exposure will help you get tenure because your university likes it if their employees make it into the news. Indeed, the “any publicity is good” line I get to hear mainly as justification from people whose research just got hyped.

But if your university measures academic success by popularity, you should be very worried about what this does to your and your colleagues’ scientific integrity. It’s a strong incentive for sexy-yet-shallow, headline-worthy research that won’t lead anywhere in the long run. If you hunt after that incentive, you’re putting your own benefit over the collective benefit society would get from a well-working academic system. In my view, that makes you a hurdle to progress.

What, then, is the result of hype? The public loses: Trust in research. Scientists lose: Objectivity. Who wins? The news sites that place an ad next to their big headlines.

But hey, you might finally admit, it’s just so awesome to see my name printed in the news. Fine by me, if that's your reasoning. Because the more bullshit appears in the press, the more traffic my cleaning service gets. Just don’t say I didn’t warn you.

Friday, April 21, 2017

No, physicists have not created “negative mass”

Thanks to BBC, I will now for several years get emails from know-it-alls who think physicists are idiots not to realize the expansion of the universe is caused by negative mass. Because that negative mass, you must know, has actually been created in the lab:

The Independent declares this turns physics “completely upside down”

And if you think that was crappy science journalism, The Telegraph goes so far as to insist it’s got something to do with black holes

Not that they offer so much as a hint of an explanation what black holes have to do with anything.

These disastrous news items purport to summarize a paper that recently got published in Physical Review Letters, one of the top journals in the field:
    Negative-Mass Hydrodynamics in a Spin-Orbit-Coupled Bose-Einstein Condensate
    M. A. Khamehchi, Khalid Hossain, M. E. Mossman, Yongping Zhang, Th. Busch, Michael McNeil Forbes, P. Engels
    Phys. Rev. Lett. 118, 155301 (2017)
    arXiv:1612.04055 [cond-mat.quant-gas]

This paper reports the results of an experiment in which the physicists created a condensate that behaves as if it has a negative effective mass.

The little word “effective” does not appear in the paper’s title – and not in the screaming headlines – but it is important. Physicists use the prefix “effective” to indicate something that is not fundamental but emergent, and the exact definition of such a term is often a matter of convention.

The “effective radius” of a galaxy, for example, is not its radius. The “effective nuclear charge” is not the charge of the nucleus. And the “effective negative mass” – you guessed it – is not a negative mass.

The effective mass is merely a handy mathematical quantity to describe the condensate’s behavior.

The condensate in question here is a supercooled cloud of about ten thousand Rubidium atoms. To derive its effective mass, you look at the dispersion relation – ie the relation between energy and momentum – of the condensate’s constituents, and take the second derivative of the energy with respect to the momentum. That thing you call the inverse effective mass. And yes, it can take on negative values.
If you plot the energy against the momentum, you can read off the regions of negative mass from the curvature of the resulting curve. It’s easy to see in Fig 1 of the paper, see below. I added the red arrow to point to the region where the effective mass is negative.
Fig 1 from arXiv:1612.04055 [cond-mat.quant-gas]

As to why that thing is called effective mass, I had to consult a friend, David Abergel, who works with cold atom gases. His best explanation is that it’s a “historical artefact.” And it’s not deep: It’s called an effective mass because in the usual non-relativistic limit E = p²/2m, so if you take two derivatives of E with respect to p, you get the inverse mass. Then, if you do the same for any other relation between E and p, you call the result an inverse effective mass.
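You can reproduce this sign-reading numerically. Below is a minimal sketch with a toy lower-band dispersion loosely inspired by spin-orbit coupling – the functional form, the coupling Omega, and the units (ħ = atom mass = 1) are my own illustrative choices, not the parameters of the experiment:

```python
import numpy as np

# Toy dispersion E(p) with a curvature that changes sign along the curve.
# Omega = 0.5 is a made-up coupling; hbar and the atom mass are set to 1.
Omega = 0.5
p = np.linspace(-3.0, 3.0, 2001)
E = 0.5 * p**2 - np.sqrt(p**2 + Omega**2)

# The inverse effective mass is the second derivative of E with respect to p.
inv_m_eff = np.gradient(np.gradient(E, p), p)

# Where the curve bends downwards the effective mass comes out negative;
# at large momenta the dispersion is parabola-like and the mass is positive.
print("1/m_eff at p = 0:", round(inv_m_eff[len(p) // 2], 2))
print("1/m_eff at p = 2:", round(inv_m_eff[np.argmin(np.abs(p - 2.0))], 2))
```

For this toy curve the negative-curvature region sits around p = 0; in the paper’s Fig 1 it sits elsewhere on the lower band, but the logic – read the sign of the effective mass off the curvature – is the same.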

It's a nomenclature that makes sense in context, but it probably doesn’t sound very headline-worthy:
“Physicists created what’s by historical accident still called an effective negative mass.”
In any case, if you use this definition, you can rewrite the equations of motion of the fluid. They then resemble the usual hydrodynamic equations with a term that contains the inverse effective mass multiplied by a force.

What this “negative mass” hence means is that if you release the condensate from a trapping potential that holds it in place, it will first start to run apart. And then no longer run apart. That pretty much sums up the paper.

The force against which the fluid then acts, it must be emphasized, is not even an external force. It’s a force that comes from the quantum pressure of the fluid itself.

So here’s another accurate headline:
“Physicists observe fluid not running apart.”
This is by no means to say that the result is uninteresting! Indeed, it’s pretty cool that this fluid self-limits its expansion thanks to long-range correlations which come from quantum effects. I’ll even admit that thinking of the behavior as if the fluid had a negative effective mass may be a useful interpretation. But that still doesn’t mean physicists have actually created negative mass.

And it has nothing to do with black holes, dark energy, wormholes, and the like. Trust me, physics is still upside up.

Wednesday, April 19, 2017

Catching Light – New Video!

I have many shortcomings, like leaving people uncertain whether they’re supposed to laugh or not. But you can’t blame me for lack of vision. I see a future in which science has become a cultural good, like sports, music, and movies. We’re not quite there yet, but thanks to the Foundational Questions Institute (FQXi) we’re a step closer today.

This is the first music video in a series of three, sponsored by FQXi, for which I’ve teamed up with Timo Alho and Apostolos Vasileiadis. And, believe it or not, all three music videos are about physics!

You’ve met Apostolos before on this blog. He’s the one who, incredibly enough, used his spare time as an undergraduate to make a short film about gauge symmetry. I know him from my stay in Stockholm, where he completed a master’s degree in physics. Apostolos then, however, decided that research wasn’t for him. He has since founded a company – Third Panda – and works as a freelance videographer.

Timo Alho is one of the serendipitous encounters I’ve made on this blog. After he left some comments on my songs (mostly to point out they’re crappy) it turned out not only is he a theoretical physicist too, but we were both attending the same conference a few weeks later. Besides working on what passes as string theory these days, Timo also plays the keyboard in two bands and knows more than I about literally everything to do with songwriting and audio processing and, yes, about string theory too.

Then I got a mini-grant from FQXi that allowed me to coax the two young men into putting up with me, and five months later I stood in the hail, in a sleeveless white dress, on a beach in Crete, trying to impersonate electromagnetic radiation.

This first music video is about Einstein’s famous thought experiment in which he imagined trying to catch light. It takes on the question how much can be learned by introspection. You see me in the role of light (I am part of the master plan), standing in for nature more generally, and Timo as the theorist trying to understand nature’s working while barely taking notice of it (I can hear her talk to me at night).

The two other videos will follow in early May and mid-May, so stay tuned for more!

Update April 21: 

Since several people asked, here are the lyrics. The YouTube video has captions - to see them, click on the CC icon in the bottom bar.

I am part of the master plan
Every woman, every man
I have seen them come and go
Go with the flow

I have seen that we all are one
I know all and every one
I was here when the sun was born
Ages ago

In my mind
I have tried
Catching light
Catching light

In my mind
I have left the world behind

Every time I close my eyes
All of nature's open wide
I can hear her
Talk to me at night

In my mind I have been trying
Catching light outside of time
I collect it in a box
Collect it in a box

Every time I close my eyes
All of nature's open wide
I can hear her
Talk to me at night

[Repeat Chorus]

[Interlude, Einstein recording]
The scientific method itself
would not have led anywhere,
it would not even have been formed
Without a passionate striving for a clear understanding.
Perfection of means
and confusion of goals
seem in my opinion
to characterize our age.

[Repeat Chorus]

Monday, April 17, 2017

Book review: “A Big Bang in a Little Room” by Zeeya Merali

A Big Bang in a Little Room: The Quest to Create New Universes
Zeeya Merali
Basic Books (February 14, 2017)

When I heard that Zeeya Merali had written a book, I expected something like a Worst Of New Scientist compilation. But A Big Bang in A Little Room turned out to be both interesting and enjoyable, if maybe not for the reason the author intended.

If you follow the popular science news on physics foundations, you almost certainly have come across Zeeya’s writing before. She was the one to break news about the surfer dude’s theory of everything and brought black hole echoes to Nature News. She also does much of the outreach work for the Foundational Questions Institute (FQXi).

Judged by the comments I get when sharing Zeeya’s articles, for some of my colleagues she embodies the decline of science journalism to bottomless speculation. Personally, I think what’s decaying to speculation is my colleagues’ research, and if so then Nature’s readership deserves to know about this. But, yes, Zeeya is frequently to be found on the wild side of physics. So, a book about creating universes in the lab seems in line.

To get it out of the way, the idea that we might grow a baby universe has, to date, no scientific basis. It’s an interesting speculation but the papers that have been written about it are little more than math-enriched fiction. To create a universe, we’d first have to understand how our universe began, and we don’t. The theories necessary for this – inflation and quantum gravity – are not anywhere close to being settled. Nobody has a clue how to create a universe, and as far as I am concerned that’s really all there is to say about it.

But baby universes are a great excuse to feed real science to the reader, and if that’s the sugar-coating to get medicine down, I approve. And indeed, Zeeya’s book is quite nutritious: From entanglement to general relativity, structure formation, and inflation, to loop quantum cosmology and string theory, it’s all part of her story.

The narrative of A Big Bang in A Little Room starts with the question whether there might be a message encoded in the cosmic microwave background, and then moves on to bubble- and baby-universes, the multiverse, mini-black holes at the LHC, and eventually – my pet peeve! – the hypothesis that we might be living in a computer simulation.

Thankfully, on the latter issue Zeeya spoke to Seth Lloyd who – like me – doesn’t buy Bostrom’s estimate that we likely live in a computer simulation:
“Arguments such as Bostrom’s that hinge on the assumption that in the future physically evolved cosmoses will be outnumbered by a plethora of simulated universes, making it vastly more likely that we are artificial intelligences rather than biological beings, also fail to take into account the immense resources needed to create even basic simulations, says Lloyd.”
So, I’ve found nothing to complain even about the simulation argument!

Zeeya has a PhD in physics, cosmology more specifically, so she has all the necessary background to understand the topics she writes about. Her explanations are both elegant and, for all I can tell, almost entirely correct. I’d have some quibbles on one or the other point, eg her explanation of entanglement doesn’t make clear what’s the difference between classical and quantum correlations, but then it doesn’t matter for the rest of the book. Zeeya is also careful to state that neither inflation nor string theory are established theories, and the book is both well-referenced and has useful endnotes for the reader who wants more details.

Overall, however, Zeeya doesn’t offer the reader much guidance, but rather presents one thought-provoking idea after the other – like that there are infinitely many copies of each of us in the multiverse, making every possible decision – and then hurries on.

Furthermore, between the chapters there are various loose ends that she never ties together. For example, if the creator of our universe could write a message into the cosmic microwave background, then why do we need inflation to solve the horizon problem? How do baby universes fit together with string theory, or AdS/CFT more specifically, and why was the idea mostly abandoned? It’s funny also that Lee Smolin’s cosmological natural selection – an idea according to which we should live in a universe that amply procreates and which is hence hugely supportive of the whole universe-creation issue – is mentioned merely as an aside, and when it comes to loop quantum gravity, both Smolin and Rovelli are bypassed as Ashtekar’s “collaborators” (which I’m sure the two gentlemen will just love to hear).

As far as I am concerned, the most interesting aspect of Zeeya’s book is that she spoke to various scientists about their creation beliefs: Anthony Zee, Stephen Hsu, Abhay Ashtekar, Joe Polchinski, Alan Guth, Eduardo Guendelman, Alexander Vilenkin, Don Page, Greg Landsberg, and Seth Lloyd are familiar names that appear on the pages. (The majority of these people are FQXi members.)

What we believe to be true is a topic physicists rarely talk about, and I think this is unfortunate. We all believe in something – most scientists, for example, believe in an external reality – but fessing up to the limits of our rationality isn’t something we like to get caught with. For this reason I find Zeeya’s book very valuable.

About the value of discussing baby universes I’m not so sure. As Zeeya notes towards the end of her book, of the physicists she spoke to, besides Don Page no one seems to have thought about the ethics of creating new universes. Let me offer a simple explanation for this: It’s that besides Page no one believes the idea has scientific merit.

In summary: It’s a great book if you don’t take the idea of universe-creation too seriously. I liked the book as much as you can possibly like a book whose topic you think is nonsense.

[Disclaimer: Free review copy.]

Wednesday, April 12, 2017

Why doesn’t anti-matter anti-gravitate?

Flying pig.
Why aren’t there any particles that fall up in the gravitational field of Earth? It would be so handy – If I had to move the couch, rather than waiting for the husband to flex his muscles, I’d just tie an anti-gravitating weight to it and the couch would float to the other side of the room.

Newton’s law of gravity and Coulomb’s law for the electric force between two charges have the same mathematical form, so how come we have both positive and negative electric charges but not both negative and positive gravitational masses?
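Written out side by side (my sketch, with the convention that a negative force is attractive; G is Newton’s constant, k is Coulomb’s constant):

```latex
F_{\text{Newton}} = -\,G\,\frac{m_1 m_2}{r^2}\,,
\qquad
F_{\text{Coulomb}} = k\,\frac{q_1 q_2}{r^2}\,.
```

The 1/r² form is identical; what differs is the sign structure – two positive masses always attract, while two positive charges repel. That sign is what the rest of this post is about.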

The quick answer to the question is, well, we’ve never seen anything fall up. But if there was anti-gravitating matter, it would be repelled by our planet. So maybe it’s not so surprising we don’t see any of it here. Might there be anti-gravitating matter elsewhere?

It’s a difficult question, more difficult than even most physicists appreciate. The difference between gravity and the electromagnetic interaction – which gives rise to Coulomb’s law – is the type of messenger field. Interactions between particles are mediated by fields. For electromagnetism the mediator is a vector-field. For gravity it’s a more complicated field, a 2nd rank tensor-field, which describes space-time itself.

In case an interaction is quantized, the interaction’s field is accompanied by a particle: For electromagnetism that’s the photon, for gravity it’s the (hypothetical) graviton. The particles share the properties of the field, but for the question of whether or not there’s anti-gravity the quantization of the field doesn’t play a role.

The major difference between the two cases comes down to a sign. For a vector-field, as in the case of electromagnetism, like charges repel and unlike charges attract. For a 2nd rank tensor field, in contrast, like charges attract and unlike charges repel. This already tells us that an anti-gravitating particle would not be repelled by everything. It would be repelled by normally gravitating mass – which we may agree to call “positive” – but be attracted by gravitational masses of its own kind – which we may call “negative.”

The question then becomes: Where are the particles of negative gravitational mass?

To better understand the theoretical backdrop, we must distinguish between inertial mass and gravitational mass. The inertial mass is what gives rise to an object’s inertia, ie its resistance to acceleration, and is always positive valued. The gravitational mass, on the other hand, is what creates the gravitational field of the object. In usual general relativity, the two masses are identical by assumption: This is Einstein’s equivalence principle in a nutshell. In more detail, we’d not only talk about the equivalence for masses, but for all types of energies, collected in what is known as the stress-energy-tensor. Again, the details get mathematical very fast, but aren’t so relevant to understand the general structure.

All the particles we presently know of are collected in the standard model of particle physics, which is in agreement with data to very high precision. The standard model also includes all anti-particles, which are identical to their partner-particles except for having opposite electric charge. Is it possible that the anti-particles also anti-gravitate?

Theory clearly answers this question with “No.” From the standard model, we can derive how anti-matter gravitates – it gravitates exactly the same way as normal matter. And observational evidence supports this conclusion, as follows.

We don’t normally see anti-particles around us because they annihilate when they come in contact with normal matter, leaving behind merely a flash of light. Why there isn’t the same amount of matter and anti-matter in the universe nobody really knows – it’s a big mystery that goes under the name “baryon asymmetry” – but evidence shows the universe is dominated by matter. If we see anti-particles – in cosmic rays or in particle colliders – it’s usually as single particles, which are both too light and too short-lived to reliably measure their gravitational mass.

That, however, doesn’t mean we don’t know how anti-matter behaves under the influence of gravity. Both matter and anti-matter particles hold together the quarks that make up neutrons and protons. Indeed, the anti-particles’ energy makes a pretty large contribution to the total mass of neutrons and protons, and hence to the total mass of pretty much everything around us. This means if anti-matter had a negative gravitational mass, the equivalence principle would be badly violated. It isn’t, and so we already know anti-matter doesn’t anti-gravitate.

Those with little faith in theoretical arguments might want to argue that maybe it’s possible to find a way to make anti-matter anti-gravitate only sometimes. I am not aware of any theorem which strictly proves this to be impossible, but neither is there – to my best knowledge – any example of a consistent theory in which this has been shown to work.

And if that still wasn’t enough to convince you, the ALPHA experiment at CERN has not only created neutral anti-hydrogen, made of an anti-proton and a positron (an anti-electron), but has taken great strides towards measuring exactly how anti-hydrogen behaves in Earth’s gravitational field. Guess what? So far there is no evidence that anti-hydrogen falls upwards – though the present measurement precision only rules out that the anti-hydrogen’s gravitational mass is larger than (minus!) 65 times its inertial mass.

[Correction added April 19: There is not one but three approved experiments at CERN to measure the free fall of anti hydrogen: AEGIS, ALPHA-g and GBAR.]

So, at least theoretical physicists are pretty sure that none of the particles we know anti-gravitates. But could there be other particles, which we haven’t yet discovered, that anti-gravitate?

In principle, yes, but there is no observational evidence for this. In contrast to what is often said, dark energy does not anti-gravitate. The distinctive property of dark energy is that the ratio of energy-density over pressure is negative. For anti-gravitating matter, however, both energy-density and pressure change sign, so the ratio stays positive. This means anti-gravitating matter, if it exists, behaves just the same way as normal matter does, except that the two types of matter repel each other. It also doesn’t give rise to anything like dark matter, because negative gravitational mass would have the exact opposite effect as needed to explain dark matter.
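In equation-of-state language (a sketch; w = p/ρ is the standard convention, with p the pressure and ρ the energy-density):

```latex
w = \frac{p}{\rho}\,,
\qquad
\text{dark energy: } p \approx -\rho \;\Rightarrow\; w \approx -1\,,
\qquad
\text{anti-gravitating matter: } (\rho, p) \to (-\rho, -p) \;\Rightarrow\; w \text{ unchanged.}
```

Flipping the sign of both energy-density and pressure leaves the ratio untouched, which is why anti-gravitating matter cannot mimic dark energy.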

To be fair, I also don’t know of any experiment that explicitly looks for signatures of anti-gravitational matter, like for example concave gravitational lensing. So, strictly speaking, it hasn’t been ruled out, but it’s a hypothesis that also hasn’t attracted much professional interest. Many theoretical physicists who I have talked to believe that negative gravitational masses would induce vacuum-decay because particle pairs could be produced out of nothing. This argument, however, doesn’t take into account that the inertial masses remain positive which prohibits pair production. (On a more technical note, it is a little appreciated fact that the canonical stress-energy tensor isn’t the same as the gravitational stress-energy tensor.)

Even so, let us suppose that the theoretically possible anti-gravitating matter is somewhere out there. What would it be good for? Not for much, it turns out. The stuff would interact with our normal matter even more weakly than neutrinos. This means even if we’d manage to find some of it in our vicinity – which is implausible already – we wouldn’t be able to catch it and use it for anything. It would simply pass right through us.

The anti-gravitating weight that I’d want to tie to the couch, therefore, will unfortunately remain fiction.

[This post previously appeared on Starts With A Bang.]

Friday, April 07, 2017

Book review reviewed: “The Particle Zoo” by Gavin Hesketh

The Particle Zoo: The Search for the Fundamental Nature of Reality
By Gavin Hesketh
Paperback Edition
Quercus (15 Jun. 2017)

A few weeks ago, I reviewed Gavin Hesketh’s book The Particle Zoo. I found his introduction to quantum field theory very well done. Considering that he can’t rely on equations, Hesketh gets across a lot of details (notably, what Feynman diagrams do and don’t depict).

However, I was quite unhappy with various inaccuracies in the book, particularly concerning the search for physics beyond the standard model.

But then something amazing happened! Hesketh sent me an email a few days ago, saying he read my review and revised the manuscript for the paperback edition to address the criticism. While the changes between the two editions will not be large, it usually doesn’t take more than a sentence or two to add some context or a word of caution. And so, I’m happy to endorse the paperback edition of The Particle Zoo which (according to amazon) will appear on June 15th.

Thursday, April 06, 2017

Dear Dr. B: Why do physicists worry so much about the black hole information paradox?

    “Dear Dr. B,

    Why do physicists worry so much about the black hole information paradox, since it looks like there are several, more mundane processes that are also not reversible? One obvious example is the increase of the entropy in an isolated system and another one is performing a measurement according to quantum mechanics.

    Regards, Petteri”

Dear Petteri,

This is a very good question. Confusion orbits the information paradox like accretion disks orbit supermassive black holes. A few weeks ago, I figured even my husband doesn’t really know what the problem is, and he doesn’t only have a PhD in physics, he has also endured me rambling about the topic for more than 15 years!

So, I’m happy to elaborate on why theorists worry so much about black hole information. There are two aspects to this worry: one scientific and one sociological. Let me start with the scientific aspect. I’ll comment on the sociology below.

In classical general relativity, black holes aren’t much trouble. Yes, they contain a singularity where curvature becomes infinitely large – and that’s deemed unphysical – but the singularity is hidden behind the horizon and does no harm.

As Stephen Hawking pointed out, however, if you take into account that the universe – even vacuum – is filled with quantum fields of matter, you can calculate that black holes emit particles, now called “Hawking radiation.” This combination of unquantized gravity with quantum fields of matter is known as “semi-classical” gravity, and it should be a good approximation as long as quantum effects of gravity can be neglected, which means as long as you’re not close by the singularity.

Illustration of black hole with jet and accretion disk.
Image credits: NASA.

Hawking radiation consists of pairs of entangled particles. Of each pair, one particle falls into the black hole while the other one escapes. This leads to a net loss of mass of the black hole, ie the black hole shrinks. It loses mass until entirely evaporated and all that’s left are the particles of the Hawking radiation which escaped.

Problem is, the surviving particles don’t contain any information about what formed the black hole. And not only that, information of the particles’ partners that went into the black hole is also lost. If you investigate the end-products of black hole evaporation, you therefore can’t tell what the initial state was; the only quantities you can extract are the total mass, charge, and angular momentum – the three “hairs” of black holes (plus one qubit). Black hole evaporation is therefore irreversible.

Irreversible processes however don’t exist in quantum field theory. In technical jargon, black holes can turn pure states into mixed states, something that shouldn’t ever happen. Black hole evaporation thus gives rise to an internal contradiction, or “inconsistency”: You combine quantum field theory with general relativity, but the result isn’t compatible with quantum field theory.

To address your questions: Entropy increase usually does not imply a fundamental irreversibility, but merely a practical one. Entropy increases because the probability to observe the reverse process is small. But fundamentally, any process is reversible: Unbreaking eggs, unmixing dough, unburning books – mathematically, all of this can be described just fine. We merely never see this happening because such processes would require exquisitely finetuned initial conditions. A large entropy increase makes a process irreversible in practice, but not irreversible in principle.

That is true for all processes except black hole evaporation. No amount of finetuning will bring back the information that was lost in a black hole. It’s the only known case of a fundamental irreversibility. We know it’s wrong, but we don’t know exactly what’s wrong. That’s why we worry about it.

The irreversibility in quantum mechanics, which you are referring to, comes from the measurement process, but black hole evaporation is irreversible already before a measurement was made. You could argue then, why should it bother us if everything we can possibly observe requires a measurement anyway? Indeed, that’s an argument which can and has been made. But in and by itself it doesn’t remove the inconsistency. You still have to demonstrate just how to reconcile the two mathematical frameworks.

This problem has attracted so much attention because the mathematics is so clear-cut and the implications are so deep. Hawking evaporation relies on the quantum properties of matter fields, but it does not take into account the quantum properties of space and time. It is hence widely believed that quantizing space-time is necessary to remove the inconsistency. Figuring out just what it would take to prevent information loss would teach us something about the still unknown theory of quantum gravity. Black hole information loss, therefore, is a lovely logical puzzle with large potential pay-off – that’s what makes it so addictive.

Now some words on the sociology. It will not have escaped your attention that the problem isn’t exactly new. Indeed, its origin predates my birth. Thousands of papers have been written about it during my lifetime, and hundreds of solutions have been proposed, but theorists just can’t agree on one. The reason is that they don’t have to: For the black holes which we observe (eg at the center of our galaxy), the temperature of the Hawking radiation is so tiny there’s no chance of measuring any of the emitted particles. And so, black hole evaporation is the perfect playground for mathematical speculation.
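Just how tiny is easy to estimate from the Hawking temperature T = ħc³/(8πGMk_B). The sketch below plugs in SI values; the black hole masses are round illustrative numbers, not precise measurements:

```python
import math

# Hawking temperature T = hbar * c^3 / (8 * pi * G * M * kB), SI units.
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
G = 6.67430e-11          # m^3 / (kg s^2)
kB = 1.380649e-23        # J / K
M_sun = 1.989e30         # kg

def hawking_temperature(mass_kg):
    """Black hole temperature in Kelvin; heavier black holes are colder."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * kB)

# A stellar-mass black hole is already far colder than the 2.7 K cosmic
# microwave background; the galactic-center black hole is colder still.
print("10 solar masses:   %.1e K" % hawking_temperature(10 * M_sun))
print("~4e6 solar masses: %.1e K" % hawking_temperature(4e6 * M_sun))
```

A ten-solar-mass black hole comes out at a few nanokelvin and the supermassive one at tens of femtokelvin – both hopelessly below anything a telescope could pick up against the microwave background.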

[Lots of Papers. Img: 123RF]
There is an obvious solution to the black hole information loss problem which was pointed out already in early days. The reason that black holes destroy information is that whatever falls through the horizon ends up in the singularity where it is ultimately destroyed. The singularity, however, is believed to be a mathematical artifact that should no longer be present in a theory of quantum gravity. Remove the singularity and you remove the problem.

Indeed, Hawking’s calculation breaks down when the black hole has lost almost all of its mass and has become so small that quantum gravity is important. This would mean the information would just come out in the very late, quantum gravitational, phase and no contradiction ever occurs.

This obvious solution, however, is also inconvenient because it means that nothing can be calculated if one doesn’t know what happens nearby the singularity and in strong curvature regimes which would require quantum gravity. It is, therefore, not a fruitful idea. Not many papers can be written about it and not many have been written about it. It’s much more fruitful to assume that something else must go wrong with Hawking’s calculation.

Sadly, if you dig into the literature and try to find out on which grounds the idea that information comes out in the strong curvature phase was discarded, you’ll find it’s mostly sociology and not scientific reasoning.

If the information is kept by the black hole until late, this means that small black holes must be able to keep many different combinations of information inside. There are a few papers which have claimed that these black holes then must emit their information slowly, which means small black holes would behave like a technically infinite number of particles. In this case, so the claim, they should be produced in infinite amounts even in weak background fields (say, nearby Earth), which is clearly incompatible with observation.

Unfortunately, these arguments are based on an unwarranted assumption, namely that the interior of small black holes has a small volume. In GR, however, there isn’t any obvious relation between surface area and volume because space can be curved. The assumption that such small black holes, for which quantum gravity is strong, can be effectively described as particles is equally shaky. (For details and references, please see this paper I wrote with Lee some years ago.)

What happened, to make a long story short, is that Lenny Susskind wrote a dismissive paper about the idea that information is kept in black holes until late. This dismissal gave everybody else the opportunity to claim that the obvious solution doesn’t work and to henceforth produce endless amounts of papers on other speculations.

Excuse the cynicism, but that’s my take on the situation. I’ll even admit having contributed to the paper pile because that’s how academia works. I too have to make a living somehow.

So that’s the other reason why physicists worry so much about the black hole information loss problem: Because it’s speculation unconstrained by data, it’s easy to write papers about it, and there are so many people working on it that citations aren’t hard to come by either.

Thanks for an interesting question, and sorry for the overly honest answer.

Friday, March 31, 2017

Book rant: “Universal” by Brian Cox and Jeff Forshaw

Universal: A Guide to the Cosmos
Brian Cox and Jeff Forshaw
Da Capo Press (March 28, 2017)
(UK Edition, Allen Lane (22 Sept. 2016))

I was meant to love this book.

In “Universal” Cox and Forshaw take on astrophysics and cosmology, but rather than using the well-trodden historic path, they offer do-it-yourself instructions.

The first chapters of the book start with every-day observations and simple calculations, with the help of which the reader can estimate, e.g., the radius of the Earth and its mass, or – if you let a backyard telescope with a 300mm lens and equatorial mount count as every-day items – the distance to other planets in the solar system.

Then, the authors move on to distances beyond the solar system. With that, self-made observations understandably fade out, but are replaced with publicly available data. Cox and Forshaw continue to explain the “cosmic distance ladder,” variable stars, supernovae, redshift, solar emission spectra, Hubble’s law, and the Hertzsprung-Russell diagram.

Set apart from the main text, the book has “boxes” (actually pages printed white on black) with details of the example calculations and the science behind them. The first half of the book reads quickly and fluidly and reminds me in style of school textbooks: They make an effort to illuminate the logic of scientific reasoning, with some historical asides, and concrete numbers. Along the way, Cox and Forshaw emphasize that the great power of science lies in the consistency of its explanations, and they highlight the necessity of taking into account uncertainty both in the data and in the theories.

The only thing I found wanting in the first half of the book is that they use the speed of light without explaining why it’s constant or where to get it from, even though that too could have been done with every-day items. But then maybe that’s explained in their first book (which I haven’t read).

For me, the fascinating aspect of astrophysics and cosmology is that it connects the physics of the very small scales with that of the very large scales, and allows us to extrapolate both into the distant past and future of our universe. Even though I’m familiar with the research, it still amazes me just how much information about the universe we have been able to extract from the data in the last two decades.

So, yes, I was meant to love this book. I would have been an easy catch.

Then the book continues to explain the dark matter hypothesis as a settled fact, without so much as mentioning any shortcomings of LambdaCDM, and not a single word on modified gravity. The Bullet Cluster is, once again, used as a shut-up argument – a gross misrepresentation of the actual situation, which I previously complained about here.

Inflation gets the same treatment: It’s presented as if it’s a generally accepted model, with no discussion given to the problem of under-determination, or whether inflation actually solves problems that need a solution (or solves the problems period).

To round things off, the authors close the final chapter with some words on eternal inflation and bubble universes, making a vague reference to string theory (because that’s also got something to do with multiverses you see), and then they suggest this might mean we live in a computer simulation:

“Today, the cosmologists responsible for those simulations are hampered by insufficient computing power, which means that they can only produce a small number of simulations, each with different values for a few key parameters, like the amount of dark matter and the nature of the primordial perturbations delivered at the end of inflation. But imagine that there are super-cosmologists who know the String Theory that describes the inflationary Multiverse. Imagine that they run a simulation in their mighty computers – would the simulated creatures living within one of the simulated bubble universes be able to tell that they were in a simulation of cosmic proportions?”
Wow. After all the talk about how important it is to keep track of uncertainty in scientific reasoning, this idea is thrown at the reader with little more than a sentence which mentions that, btw, “evidence for inflation” is “not yet absolutely compelling” and there is “no firm evidence for the validity of String Theory or the Multiverse.” But, hey, maybe we live in a computer simulation, how cool is that?

Worse than demonstrating slippery logic, their careless portrayal of speculative hypotheses as almost settled is dumb. Most of the readers who buy the book will have heard of modified gravity as dark matter’s competitor, and will know the controversies around inflation, string theory, and the multiverse: It’s been all over the popular science news for several years. That Cox and Forshaw don’t give space to discussing the pros and cons in a manner that at least pretends to be objective will merely convince the scientifically-minded reader that the authors can’t be trusted.

The last time I thought of Brian Cox – before receiving the review copy of this book – it was because a colleague confided to me that his wife thinks Brian is sexy. I managed to maneuver around the obviously implied question, but I’ll answer this one straight: The book is distinctly unsexy. It’s not worthy of a scientist.

I might have been meant to love the book, but I ended up disappointed about what science communication has become.

[Disclaimer: Free review copy.]

Monday, March 27, 2017

Book review: “Anomaly!” by Tommaso Dorigo

Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab
Tommaso Dorigo
World Scientific Publishing Europe Ltd (November 17, 2016)

Tommaso Dorigo is a familiar name in the blogosphere. Over at “A Quantum Diaries Survivor”, he reliably comments on everything going on in particle physics. Located in Venice, Tommaso is a member of the CMS collaboration at CERN and was part of the CDF collaboration at Tevatron – a US particle collider that ceased operation in 2011.

Anomaly! is Tommaso’s first book and it chronicles his time in the CDF collaboration from the late 1980s until 2000. This covers the measurement of the mass of the Z-boson, the discovery of the top quark, and the – eventually unsuccessful – search for supersymmetric particles. In his book, Tommaso weaves together the scientific background about particle physics with brief stories of the people involved and their – often conflict-laden – discussions.

The first chapters of the book contain a brief summary of the standard model and quantum field theory and can be skipped by those familiar with these topics. The book is mostly self-contained in that Tommaso provides all the knowledge necessary to understand what’s going on (with a few omissions that I believe don’t matter much). But the pace is swift. I sincerely doubt a reader without background in particle physics will be able to get through the book without re-reading some passages many times.

It is worth emphasizing that Tommaso is an experimentalist. I think I hadn’t previously realized how much the popular science literature in particle physics has, so far, been dominated by theorists. This makes Anomaly! a unique resource. Here, the reader can learn how particle physics is really done! From the various detectors and their designs, to parton distribution functions, to triggers and Monte Carlo simulations, Tommaso doesn’t shy away from going into all the details. At the same time, his anecdotes showcase how a large collaboration like CDF – with more than 500 members – works.

That having been said, the book is also somewhat odd in that it simply ends without summary, or conclusion, or outlook. Given that the events Tommaso writes about date back 30 years, I’d have been interested to hear whether something has changed since. Is the software development now better managed? Is there still so much competition between collaborations? Is the relation to the media still as fraught? I got the impression an editor pulled the manuscript out under Tommaso’s still typing fingers because no end was in sight 😉

Besides this, I have little to complain about. Tommaso’s writing style is clear and clean, and also in terms of structure – mostly chronological – nothing seems amiss. My major criticism is that the book doesn’t have any references, meaning the reader is stuck there without any guide for how to proceed in case he or she wants to find out more.

So should you, or should you not buy the book? If you’re considering becoming a particle physicist, I strongly recommend you read this book to find out if you fit the bill. And if you’re a science writer who regularly reports on particle physics, I also recommend you read this book to get an idea of what’s really going on. All the rest of you I have to warn that while the book is packed with information, it’s for the lovers. It’s about how the author tracked down a factor of 1.25² to explain why his data analysis came up with 588 rather than 497 Z → bb̄ decays. And you’re expected to understand why that’s exciting.

On a personal note, the book brought back a lot of memories. All the talk of Herwig and Pythia, of Bjorken-x, rapidity and pseudorapidity, missing transverse energy, the CTEQ tables, hadronization, lost log-files, missed back-ups, and various fudge-factors reminded me of my PhD thesis – and of all the reasons I decided that particle physics isn’t for me.

[Disclaimer: Free review copy.]

Wednesday, March 22, 2017

Academia is fucked-up. So why isn’t anyone doing something about it?

A week or so ago, a list of perverse incentives in academia made the rounds. It offers examples like “rewarding an increased number of citations” that – instead of encouraging work of high quality and impact – results in inflated citation lists, an academic tit-for-tat which has become standard practice. Likewise, rewarding a high number of publications doesn’t produce more good science, but merely finer slices of the same science.

Perverse incentives in academia.
Source: Edwards and Roy (2017). Via.

It’s not as if perverse incentives in academia are news. I wrote about this problem ten years ago, referring to it as the confusion of primary goals (good science) with secondary criteria (like, for example, the number of publications). I later learned that Steven Pinker made the same distinction for evolutionary goals, referring to it as ‘proximate’ vs ‘ultimate’ causes.

The difference can be illustrated in a simple diagram (see below). A primary goal is a local optimum in some fitness landscape – it’s where you want to go. A secondary criterion is the first approximation for the direction towards the local optimum. But once you’re on the way, higher-order corrections must be taken into account, otherwise the secondary criterion will miss the goal – often badly.

The number of publications, to come back to this example, is a good first-order approximation. Publications demonstrate that a scientist is alive and working, is able to think up and finish research projects, and – provided the papers are published in peer reviewed journals – that their research meets the quality standard of the field.

To second approximation, however, increasing the number of publications does not necessarily also lead to more good science. Two short papers don’t fit as much research as do two long ones. Thus, to second approximation we could take into account the length of papers. Then again, the length of a paper is only meaningful if it’s published in a journal that has a policy of cutting superfluous content. Hence, you have to further refine the measure. And so on.
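To make the refinement concrete, here is a deliberately toy sketch in Python. All names, weights, and the “journal trims content” flag are invented for illustration; the point is only that each higher-order correction complicates the measure:

```python
# Toy sketch of successive refinements of a "productivity" measure.
# All names and weights are invented for illustration, not a proposed standard.

def score_first_order(papers):
    """First approximation: just count publications."""
    return len(papers)

def score_second_order(papers):
    """Second approximation: weight each paper by its length, but only if it
    appeared in a journal with a policy of cutting superfluous content --
    otherwise length carries no information and the paper counts as 1."""
    total = 0
    for p in papers:
        total += p["pages"] if p["journal_trims_content"] else 1
    return total

papers = [
    {"pages": 4,  "journal_trims_content": True},
    {"pages": 30, "journal_trims_content": False},  # length not meaningful here
]
print(score_first_order(papers))   # 2
print(score_second_order(papers))  # 4 + 1 = 5
```

And so on: a third-order version would have to correct for field-dependent publication cultures, author-list lengths, etc. – each refinement is another “if” in the code.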

This type of refinement isn’t specific to science. You can see in many other areas of our lives that, as time passes, the means to reach desired goals must be more carefully defined to make sure they still lead where we want to go.

Take sports as example. As new technologies arise, the Olympic committee has added many additional criteria on what shoes or clothes athletes are admitted to wear, which drugs make for an unfair advantage, and they’ve had to rethink what distinguishes a man from a woman.

Or tax laws. The Bible left it at “When the crop comes in, give a fifth of it to Pharaoh.” Today we have books full of ifs and thens and whatnots so incomprehensible I suspect it’s no coincidence suicide rates peak during tax season.

It’s debatable of course whether current tax laws indeed serve a desirable goal, but I don’t want to stray into politics. Relevant here is only the trend: Collective human behavior is difficult to organize, and it’s normal that secondary criteria to reach primary goals must be refined as time passes.

The need to quantify academic success is a recent development. It’s a consequence of changes in our societies, of globalization, increased mobility and connectivity, and is driven by the increased total number of people in academic research.

Academia has reached a size where accountability is both important and increasingly difficult. Unless you work in a tiny subfield, you almost certainly don’t know everyone in your community and can’t read every single publication. At the same time, people are more mobile than ever, and applying for positions has never been easier.

This means academics need ways to judge colleagues and their work quickly and accurately. It’s not optional – it’s necessary. Our society changes, and academia has to change with it. It’s either adapt or die.

But what has been academics’ reaction to this challenge?

The most prevalent reaction I witness is nostalgia: The wish to return to the good old times. Back then, you know, when everyone on the committee had the time to actually read all the application documents and was familiar with all the applicants’ work anyway. Back then when nobody asked us to explain the impact of our work and when we didn’t have to come up with 5-year plans. Back then, when they recommended that pregnant women smoke.

Well, there’s no going back in time, and I’m glad the past has passed. I therefore have little patience for such romantic talk: It’s not going to happen, period. Good measures for scientific success are necessary – there’s no way around it.

Another common reaction is the claim that quality isn’t measurable – more romantic nonsense. Everything is measurable, at least in principle. In practice, many things are difficult to measure. That’s exactly why measures have to be improved constantly.

Then, inevitably, someone will bring up Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.” But that is clearly wrong. Sorry, Goodhart. If you indeed optimize the measure, you get exactly what you asked for. The problem is that often the measure wasn’t what you wanted to begin with.

Using the terminology introduced above, Goodhart’s Law can be reformulated as: “When people optimize a secondary criterion, they will eventually reach a point where further optimization diverts from the main goal.” But our reaction to this should be to improve the measure, not to throw in the towel and complain “It’s not possible.”
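The reformulated law is easy to demonstrate with a toy optimization. In this sketch (entirely my own construction, with an invented goal and proxy), greedily increasing a proxy first approaches the actual optimum and then overshoots it:

```python
# Toy illustration (my own construction): optimizing a proxy that only
# approximates the true goal eventually moves away from that goal.

def goal(x):
    """The primary goal: a fitness landscape peaking at x = 3."""
    return -(x - 3) ** 2

# The secondary criterion is simply "more x" (think: more publications).
# It points in the right direction near the start, but knows nothing
# about where the peak is.
x, step = 0.0, 0.5
trajectory = []
for _ in range(12):
    x += step  # greedily increase the proxy
    trajectory.append((x, goal(x)))

best = max(trajectory, key=lambda t: t[1])
print(best)            # the goal is maximal near x = 3
print(trajectory[-1])  # yet the proxy keeps climbing past it, and the goal degrades
```

Up to the peak, proxy and goal agree; past it, every further unit of proxy actively hurts – which is exactly the point where the measure needs a higher-order correction, not abandonment.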

This stubborn denial of reality, however, has an unfortunate consequence: Academia has gotten stuck with the simple-but-bad secondary criteria that are currently in use: number of publications, the infamous h-index, the journal impact factor, renowned co-authors, positions held at prestigious places, and so on.

We all know they’re bad measures. But we use them anyway because we simply don’t have anything better. If your director/dean/head/board is asked to demonstrate how great your place is, they’ll fall back on the familiar number of publications, and as a bonus point out who has recently published in Nature. I’ve seen it happen. I just had to fill in a form for the institute’s board in which I was asked for my h-index and my paper count.

Last week, someone asked me if I’d changed my mind in the ten years since I wrote about this problem first. Needless to say, I still think bad measures are bad for science. But I think that I was very, very naïve to believe just drawing attention to the problem would make any difference. Did I really think that scientists would see the risk to their discipline and do something about it? Apparently that’s exactly what I did believe.

Of course nothing like this happened. And it’s not just because I’m a nobody who nobody’s listening to. Concerns similar to mine have been raised with increasing frequency by more widely known people in more popular outlets, like Nature and Wired. But nothing’s changed.

The biggest obstacle to progress is that academics don’t want to admit the problem is of their own making. Instead, they blame others: policy makers, university administrators, funding agencies. But these merely use measures that academics themselves are using.

The result has been lots of talk and little action. But what we really need is a practical solution. And of course I have one on offer: An open-source software that allows every researcher to customize their own measure for what they think is “good science” based on the available data. That would include the number of publications and their citations. But there is much more information in the data which currently isn’t used.

You might want to know whether someone’s research connects areas that are only loosely connected. Or how many single-authored papers they have. You might want to know how well their keyword-cloud overlaps with that of your institute. You might want to develop a measure for how “deep” and “broad” someone’s research is – two terms that are often used in recommendation letters but that are extremely vague.

Such individualized measures wouldn’t only automatically update as people revise criteria, but they would also counteract the streamlining of global research and encourage local variety.
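A minimal sketch of what the core of such a tool could look like, assuming the bibliometric features have already been extracted from publication databases (all feature names, values, and weights below are invented placeholders):

```python
# Sketch of the proposed idea: each researcher or committee defines their own
# weighted combination of bibliometric features. Everything here is a
# placeholder; a real tool would pull features from publication databases.

def custom_score(record, weights):
    """Combine whichever features a user cares about into one number;
    features with no assigned weight simply don't count."""
    return sum(weights.get(feature, 0.0) * value
               for feature, value in record.items())

researcher = {
    "publications": 40,
    "citations": 900,
    "single_authored": 5,
    "field_connectivity": 0.7,  # e.g. how strongly loosely-connected areas are bridged
}

# Two committees, two different notions of "good science":
counts_heavy  = {"publications": 1.0, "citations": 0.01}
breadth_heavy = {"single_authored": 2.0, "field_connectivity": 50.0}

print(custom_score(researcher, counts_heavy))   # 40 + 9 = 49.0
print(custom_score(researcher, breadth_heavy))  # 10 + 35 = 45.0
```

The same researcher scores differently under different notions of quality – which is the intended feature, not a bug: it keeps any single proxy from becoming the universal target.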

Why isn’t this happening? Well, besides me there’s no one to do it. And I have given up trying to get funding for interdisciplinary research. The inevitable response I get is that I’m not qualified. Of course it’s correct – I’m not qualified to code and design a user-interface. But I’m totally qualified to hire some people and kick their asses. Trust me, I have experience kicking ass. Price tag to save academia: An estimated 2 million Euro for 5 years.

What else has changed in the last ten years? I’ve found out that it’s possible to get paid for writing. My freelance work has been going well. The main obstacle I’ve faced is lack of time, not lack of opportunity. And so, when I look at academia now, I do it with one leg outside. What I see is that academia needs me more than I need academia.

The current incentives are extremely inefficient and waste a lot of money. But nothing is going to change until we admit that solving the problem is our own responsibility.

Maybe, when I write about this again, ten years from now, I’ll not refer to academics as “us” but as “they.”