The Not So Golden Helix

My review of Crick, the new biography by Matthew Cobb, published by Profile. A slightly different version appeared in the Literary Review.

Crick

Matthew Cobb

John Gribbin

Francis Crick is best known as half of the team that received the Nobel Prize for their part in determining the structure of DNA.  Seventy-two years after he made that discovery, it is still the main reason most prospective readers might be interested in this book.  They will not be disappointed, although in terms of Crick’s scientific achievements this was, as Matthew Cobb makes clear, the beginning, not the end, of the story.  It does, however, provide the archetypal example of the way Crick worked.  As Cobb puts it, Crick’s way of working was essentially collaborative, and involved “asking fundamental questions and pursuing them through intense encounters with others.”  It’s just unfortunate that one of his key “collaborators” in the DNA story didn’t know that she was making a vital contribution to his success.

     The version of that story presented here is a clear and comprehensive account of the not entirely ethical way that Crick and his collaborator James Watson used data obtained by Rosalind Franklin and Raymond Gosling, without their knowledge, to come up with their idea of the double helix — but it is also evident that the Watson-Crick model was a leap worthy of a Nobel Prize.  Franklin and Gosling had already realised that DNA is a double helix, but that is only half the story.  Crick and Watson explained how the two strands of a DNA molecule are held together by weak forces called hydrogen bonds, and how this makes it possible for the strands to unravel and copy themselves.

      When he made that leap, Crick was already 36, and had only recently started to work in molecular biology, partly because of the way his career had been disrupted by the war.  But he more than made up for lost time with major contributions over the next dozen years, especially to the cracking of the genetic code which explains how the cell stores information in DNA molecules.  In later life he developed an interest in consciousness and how the brain works, where, hardly surprisingly, he failed to achieve the same level of success.

     But what of his private life?  Cobb brings out many of the puzzles about Crick the man.  He was a small “c” conservative, but almost evangelically anti-religion, to the point where he turned down the offer of a Fellowship from Churchill College because the then-new college proposed to build a chapel; he also made known his objection to the original rule of the brand-new all-male college that women were not even allowed to dine there (this in the 1960s!).  He disliked being interviewed or having his photo taken for publicity purposes, but was egotistical enough to name his house in Cambridge The Golden Helix, and used to host wild parties there.  He was, to use the term appropriate to his generation, a notorious womaniser, but championed the role of women in science, working with, among others, Rosalind Franklin after any dust from the DNA situation had settled.  He was at one time something of an outspoken eugenicist, arguing that the better classes (people like him) should be encouraged to reproduce, while the hoi polloi should be discouraged (or worse) from having children.  Although he became more circumspect about expressing these views, he never seems to have given them up.  He was literally not the kind of man you would want your daughter to marry — the parents, in particular the mother, of his second wife, Odile, certainly felt that way, which didn’t stop the marriage being successful in its own unconventional way.

     Although the book does an excellent job of explaining the science behind Crick’s most significant work, there is one area where Crick seems to have been wrong, but the author fails to appreciate how the situation has changed in recent years.  In the cells of complex creatures like ourselves, much of the DNA does not directly carry the instructions for the operation of the cell, and is sometimes said to be non-coding.  This seemed at one time to be the ultimate example of selfish DNA, strands of the molecule that did nothing except copy themselves, and was referred to as “junk” DNA.  That was Crick’s view.  In his own words, in a paper published in 1980, “it [junk DNA] makes no specific contribution to the phenotype”.  The latest evidence, however, suggests that, far from being junk, much if not all of this non-coding DNA is what controls which bits of the genetic code are switched on or off as required to carry out the workings of the cell.  It is no surprise that Crick was wrong 45 years ago, but it is a surprise that in 2025 Cobb should not mention that the situation has changed.

     But the really remarkable thing is that we know anything at all about DNA.  It is hard to grasp just how small molecules are, but Crick gave us a neat analogy in a talk broadcast by the BBC in 1961:

Suppose we took all the [human] genetic material in the world — that is, if we extracted all the DNA from one cell of every human being alive today, and packed all this DNA together, to give a file-copy of the blue-prints for the human race – it would fill a space about the size of a rather small drop of water.

     Of the two previous biographies of Crick, the one by historian Robert Olby (Francis Crick: Hunter of Life’s Secrets, 2009) is comprehensive but too dense with science for a general reader, while Matt Ridley’s Francis Crick: Discoverer of the Genetic Code (2006) is a great read but too short to tell the whole story.  Cobb’s offering fits neatly into the gap.  It is readable (if not as instantly accessible as Ridley) but also mostly sound on science (if less detailed than Olby), with the added benefit that the author has had access to some material unavailable to the earlier biographers.  As such, it is an excellent introduction to the man and his work, which can be recommended to the scientifically inquisitive reader.

John Gribbin is an Honorary Research Fellow at the University of Sussex, and author of Against the Odds: Women Pioneers of Science.

The Michael Palin of Volcanology

My review of

Mountains of Fire

By
Clive Oppenheimer

Mountains of Fire is ostensibly a book about volcanoes, but really it is about the adventures of volcanologist Clive Oppenheimer on his travels to explore these phenomena. He is now Professor of Volcanology at the University of Cambridge, and has made a couple of acclaimed documentaries about his work. This, however, is not a dry academic account of his research, nor even a standard “popularisation”. Instead, it weaves together science, history and culture into a tapestry that is far greater than the sum of its parts, and is also a darn good read.
We meet Oppenheimer as a young student taking foolish risks to make measurements that ended up proving useless, visit North Korea with him, find him at risk of being kidnapped or worse on the border between Ethiopia and Eritrea, and end up on Mount Erebus in Antarctica. This makes for a real page-turner of a book, because of the author’s gripping style and way with a descriptive narrative. If Michael Palin had been a volcanologist, this is the book he might have written.
But there are two other threads to the story. The science is not forgotten, but details that might break up the flow of the story are confined to the 82 pages of notes which follow the main text (and can be totally ignored if all you are after is a good read). And, best of all, Oppenheimer includes many accounts of the adventures of his predecessors, with stories of eruptions in recorded history, as well as those that shaped the world in the far distant past. So we meet Robert Bunsen, of burner fame, on an expedition to Iceland in the mid-nineteenth century, and Charles Darwin being shaken by an earthquake in Chile a couple of decades earlier. Darwin’s first major published work, we learn, was a theory of volcanism that “came remarkably close to pre-empting a central plank of plate tectonic theory.”
My favourite section of the book, which pulls together all the threads, deals with the history of eruptions in Iceland, an island which sits astride a major crack in the Earth’s crust known as the Mid-Atlantic Ridge. Iceland is actually getting wider, by a couple of centimetres a year, as crust spreads out on either side of the ridge, which shows up as a steep-sided canyon crossing the island. “Tectonically speaking,” as Oppenheimer puts it, you can stand there “planting one foot in America and the other in Europe.” As well as Iceland’s own volcanic activity, the nearby Greenland ice cap carries, buried in the layers of snow that fall each year, a dusty record of enormous eruptions from around the globe, including an eruption of the Korean volcano Paektu in the year 946 CE. Identification of this layer in the Greenland ice in turn enabled him to count backwards down the layers to pinpoint the date of a major Icelandic eruption as occurring in the spring of 939 CE. Which ties it neatly to accounts of crop failures across Europe and the Sun showing red in the daytime as far away as Rome. Tree ring data show that the following summer, in 940 CE, was one of the coldest in the Northern Hemisphere in the past two millennia, as particles from the volcano spread high in the stratosphere around the globe and blocked sunlight. But if you want to know how this ties in with Icelandic medieval poetry and the Norse vision of the end of the world, Ragnarok, you will have to read the book.
There are, however, two omissions that deserved a place in the book, one specific and one general. The catastrophic eruption of Thera, on the island of Santorini, which is thought to have caused the collapse of Minoan civilisation around 1600 BCE, surely deserves a mention; and although Oppenheimer discusses the relationships between volcanoes, the environment and life, I would have liked to learn his take (for or against) on Gaia theory, the idea that all these processes are linked to make a kind of planetary super-organism.
His early experience on Mount Stromboli is worth sharing:
“A metallic whiff like a struck match smarted my eyes. I was seized by the realisation that volatile molecules just unfettered from the inner Earth, and tasting like sour milk at the back of my throat, were now in my lungs, in my bloodstream . . . I was discovering [that] fieldwork on an active volcano is a profoundly embodying experience.” And when lumps of molten lava start falling at his feet, his notebook laconically records, “working here extremely hazardous.” It’s a wonder that he lived to tell the tale, and the rest of the tales in the book, but all lovers of adventure stories, travel stories, and the science of our living planet can rejoice in the fact that he did. Most of the books I review get passed on, one way or another, fairly quickly; but this one is definitely a keeper.

His Last Bow: Truth stranger than fiction?

A sort of review/essay about:
So Here’s Our Leo
D G Compton
Wildside Press

My good friend and colleague David Compton, who I worked with on our novel Ragnarok, and who taught me a lot about writing, died recently, shortly after producing what turned out to be his last book. Here is my appraisal of that book, in a science-fictiony context.

There’s a school of thought which holds that all fiction is science fiction, because it doesn’t happen in “our” world. On that basis, novels such as Pride and Prejudice, or Bridget Jones’ Diary, have to be regarded as set in parallel worlds, other parts of the multiverse, just as much as, say, Sliding Doors (although that is one of the many “mainstream” stories that fails to acknowledge its debt to Sf), or The Man in the High Castle. If it is “real”, then it is history, or biography. But there is a lot more to this way of thinking than meets the eye, and it leads to deeper speculations about the nature of reality. I was recently led into those murky depths by reading the latest book from D. G. Compton, so bear with me while I fill in that background before dragging you down into the depths with me.
Compton’s name is probably only known to those readers of science fiction with long memories, and even those readers may be surprised to learn that he was writing as well as ever at the age of 92. His forte was always to write thought-provoking fiction containing real scientific or technological ideas, addressing complex ethical and moral issues through the behaviour of realistically fleshed-out characters, in worlds which all too plausibly might be a small step away from our reality. He lost none of this skill as time passed, and So Here’s Our Leo deserves a much wider readership than it is likely to get — the kind of readership it might get if it were the first novel from a bright new talent, rather than the last from a bright old talent.
But maybe the old talent needs putting in perspective. What is possibly Compton’s most famous novel, The Continuous Katherine Mortenhoe (1974), now reads as uncomfortably prescient. The title character is a woman dying from an incurable disease, who is followed by a journalist with surgically implanted TV cameras that record and broadcast her anguish. Fifty-odd years on, it looks all too plausible. His more recent book Nomansland (1994) addresses the problems of a world in which no more male babies are being born, a result of pollutants in the environment entering the food chain. And in one of my minor footnotes to Sf history, I collaborated with David on a book (Ragnarok, 1991) recounting how a group of well-meaning eco-activists do (avoiding spoilers) more harm than good.
Which brings me on to the latest from Compton. So Here’s Our Leo is marketed as mainstream, but has very clear and directly acknowledged science-fictional connections. The acknowledgments come from the narrator of the story, who is telling us the tale of “our Leo” and often does the literary equivalent of speaking direct to camera (shades of Tristram Shandy). Leo, we learn right at the beginning, has done a bad thing. He has killed somebody. Don’t worry; this is not a spoiler since the plot concerns the effect of that event on Leo’s life, what he means to do about it, and what others might do to him if they find out. But part of the storytelling conceit is that although the story is largely set in Cheltenham, the narrator acknowledges that this is not the Cheltenham of our reality. There are, he tells us, deliberate inconsistencies put into the story (such as the presence of worked-out coal mines) to prevent us (the readers) identifying the “real” Leo. Another way of looking at it, though, is that Compton is acknowledging that fiction is never set in our world. The events of his novel, like those of all other novels, have to take place elsewhere. Which raises the question, still debated by physicists, of what we mean by “elsewhere” and how it relates to our reality.
So let’s look at the fuzzy boundary between science fact and science fiction, as applied to the Multiverse. In the realm of “popular science”, credit for stirring interest in what is officially known as the Many Worlds Interpretation of Quantum Mechanics (MWI) is usually given to Hugh Everett, an American theorist who published his version of the idea in the mid-1950s. That version presents the claim that every time the Universe (or universes) is faced with a “choice” at the quantum level — such as a photon choosing which of two pinholes to go through in a sheet of paper — the world divides into as many copies as there are choices. In this case, one universe in which the photon goes through hole A, and another in which the photon goes through hole B. Scaling this up, we get the idea in Sf that when a person chooses between different courses of action the universe divides accordingly.
Of course, science fiction investigated the idea long before Everett came on the scene. The earliest version I have come across (but I don’t claim to have made an exhaustive search) is a tale by David Daniels called “The Branches of Time”, which appeared in Wonder Stories in August 1935. In this tale, a time traveller muses on the futility of trying to change the past:

Terrible things have happened in history, you know. But it isn’t any use [trying to fix them]. Think, for instance, of the martyrs and the things they suffered. I could go back and save them those wrongs. And yet . . . they would still have known their unhappiness and their agony, because in this world-line those things happened.

He is suggesting that the changes made in “our” past do not affect our present, but cause a “new” universe to split off from our timeline. L. Sprague de Camp developed the idea more satisfyingly in “Lest Darkness Fall”, published in 1939, but referred to the new universe created by his hero’s actions as a branch growing from the main trunk of history. There is, though, no reason except chauvinism to regard “our” history as special, or as the main timeline.
Which brings me to my pet gripe about the worst of the many plot holes in the first Back To The Future movie. By going back in time, Marty ensures that his parents get together. Fair enough. But either this is the way history always was, in which case he will return to the identical future he left, or he has caused a new timeline to split off, in which case he will “return” to a completely different future where nobody will know who he is. This interpretation resolves the famous “granny paradox”. If a time traveller goes back and kills granny before she has any children, this happens in a different timeline from the one where (or when) the killer started. De Camp would have said that a new timeline branches off from the moment of the assassination. But does this resolution of the so-called paradox have to involve splitting at all?
My preferred version of the MWI predates Everett, involves no splitting, and was suggested by one of the greatest physicists of the 20th century, Erwin Schrödinger. Schrödinger came up with the idea when he was working in Dublin, and described it in a scientific paper titled “Are There Quantum Jumps?”, published in 1952. If I am going to mention Schrödinger, it is, of course, obligatory that I mention his famous cat puzzle, which at least gives me the opportunity to correct some misconceptions about the so-called “paradox”. The first is the idea that the unfortunate cat is stuffed into a box. This misconception arises from a mistranslation of the German word for “chamber” in Schrödinger’s original paper (published in 1935); he was actually referring to a comfortable room, well furnished with the necessities of kitty life. Plus, of course, what he called a “diabolical device,” based on quantum physics principles, which might or might not have killed the cat.
This is where the second misconception comes in. The standard version of quantum theory at the time Schrödinger came up with the puzzle, the so-called Copenhagen Interpretation, said that the quantum device, and therefore the cat, remained in a kind of suspended animation (called a superposition of states) until someone opened the door and looked inside, at which point it “collapsed” into one or other state, with either a dead cat or a live cat. Schrödinger did NOT believe this! He was poking fun at people who DID believe it. The whole point of his parable was to highlight the ludicrousness of the idea; he did not for one minute accept that the world is really like that. Indeed, in the 1952 paper he said explicitly that it is “patently absurd” that the outcome of quantum events depends on “direct interference of the observer”.
The idea of splitting “solves” the puzzle by saying that when the observer opens the door and takes a look (the crucial bit is taking a look), the world divides into two universes, one with a dead cat and one with a live cat, which again involves “direct interference of the observer”. But Schrödinger’s 1952 solution to the puzzle seems much more appealing, at least to me. He points out that the equations of quantum physics, which he helped to develop, don’t say anything at all about “collapse”, let alone observers. All the quantum options are equally valid — or real — all the time. In the cat example, this means that there are two identical universes (“fungible” is the technical term) up to the point where the quantum “choice” is made. In one universe the cat lives, in the other the cat dies, so after that moment they are no longer fungible. By opening the door and taking a look, all the observer is doing is finding out which universe they are living in. They are not influencing the quantum “choice” at all.
My brother has his own take on all this, and refers to the speculation “that I’m alive [in this universe] only because I cannot (by definition) still be alive in all of the universes where I’m already dead — like when I came off my motor scooter and inexplicably hit a springy paling fence when I was moments before (in my view) flying towards a very solid wooden fence that would have snapped me in two.” In many universes, he implies, “he” did hit a solid fence and was snapped in two, so didn’t survive to tell the tale.
According to Schrödinger, all quantum choices “may not be alternatives but all really happen simultaneously”. It’s worth pausing to look at that again. One of the greatest physicists of the 20th century said that all possible universes “really happen simultaneously”. Wow! Of course, science fiction got there first, but without the physics. One of my favourite examples is Murray Leinster’s “Sidewise in Time”, which appeared in Astounding in 1934, a year before Schrödinger published his cat puzzle. The story takes place in a fictional 1935, where sections of the Earth’s surface have changed places with their counterparts in alternative realities, or timelines. Leinster focusses on the implications for the North American continent. There are Viking settlements in parts of North America, Czarist Russia has colonised California, there are regions of the continent where the Confederates won the American Civil War, and so on. A more modern treatment of the idea of parallel worlds can be found in the Long Earth series by Terry Pratchett and Stephen Baxter, and I developed Schrödinger’s specific theme in my own short story Untanglement (reprinted in my collection Don’t Look Back). Leaving aside the (im)practicalities of stepping “sidewise” (to use Leinster’s term) in time, the crucial point is that respectable physicists, including David Deutsch, of the University of Oxford, fully accept the implication from standard quantum theory that all possible universes exist as part of the Multiverse. Such physicists are in a minority, but they by no means constitute a lunatic fringe. On the contrary, they tend, like Schrödinger, to be what Ian Dury called “some clever bastards”.
The caveat is what we mean by “possible”. A world in which the Vikings colonised the eastern part of North America while the Russians or Chinese colonised the west coast is certainly possible, because it obeys the laws of physics (and other scientific certainties) as we know them. But one in which men fly to the Moon with the aid of a gravity-shielding metal called Cavorite is not. This has profound implications for writers and readers of science fiction — or any fiction. Long before I was familiar with quantum theory, I used to argue the point made at the beginning of this essay, that all fiction is science fiction, because events such as those described in Jane Austen’s novels clearly did not take place in our world, so they must be set in a fictional parallel reality. Little did I realise then that science would support my argument.
According to the equations, it is indeed true that any world, or story, you can imagine which obeys the laws of physics really does exist somewhere (or somewhen) in the Multiverse, but fantasies in which those laws are broken do not correspond to genuine alternative realities. So, there really is at least one (arguably, an infinity of) “Jane Austen” world(s) in which the events portrayed in her novels actually happened, exactly as she described. But there are no Harry Potter worlds, or Neil Gaiman Stardust worlds, because they do not obey the laws of physics (the question of other worlds with slightly different laws of physics is one I will not dive into here).
This raises many speculations. Not least, is there anyone (or any thing) writing our story? If these ideas are correct, there must be! I’m reminded of Alice’s musing at the end of Through the Looking-Glass: “He was part of my dream, of course — but then I was part of his dream, too.” Which brings us back to Leo, in his plausible Cheltenham where all the laws of physics are very much being obeyed, leading among other things to the unfortunate death of Leo’s friend Declan. David Compton was understandably nonplussed when I put it to him that his novel might actually be describing real events in a real world. He asked if I really believe that. Which is a difficult question to answer. I usually say, in response to such questions, that a good scientist doesn’t “believe” anything, in the everyday sense. Blind faith is the prerogative of religion (I once argued this point with Malcolm Muggeridge on Radio 4!). Scientists ought to be agnostic, accepting the evidence available at present but willing to take on board new ideas (and if necessary reject old ideas) when new evidence turns up. So, although (for example) I “believe” in the Big Bang in a sense, what I mean by that is that I find the present evidence for an early hot phase of the Universe compelling. In that sense, I also “believe” in the reality of fictional worlds in the Multiverse. The laws of physics say that the world(s) must be like that, and the same laws of physics explain, among other things, how the Sun keeps shining, and how the twin strands of the DNA double helix are held together by hydrogen bonds.
The laws of physics don’t come in a variety pack from which you can pick and choose the bits you want to accept. I often have to explain this to people who try to convince me that, say, time dilation cannot possibly be true, and want to reject this bit of the special (and general) theory of relativity, while (perhaps) keeping E = mc². But you can’t! The theory is a seamless whole, and it has been tested by experiment (among other things, incidentally, it does not forbid time travel). Quantum physics is also a seamless whole. In which case, if quantum physics passes all the tests that have been applied to it, and also tells me that Jane Austen world is real, I have to believe it.
Apparently, Robert Heinlein worried about this in later life. He was quite concerned about the possibility that the beings of his imagination really suffered the lives that he put them through. But, does the author create those “imaginary” people and societies or is (s)he simply recounting their tales?
So maybe I had my opening thought backwards. It isn’t so much that all mainstream fiction is science fiction, rather that all of that kind of story is non-fiction. Something to ponder when you are reading the so-called fiction in the latest magazine. Which of the stories obey the laws of physics and describe real events in alternative realities? And which ones are undeniably fantasy?

Although John Gribbin is best known as the author of books about deep science, including In Search of Schrödinger’s Cat, he has also delved into science fiction. His Nine Musings on Time (Icon) looks at the science and fiction of time travel, and his SF collection Don’t Look Back is published by Elsewhen Press. So Here’s Our Leo, which is strongly recommended whatever you make of his musings here, is published by Wildside Press.

The Talkative Ape

Speech!


Simon Prentis,
Published by Hogsaloft, http://www.hogsaloft.com

The idea behind Speech! is simple. Simon Prentis argues that speech is what made us human, and has been the driving force of human evolution for roughly 3 million years, since our ancestors started to develop in quite different ways from our siblings, the chimpanzees. Specifically, and crucially, he suggests that instead of our large brain creating the opportunity for speech to evolve, it was the “invention” of speech that triggered the dramatic growth in brain size of our species, with all that implies.
But this key idea is not plucked from the air. The scenario is set up by an absorbing tale of all the things that people do and are, written in accessible language and based on a wealth of carefully researched source material. The personal side of the story is equally fascinating. Prentis is English, but spent so long in Japan that he absorbed the language and the culture, becoming fascinated by the way quite different languages affect — or, indeed, do not always affect — the way people view the world.
He also takes a view of the future of our species which may seem optimistic in the light of recent events in Ukraine, but should not be dismissed. “Language,” he says, “opens up the possibility of being able to discuss better ways of exploiting resources without having to physically fight for them”. But he cautions that “our brains evolved to fight long before we could talk.”
Whether you are an optimist or a pessimist, however, you will be intrigued by this book, which stands alongside the works of Steven Pinker and James Lovelock as a “must read” for our times.

Hunting the Elusive Higgs

My latest book review, from the Literary Review

Elusive
Frank Close

Frank Close is a particle physicist who during a distinguished career developed a sideline in accessible popular books about the subatomic world, long before anyone outside the halls of academe had heard of Carlo Rovelli. After retiring from the day job, he took to writing even more fascinating biographical studies of the “atom spies” who provided the Soviet Union with the information that kick-started their nuclear weapons programme during and after World War Two. Now he has combined these skills in a semi-biographical account of Nobel-laureate Peter Higgs and the particle named after him — the particle which is responsible for giving other particles mass, and which determines the rate at which the Sun burns its nuclear fuel and thereby maintains conditions suitable for life on Earth.
The title of the book applies to both the man and the particle. Higgs is famously (even notoriously) self-effacing and avoids the limelight to the extent that on the day the anticipated Nobel Prize was to be announced he pretended he was on holiday in the Scottish Highlands, sending reporters off on a wild goose chase while he sat in a quiet wine bar in Edinburgh. The particle proved even more elusive. Predicted in the mid-1960s, it was not identified, or discovered, for more than 40 years, and then only after the construction of the largest “atom smasher” in the world, the Large Hadron Collider (LHC) at CERN, in a tunnel with roughly the dimensions of London’s Circle Line, straddling the Swiss-French border. Close tells these intertwined tales with the aid of a deep understanding of the physics, and many meetings with Peter Higgs himself. There have been other books on the same theme, but this is far and away the best.
Where Close excels is in explaining the fundamental principles of particle physics in language anyone likely to pick up the book can understand. His unpicking of technical terms such as “renormalization”, “gauge theory” and “symmetry breaking” is superb, and I fully intend to steal some of his analogies for my own use. This leads on to a brief history of the development of particle physics in the twentieth century, which may be familiar in outline to some readers, but benefits from the author’s status as an insider.
At the other end of the book, there is the story of the background to the construction of the LHC, its significance for our understanding of the Universe, and a fascinating account of the way the discovery of the Higgs (hailed, to the irritation of most physicists, as “the God particle”) was first achieved and then presented to the world. This, for me, was the highlight of the book.
In between these delights, there is some material which non-physicists may find daunting. I enjoyed it, and learned things — but then, I have a background in physics. Taken out of context (as I am about to do), passages such as:

Gauge symmetry in the theory of QED implies that if you change the phases of electrons’ quantum waves at different places in space and time, the implications of the equations for the electrons’ behaviours will remain unchanged.
can be quite intimidating. Actually, for the uninitiated it is pretty intimidating even in context. My advice is to let such passages (fortunately there are not many of them) wash over you like a soothing wave, and focus on the more familiar English that forms the bulk of the book. But whatever you do, don’t give up, because what follows the sticky bit is most of the good stuff.
One curiosity of the biographical story is that Peter Higgs (who was born in 1929) attended the same school in Bristol where another physics Nobel Laureate, Paul Dirac, had been a pupil. Curious to know what this famous old boy had done, Higgs found out when, as the prize for his own achievements at school, he chose a book called “Marvels and Mysteries of Science”, which introduced him to the then-new theory of quantum mechanics that Dirac had pioneered. The two even shared (30 years apart) the same physics teacher, one Mr Willis, who must have done something right.
Higgs’ achievements were not quite in the same league as those of Dirac — a reflection of just how good Dirac, widely regarded as the greatest physicist born in the twentieth century, was, rather than of any feebleness in Higgs’ contribution. Higgs’ career followed a fairly conventional route through the academic ranks to end up as a Professor in Edinburgh, publishing his quota of scientific papers along the way. His epiphany, beautifully explained by Close, came in 1964, and was presented in papers published that year and in 1966. It was, according to Higgs, the only really good scientific idea he ever had. He told Close that “the portion of my life for which I am known is rather small — three weeks in the summer of 1964.” But the discovery made in those three weeks was based on a lifetime developing an understanding of his subject, and one good idea is all you need, if it is a really good idea, to win a Nobel Prize.
Elusive works as a biography of Peter Higgs, as a chronicle of one of the greatest intellectual advances in human history, and best of all as an answer to anyone who asks why we should bother to carry out experiments like those at CERN. Buy it.

John Gribbin is a Senior Research Fellow at the University of Sussex and author of Six Impossible Things: The Mysteries of the Subatomic World.

The Origins of the Big Bang Idea

A review of mine for the Washington Post; the style quirks (eg “Mr”) are theirs!

Flashes of Creation
Paul Halpern

The idea that there was a definite beginning to the universe as we know it, at a time we can calculate, is well established as part of what “everybody knows.” It may therefore come as a shock to be reminded that the idea itself—the Big Bang—is less than a century old, and that its acceptance as the best explanation for our observations of the cosmos dates back little more than 50 years.
For many years, the idea of a beginning seemed so ludicrous to many astronomers that a rival idea, that of an eternal or “steady-state” universe which had always existed and always would exist, seemed much more attractive. With “Flashes of Creation,” Paul Halpern, a professor of physics at Philadelphia’s University of the Sciences and author of multiple books on the history of his field, has had the bright idea of explaining how the Big Bang concept became established by weaving together the biographical stories of the larger-than-life characters who carried on the debate. The rival ideas were promoted in the 1950s by George Gamow in the United States (favoring the Big Bang) and Fred Hoyle in Britain (favoring the steady state). The Odessa-born Gamow was a pioneer in the study of atomic nuclei, while Hoyle famously theorized the formation of elements in stellar furnaces. Both reached far beyond the academic world via popular writing and broadcasts.
Two things make the book stand out, apart from the clear and accessible writing that we have come to expect from Mr. Halpern. First, it rehabilitates the steady-state idea, which is sometimes looked back on with the benefit of hindsight as a cranky notion that flew in the face of the evidence. Far from it: The two rival cosmologies were for a long time on an approximately equal footing, and until the early 1960s the evidence tilted the balance in favor of the steady state. The historical perspective of “Flashes of Creation” highlights the importance of debating scientific issues and not jumping to premature conclusions. The second standout feature of this book deals with the way ideas were developed in those simpler days, only a couple of generations ago, when important insights could come from individuals essentially working alone.
What is now seen as the decisive factor in tilting the balance of cosmological thinking in favor of the Big Bang idea came in 1964, when radio astronomers Arno Penzias and Robert Wilson accidentally discovered the weak hiss of radio noise from everywhere in the sky that became regarded as the “echo” of the Big Bang. This background radiation had actually been predicted by two of Gamow’s younger colleagues, Ralph Alpher and Robert Herman, almost twenty years earlier, but their work had been forgotten. Ironically, Penzias and Wilson had themselves been supporters of the steady-state model!
Halpern suggests this discovery produced the triumphant instant recognition of the accuracy of the Big Bang model by all except a few die-hard steady-staters, but the truth is a little more subtle. It took a while for the experts to be fully persuaded, and one of the key additional pieces of evidence, which Halpern mentions but does not give due emphasis to, came a little later in the 1960s, when a team headed by Robert Wagoner calculated how the lightest elements could have been manufactured from hydrogen in the Big Bang. I was present in 1967 at a talk in Cambridge where Wagoner explained these results, and this was just as significant a breakthrough as the work of Penzias and Wilson.
Which is where there is another twist in the tale. This work on how those elements were first created in the moments after the Big Bang—what is now known as “primordial nucleosynthesis”—was developed from studies of the way heavier elements (carbon and everything heavier than carbon) are built up by nuclear reactions inside stars. That work came to be called “stellar nucleosynthesis”. The key insight had come from Fred Hoyle, who had predicted a certain property of carbon which made all this possible. He needed someone to test whether his prediction was right, and suggested a suitable experiment to physicist William Fowler at Caltech. As Halpern puts it, “Hoyle consulted Fowler to see whether the idea might be testable. Fowler agreed to try . . .” He makes it sound so simple! The truth, however, is much more interesting. Fowler initially thought Hoyle was crazy, and declined in no uncertain terms to waste time carrying out the experiment. But Hoyle kept badgering him and in the end, as Fowler recalled to me, “I said I would do it just to shut Fred up,” not expecting that the prediction would be confirmed. But it was.
Over the next few years, Hoyle, Fowler and the husband and wife team of Margaret and Geoffrey Burbidge worked out how almost all the elements have been manufactured inside stars. Their work is why we know that we are all literally made of stardust. This is clearly a discovery that deserved a Nobel Prize. One of the many errors made by the Nobel committee over the years, however, is that Fowler (who initially told Hoyle his idea was crazy and to go away) alone received the physics prize for this work, while Hoyle, who had the key insight, was ignored.
The main problem with “Flashes of Creation” is that it is far too short to do justice to such a big story, and this is presumably the reason why some of its details are handled rather superficially. In particular, the work which first made people notice Gamow, a process known as quantum tunneling, is handled in a surprisingly confusing fashion for a writer with a training in science. Mr. Halpern also buys into a few items of popular mythology which have long since been debunked. For example, Edwin Hubble, the codiscoverer of the expansion of the universe, polished his image with exaggerated stories of his prowess as a sportsman, which Halpern has swallowed. More seriously, Halpern repeats the canard that Hoyle came up with the name Big Bang as a term of derision for the rival to his favored idea. In fact, as Hoyle confirmed to me, while writing the script for a radio broadcast he needed a snappy expression to balance “steady state,” and came up with this alliterative pair—an account which rings true to any broadcaster. I was, though, particularly pleased to see due prominence given here to the recent discovery (by Irish researcher Cormac O’Raifeartaigh) of an unpublished paper by Einstein, written in 1931, which contained the first mathematical description of what became the steady-state model of the universe. So much for it being a cranky idea not worth taking seriously.

“Flashes of Creation” is a readable and mostly accurate account of one of the most significant eras in the development of our understanding of the universe. But independent of its actual subject matter, the most important message to take away is that science proceeds not as an orderly progression of insights and discoveries, but as an often messy confrontation with the complexity of the universe.

John Gribbin is a Senior Research Fellow in Astronomy at the University of Sussex. His many books include In Search of the Big Bang.

Another review

The Man From the Future:
The Visionary Life of John Von Neumann
Ananyo Bhattacharya
Allen Lane

John Gribbin

John von Neumann is widely regarded by his scientific peers as the greatest genius born in the twentieth century. A combination of his intellect and his Hungarian origins (he started life, in 1903, as Neumann János Lajos) led colleagues to jokingly refer to him as a Martian, or a time traveller from the future. He made seminal contributions to mathematics, quantum theory, the development of nuclear weapons, the birth of the modern computer, game theory, and evolutionary biology, while living through the turbulent decades involving two hot wars and one cold war. Yet to the wider public he is not as well known as these achievements justify — certainly not as well known as Richard Feynman, although von Neumann was an equally colourful character. Ananyo Bhattacharya attempts to rectify this, and succeeds on one level while just missing the target on another.
The success is in the science. The author is a first-class science writer with an impeccable pedigree embracing stints at the Economist and Nature, and he does the best job I have seen of explaining the significance of von Neumann’s work across so many different fields. He is so enthusiastic about the science, however, that he often goes far beyond von Neumann’s direct contribution to bring the story of — say — game theory up to date, so that the reader almost forgets that this is supposed to be a book about von Neumann. So the near miss is the failure to bring his human subject to life. We get the facts, and some familiar anecdotes, but no real feel for the man himself. A more accurate subtitle would be “The Visionary Science of John Von Neumann”.
Here, one of those anecdotes will suffice to set von Neumann’s genius in context. When he left school in 1921, his father wanted to steer him to a career where he could make a living, but he wanted to study mathematics, perceived by his father as an impractical waste of time. So he did both, shuttling between the University of Berlin (and later the Swiss Federal Institute of Technology in Zurich), where he studied for a degree in chemistry, and the University of Budapest, where he worked for a PhD in mathematics. He moved to the United States in 1930, and became known as John von Neumann at the age of 29. He was initially on a short-term appointment at the Institute for Advanced Study in Princeton, but was happy to make his move permanent when Hitler came to power in Germany.
If you are interested in the twentieth-century development of the science that impacts all our lives today, there is no better place to seek information. This is also a good place to get the details on von Neumann’s most famous mistake. In 1932, the great man wrote a book summarising the state of knowledge about quantum mechanics, and among other things ruling out a whole class of explanations of quantum phenomena. The ruling out was based on what, by his standards, was a trivial error, which was spotted by a junior German mathematical philosopher, Grete Hermann. But the physics world was so in awe of von Neumann that for three decades everybody else accepted his word, without checking the arithmetic. It was only when John Bell drew attention to the mistake that quantum physics took off in a new direction, leading directly to modern quantum computers.
But long before then, von Neumann himself had moved on. His own interest in computers came through the need for such machines in calculating the design of nuclear weapons, and Bhattacharya does an excellent job of tracing this development, in which von Neumann led the way to the construction of the first American programmable computer. In one of my few quibbles with the book, however, I can find no mention of the earlier British machine COLOSSUS, built at Bletchley Park for the codebreakers. Alan Turing, though, does feature in his rightful place in the story.
Turing and von Neumann shared another common interest, literally the meaning of life. Turing’s tragic death cut short his contribution to this field, but von Neumann introduced the idea of “self-reproducing automata”, which are now very near to becoming reality in the form of 3D printers that can print the parts to make other 3D printers. This is tantalisingly close to one of my favourite science-fiction tropes, spacefaring probes that can travel to the planets of other stars where they build replicas of themselves to continue the exploration of the Galaxy. Such spaceprobes are now known as “von Neumann machines”; starting with just one such machine, limited to travelling at much less than the speed of light, it would be possible to explore every planetary system in our Galaxy in a few million years. Which raises the “Fermi paradox” — why haven’t we been visited? One bleak possible answer is highlighted in Philip K. Dick’s story, referenced by Bhattacharya, about automatic factories consuming the Earth’s resources to make things nobody needs, including more automatic factories.
Von Neumann died in February 1957, failing by a few months to live to see the launch of what may one day be remembered as the ancestor of those Galaxy-exploring self-reproducing automata — if Dick’s “autofacs” don’t consume everything first. This book is a fine tribute to his genius and his contributions to science, but it contains far more science than life. If that floats your boat, I strongly recommend it.

John Gribbin is an Honorary Senior Research Fellow at the University of Sussex and author of Computing With Quantum Cats (Black Swan)

From the Literary Review, October 2021

How far can we see?

This is the slightly adapted text of an article I wrote for the magazine Popular Astronomy,   https://www.popastro.com/

How far can we see?

John Gribbin overturns some misconceptions about the size of the Universe and the way we measure it.

How far can we see? I don’t mean with the unaided human eye, but how far can astronomers “see” using the best detectors available? The most remote electromagnetic radiation reaching the Earth is the cosmic background radiation, which derives from hot gas at a temperature about the same as that of the surface of the Sun today, and was emitted a few hundred thousand years after the Big Bang. As the Big Bang happened 13.8 billion years ago, that radiation has been travelling across space to us for very nearly 13.8 billion years. And since electromagnetic radiation travels at the speed of light, that must mean that those hot clouds are just under 13.8 billion light years away, must it not? Well, no, actually. During all the time the radiation has been on its way to us, the Universe has been expanding. So those clouds are now a lot farther away than this naive calculation suggests.
A simple analogy gives an image of what is going on. Imagine that you are standing somewhere in the middle of one of those long travelling walkways found in places such as airports, but the travellator has not been switched on. Mark the spot where you are standing with a blob of paint. I am standing at the end of the travellator, and when it is turned on the pavement starts to carry you away from me. But you walk briskly towards me, overcoming the speed of the walkway, and when you get to me the travellator is turned off. By that time, the blob of paint is much farther away from me than it was when you set out. That is the “proper distance” to the paint blob now, not the distance it was from me when you started walking. And the reason light from distant objects is redshifted is that it has become stretched by space expanding as it moves through it. The cosmological redshift is not a Doppler effect.

When the appropriate numbers are put into the cosmological equations (including allowance for the fact that until recently, in cosmic time terms, the expansion of the Universe was slowing down but now it seems to be speeding up) we find that the particles from which the cosmic background radiation was emitted are now about 45.7 billion light years away; that is their proper distance (also called the comoving radial distance). So that is as far as astronomers can see today.

If we could detect the Big Bang itself (which might one day be possible using gravitational radiation), we would be looking at something 46.6 billion light years away. There are slight differences depending on which cosmological model you prefer, but in round numbers this tells us that the distance to the edge of the observable Universe is 46 billion light years. That defines the cosmological horizon (also called the comoving horizon, or the particle horizon). So the bubble of space that we can observe is about 92 billion light years across.
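If you want to check numbers like these for yourself, the standard cosmological integrals are built into the astropy Python library. The short sketch below is my addition, not part of the original article, and it assumes the Planck 2018 parameter set; other reasonable parameter choices shift the answers by a per cent or so, which is why slightly different figures are quoted in different places.

# A minimal sketch (my addition, assuming astropy's built-in Planck 2018 model)
# estimating the comoving (proper) distance to the last-scattering surface,
# the source of the cosmic background radiation.
from astropy.cosmology import Planck18
import astropy.units as u

z_cmb = 1090                                   # approximate redshift of the last-scattering surface
d = Planck18.comoving_distance(z_cmb)          # returned in megaparsecs
print(d)                                       # roughly 14,000 Mpc
print(d.to(u.lyr).value / 1e9, "billion light years")   # roughly 45-46, as quoted above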

The comoving horizon is the farthest distance from which we can receive information. But it does not mean that we are at the centre of the Universe, any more than the existence of a visible horizon for people on board a ship somewhere in the Pacific Ocean means that the ship is at the centre of the ocean. As with mariners in the Pacific, an observer near the edge of “our” bubble would be able to see as far as us in our direction, and the same distance in the opposite direction, beyond our horizon. If there were any observers looking in our direction from near the edge of our horizon, however, they would not see the Milky Way as it is today, but as it was nearly 13.8 billion years ago – they would detect the cosmic background radiation from the clouds of hot gas from which galaxies, stars, planets and ourselves would eventually form. And unlike the horizon seen from a ship in the Pacific Ocean, the cosmological horizon continues to move outward as the Universe expands – the “travellator” has not stopped moving.

There is, however, another horizon that limits what even the most technologically advanced future civilization would be able to see. This is the cosmic event horizon, and it is defined as the farthest distance from us now from which light being emitted today will ever reach us (sometimes called “the future visibility limit”). This is about 61 billion light years. Anything farther away from us today will be carried away by the expansion of the Universe faster than the speed of light, so no information from that region will ever reach us. The present horizon is at about three-quarters of this distance. Leaving aside any practical problems of observation, this means that the total number of galaxies that will ever be detectable from Earth is only about twice the number already observed.

Even this is not the end of the story. If the expansion of the Universe is indeed accelerating at present as observations suggest, and the acceleration continues indefinitely, even the galaxies we see today will fade away as they become increasingly redshifted – the light waves become stretched flat. Everything with redshifts in the range from 5 to 10 today (see the box – The meaning of z) will be invisible within four to six billion years, roughly by the time the Sun becomes a red giant. If life is now just getting started on a planet like the Earth orbiting a star like the Sun, and evolution proceeds at the same pace as on Earth, by the time there is a civilization on that planet capable of studying the Universe at large, there will be a lot less for them to study, no matter how good their telescopes are.

All of which brings me back to where I started this story, and to a pet hate. When astronomers identify extremely remote objects, the distance to those objects is widely misrepresented. For example, there is an object called GRB 090423, a gamma-ray burst detected in 2009 in a galaxy at a redshift of 8.1. This object will be undetectable when the Sun is a red giant. Redshift is used as a standard measure of distance, but for such large redshifts it is better interpreted as a measure of how close in time the event was to the Big Bang. The gamma-ray burst’s redshift implies that it occurred about 13.2 billion years ago. So the popular media (and some naughty professional astronomers) interpreted this as meaning a distance of 13.2 billion light years. In fact, the object is now about 32 billion light years away (and a little bit farther away in 2021 than it was in 2009). I have no hope of persuading the world at large to change their habits in this regard, but I trust that readers of Popular Astronomy will now be wary of falling into this trap. If, however, you want to correct such mistakes for yourself when you see misleading announcements of record-breaking redshifts in lesser publications, check out this nifty online resource: http://www.astro.ucla.edu/~wright/ACC.html, Ned Wright’s proper distance calculator. This allows you to change all the relevant factors; but leave all other figures as they are shown, just change the value of z for an object, and click on “General” to get its age after the Big Bang.
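For GRB 090423 itself, the same little astropy sketch shown earlier can be reused (again assuming the Planck 2018 parameters, which give a figure close to, though not identical with, the one quoted above):

# The same check applied to the gamma-ray burst discussed in the text.
from astropy.cosmology import Planck18
import astropy.units as u

z = 8.1                                              # redshift quoted for GRB 090423
print(Planck18.lookback_time(z))                     # ~13 billion years: how long the light has travelled
print(Planck18.comoving_distance(z).to(u.lyr).value / 1e9)   # ~30 billion light years: its proper distance now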

John Gribbin is an Honorary Senior Research Fellow in Astronomy at the University of Sussex. His many books include 13.8: The Quest to Find the True Age of the Universe (Icon)

Box copy
The meaning of z
Redshift is a result of the stretching of wavelengths of light (or other electromagnetic radiation), which for visible light moves features such as spectral lines from the blue end of the spectrum towards the red end – hence the name. One way to produce a redshift is if the object emitting the light is moving away from you. This is a Doppler effect, similar to the way the sound of a siren on an emergency vehicle changes as it passes you. It is caused by objects moving through space. The redshifts seen in the light from distant galaxies are, however, produced in a different way, not by the Doppler effect. They are caused by the space between us and those galaxies stretching as the Universe expands, and thereby stretching the light waves on their way to us.

Cosmologists use the symbol z to refer to redshift. It is defined in terms of how much the spectral lines are shifted towards the red. The great value of this parameter is that the distance to a galaxy is related to its redshift, so by measuring redshift cosmologists can work out how far away galaxies are; the bigger the redshift, the more distant a galaxy is. And it is relatively easy to measure. The cosmological redshift is equal to the observed wavelength of a particular line in the spectrum, minus the wavelength of the equivalent line for an object at rest in the laboratory, all divided by the “rest” wavelength. In a standard notation:
1 + z = λ_obs/λ_rest
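For instance, the Lyman-alpha line of hydrogen has a rest wavelength of about 121.6 nanometres; if that line turns up in a spectrum at roughly 1,100 nanometres, the ratio λ_obs/λ_rest is about 9, corresponding to a redshift z of around 8 – the sort of value measured for GRB 090423.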
Although the cosmological redshift is not a Doppler effect, cosmologists use the term “recession velocity” to refer to how fast galaxies seem to be moving away from us. If such “velocities” are much less than the speed of light (c), which is true for nearby galaxies, the cosmological redshift (z) is related to recession velocity (v) by the simple equation:

z~v/c

(where the wiggly symbol means “approximately equal to”).
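For example, a galaxy receding at 3,000 kilometres per second – one per cent of the speed of light – has a redshift z of roughly 0.01.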

But for larger redshifts, corresponding to more distant galaxies, the relationship is more complicated (too complicated to go into here) and velocities have to be calculated using the general theory of relativity. The calculation gives us both the velocities and the distances to distant galaxies in terms of z. Equally important, because of the time it takes for light to travel across space we are seeing distant objects as they were long ago, and the redshift also tells us the “lookback time” to objects such as GRB 090423.

One of the superficially surprising things about this kind of calculation is that it can give “velocities” greater than the speed of light. But this doesn’t break the famous ultimate speed limit of c in the special theory of relativity because nothing is moving through space at that speed. So beware of anyone who tells you that cosmological redshift is a Doppler effect!

A Matter of Not So Much Gravity

Another of my book reviews

Life After Gravity

Patricia Fara

This book does not begin well.  On the second page of the Prologue we find a reference to “1665, when Newton was a toddler”.  Since he was born in 1643, this does not instil in the reader confidence in the authority of the writer.  A few pages further on, regarding Isaac Newton’s time at the Mint, we are told “unlike now, the standard was set not by gold but by silver”.  It would appear that Patricia Fara is unaware that Britain abandoned the gold standard in 1931.  Fortunately, the book gets better, although these early alarms leave a nagging doubt about the scrupulousness of the author’s research.

The conceit around which Life After Gravity is structured is the idea that although Isaac Newton is famous as a scientist and mathematician, “biographers often glide over” his later life in London as a prominent member of fashionable society.  Up to a point, although even some of the biographies cited in this text give full space to this second phase of his life.  Nevertheless, it provides a peg for an account of life in early eighteenth-century London which is interesting in its own right, and does not really need the Newton connection as an excuse.

There is nothing here which is new, but by weaving together old threads into a different tapestry Fara has produced an enjoyable little book which provides an easy introduction for anyone unfamiliar with the story.  By dividing the story into thematic sections on topics such as family, the Royal Society, the Mint, and lifestyle she has made it easy to dip into, at the minor cost of sometimes obscuring the chronology.  The obvious missing ingredient, although it is covered briefly, is a detailed discussion of the probable connection between Newton’s dramatic career change and his mental breakdown in the early 1690s, which may itself have been linked to an unrequited homosexual love affair and which produced a change in personality preceding the move to London.

A major advantage of the thematic structure is that this provides an excuse early on to tell the story of Newton’s niece and housekeeper, “pretty Kitty” Barton, who was a favourite of London society and almost certainly the mistress of Charles Montagu (Earl Halifax), who left her a fortune in his will as a token of (to the amusement of his contemporaries) “the great Esteem he had for her Wit and most exquisite Understanding.”

Newton’s own fortune at the time he died is estimated as £32,000, which, Fara suggests, would be equivalent to £80,000,000 today, on an income-based index.  This, astute readers will have gleaned, was not derived from his income as a Cambridge academic.  Much of it came from his work at the Mint; but Voltaire, who met Kitty, suggested that this was not all.  “Newton had a very charming niece . . . who made a conquest of the minister Halifax.  Fluxions [calculus] and gravitation would have been of no use without a pretty niece”.

If this hints at a darker side to Newton’s character than popular mythology might suggest, there is plenty more to indicate that he was not a person you would want to get on the wrong side of.  At the Mint, he was responsible for a complete recoinage of the silver specie, in the course of which he ruthlessly pursued counterfeiters and took great delight in ensuring that they were punished with the full weight of the law – if anything, Fara downplays his enthusiasm for this task.  It was, incidentally, Newton who recommended to Parliament in 1717 that the value of a gold Guinea piece should be set at 21 shillings (£1.05 in decimal currency, although the guinea was abolished as a unit of British currency with decimalisation in 1971).  His knighthood, of course, was not conferred in recognition of his scientific achievements, nor for his work at the Mint, but as part of a sordid political ploy in an unsuccessful attempt by Queen Anne to get him elected as an MP representing the party she favoured.

Newton’s dark side is more familiar in a scientific context, where, Fara notes, he “was a serial slanderer: as soon as he had vanquished one opponent he moved on to the next”.  The list includes Robert Hooke, whose reputation probably suffered more than anyone’s from this treatment, John Flamsteed (the first Astronomer Royal), and Gottfried Leibniz, the independent co-discoverer of the mathematics of calculus (fluxions).  Newton was able to pursue these vendettas from a position of power as President of the Royal Society, and I would have liked to see more detail of his contributions, both good and bad, in this capacity.  But the account given here of the birth of the Society is so superficial (Oxford, we are told, was “a buzzy city”) and confusing that perhaps it is just as well that there isn’t more.

For me, the most interesting part of the book (alas, far too short) is the section which draws parallels between the Royal Society and the “Company of Royal Adventurers Trading into Africa” (the Royal Company).  “The twinned royal sisters were linked together by their political leaders and their global aims.  To gain knowledge was to gain power – and that was a national aspiration.”  Now there is the basis for a cracking good book about the rise of Britain as a global power.

Fara claims in an Epilogue to be “presenting novel arguments about such an iconic figurehead” at the risk of arousing “bitter antagonism”.  As far as I am concerned, she has nothing to fear.  The arguments do not seem particularly novel, nor likely to rouse antagonism. The fact, for example, that some of Newton’s financial investments benefited from the slave trade is hardly surprising. As she says, “[He] was just one among many wealthy Europeans who were complicit in exploiting other peoples and places for their own financial gain”.  Plus ça change.

     I didn’t learn anything about Newton from Life After Gravity, but I enjoyed being reminded of things half-remembered.  I wouldn’t urge anyone to lay out £25 for it; it is too expensive for a “popular” book, and not learned enough for an academic tome.  But if you don’t know much about upper-class London life between about 1700 and 1750 and have the patience to wait for a paperback, give it a go.

John Gribbin is an Honorary Senior Research Fellow in Astronomy at the University of Sussex and author of The Fellowship (Penguin).

Easy as Pi

An old story of mine, posted here in response to a request from a friend.  Also available (with its prequel) in my collection “Don’t Look Back”.

Easy as Pi
John Gribbin
This story is self-contained, but set in the same universe as Artifact, published in the first volume of Tales From the Perseus Arm. Put the two together, and what you get should be greater than the sum of the parts. And, yes, there is an “author’s message” here; see my book In Search of the Multiverse.

The student knocked on the open door and hesitated, waiting for permission to cross the threshold.
“It’s open,” came the exasperated response.
He shuffled inside, holding out the Turing.
“Uh, I thought you ought to see this. It’s, well, weird . . . ”
The grey-haired man pushed his chair back from the desk and reached for the Turing. He peered at it over the top of his old-fashioned glasses, muttering inaudibly. The University, for reasons he had never been able to fathom, required all arts students to carry out a science project, which meant someone in the science faculty had to supervise them. A complete waste of time for both parties. As ridiculous as the requirement for science students to do a critical analysis of a novel. So Timmins had come up with a painless (for him) solution. The pi project. Give him a student with time to waste, and he would set the victim the task of calculating the next hundred thousand or so digits of pi. You couldn’t deny it was scientific, and at a pinch you could even say it was original, since each student carried on where the previous one had left off. And since pi was irrational, every student got a different set of digits to play with, even though they were now well into the trillions – he neither knew nor cared how far into the trillions.
Every student got a different set of digits, except this idiot.
The string of numbers filled the display, but the beginning was all too familiar:
3141592653589793 . . .
He leaned back in his chair, pushing his glasses up in order to rub his eyes.
“You were supposed, Omero, to start where Phillips left off. Not at the beginning.”
“But I did. This string starts about 87,000 places into the run. And it carries on like that. What does it mean?”
“It means you pressed the wrong button. Go away and check.”
“I did.” This one was stubborn. “It repeats from the beginning, at least ten thousand digits.”
“Then check it again. Check a hundred thousand digits. And don’t come back until you’ve found the mistake.”
Reluctantly, the student turned to go, automatically reaching for the door handle.
“And don’t shut the door!”
Timmins turned back to the screen in front of him, pushing the glasses back up his nose to their proper position. He had his own computer code to worry about. Simulating star formation, if only he could make it work. There was a problem with truncation in the core collapse code. It had a tendency towards chaos – if you made a tiny change in the value of the parameters, it had a big effect on the outcome. That’s the trouble with simulations, he thought – nature “knows” the values to an infinite number of places; we have to truncate the parameters.
Infinity. Something was nagging at the back of his mind. The Book of Infinity – who was it wrote that? Graves?
He opened a new window and did a search – Graves, infinity, simulation, universe.
There it was. The paragraph sat there innocently, its message unambiguous.
How could we tell if the Universe we live in is a computer simulation, like the world of the Matrix movies? The difference between a simulation and what we call reality is that simulations are approximate. They can be made as good as you like, if you have enough memory, but they can never be made perfect. An irrational number like pi can only be perfectly expressed as an infinite string of digits, which would fill up the memory of any computer on its own, and leave no room for anything else. Even the best computer does not have infinite capacity, so the programmers of a universal simulation would have to make approximations. For example, they might truncate the values of the constants used in fundamental calculations – things like e, or pi. Or the simulation might become regular instead of irrational after a high number of digits. If anyone ever finds such a regularity in one of these constants, it will be the smoking gun that tells us that nothing is real.
The smoking gun. Of course, Graves was a notorious joker. He hadn’t meant to be taken seriously. Had he?
Reluctantly, Timmins accessed Omero’s project, watching the numbers ticking up as they were computed. How many should he let accumulate before checking the string against the first – what, million? – digits of pi? And suppose somebody else was on to it. He suddenly felt a sense of urgency. Even a simulated Nobel Prize would be worth having, if you were a simulation yourself.