Monday, 8 February 2016

Black hole firewall paradox? Frankly, my dear, I don't give a damn

Image based on NASA image, credit ESA/NASA/SOHO
As someone who writes about physics and cosmology I occasionally get asked my opinion on something like the black hole firewall paradox. If I'm brutally honest (which I rarely am, because I'm far too polite) I will reply: 'I don't know. I don't care. It bores me stiff.'

In case you aren't sure what the paradox is, it emerges from a combination of quantum theory and general relativity (which don't go together, but hey), and relies on piling about four levels of mathematical supposition on top of each other. The conclusion: the information that could be considered to exist on the event horizon of a black hole can't (as had been hypothesised) represent all the information in the 3D interior with gravity included, and 'therefore' something passing through the event horizon would burn up. Simples.

This topic involves theorising about a phenomenon that almost certainly doesn't exist in the real universe, using physics that almost certainly doesn't apply. Now, medieval theologians are often maligned by suggesting they wasted their careers arguing how many angels could dance on the head of a pin. They didn't - it's a myth. But physicists really have spent a reasonable amount of time and effort on this modern day equivalent. 

Personally I'm much more interested in science that helps us understand phenomena we know exist than I am in mathematically driven flights of fantasy. Show me some observational or experimental evidence for a firewall and I will get excited. But stare at your navel and make it up and I really don't care. 

Don't get me wrong. I'm not saying that theoreticians should be prevented from playing around with these ideas, just as mathematicians shouldn't be stopped from thinking about mathematical structures that have no relevance to the real world. But I do think us science writers give far too much exposure to this kind of thing.

So, how many angels do you reckon could dance on the head of a pin?

Monday, 1 February 2016

What would you add?

I have just been moved into a new office at Bristol, with the luxury of a whiteboard, which I felt I ought to fill with amusing/meaningful quotes on writing etc. What would you add?

The strange case of ethnicity and nationality on the screen

I was thinking on my walk to university about how different modern screen actors are from those in my youth. Back then, any attempt at a different accent was fraught with difficulties. I have to confess to having a bit of a thing for Hayley Mills when I was about 11, but I found it hard to forgive her for her attempts at Yorkshire and American accents. And who can forget the 'delights' of Dick van Dyke's cockney? Yet now you never know if an actor is Australian, American or British - they all seem to do accents near-perfectly.

However, that is only indirectly the topic of this post. We rightly are now repelled by white actors 'blacking up' for non-white roles. Try watching Peter Sellers or Spike Milligan doing 'Indian', for instance. And I can totally understand the raised eyebrows when a white actor was recently cast as Michael Jackson. But why, I wonder, do we ignore other situations where actors pretend to be of a race or nationality that they aren't?

This is where we get back to those accents. Okay, modern actors mostly can do very good accents from different countries - but is it acceptable for them to do the equivalent of 'blacking up' in this way? There's an even stronger argument with red hair. As someone (formerly) with red hair, I'm well aware that it has ethnic origins. Yet actors with no appropriate ethnicity often dye their hair red in films. Is that acceptable?

Let's be clear. I'm not saying this as an apologist for white people playing black roles (or vice-versa). I don't think that's usually acceptable (there should surely be exceptions for parody etc.). But I genuinely ask, assuming that this isn't in the best possible taste, why it doesn't also apply to Americans playing Brits or brown-hairs playing redheads.

Friday, 29 January 2016

Another competitor for 'overblown science headline of the year'

Thanks to Ian Bald for pointing out the impressive headline 'The Death of Relativity Lurks in a Black Hole's Shadow' in Wired.

What's so impressive here is just how much it's possible to get wrong in a single headline. Black holes, of course, don't have 'shadows.' I think what they mean is its event horizon, though the article is so fuzzy it's difficult to be sure.

However, the real shocker is the apparent claim that general relativity is dead. Here's the thing. No it isn't. What the article actually says is that if a black hole's 'shadow' (event horizon?) isn't perfectly spherical or isn't just the right size for its mass, then general relativity's predictions would be wrong. Well, duh. This would also be true if it were pink or singing the Stars and Stripes. Note, however, that no one has discovered that its shape or size is different from prediction. (Or that it's pink.) They're just saying that we might be close to being able to make a measurement to see if it lives up to prediction. That's all.

Even if there is a disparity, as the article says 'If Einstein is wrong, general relativity won’t go away—it’s too good at what it does. It just won’t be the whole story anymore.' Right. And that fits with the headline how?

I appreciate editors want headlines that grab people's attention, but if they are going to deviate so far from the facts in order to do so, why not go the whole hog? I look forward to the headline on an article about a new extrasolar planet, where the story is that it's about the right size for life, to read:


Why not? It makes as much sense.

Thursday, 28 January 2016

Hover cars, tri-vees and v-phones

I've just re-read a science fiction book I quite liked as a teenager. Called Prisoner of Fire, it's by the now largely forgotten British science fiction author Edmund Cooper. Back in the 70s he was quite popular and wrote a whole string of SF novels, but, to be honest, I can see why he's largely forgotten. The writing style seems from a different era, mannered and dated.

However, Cooper's ideas are still interesting. The topic of the book is the common enough SF trope of paranormals - it features a number of children with mental abilities, able to read minds, to block other telepaths, or to kill remotely. The reason it's interesting is that Cooper examines what such a capability would mean for governments, both in terms of protecting themselves and espionage, and how it could lead to an 'end justifies the means' attitude to the young telepaths as weapons.

That's not why I bring the book up now, though, so forgive the long introduction. Prisoner of Fire, written in the 1970s and set in the 1990s, wildly over-predicts technological advances. This was a common problem in the 60s and 70s. Things had moved on so much since the Second World War that it was assumed changes would be even more dramatic in the next 20-30 years. (Think how much 2001: A Space Odyssey from the late 60s overshoots what 2001 was like.) So Cooper merrily deployed hover cars, tri-vees and v-phones.

As I point out in Ten Billion Tomorrows, the problem with hover cars and tri-vees (in case you hadn't guessed, TVs that project a 3D image into empty space, a bit like the Princess Leia bit in the first Star Wars) is that the authors weren't really thinking through the technological advances required. We just don't know how to practically make a car that floats or to project a hologram onto thin air. However, the v-phone is a more interesting one because we effectively have the technology but rarely use it.

I assume v-phones were video phones (it's never explicitly explained). These days, pretty well any smartphone or internet-connected computer is, in effect, a video phone. Using Skype or FaceTime, we can make video calls. And occasionally, for example, when family is on the other side of the world, they are very effective. But for 99% of our calling we don't use them. Because they feel strangely unnatural. The assumption has always been (video phones have been talked about since Edison and Tesla's day) that we would get the same benefits from a video call as a face-to-face conversation. And this can work with a sophisticated video conferencing setup. But for a chat on the phone it's a disaster.

There seem to be a number of reasons for this. One is that a smartphone is too up close and personal. It doesn't seem quite as bad on a computer where you can sit well back from the camera, because the view will take in a fair amount of the room. But on a smartphone video call, the other person's face pretty much fills the screen. To have an actual conversation with a person whose face fills your vision to that extent you would need to be around 10cm away from their face - a position that we just don't have conversations in, even with intimates, let alone strangers.

Another difficulty is focus. Although good listeners spend a lot of time looking at the person they are talking to, they also look away a fair amount, if only for fractions of a second. A solid focus on someone's face is intimidating. But in a video call, the other person is pretty much constantly looking straight at you.

Finally (I'm sure there are more issues, but these are the three that occurred to me), we don't always want the exposure that comes with being seen. A telephone is useful for many conversations precisely because we don't want to give too much away, whether it's because we're answering the phone in our pyjamas, because the room is a mess, or because it makes it easier to lie.

So poor old Edmund failed on all three, but the v-phones were arguably the most interesting fail because it's technology we can use, yet mostly choose not to.

Wednesday, 27 January 2016

Fascinating mangling of falsification

I have just read an article (don't ask me why - this is the wonder of Facebook) which tried to defend Mormonism from the worrying details of its origins. The piece included this:
Many intellectuals argue that “negative evidence” is supreme. To understand what they mean by this, consider the hypothesis that “all swans are white.” According to these intellectuals, it doesn’t matter how many white swans you find, you never really prove that “all” swans are white. However, as soon as you find one black swan, you have disproved the theory that “all swans are white.” They conclude that positive evidence doesn’t ever really prove anything, but negative evidence can. And it’s easy to see why they think that way. 
This is the approach that ex-Mormons have taken to their faith. In the face of unsettling information, they disregard all of the positive evidence because they think that a few points of negative evidence is sufficient to end the discussion. And given how logical the above reasoning seems to be, it is no wonder why. But they are still wrong. 
To understand why, consider another example. After first discovering the planet Uranus, astronomers attempted to predict its orbit by using Sir Isaac Newton’s laws of physics. They could observe the orbit of Uranus with their own eyes, but when they used Newton’s mathematical models to predict that orbit, they failed time and again. It made no sense. Newton’s laws had been right about so many things, but astronomers had found a case in which Newton’s laws did not work. So, was Newton wrong? Were his laws not quite as infallible as they had seemed? In light of this “negative evidence,” it would have been easy to conclude just that. 
However, years later, astronomers discovered another planet, Neptune. And as it turns out, when astronomers accounted for the mass of this newly discovered planet, Newton’s laws predicted the orbit of Uranus perfectly. So, as it turned out, it wasn’t that Newton’s laws of physics didn’t work. It was that they didn’t seem to work. And that’s because the astronomers simply didn’t have all the relevant information and context.
There's so much to get your teeth into here, but we'll pick out two key points. First there's the ad hominem attack. 'Many intellectuals argue... According to these intellectuals... and it's easy to see why they think this way.' Implication: intellectuals don't know what they are talking about. Don't listen to them. Note particularly 'According to these intellectuals, it doesn't matter how many white swans you find.' Forget 'According to intellectuals.' It's just true. It doesn't matter how many white swans you find. All swans are not white. Are they arguing otherwise?

However, no one suggests that falsification is usefully applicable to everything. Which is why it's odd that they then give an example where it isn't properly used. All scientific evidence is provisional. The black swan disproves the 'all swans are white' hypothesis, and that is the best data at the time and the only sensible viewpoint. But should it later prove that the 'black swan' was an unusual variant of goose and not a swan at all, the hypothesis could recover. However, the Newton example used in the extract from the article above fails on a number of counts.

First, the orbit of Uranus didn't show that 'Newton's laws of physics don't work'; it showed that they didn't apply in that circumstance. There are plenty of other examples (Mercury's orbit, for instance) where they will never apply. As it happened, in the case of Uranus, it was because the astronomers didn't take into account the full situation. But there was nothing wrong with the assertion that Newton's law of gravitation didn't correctly describe the orbit of Uranus in the known solar system of the time. And until other factors were brought in, one possibility was that this was a case (like the orbit of Mercury) where Newton's law wasn't appropriate.

This argument is then used to suggest that yes, there are worrying aspects of the early history of Mormonism that cast its basis into doubt. Until you can show why that negative evidence is misleading - and that isn't happening - you can have all the positive evidence you like (which is what, exactly?) and the negative evidence still stands. Even in the Uranus example, the results showed there was something wrong with the astronomers' assumptions. Falsification remains a powerful tool, and a valuable one in cases like this.

Tuesday, 26 January 2016

The Sex Life of a Comedian - review

Written by stand-up comedian Dave Thompson, The Sex Life of a Comedian delivers some home truths about life on the road. It's hard not to believe that the main character Doug Tucker's last-minute arrivals at venues, or the way he spends hours crossing the country for underpaid gigs in unpleasant dives, have some inspiration in reality. As, I suspect, does the way that the various comedians who come into Tucker's life become huge successes or fail in ways that are little connected to their talent.

However, this is a novel, not a memoir, for which we can assume Thompson is thankful, because Doug Tucker's life is no bed of roses. Admittedly Doug seems to enjoy (with occasional regret and shame) the huge amount of (explicit) sex and drug taking he encounters, but there is a murky unpleasantness to his existence that shades into outright gangland violence. And Doug's luck rarely stays positive for long, while the car crash events that pull his life apart come with painful regularity.

To begin with, as we hear of Doug's extremely colourful sex life, it's hard not to think of this as a modern version of those 1970s 'Confessions of a Window Cleaner' type books. I never read one, but my suspicion is that they would have had the same, rather simplistic narrating style, with a series of sexual escapades (though I doubt their content was portrayed as explicitly as Doug's). But as the Tucker story develops, I was reminded much more of the books of the once extremely famous Leslie Thomas.

In part it was the period feel - the first part of the book is set in the nineties, but it feels more like the seventies - but mostly it was the similarity to Leslie Thomas's classic story arc of a likeable but weak-willed central character who is manipulated sexually and practically by unpleasant people to produce a trajectory that begins with a degree of success but that ends in a disastrous spiral of destruction. Like the best of Thomas's central characters, Doug Tucker has an element of an innocent, introduced to a dark world that seems first enticing and then destructive. And he has a hint of mystery about him with his rabid dislike of children and frequent reference to his mummy's knife.

I had been warned about the sex scenes, which are definitely not suitable for reading on the train if the person next to you glances at your book (as I experienced), but I found the rampant drug taking more disturbing, while the ending seemed rushed and not entirely satisfactory. I certainly wouldn't recommend buying this if you are easily shocked, or if you are looking for a jolly romp rather than a gut-wrenching story. However, by the time I was a quarter of the way in I had to discover Doug Tucker's fate, and it's a book that I won't forget for quite a while.

The Sex Life of a Comedian is available as an ebook on and, or as a paperback from

Monday, 25 January 2016

I am not a number

I've just read The End of Average for review, and I couldn't help letting out a little whoop of joy when it totally trashed psychometric testing.

I am talking about mechanisms like the Myers Briggs type profile, along with a whole host of rivals, all used by businesses in recruiting and team building to analyse a personality and assess how an individual will work with others. 

The problems I have always had with the approach are several-fold. It's based primarily on Jungian theory which has little scientific basis. Your personality type is self-determined, so, while it's not surprising it often feels right, that doesn't make it accurate. And I was always doubtful about the cultural norms of the mostly US-devised tests being applied worldwide. Infamously there used to be a question about whether you preferred a gun or something constructive (I can't remember what) - which clearly would have different resonance in the US and Europe. 

Now, though, there are much stronger grounds for concern. The End of Average points out that personality profiles don't reflect the behaviour of individuals, but rather they predict the average behaviour of a group of people, which isn't the same thing. If you are an ENTP like me, it doesn't say how you will behave, but how, on average, people with the same profile will behave. As the book says, 'In fact, correlations between personality traits and behaviours that should be related - such as aggression and getting into fights, or extroversion and going to parties - are rarely stronger than 0.3.' The same applies to academic achievement and professional accomplishments. This means your personality traits, as identified by the test, explain only around 9 per cent of the variation in your actual behaviour, leaving over 90 per cent unaccounted for.
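(Where does the 9 per cent come from? It's the standard statistical move of squaring the correlation coefficient: a correlation of r means the trait accounts for r² of the variance in the behaviour. A two-line check:)

```python
# A correlation of r accounts for r-squared of the variance in behaviour
r = 0.3  # the typical trait-behaviour correlation quoted in The End of Average
print(f"variance explained: {r ** 2:.0%}")  # variance explained: 9%
```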

Underlying this is the relatively recent (if entirely obvious) discovery that we don't have one personality/behaviour but it varies depending on the situation. A teenager, for instance, behaves very differently with a group of peers and with his or her grandmother. That's obvious. So why do we expect a single score on a handful of dimensions to reflect how we will behave in all circumstances? It's bizarre.

I don't expect companies to stop using these tests any time soon. Come on - some still use 'graphology', expecting handwriting to give insights into personality. But employers and academics should at least be thinking twice about what they are testing and why.

Wednesday, 20 January 2016

Beam up a bug

When I wrote The God Effect about entanglement around 10 years ago it seemed that many of the remarkable possibilities that emerged from this strangest effect of quantum physics were close to practical applications. As it happens, there's nothing in the book that's gone out of date - but we do keep getting incremental announcements in the field. Most recently we had the more over-the-top media sites telling us 'Scientists teleport bacteria' while the more careful came up with the confusing sounding 'Physicists propose the first scheme to teleport the memory of an organism.'

I'd need a whole book to go into quantum entanglement (:-)), but the summary version is that quantum physics predicts that, for instance, you can get a pair of quantum particles into an entangled state where making a measurement of a property of one (its spin, for instance) will instantly influence the other particle, however far apart they are. And this has been experimentally verified many times since the 1980s. Entanglement can't be used for what seems the obvious application - sending an instantaneous message - as the 'information' transmitted is random. However it can act as a linking mechanism to do things that would otherwise be impossible.

In the case of the impressive-sounding teleportation, the apparent impossibility that entanglement can help with is the so-called 'no cloning theorem.' It was proved reasonably early that it is impossible to make a duplicate of a quantum particle while preserving the original. However, with a mix of entanglement and conventional information transfer, it is possible to transfer a property of a quantum particle to a similar particle elsewhere, in effect making a remote copy of at least one aspect of the particle. In the process, the original particle's properties are altered (so you don't end up with an identical pair) and you never find out what the property's value was.

Despite these provisos, if you could do this for all the significant properties of a particle - or a collection of particles - it would be as if the original particle had been teleported from its original position to the location of the modified particle. In effect, to use the inevitable simile, it would be like putting the particle through a Star Trek transporter. This mechanism in its simplest form is already valuable for applications like quantum computing, but inevitably there was interest in doing it for real - all the significant properties - and with something bigger than a single particle.
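For the curious, this has nothing to do with Li and Yin's proposal below - but the textbook single-qubit teleportation scheme just described is compact enough to simulate with an ordinary state vector in Python. Alice holds the unknown qubit and half of an entangled Bell pair, measures her two qubits in the Bell basis, and sends Bob two classical bits telling him which correction to apply (a sketch of the standard protocol, not any real experiment):

```python
import numpy as np

# Single-qubit gates
I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron3(a, b, c):
    """Tensor product of three single-qubit operators (qubit 0 first)."""
    return np.kron(np.kron(a, b), c)

def cnot(control, target, n=3):
    """CNOT on an n-qubit register; qubit 0 is the most significant bit."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(bit << (n - 1 - k) for k, bit in enumerate(bits))
        U[j, i] = 1
    return U

# The unknown state to teleport: a|0> + b|1> on Alice's qubit 0
a, b = 0.6, 0.8
psi = np.array([a, b])

# Entangled Bell pair shared between Alice (qubit 1) and Bob (qubit 2)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, bell)

# Alice's Bell-basis measurement: CNOT(0 -> 1), H on qubit 0,
# then an ordinary measurement of qubits 0 and 1
state = kron3(H, I, I) @ cnot(0, 1) @ state
probs = np.abs(state) ** 2
outcome = np.random.choice(8, p=probs / probs.sum())
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1

# Collapse onto the observed outcome and renormalise
keep = np.array([((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1
                 for i in range(8)])
state = np.where(keep, state, 0.0)
state /= np.linalg.norm(state)

# Alice sends the two classical bits; Bob applies the matching corrections
if m1:
    state = kron3(I, I, X) @ state
if m0:
    state = kron3(I, I, Z) @ state

# Bob's qubit now carries a|0> + b|1>; Alice's original state is gone
offset = m0 * 4 + m1 * 2
bob = state[offset:offset + 2]
print(np.round(bob, 6))  # [0.6 0.8] whichever outcome occurred
```

Whatever result Alice happens to get, Bob's qubit ends up in the original state - but only once the two classical bits arrive, which is exactly why the trick can't be used to send a message faster than light.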

It ought to be stressed this is never going to produce a Star Trek transporter, whether as Amazon's latest way to deliver goods or to avoid the rigours of air travel. This is because of the sheer number of particles in a sizeable object, which would take thousands of years to scan and reassemble. If we're talking a product, you don't need an atomic level duplicate - you can just send the instructions for a factory to make it. If we're talking a person, even if you got over the fact that the original 'you' would be disintegrated and only a perfect copy was produced, that timescale is simply impractical.

Over the years we've seen various properties of particles and simple molecules teleported. And it would be fascinating if it were possible to teleport a virus or bacterium. However, it should be stressed that this is not what has happened here. Firstly, nothing has actually happened. It's a proposed mechanism, not an experiment that has been carried out. And secondly we have to be clear what's meant by that 'teleport the memory' headline. In more detail, Tongcang Li at Purdue University and Zhang-qi Yin at Tsinghua University have suggested a way to use electromechanical oscillators and superconducting circuits to teleport the internal quantum state and center-of-mass motion state of a microorganism.

What it essentially means is that they may be able to transfer as a package some of the states of molecules in the bacterium to another organism. As these states are a form of information, they are described as teleporting memories. There are a few provisos, however. To make the system work, the organisms would have to be cryogenically frozen. They would not be alive. And what isn't made clear is how the setup would deal with the reality that any two bacteria are not identical in their molecular makeup. But the theoretical experiment is interesting in the way it accesses internal properties of the organism for teleportation, rather than expecting it to be stripped down, particle by particle.

You can, in principle, see more in the original paper, but unfortunately it's behind a paywall.

Tuesday, 19 January 2016

Whatever happened to Second Life?

I've just read Snow Crash, which features a virtual environment, the Metaverse, so like a super-version of Second Life that I'm almost certain the designers of SL must have been inspired by the book. And reading about it reminded me just how seriously people were taking this online virtual reality environment a few years ago - yet now Second Life appears to have dropped off the radar.

When I first started blogging in the Nature Network, that august publication was arranging seminars in Second Life, companies were holding meetings in it, and people were making fortunes selling Second Life wares. I thought the whole concept of meeting up in a tacky virtual environment was crazy - surely video was far better - yet the media and many big companies were convinced that the hip audience would flock to this kind of thing. But now it's all gone rather quiet.

I've never bothered with Second Life myself, and a straw poll on Facebook got no response from anyone who uses it seriously. But from what I've read by those who still frequent it, the main section of the Second Life world has become like a ghost town since the makers set up an adults-only continent. Apparently that continent is still very lively, but its dubious attractions are obviously not why all those big names of science, technology and business were setting up SL presences - and their idea clearly has fallen apart.

I can't say I'm sad - it always seemed more a collective delusion than a sensible way forward. I'd love to hear from anyone who was an SL fan or involved in corporate use of it, like that at Nature. Do let us know the whys and wherefores - and whether you still think it was a good idea.

Monday, 18 January 2016

Snow Crash - Review

I've enjoyed several of Neal Stephenson's books, but find many of them far too long, suffering from bestselling author bloatitis, so I thought it would be interesting to get hold of a copy of his classic, Snow Crash - and I'm very glad I did.

Although not a pastiche, it depends heavily on four classics of science fiction. The obvious one is William Gibson's Neuromancer, because of the net-based cyberpunk aspects that are central to Snow Crash. (The snow crash of the title is nothing to do with skiing and everything to do with computers crashing.) However, the pace and glitteriness owes a huge amount to Alfred Bester's Tiger! Tiger! (that's the UK title - it was originally The Stars My Destination), while the corporate-run world has a distinct feel of Pohl and Kornbluth's Gladiator-at-Law, though interestingly here it's a world without any laws whatsoever. And finally there's a touch of Samuel Delany's Babel-17, where a language is capable of doing more than simply describe things. In Delany's book, the language is so specific that if you name something, you can construct it given only that name - here, language is capable of re-programming the human brain.

These influences, though, are only for those who are interested. If you like the kind of science fiction that hits you between the eyes and flings you into a high-octane cyber-world, particularly if you have an IT background, this is a masterpiece. Once you get over the odd name of the hero/protagonist (he's called Hiro Protagonist. Really) it is a joy to read. And despite being over two decades old, the technology really doesn't grate. Okay, Stephenson set it too early for the level of virtual reality capability, and there are too many references to video tapes, but otherwise it could have been written yesterday. What's particularly remarkable is that it is all about the internet (if not named as such) at a time when the internet wasn't widely known. This was written in 1992, yet when Microsoft launched Windows 95, it wasn't considered necessary to give any thought to the internet. That's how quickly things have changed.

As you might expect from Stephenson, there are some dramatic set-piece fights and rather a lot of violence, virtual and actual, but it also features erudite and quite lengthy library exposition of the precursor myths to many modern religions and some mind-boggling (if far-fetched) ideas about language, the nature of the Babel event and of speaking in tongues. There's also a strong female character, though today's readers might raise an eyebrow about a relationship between a 15-year-old girl and a thirty-something mass murderer. Oh, and I love the rat things.

If you find some of Stephenson's more recent books overblown, this is the one to go back to. Nicely done indeed.

Snow Crash is available from and

Thursday, 14 January 2016

Sometimes doing this job gives you a warm glow

I'm the first to admit that, if you can make a living doing it*, being a writer is not a bad gig. You don't have to set the alarm in the morning. It's part of your job to read interesting stuff and mooch about being creative. Admittedly the writing part is a bit of a faff, but, hey, everyone has downsides to their job. But there is one aspect of it that can be particularly pleasant - when someone says one of your books has inspired them.

According to the incomparable Cumbria Crack (I never get my news anywhere else**) Keswick student James Firth, who has won a science bursary, wants to study astrophysics and has a long term aim of following Tim Peake into space. And, it seems, 'James has a fascination in astrophysics having poured through books by Stephen Hawking and Brian Clegg. He now aims to study astrophysics at university.'

I wish James every success and am genuinely delighted to have played a small part in helping inspire his fascination with science.

* This is not the easy part. Note that, according to an ALCS/Queen Mary University study, the median professional author income from writing in the UK in 2013 was around £10,000.

** Okay, not entirely true.