
How not to write Popular Science

I get sent many popular science books to review, a very small percentage of which are self-published (never one based on the author's 'new theory'). A recent example was, for me, an object lesson in the pitfalls of writing science for the public, most of which apply whether you are going the DIY route or working with a mainstream publisher. (In fact, some big names, particularly academic publishers, provide very limited editing these days.)

Note, by the way, that I am not talking about the quality of the writing here. A starting point is having a good narrative and making your writing engaging and readable. That's a given. But this is more about the content that is presented in the book.

There was one issue that was specific to self-publishing: make sure there are no layout oddities. Appearance is as important as fixing typos. This particular book had a first paragraph in a smaller font than the rest of the text (as well as a couple of spurious bursts of italics, finishing part way through a word). On a more generally applicable point, the author was a medical doctor and used 'Doctor X Y' as their author name. I strongly recommend not doing this - many popular science writers have doctorates, but very few use the title in their author name - it just looks tacky. (It's even worse if, say, you are writing about physics but are a medic.)

The book covered a phenomenon that could have a physical cause, could be purely psychological, or could simply be the result of lying (or any combination of the above) - important to know, as we will see, when looking at a survey that was central to the story. A first content lesson is that it's really important to present numbers in an easily followed way - if there is some oddity in the statistics, for example, then it needs to be carefully explained. In the survey, 50% of the studied population had a particular experience, but 75% had a subset of that experience. This clearly can't be literally true - it turned out to be because the survey was not particularly well worded, and a fair number of participants didn't realise that the second type of experience was just one example of the first.

It's also important only to state what can be logically deduced from a study. In this case, the author assumed that because 200 experiences were described in the survey, there must be a physical explanation. Unfortunately, as suggested above, this is a false deduction. The fact that 200 people described subjective experiences does not mean that anything physical happened - far more evidence than a self-reported survey is needed to make this deduction. It does mean there is something to investigate - but it gives no direction on what that might be.

We next hit on problems with probability. This is never an easy subject to deal with - if it's outside your personal area of expertise, then it's always worth getting expert guidance. The author works out an incredibly small likelihood of something happening by chance. Unfortunately, in doing so, she hits two familiar probability pitfalls. The first is dismissing an occurrence as too unlikely to be coincidental when it happens to a specific person, where the relevant probability is the likelihood of it happening to anyone. It's a bit like saying that, with odds of 16 million to one, the probability of me winning the lottery is almost zero - but the reality is that someone wins most weeks. Similarly, I've more than once bumped into someone I know in a different country from the one where we both live. The probability of both that specific person and me being in that place at that specific time is ridiculously small. But the chance of some such meeting happening at some point is large.
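To make the distinction concrete, here's a minimal sketch in Python. The odds and the number of tickets are round, illustrative figures (not real lottery statistics) - the point is simply how different 'this specific ticket wins' is from 'some ticket wins'.

```python
# Illustrative only: the odds and ticket count are round numbers,
# not real lottery statistics.
p_win = 1 / 16_000_000        # chance that one specific ticket wins
tickets_sold = 30_000_000     # assumed number of tickets in a draw

p_me = p_win                                  # chance that *I* win
p_anyone = 1 - (1 - p_win) ** tickets_sold    # chance that *someone* wins

print(f"P(this specific ticket wins) = {p_me:.8f}")   # vanishingly small
print(f"P(at least one ticket wins)  = {p_anyone:.2f}")  # roughly 0.85
```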

The second probability error is the probability equivalent of the false deduction from the survey: the false dichotomy. The author mentions p values - the probability of getting a result like the one observed if the null hypothesis applies. But it's essential to remember that this is not the probability that your hypothesis is false. If it's very unlikely that there is no cause for something occurring, it doesn't mean that your theory of what that cause is happens to be true. Again, I'm not saying there isn't something worth investigating further, but showing that the null hypothesis is unlikely is not evidence that your preferred explanation is correct.
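If you want to see the gap between a p value and the probability that an effect is real, Bayes' theorem makes it concrete. The sketch below uses entirely made-up numbers (a prior, a test's power and a 5% significance threshold - none of them from the book):

```python
# Illustrative figures only - none of these numbers comes from the book.
prior_real = 0.05   # assumed prior probability that a real effect exists
power = 0.8         # chance of a 'significant' result if the effect is real
alpha = 0.05        # chance of a 'significant' result if the null is true

p_significant = power * prior_real + alpha * (1 - prior_real)
p_real_given_significant = (power * prior_real) / p_significant

print(f"P(real effect | significant result) = {p_real_given_significant:.2f}")
# ~0.46: under these assumptions, fewer than half of 'significant' results
# reflect a real effect - and even a real effect says nothing about which
# explanation of it is the right one.
```

Tweak the prior and the answer changes, which is exactly why a p value on its own can't carry the weight it is given here.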

Later on we get some physical science, but unfortunately the author has limited knowledge of physics and gets enough wrong to make the whole proposition open to doubt. We are told that humans 'generate 25 watts per second in our brain alone'. Unfortunately a watt is already a measure of energy per second, so 'watts per second' is meaningless unless we're talking about a rate of change. Also, the brain doesn't generate 25 watts of electrical power: it consumes about 20 watts, not all of which is turned into electrical energy. We are also told that neurons have a 'negative electrical charge at rest of minus 70 millivolts/millimetre.' Leaving aside voltage not being a measure of charge, we are told this is equivalent to 14 million volts per metre, which makes it more powerful than lightning. Even if the numbers were correct, this would be highly misleading, both because the neuron's potential of 70 mV isn't measured across a millimetre, and because you can't multiply the potential difference up meaningfully in the way it was done here.
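As a sanity check, here's the unit arithmetic using only the figures quoted in the book. The 5-nanometre result at the end is simply what falls out of those numbers - and it is roughly the thickness of a cell membrane, the scale across which a resting potential is actually measured, not a millimetre:

```python
# Back-of-envelope check on the book's quoted figures (no new data).
# Note: a watt is already a rate (1 joule per second), so '25 watts
# per second' confuses power with a rate of change of power.

potential = 0.07          # 70 millivolts, in volts
quoted_distance = 1e-3    # 'per millimetre', in metres
claimed_field = 14e6      # '14 million volts per metre'

field_from_quote = potential / quoted_distance
print(f"70 mV across 1 mm gives {field_from_quote:.0f} V/m")   # 70 V/m
print(f"That is a factor of {claimed_field / field_from_quote:,.0f} "
      f"short of the claimed {claimed_field:,.0f} V/m")         # 200,000

# Distance over which 70 mV really would give 14 million V/m:
distance_needed = potential / claimed_field
print(f"70 mV would need to sit across about "
      f"{distance_needed * 1e9:.0f} nm")                        # ~5 nm
```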

A final major issue is the haphazard use of studies to back up a theory. Bearing in mind that we are dealing with a topic that could involve both physical and psychological effects, there is no mention of the replication crisis in psychology, yet plenty of use of psychology studies going back as far as the 1970s. Some of these have been thoroughly discredited (particularly where physicists were trying to work outside their own field) and others have repeatedly failed to replicate. There was a time when even high profile popular science titles (for example Daniel Kahneman's Thinking, Fast and Slow) could get away with using studies from the swamps of pre-2012 psychology that would later prove unreproducible, but that should no longer be the case.

Some insist that an author should be an expert in the field they are writing a popular science book about. I disagree (I would have to, given the range of books I've written) - but if you are stepping outside your areas of expertise, you do need to be extremely careful about these kinds of issues if you are to produce a title that does justice to the topic and doesn't mislead the reader.

Image from Unsplash by Eliott Reyna.

