I read in the i newspaper that Britain's Got Talent 'recorded the lowest audience figures in the show's ten year history.' How do they know? Because 'on average 8.5 million viewers watched the final... [recording] a peak audience of 10.5 million viewers.' But how can they know that? They can't - and worse still, the method of discovering it is far less reliable than it used to be.
These numbers are based on a sample. A few thousand brave volunteers register what they watch on little boxes - the data is then aggregated and multiplied up by various esoteric factors to try to make the sample truly representative. As polls often show, this kind of multiplying up has many problems and often doesn't work very well. But at least it was relatively simple when this system first started to be used. You either watched some or all of a programme, or you didn't.
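To see roughly how that multiplying up works, here's a minimal sketch in Python of scaling a small panel up to a national estimate using demographic weights. The panel responses, weights and population figure are all invented for illustration - this isn't the ratings body's actual methodology or data.

```python
# A minimal sketch (not the ratings body's actual method) of scaling a
# weighted panel sample up to a national audience estimate.
# The panel data, weights and population figure are illustrative assumptions.

panel = [
    # (watched_programme, demographic_weight) for each panel household
    (True, 1.2),
    (False, 0.8),
    (True, 1.0),
    (True, 0.9),
    (False, 1.1),
]

uk_tv_population = 55_000_000  # assumed number of potential viewers

# Weighted share of the panel that watched, scaled to the whole population
weighted_viewers = sum(weight for watched, weight in panel if watched)
total_weight = sum(weight for _, weight in panel)

estimated_audience = uk_tv_population * weighted_viewers / total_weight
print(f"Estimated audience: {estimated_audience:,.0f}")
```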
Now the viewing audience is painfully splintered. We, for instance, hardly ever watch anything when it is broadcast. We either watch it recorded on a YouView box, or via catch-up. And a fair proportion of the time we're viewing via more indirect streaming from the likes of Netflix. So, for instance, a couple of months ago, we watched Series 3 of Call the Midwife on Netflix. Even if we were part of the sample (which we aren't), there is no way that would count towards the viewing figures from when it was first broadcast in 2013.
Of course not everyone watches the same way we do - but that's the whole point. For example, we hardly ever watch TV on phones or tablets - but some do all the time. This fragmentation makes the margin for error on the statistics potentially much larger. I have never seen any error bars on viewing figures (why not?) - but by now they must be pretty enormous. I honestly don't think the public is too thick to cope with a range rather than a single figure - and it would make the statistics far more honest than they currently are.
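For a sense of what those error bars might look like, here is a rough sketch of a 95% confidence interval on an audience estimate, assuming a simple random panel of a few thousand households. The panel size and observed proportion are invented for illustration, and the extra uncertainty from weighting and fragmented viewing would only widen the range further.

```python
# A rough sketch of the error bars the post asks for: a 95% confidence
# interval on an audience estimate from a panel sample.
# Panel size and observed proportion are illustrative assumptions.

import math

panel_size = 5_000             # assumed number of panel households
observed_proportion = 0.15     # assumed share of the panel watching the final
uk_tv_population = 55_000_000  # assumed number of potential viewers

# Standard error of a simple sample proportion - this ignores weighting and
# time-shifted/streamed viewing, both of which would widen the interval.
se = math.sqrt(observed_proportion * (1 - observed_proportion) / panel_size)

mid = observed_proportion * uk_tv_population
low = (observed_proportion - 1.96 * se) * uk_tv_population
high = (observed_proportion + 1.96 * se) * uk_tv_population

print(f"Estimate: {mid/1e6:.1f}m viewers "
      f"(95% range roughly {low/1e6:.1f}m to {high/1e6:.1f}m)")
```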