Daniel Bennett Hello and welcome to the BBC Science Focus podcast. I'm Dan Bennett, the editor of BBC Science Focus magazine. In this episode, I'm speaking to Tom Chivers and David Chivers. Tom is a veteran science journalist and author, and David is a lecturer in economics at the University of Durham. As well as a surname, they both share a passion for statistics, or more precisely, for the way that numbers are used and presented in the media. Together, they've written a book: How to Read Numbers: A Guide to Statistics in the News (and Knowing When to Trust Them). Over the next hour, we talk about how to understand the confusing stats around health and risk, how to spot a suspicious claim when you see one, and how to think about the current concerns surrounding the Oxford AstraZeneca vaccine. I spoke to Tom first, and I asked him whether it was sheer coincidence that they had written this book at a time when the public, the government and the media have been obsessing over numbers more than ever before.


Tom Chivers Well, I'll tell you, we were trying to work out the other day what the specific story was that triggered us wanting to write this book. It came from us complaining in our Twitter DMs about some news story or other with terrible numbers in it and just going, well, someone should write a book about how you can do better than this, and then realising that we were probably quite a good pair to do that because, you know, he's an economist and I'm a journalist with some interest in numbers. But no, it was a coincidence that it happened just as the pandemic was appearing. I remember when we wrote the proposal, which I think was in January or February last year, the numbers were just starting to spiral out of control, especially in China. I think our agent suggested, before we sent the proposal across to the publishers, look, you really need to mention some stuff about this. This could be the biggest political story of the next few weeks, you should probably mention something in the proposal. And so we did. But it wasn't what we started out with. We started out with some story about numbers of deaths from something or other in a university, which we ended up cutting from the book altogether. So, yeah, from a purely self-interested point of view, it was very well timed, I think it's fair to say.

David Chivers I think that's being a bit unfair there, I think we could have just said we predicted the crisis was going to happen.

Tom Chivers Yeah, amazing forecasting, as I mention in chapter... I have to go through my own chapter list. Chapter 17, in fact. Yes.

Daniel Bennett And so, obviously, as we speak today, this will be going out next week. But as we said, there's a big statistic, in fact it's emblazoned on the front page of one of the newspapers. Everyone's talking about the risk of blood clots in the under-30s in regards to the Oxford AstraZeneca vaccine. It's understandably quite a scary topic, and quite a sensitive one. But just wondering, given that you've spent a lot of time thinking about this stuff, if you could give our listeners some advice on how to think about this story and this idea around the risk of vaccinations.

David Chivers So I think there are two things here. The first one is understanding what the evidence is surrounding blood clots and vaccines, because even though the regulators are suggesting there may be some link, how exactly do we know for certain that these vaccines are causing blood clots? And we actually don't know that. There is some evidence to suggest that there may be a link between the two, but how confident do we have to be before we inform people that there is some evidence? That's something that's very difficult to portray, and something we talk about in our book as well: what evidence means and how likely this would be. So that's one thing you need to take into account: the evidence isn't certain. The other thing is the risk of the actual blood clot itself happening when you take the vaccine, which will depend quite largely on your age group. But how you actually process that risk is something that Tom and I have been discussing this morning. It's very difficult, because these involve incredibly large numbers, and I'm not very good at thinking in large numbers. I often tell people that risk is a feeling. It's something we interpret in a feelings kind of way, and how exactly I, as an individual, am meant to process the risk of, say, a one in 100 chance or a one in 10,000 chance or a one in a million is very difficult, because to me they just seem like zero. But the important thing to remember is that everything we do involves some level of risk. Crossing the road, eating a sandwich, playing football, anything will come with a certain background risk. And it's important to compare. I think Tom's written about that, so maybe he could come in on that point.

Tom Chivers I can come in a bit. One thing: I was speaking recently to David Spiegelhalter, who has just become one of the great celebrities of the pandemic. He's the Winton Professor of the Public Understanding of Risk at Cambridge University, a very great statistician. And he said one way to look at it was that if you took Wembley Stadium and filled it with 20-year-olds and gave all of them the AstraZeneca jab, then at the current rate of COVID in the population, you'd expect about one of them to end up in intensive care with COVID, and about one to end up in intensive care because of the Oxford AstraZeneca jab. What he's saying is that actually COVID is not very risky for these young people, the under-30s and under-20s. So from an individual point of view, the risk is low whether you take the vaccine or not. I was talking about this with Dave earlier, though, and he pointed out that, as he says, we're not very good at thinking in terms of these large numbers. Can you visualise how many people there are in Wembley Stadium? Does that mean anything to you? I don't know. There are other numbers I've been trying to get hold of while trying to make sense of it. Given that most people who have these clots go on to survive (I think about one in four of them so far have tragically died), your risk of dying works out as about your risk of dying crossing the road in a given six-month period. So these are things that we do; it's very comparable to the normal background risk of being alive day to day. And I think you can try to get these sorts of contexts of risk across; I think that is a useful thing to do. But fundamentally, it is worth remembering that for young people, COVID isn't very risky. It is actually much, much riskier for old people.
There's the complication of long COVID and the long-term effects of fatigue, some cognitive issues, muscle pain and so on. They are real, but from the point of view of death or acute disease, it is very low risk for individuals. And when you talk about this risk of severe illness from the disease versus severe illness from the vaccine in young people, while they might be balanced, it doesn't take into account the fact that when you get vaccinated as a young person, what you're actually doing is doing it for society. You're doing it to protect the people around you, the people who are at much higher risk. And that feels a bit unfair to bring up, because so much of this pandemic has been about young people sacrificing large chunks of their life to protect the people who are at greater risk, so maybe it's unfair to look at it like that. But starkly setting the risk from the clot on one hand against the risk from the disease on the other doesn't take into account the wider societal benefits of the vaccine. I would still be very, very keen to stress that if someone does get offered that AstraZeneca jab, they should take it, both for their own sake and for society's. Solely for their own sake, the calculation gets less obvious at the lower end of the age range, for the 20- to 30-somethings, but the benefits for society at large are enormous, and we shouldn't lose sight of that.

Daniel Bennett I think that's a very good point, and it underlines a lot of this book: that life is not without risk. And David, that's always the point he starts from in his thinking about the broader dangers at play. So I just want to go back to your book and pick up on the first sentence, actually, which I thought was great. By the way, many writers probably torture themselves with their first sentences. "Numbers are cold and unfeeling." Do you think that's why we're so drawn to them, as readers, journalists, politicians, when we try to persuade people, when we try to make points on Twitter? Is that what you think their power is?

David Chivers I think so. I thought a lot about that line, actually, when we were writing it, because one of the things that's very difficult, when you love numbers like Tom and I do, is thinking about how someone else would feel if they don't really like numbers. I know lots of my friends and lots of people say "I'm not a numbers person". It's quite a common feeling. I think there are two reasons behind it. One is that people think they can't do the numbers, which we talk a bit about in the introduction. And the other is that they just don't like them: numbers don't seem to capture anything about what we are as individuals. You know, if someone were to treat me as a number, I think I'd get quite upset, because I'm an individual and a person. In fact, when I think about that idea of treating people as numbers, in my mind I think a lot about World War One, people going 'over the top' and being treated as nothing but casualty figures. It evokes something that seems quite wrong to me. The thing is, though, numbers are so useful, and this is what we wanted to argue: we can use them in a way that actually helps us treat people as if they weren't numbers, that helps us care about them. That's why we need to use numbers, and why we need a different sort of view on them. Rather than using numbers to treat people as numbers, we're using numbers to treat people as people, as themselves. I think that's what we wanted to get across. I don't know if Tom has anything to add.

Tom Chivers Yeah, I do. So you were also asking whether that's why people like them. I think a lot of the reason people do like them, why they get used a lot, is because they have this veneer of truthiness, I think is the word, isn't it? So if some figure comes out, a politician says child poverty has gone down by X since this time or whatever, that sounds like a hard and fast cold number that you can't really argue with. But the other thing we really want to make clear with this book is that so much depends on which numbers you choose and how you frame them. That example of child poverty is one we mention in the chapter on cherry-picking numbers. Numbers go up and down from year to year: the economy is better or worse, things just randomly change, and in some years child poverty is higher or lower. So if, for example, in 2010 child poverty happened to be really low and in 2011 it happened to be really high, then looking back in 2018 or 2020, if I'm the leader of the opposition and want to make it look like the government has done really badly, I can choose the year when child poverty was really low as my starting point, compare it to where child poverty is now and say, look, it's gone up. Whereas if I'm the leader of the government, I could start from the year when it was really high, compare it to now and say, look, it's gone down. And which of these is right, which is accurate? Well, neither. There is no right answer there. What you've done is choose the way of looking at it which makes you look as good as possible. And what we're trying to do in this book is say, look, a lot of this is about how you frame it.
It's about choosing the angle you look at things from, and sometimes with these numbers there's not a right answer. You need to get better at zooming out, putting things into context and trying to understand how numbers can go wrong, how they can mislead, and how you can stumble towards making them tell more truthful stories.

Daniel Bennett Before we get into some of the examples in the book, because there are a lot of them and they're brilliant, that's made me wonder: when I read this book, there's a feeling you often get with a really good science book. You finish it and you think, why aren't they teaching this, or, they should be teaching this. Is that something you feel should be more widely taught, in schools or universities etc.? Because at the end, you even have a little guide for journalists. But this is for far more than journalists and politicians, isn't it?

David Chivers Yes, I think the problem is that when you are enthusiastic about something, it's very easy to say we should do more of it in school. I'm sure if we did an episode on Shakespeare, the guests would say Shakespeare needs to be taught more than it is at school. Fortunately, I'm an economist, and although statistics is part of economics, I don't think it should be taught more in schools because of economics; I think it should be taught more because it is essentially the foundation of science. A lot of the science we do now involves statistics and numbers, and what we think of as truth is scientific. And I do feel that statistics is something we use every day, even in interpreting things like risk. So I would like to see statistics taught more in school. I mean, it is taught: I did it at A level as an optional thing, and it is taught at university. But it's something we probably need a better handle on as a society, when we're hearing statistics all the time in the news, and it's not something we're really very good at. Whereas with other things that we do concentrate on, like grammar in English, we actually do quite a lot, and I think we can understand words quite well in comparison to numbers. We should be able to have the same level of understanding of numbers, or at least a higher level than we have now, as we do of words. Something we argue in the book is that literacy is important for democracy: we can't imagine a society that couldn't read participating in democracy, even though such societies did exist a few hundred years ago. We now have a society that isn't very good at reading numbers, and that's the form in which lots of information is given to us. If we can't participate, that seems fundamentally wrong to me.
But I do think this gets to the fact that it's slightly socially acceptable to not be very good at numbers. People will say, oh, you know, I'm not a numbers person, and that's OK. But I'm sure if I made a spelling mistake, people would really get angry. And they'd say, oh, that's because maths is hard.

Tom Chivers I had a lot of people telling me off for plugging the book by saying "Dave and me have written a book." They're like, "Ugh, it's 'Dave and I'!" It's colloquial usage.

David Chivers It's funny, because the meaning is very clear, and that's what's so strange. 'Dave and me' or 'Dave and I': it's completely obvious what you meant. No one's going to go, hmm, I really wonder if that's the case. But if we just had a probability, like we say in the book, like, oh, there's a 20 per cent increase in something, it's perfectly grammatically correct, but it gives you absolutely no information. It's entirely unclear what we're talking about. Yet that is acceptable, whereas 'Dave and me' is not. I think I know where my priorities lie in that.

Tom Chivers Yes, no, I agree. And on the subject of whether this stuff should be taught: if you're asking should every schoolchild in Britain have to buy a copy of our book, then I say yes. But I have had a couple of messages from university lecturers and from school teachers saying 'I've started using this to teach core maths'. And I was just thrilled about that, because, first, that's a great endorsement of our book. Secondly, I do think it shows how numbers are relevant to everyday life, because we read these things. I remember in maths thinking 'when will I ever use quadratic equations' or whatever. But actually, if you read a news story saying red wine will make you healthy, or will give you cancer, or prevent cancer, or if you read any of these things about the risks of crime and so on, we use these news stories to navigate the world. We make decisions about the risks we face every day, literally down to crossing the road, what we eat, what we drink, whether we leave the house, whether we feel safe. Hopefully this book gives you the tools to put those sorts of numbers in context, and not just hear the scary "if you have a child over the age of X, then your risk of something goes up by 33 per cent." What does that mean if you don't know what the risk was to start with? So yes, I would love it if this stuff was taught. I would love it even more if we sold millions of copies to school libraries. But that's probably a separate thing.

David Chivers What's important is that people who maybe haven't read the book probably think it's a lot about adding up or taking away numbers, or some kind of multiplication you do in your head. It is not about that, and that is really important, because when we think about mathematics, a lot of the time we're thinking about adding up and taking away in our head, these types of complicated calculations. Honestly, I can't remember the last time I actually added something up in my head; I'm using a computer program all the time. Statistics involves more conceptual debates. If you're interested in debating or philosophy or arguments, that's what we're talking about. How to read numbers and understand them is a lot of the time about what is the right variable to choose, what is the right comparison to make. These are arguments, these are debates we can have. It's not, OK, can you do 27 times 34 in your head. A computer can do that. I mean, it's great if you can do it in your head, but it's not the most important thing when we're talking about how to read numbers. So don't worry about how fast you are at calculating stuff in your head. It's more important to look at the actual arguments these numbers are making, and whether you can follow them.

Tom Chivers Yeah, this is really relevant to the thing we talked about earlier on, the AstraZeneca vaccine and the blood clots. When I first wrote about it a few weeks ago, I looked at the background rate of blood clots in society: if you just take 18 million people, what number of them will have these sorts of clots in a given period anyway? And I thought, well, is that the right base rate that I should be comparing to? Or should I be comparing it, as we realised later on is a better comparison, to the rates of cerebral venous sinus thrombosis? And as it turns out, it's more complicated even than that: it's a particular and rather rare combination of that and something called thrombocytopenia, or low platelet count. So if we're comparing the risk, what's the base rate we should compare it to? That is not a question of being able to add up numbers. It is a question of wisely choosing the correct base rate to compare to, and how to put it into context. And yes, you can do any multiplication or addition problem you want in the search bar of your browser. It takes five seconds and it's not difficult; you don't even need a calculator or any sort of skill. What you need is the mindset of thinking, right, that sounds like a big number, but is it a big number? What context do we put it into to establish how seriously I should take it? It's a philosophical discussion as much as it is a numbers one.
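The base-rate comparison Tom describes can be sketched in a few lines of Python. All numbers and names here are illustrative assumptions, not figures from the conversation or the real vaccine data; the point is simply that before calling a count of events alarming, you first work out how many you'd expect from the background rate alone.

```python
def expected_background_cases(population: int,
                              annual_rate_per_million: float,
                              window_days: float) -> float:
    """Expected number of events in `population` over `window_days`,
    given an annual background rate per million people."""
    per_person_per_year = annual_rate_per_million / 1_000_000
    return population * per_person_per_year * (window_days / 365)

# Made-up example: a condition that occurs in 5 people per million per year,
# observed across 18 million vaccinated people over a two-week window.
expected = expected_background_cases(18_000_000, 5, 14)
print(round(expected, 1))  # expected cases from the background rate alone
```

Whether an observed count is surprising then depends entirely on which background rate (all clots, or only the rare specific kind) you plug in, which is exactly the judgement call Tom is describing.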

Daniel Bennett Yeah. So it struck me that, especially for us as journalists, given the many problems we face and how much data we can pull up on any given problem, these tools for critical thinking could be immensely valuable. Throughout the book, you outline, in a sense, red flags: things to spot, or ways of thinking about problems, when it comes to statistics and claims, or even reports and major studies. And I was curious, you know, I was a little bit worried, I'll be honest. I was nervous; I thought, am I going to see Science Focus in here? Because it is so tricky, it is so treacherous. But I wondered how hard you found it to find these examples. Was it worryingly easy, or did you have to do some digging?

Tom Chivers It's not hard to find examples. Partly that's because I can just go through my back catalogue of times I've complained about people getting the numbers wrong in the past; it was a little bit like doing a greatest hits album, to be honest. I mean, the thing is, I'm a journalist, I've been a journalist for, oh God, I don't want to think about it, thirteen, fourteen years, whatever. And I genuinely have a high opinion of journalists as people. I think we are generally well intentioned, generally clever, generally trying to do good in the world. But generally speaking, we are better with words than we are with numbers. As a profession, journalists are not especially numerate. That's not a huge criticism, any more than it would be a criticism of mathematicians to say that they're not especially literate; that's just the skill set that gets trained. But it means journalists are susceptible to falling into these sorts of pitfalls. When you see a thing saying eating bacon every day raises your risk of a particular kind of cancer by 20 per cent, that sounds really bad, and that's your headline: 20 per cent risk. It doesn't occur to journalists, because they haven't had this way of thinking drilled into them, to say, wait, wait, wait, that's a red flag. 20 per cent more than what? What's my starting point? And that is something we'd love journalists to get better at, but at the moment it is a common problem. There's another issue, which we discuss in the book, which is that there's an incentive problem for journalists. We do want to improve the world and help everyone understand it, and generally we see journalism as a public service, but it is also a business.
And you are trying to sell papers or get people to watch your news story or listen to your podcast, whatever. If you go around constantly saying no one died in a plane crash today, then you won't sell any papers. You are incentivised to find the exciting things, the dramatic things, the shocking, startling, surprising things. And quite often the surprising things will not be the ones that are best for navigating the world. That 20 per cent rise in cancer risk sounds much more dangerous than, I can't remember the exact numbers, but it works out as about a one in 90 chance that it will actually affect you. It's not nothing, but it's not as big a deal as it sounds. So you are incentivised as a journalist to make it sound as dramatic as you can so that you sell more papers, and journalists do sometimes try to do that. As well as naturally not being brilliant with numbers, there is also a sort of enemy action problem: you are pushed into the most dramatic, and sometimes misleading, versions of stories or numbers that you can find.
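The bacon example is the classic relative-versus-absolute-risk trap Tom is pointing at. A minimal sketch (the 6 per cent baseline is my illustrative assumption, not a figure from the interview; with it, the answer lands near the "one in 90" Tom half-remembers): a "20 per cent increase" only means something once you multiply it by the baseline risk.

```python
def absolute_change(baseline: float, relative_increase: float) -> float:
    """Convert a relative risk increase into an absolute one."""
    return baseline * relative_increase

# Illustrative assumption: lifetime baseline risk of the cancer is about 6%.
baseline = 0.06
extra = absolute_change(baseline, 0.20)   # the headline's 20% *relative* rise
print(f"absolute increase: {extra:.1%}")  # 1.2 percentage points
print(f"roughly 1 extra case per {round(1 / extra)} people")
```

The headline number (20 per cent) and the number a reader actually needs (about one percentage point) differ by an order of magnitude, which is why "more than what?" is the red-flag question.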

David Chivers And the incentive problem is not always the journalist's fault; it can come from elsewhere too. Speaking as an academic, we have our own incentive problems, with this whole publish-or-perish idea. Academia is all about trying to publish things that are startling, just like in the news: oh, this is an interesting result. And the problem is that this can lead to publication bias, which is something we talk about in our book: certain effects end up looking more prominent than they really are, when otherwise we would see more of a balance between negative and positive results. The other thing is that the incentive in academia to actually talk to the media is not massive. I think in some countries it's slightly different, but there aren't huge incentives, at least in terms of promotion, to talk publicly about your work. Publishing in a journal that, let's say, two or three people read would be seen as quite a big accomplishment compared to talking to the nation or explaining the work. So it's quite tricky.

Tom Chivers And there's the thing we talk about in the book, the replication crisis. I'm sure you've discussed it on this podcast before, but that's exactly it: a lot of scientists, as it turned out, were either not very good at statistics or were deliberately using statistics as dark arts, almost. They found that you could take an essentially noisy data set that doesn't say anything in particular and chop it up in lots of different ways until you find something that looks like it's real. And then you can say, look, I found that men eat twice as much pizza when they're in front of women, to impress them, or something like that. You get that published in a journal and everyone goes, oh, very exciting. And then when someone goes back more carefully and checks: no, that doesn't stand up, the data isn't there. But you've got your publication, you get cited, you get tenure as a professor. You're incentivised by these things. So this is exactly what we talk about: the scientists are incentivised to push these dramatic findings, even if they're not real, and then as a journalist you are incentivised to cherry-pick the most exciting of them and put them in a newspaper. So by the time a statistic makes it into the newspapers or onto the news, it has already gone through two filters of excitingness, which may well mean that it is not actually true by the time it gets to you. I think that's a real problem. And it's a lot to ask journalists to check the work of scientists to make sure it's good, when the scientists themselves can't always do it.
The pizza thing I just mentioned was uncovered by some really sophisticated data sleuthing, plus one really badly judged blog post by the guy who did it, and then some proper investigative journalism by BuzzFeed's Stephanie Lee, who got leaked emails: good old-fashioned journalism to uncover that there was bad statistical practice going on. The idea that you could do all that as a science correspondent of, I don't know, the Independent or the Times or something, who's writing two or three science stories a day... you just haven't got the time or the bandwidth to go through the stats like that. So it's a lot to ask journalists to be better at that sort of thing.

Daniel Bennett I did have a colleague once upon a time who had a degree, I think, in mathematical physics. And even they, when they were starting out, didn't know what a P value was, which is, you know, essentially a measure of the statistical significance of a result, though that's probably better left for another time. So it is very hard for both journalists and the public to navigate this. But I guess that's what the book's for, right?

Tom Chivers I will say about statistical significance: it is one of the things we go on about a lot in the book, and it would be lovely to have time to explain it properly. But I would love the takeaway from this, and from the book, to be that the phrase 'statistical significance' does not in any way imply actual significance. It just means the result may not be down to chance, or something like that.

David Chivers I think the issue with statistical significance is that when people think of significance, they think of effect size. Right, that's a significant finding: you've got a really important, big effect. But actually it's probably better to think of it as 'detectable'. Bacon is a really good example. The link between bacon and cancer is a statistically significant result: eating bacon will increase your chances of getting cancer. So you go, wow, it's statistically significant, it sounds scientific, maybe I won't eat bacon now. But what if we told you that the effect size is incredibly small? In other words, every piece of bacon you eat increases your chances of cancer by a negligible amount. Calling that 'significant' is, I think, difficult; the language matters. There may be a link, but the effect size is small. What people generally get confused about is that they think statistical significance is a sort of probability, which it kind of is, but it has more to do with hypothesis testing. You're doing a test.

Daniel Bennett Do tell us what statistical significance is.

Tom Chivers OK, I'll see if this one works over the radio, shall we? All right. Let's imagine you've got a bunch of dice and you want to see whether they are loaded.

David Chivers Let's say you don't know whether it's biased. Somebody might be trying to play a trick on you with a die that is more or less likely than it should be to roll a six.

Tom Chivers Yeah. So if you're trying to test whether this die is loaded: if you roll it once and get a six, does that mean it's biased? No, because you could roll a six by chance even if it wasn't. If you rolled a hundred sixes in a row, that would be astonishingly unlikely on an unbiased die; you're into longer odds than the number of atoms in the universe, or something like that. But at no point can you actually say the die is definitely loaded; you could always roll one more six. The P value is the likelihood of seeing your result on an unbiased die. So if you roll three sixes in a row, there's a one in 216 chance of seeing that on a fair die, and your P value is one divided by 216, or about 0.005.

David Chivers But I think the crucial thing here is that it's a sort of thought experiment, almost a philosophical experiment: what would we expect to happen if this die were unbiased? That's the way to think about P values. How likely would it be, if the die were unbiased, for us to roll 100 sixes in a row? Extremely unlikely. And we set a tolerance level beforehand, which is where P = 0.05 comes from: if the probability of getting the result by chance is below that level, we say the die probably is biased, because is it really likely that we'd see 100 sixes from an unbiased die? Probably not. So it is a thought experiment, and that's what people forget when they think about statistical significance. One thing that happens a lot in journals, this idea of P hacking, is that people forget it is an experiment. They keep rerunning the test lots of times and suddenly get a random result: wow, I've got statistical significance. What they've actually done is run the thing so many times that they've hit a fluke, because there is always a chance. There is always a chance that you do get 100 sixes, well, maybe not 100, but there is always a chance that these things happen. And that's how it works in medicine. That's exactly how we'd think about the problem with the blood clots we talked about earlier: if there were no link between vaccines and blood clots, how many blood clots would we expect in the general population anyway?

So assuming there is no link, how likely is the number of blood clots we're seeing now, and should we read it as evidence of a link? It's difficult, because the probabilities involved are low and there are all sorts of other things going on. But that thought experiment is essentially what we mean by scientific proof. People think of falsification, but actually a lot of the time it's about likelihood. We can't prove forever that gravity always exists; we just think it's extremely likely that it does, because every time I drop something I see it fall. It's all about likelihood and probability. And going back to what you said, that's why statistics in schools is so important, because this is what we think of as scientific proof, not necessarily the theory. I think that is an extremely important thing for people to know and understand.
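Tom's dice arithmetic can be sketched in a few lines of Python. This is just the worked example from the conversation: the p-value for n sixes in a row is (1/6)^n under the null hypothesis of a fair die. The function name is mine:

```python
from fractions import Fraction

def p_value_sixes(n_sixes: int) -> Fraction:
    """P-value for rolling n sixes in a row: the probability of seeing
    that result under the null hypothesis that the die is fair."""
    return Fraction(1, 6) ** n_sixes

# One six tells you almost nothing: p = 1/6, well above the usual
# 0.05 tolerance level.
print(float(p_value_sixes(1)))

# Three sixes in a row: p = 1/216, about 0.005, below 0.05, so the
# result would conventionally be called 'statistically significant'.
print(p_value_sixes(3), float(p_value_sixes(3)))
```

Note that, as David says, rerunning the test many times and keeping only the best run (P hacking) breaks this logic: with enough tries you will eventually see three sixes from a perfectly fair die.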

Daniel Bennett I think that does a great job of explaining it. And certainly in the book, you even use caps to make that point.

Tom Chivers Extremely important!

Daniel Bennett It is. And at Science Focus we're maybe a little bit guilty of not going into that too much, because if you have to explain it every time you mention a discovery, it's a bit laborious. But it is fundamental. I want to move on to another value, and I think it's interesting because, again, it's something we've all been talking about for the last year: R, in relation to the spread of COVID. We've been seeing R values shown all the time, and I think it's a good example of how, with no intention to mislead, a number can still hide a deeper meaning, especially when it's a simple, single number that's quite neat and tidy. Oh, we're below one, so everything's OK. Could you talk us through how the R rate can sometimes hide important detail in the reporting and communication of what's going on with COVID?

Tom Chivers All right. OK, well, this was a really interesting thing that happened in, I think, May of last year. John Edmunds, the epidemiologist, who I think is on SAGE, told the Science and Technology Committee that the R value had gone up, and this was in the depths of lockdown. That sounded awful. And so, of course, the next day it was 'R value's gone back up, might be as high as one' in newspapers all over the place. But if you paid a bit of attention to what he'd said, it was more complicated than that. He was treating it as though there were two epidemics going on: one in the wider community, amongst all of us, and one in care homes and hospitals. In care homes and hospitals the R value was a bit higher than in society at large; the virus spreads more easily there because they're more cramped environments. But in both of these environments the R value was coming down. The number of cases was going down and the likelihood of spreading was falling in both of them. But because it was coming down faster in the wider community than in hospitals and care homes, the care home epidemic was taking up more of the average, becoming a bigger part of it, and therefore the average of the two together was going up, even though each individual epidemic was slowing down. This is a weird, counterintuitive thing called Simpson's paradox: when you take a big dataset and divide it up, the numbers in the subgroups can be going one way while the average of them all together looks like it's going the other. You can see it in some really interesting places.

A marvellous example: in America, the median wage for society as a whole was going up. But if you broke it down into people who had been to university, people who had some college, people who had finished high school and people who hadn't, the median wage for each of those four segments was going down, even while the figure for society as a whole was going up. That was because more people were moving into the higher-paid groups: more people were going to university, more people were finishing high school. So you get this really complicated thing where the overall number goes one way while the subgroups go another. And it's genuinely difficult, because what is the correct way of looking at it? With the median wage, if you were a person with a college degree, it's likely your wage went down over that period; but taking Americans as a whole, it's likely your wage went up. What is the correct way of telling that story? In the case of the R value, it might be important to treat it as one big epidemic, or it might be important to look at the subgroups. The only honest thing you can do is say: look, this paradox is happening, this complicated story with the numbers is going on, and we need to express that and let readers understand there's more to it than a simple number going up and down.
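Simpson's paradox is easier to believe with a toy dataset in front of you. The numbers below are invented purely for illustration (they are not the real US wage figures): each group's mean wage falls between the two periods, yet the overall mean rises, because the higher-paid group grows.

```python
def overall_mean(groups):
    """Mean wage across all people, given (group size, mean wage) pairs."""
    total_pay = sum(size * wage for size, wage in groups)
    total_people = sum(size for size, _ in groups)
    return total_pay / total_people

# (size, mean wage) for [no degree, degree] -- invented numbers
before = [(80, 20_000), (20, 50_000)]
after = [(40, 19_000), (60, 48_000)]

# Each group's mean wage has gone DOWN (20k -> 19k, 50k -> 48k),
# but the degree group has tripled in size...
print(overall_mean(before))  # 26000.0
print(overall_mean(after))   # 36400.0 -- the overall mean goes UP
```

As David says below, the trick is in how the group sizes change: the weights in the weighted average shift towards the better-paid group.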

David Chivers I find this paradox extremely... I mean, people are sometimes slightly confused about how this can happen. Don't worry, I still find it very difficult. When you see it written down, and we've got a nice table in the book, which is my advertisement to buy the book, you go, oh OK, that makes sense. It's actually a lot to do with how the group sizes change, and then you can see how it happens. But one interesting point about the R number, which I always felt was so difficult: you just see a number, 1.1 or 1.2. I'm used to dealing with compound growth, exponential growth, and even so it's very difficult to know what the big deal is about 1.2, 1.3 or 1.4. They don't really seem that big a deal, right? But if you write it down, and I suggest you do this if you're not convinced, just to see it once: literally take a number, say 10, and keep multiplying it by 1.1, and see how quickly it grows. You'll be surprised how quickly a very small difference, like 1.1 versus 1.3, gets completely out of hand. And it goes back to this idea of what's in our experience. Unless we've seen something like starting a fire, which starts small and then suddenly completely takes off, we have very little experience of compound growth in our lives. I think that's why it's so hard to see from a number how such a small difference can make such a big, scary impact. And that's something people probably didn't have a good grasp of at the time.
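David's suggestion takes one line to try: start with, say, 10 cases and keep multiplying by R. A quick sketch (the starting number and the count of generations are arbitrary choices for illustration):

```python
def project_cases(start: float, r: float, generations: int) -> float:
    """Cases after some number of infection generations at a constant R."""
    return start * r ** generations

# Ten starting cases, 20 generations on. Small differences in R
# compound into wildly different outcomes.
for r in (0.9, 1.1, 1.3):
    print(r, round(project_cases(10, r, 20)))
# R=0.9 dwindles to about 1 case, R=1.1 reaches about 67,
# and R=1.3 explodes to roughly 1,900.
```

The gap between 1.1 and 1.3 looks trivial on paper, but after twenty generations it is the difference between dozens of cases and thousands.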

Tom Chivers All I will say, though, is that it is astonishing, and I would love to do some research into this, because I bet if you'd asked the British population 14 or even 18 months ago what the R value was, maybe one person in 100 could have answered: professional epidemiologists, statisticians, that sort of thing. I couldn't have done until I read Kucharski's book about it in about January last year, when this was all kicking off. I don't think I'd have been able to tell you what it was. Now, if you asked again, and I know I'm making up numbers here, I would be amazed if fewer than half of people were able to tell you. I think it's now very much part of our common parlance. I think there's been...

David Chivers They could give you some definition. They probably know greater than one is bad. Less than one is good, that sort of thing.

Tom Chivers I think they probably know that it's to do with the number of people infected by the average infected person. Maybe half is optimistic. But I do think the pandemic has been a marvellous demonstration of how people can get the hang of this stuff to a reasonable degree. They're not all going to be Professor David Spiegelhalter, but they can get it to the point where they can make use of it in their lives when they have to. And that has been somewhat heartening. I may be totally wrong; maybe no one has a clue what they're talking about. But when you watch the political journalists who have been thrown into having to deal with this stuff, it's been fashionable to say they've got it terribly wrong. I feel like, going from a standing start, they've done a pretty decent job: going from rarely using numbers for much more than counting votes in the House of Commons to actually working out R values and things. I think it's been a revelation.

David Chivers I wonder if that's because R doesn't really have anything behind it; it's just 'R', so you have to learn the definition. Whereas with herd immunity you think you can guess what it means, so I think people would probably make more mistakes with it. We talked about that a lot, because it's a bit like statistical significance: a misleading term that makes you think, I know what this means, and then you go off track.

Daniel Bennett It would be fascinating, wouldn't it, because I suppose there's a bit of both tied into that. A lot of this is far from intuitive, for example the paradox we just discussed. But then, in the last year, we have had a very real example of what happens when R goes from 0.9 to 1.1. We've seen these fluctuating numbers where very quickly you stop following the news because the epidemic is dwindling, and then suddenly we all have to be inside and everything's terrible. So it would be fascinating to see how people understand these things now, how much the science communication got through and made a difference.

Tom Chivers Sorry, I jumped in there, but it's something I've been asked a lot. When I won the Royal Statistical Society's Statistical Excellence in Journalism Award a few months ago, you have to write a little piece afterwards, and they asked: do you think this last year has been an education for people? And I have to say, as someone who likes to take numbers seriously, I don't know. I haven't done any surveys on this; I would love it if someone did. But my fairly confident guess is that there has been a real improvement in understanding the relevant numbers, things like infection fatality rates, things like R. A much larger percentage of the population could talk reasonably knowledgeably about these things than could have done before. But sorry, I interrupted you.

David Chivers Now I'm thinking how you'd go about looking at that. And I'm like, oh, we don't have a baseline anyway.

Daniel Bennett Well, I know. Before I put this back on the rails, I'd be fascinated to see whether behaviour changes as we return to normal, in terms of simple things like how disease spreads. Will we shake hands? Will we keep masks on public transport? Now that people have this intimate knowledge of how disease spreads and what the risks are, I wonder if that will change. But anyway, to get back to the statistics we know about, not the ones we haven't found out yet: I just want to ask about a couple more before we wrap up. Not to keep it on coronavirus, but it's pressing, I suppose: Goodhart's law. There are some great comics about it by XKCD and others, if you Google Goodhart's law. Could you tell us what it is, and how it played out in the pandemic?

Tom Chivers OK, so Goodhart's law is this very dry-sounding but incredibly profound thing. The line is: when a measure becomes a target, it ceases to be a good measure. What that means is, we use numbers, metrics, measures to gauge what's going on in the world around us. We want to know how our education system is doing, so we measure how many children get five A*-C grades at GCSE. Or we want to know how our hospital system is doing, so we look at the survival rate of patients 30 days after they leave hospital. And that's really good and useful. But then you take that number and say, this obviously correlates with how good things are: children who get more A*-C grades tend to do better in life, therefore if we push up the number of children getting A*-C grades, more children will do better in life. So that becomes the target. We tell schools: if 50 per cent of your children don't get five A*-Cs at GCSE you'll be punished, head teachers will lose their jobs, you'll be put in special measures, that sort of thing, and you'll be rewarded if they do. And then, of course, the measure becomes a target, and so it ceases to be a good measure. You get teachers teaching to the test or, as genuinely happened, concentrating on the children on the C-D grade boundary: pushing children on a D up to a C and neglecting everyone else, because the others are either going to fail anyway or are already going to get a C, B or A. Concentrating hugely on those children pushes up the metric, but it doesn't particularly improve the thing you actually care about, which is the flourishing of all children across the whole grade pyramid.

And what happened in COVID, and I was weirdly proud of myself about this: in April last year, I think, Matt Hancock declared that they were going to reach 100,000 tests a day by the end of April. I think I've got my numbers and dates right. And I immediately thought, wait a minute, you're giving yourself a metric there, you're setting yourself a target, and this is going to be a hotbed for Goodhart's law. And lo and behold, that's exactly what happened. They started counting antibody tests as COVID tests, even though that wasn't what was being used to diagnose people. Matt Hancock was sending out emails to the Tory mailing lists saying, please go and get tested so we can get our numbers up. They started counting tests that had just been sent out in the post rather than actually taken. And if someone did a test that didn't work and was tested again, that counted as two tests. By doing all these things, they got it up to 100,000. I'm not saying it was all a bad thing; the testing regime scaled up really quickly and was impressive. But what we actually cared about was the number of people who needed a test and were able to get one. Whether it was exactly 100,000, 100,001 or 99,999 doesn't matter; the number itself was unimportant. And the Goodhart's law nature of it meant that once the target was set, it pushed them to get the number as high as they possibly could.
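The C-D boundary tactic can be put into a toy model. Everything here is invented for illustration: scores are points out of 100 with the C threshold at 50, and the headline metric is the share of pupils at C or above. Gaming the boundary matches genuine across-the-board improvement on the metric while doing far less for overall attainment:

```python
def pct_at_c_or_above(scores, threshold=50):
    """Headline metric: share of pupils scoring at or above a C."""
    return sum(s >= threshold for s in scores) / len(scores)

scores = [20, 35, 48, 49, 49, 52, 60, 75, 85, 95]

# Strategy A: three extra points for everyone (broad teaching effort).
uniform = [s + 3 for s in scores]
# Strategy B: three extra points only for pupils just below the
# boundary (scores 47-49), neglecting everyone else.
gamed = [s + 3 if 47 <= s < 50 else s for s in scores]

print(pct_at_c_or_above(scores))   # 0.5
print(pct_at_c_or_above(uniform))  # 0.8 -- same headline number...
print(pct_at_c_or_above(gamed))    # 0.8
print(sum(uniform) - sum(gamed))   # ...but 21 points of attainment apart
```

Both strategies look identical through the lens of the target; only the thing the target was supposed to stand in for, overall attainment, tells them apart.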

David Chivers I think what's interesting about this law is that it's applicable to so many things; in fact, we've discussed a couple. One is: is it a good idea for journalists to look at the number of clicks on their articles? You could say it's a good thing, because it incentivises behaviour, making sure they care about the reach of their journalism. But you could also say it incentivises negative behaviour, maybe something more sensationalist. The same with academics and publishing: maybe it incentivises me to be more ambitious and work harder, or maybe it just incentivises me to cheat. And I think what's really important is not that we should never set targets; they can be a good thing. Saying we should aim for something is OK. The problem is when the target becomes the only way you judge something. If we're judged only by the success of the hundred thousand, then that measure stops being something that can objectively tell us what's going on. I'm sure many of you listening will have this in your daily lives; it happens all the time. What's important is not to lose sight of what you are actually trying to achieve. And I think this is why people dislike numbers so much: when bosses wave numbers at you as evidence of why you haven't done enough, or even hand them to you as targets, that leads to something bad. It's a bad use of numbers. One of the things to do when you get a number, rather than saying this is what it shows, is to ask: why is it showing this?

And think of all the possible reasons, not just the good or the bad ones. That's probably a better way of thinking about numbers than saying: this immediately means you're rubbish at journalism, Tom, because you didn't get a thousand million clicks or whatever.

Tom Chivers I always get a thousand million clicks.

David Chivers Exactly. I think if you're too quick to jump to conclusions, you're not really getting the most out of the data.

Tom Chivers Another thing, from something I did with the BBC last year, speaking to the educationalist and sports fan Daisy Christodoulou, who is the person who mentioned the C-D grade boundary thing to me. She also said you can see Goodhart's law in a really interesting way in sport. In football, for example, you have a problem: the game doesn't flow well because someone keeps goal-hanging. So you introduce a rule to stop it, the offside rule: you're not supposed to stand beyond the last defender. And then, of course, manipulating the new offside rule becomes part of the game. Lots of goals that would otherwise be good get ruled out; you get VAR deciding someone's armpit was offside by half a millimetre, and all that sort of stuff. But whether someone is a quarter of a millimetre offside isn't what the rule really cared about; it cared about the flow of the game. Any time you introduce these new rules, you set new targets, new metrics, and then that makes them useless for the thing you're really trying to do. Goodhart's law is one of those things that, once it's been explained to you, you start seeing everywhere. My last book was about how AI could go wrong in dangerous ways, and I realised I could have written a large part of it as 'Goodhart's law, but more so'. If you tell a computer to do something, to cure cancer or make paperclips or something like that, you have to be damn sure it's actually going to do the thing you want, not just fulfil the metric you've set for it, because that's where it goes disastrously wrong. I think Goodhart's law is a really profound and fascinating thing. Honestly, once you've seen it, you can't stop seeing it everywhere.

Daniel Bennett So then, finally, I wanted to end on an upbeat note, because at points in this podcast, and maybe in the book, you might think: oh God, can I trust anything? Is anything true? But I suspect you have a slightly different view. What do you think? Can we trust anything any more, or is that just the nature of the world as it is now?

David Chivers I think this is something that genuinely worries us: that you read everything in this book and conclude that everything is complete nonsense, so you can't trust anything. It would be an understandable reaction, but I really hope people don't do that. We're obviously highlighting cases where things go wrong, but the book is more about how to interpret things when you see them in the world. I believe quite strongly that numbers are extremely important and can help us get towards the truth, or something along those lines. And even if something in the media isn't quite right, it doesn't mean we can't learn from it. A lot of the time when I see, say, sampling bias, even on Twitter, it doesn't mean I can't learn anything; it might be a good representation of Twitter, even if it's a bad representation of the population as a whole. So rather than saying nothing is real and everyone is lying to me, ask: what can I take from this number? What can I learn? Lots of people, when we mention this book, say, oh yes, lies, damned lies and statistics. And that's probably only true because we let people get away with it. If we knew how people pulled those tricks, it would be far harder for us to be fooled by them. Basically, if everybody knew and followed the rules in the back of the book, about what a '20 per cent increase' really means and so on, the sensationalism would go away, because there would be no point in doing it.

Everyone would go, 'this is stupid; that's ridiculous; no one would believe it'. And that's one of the reasons why, at the end of the book, we give a style guide, some concrete ideas, as part of a campaign we're running for statistical literacy. If you're interested, you can sign up at howtoreadnumbers.com. The idea is that if the media follow these simple rules about how to present numbers, we will all be better informed. And it's not just about journalists following them: we as the public can demand that people give us better numbers, because that's what we're interested in. We want it to be demand-led.

Tom Chivers Firstly, I worry too about the 'lies, damned lies and statistics' thing, because there's a counterpart to it, which we quote in the book and which I think is really important: while it is easy to lie with statistics, it is even easier to lie without them. If you're using real statistics, it's at least a little bit harder to falsify things, because you can't just make stuff up in the way you could without them. That said, I completely agree with David. The thing we would love you to do is go to howtoreadnumbers.com and sign up for the campaign to get more media outlets to use a statistical style guide, whether ours or their own. We'd love it if the BBC wrote a statistical style guide, like the writing style guide it already has. I do think 'I can't trust anything in the media; all these numbers are lies' is an overreaction. But it would be good for readers to have an instinctive 'well, they've given me a number, what should I do now, what should I check?' response. If you see a number, just know what questions to ask to interrogate it. Have they given me context? Have they given me the absolute risk as well as the relative risk? Is there an effect size here? If they say there is a statistically significant link between eating fish fingers and developing a snoring problem or something, how big is that link? Have they told me?

And if they don't tell you these things, or the sample size of the study, or perhaps the confidence interval around their estimates, it might be worth being a bit more willing not to trust it, or to go and look into it somewhere else, especially if you're going to base any decisions on it. I think that would be a good thing to take away.

Daniel Bennett That was Tom Chivers and David Chivers, talking to me about their new book, How to Read Numbers. The book is on sale now, published by Weidenfeld & Nicolson. If you're interested in their advice on how to report statistics, do head to their site, howtoreadnumbers.com, where you can find the statistical style guide in full. Thanks for listening, and if you enjoyed the episode, please do leave us a review. This podcast was brought to you by the team behind BBC Science Focus magazine. In the April issue, which is on sale now, we dive into the missions that will take scientists to new frontiers. For the magazine, we've talked to the teams exploring mountains under the oceans, taken a close look at the project that will attempt to take an actual photo of an exoplanet, and explored how cosmic rays are allowing scientists to see deeper into ancient pyramids than ever before. And of course there's much more in the magazine and on our website, sciencefocus.com.

Let us know what you think of the episode with a review or a comment wherever you listen to your podcasts.
