

July 14, 2020

Back in the middle of March, I was pretty stressed out. Funnily enough it was the handwashing that got me; this was a few days before lockdown, and the main public health message had been “wash your hands”. So I washed them, dozens of times a day. My hands were raw and cracked, sometimes to the point of bleeding. They itched — burned — constantly. I was at every moment aware of this burning sensation, and it was an ever-present, unignorable reminder of how the world had changed.

My six-year-old son’s hands were red and raw too, from his conscientious efforts at washing his hands at school. He would proudly tell me that his teacher had said he was the best hand-washer in his class, and it broke my heart even as I praised him for it. I’m not an anxious or depressive person, and I’m not usually stressed. But the coronavirus pandemic — the uncertainty, the fear, and the god-damn handwashing — got to me.

It’s easier now: you get into routines, the weird becomes normal; plus I started to backslide a bit on the handwashing, once they shifted the emphasis to social distancing. But, for a while, I could feel myself struggling. 

So when people say that Covid-19 is bringing with it a secondary epidemic of mental health problems, I can easily believe it. I can believe it because I could see how tough being out of school was on my kids, and because I heard other people saying the same; and because we were all well-off people of privilege, so I could only imagine that it was worse for others. I can believe it because I know so many people who went from being full-time workers to full-time workers and full-time parents and educators. It’s a bloody stressful time.

Mental health matters. If people are suffering significant mental health impacts because of coronavirus, or because of the lockdown, then we need to know, because that will affect the correct way to respond, personally and as a society. We need to honestly address the trade-offs, the costs and benefits, of the policies our country adopts to minimise the damage of the virus.

That means that research into the mental health impact is important — and there’s lots of research going on. The trouble is that, just as in so much of science at the moment, everything has to happen fast. That’s understandable, but it can cause real problems. 

Pete Etchells, a professor of psychology at Bath Spa University, told me that “there’s a lot of scrabbling around to do research on the wider effects of Covid. Not just in psych. And it comes from a good place; people want to help and do something useful.” But, he says, we need to be extraordinarily careful. “When there’s stuff going out there that either has the aim of, or has an impact on, policy decisions, it has a real potential to have a big impact on people’s lives.” 

Careless science can lead to wrong conclusions amazingly easily. Psychology has learnt that lesson more harshly than most disciplines, having suffered huge and embarrassing setbacks during the “replication crisis”. What it has discovered is that some precautions are vital. Some of them sound slightly technical but matter a great deal: preregistering your hypotheses, and committing to publish regardless of the results, which together help to prevent both innocent and deliberate statistical malpractice. And, perhaps most importantly, it’s vital to share your data and methods, so other researchers can check your work.

I want to flag two papers looking at Covid and mental health, each taking a very different approach to how you do science, to show you what I mean. The first came out in May, and was intended to establish the “validity” of a new measure that one of its authors had developed back in March: a “fear of Covid-19” scale. The authors have since used the scale in further research.

The “validity” of a scientific measure is how well it applies to the real world. For instance, most research into smartphones and mental health has been carried out using one of several questionnaires that ask people about their phone use. But a study last year discovered that how much people thought they used their phones was only tangentially related to how much they actually used them, as recorded by a special app. So the phone-use scales are not very valid: they aren’t very accurate at measuring the thing they’re supposed to be measuring.
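To make “validity” a bit more concrete, here is a minimal sketch, in Python, of the kind of check that smartphone study ran. The data below is simulated rather than anything from the real study; the point is only that a measure’s validity can be summarised as how strongly it tracks the ground truth it claims to capture.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: app-logged daily screen time (minutes) for 200
# people, plus noisy, biased self-reports of the same quantity.
logged = rng.normal(loc=180, scale=60, size=200).clip(min=0)
self_report = (0.5 * logged + rng.normal(loc=90, scale=80, size=200)).clip(min=0)

# Criterion validity is often summarised as the correlation between
# the measure (self-report) and the ground truth (logged use).
r, p = stats.pearsonr(self_report, logged)
print(f"self-report vs logged use: r = {r:.2f}")

A low correlation means the questionnaire is not measuring what it claims to measure, which is exactly the sense in which the phone-use scales turned out to be “not very valid”.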

Validity is crucial in psychology, because so much of it is based on questionnaires and surveys. If I make a questionnaire to give to patients, one which gives each of them a score, and that score is called their “suicidality score”, then it is really important that people who score highly on it are at higher risk of suicide than people with low scores.

It’s important because, if that scale is picked up, other people will do research using it. So they might do a trial of some antidepressant, and measure subjects’ suicidality scores on this scale before and after. Then, if the subjects’ scores improve, the authors can say “this drug improves scores on the Chivers Suicidality Scale by an average of 11 points”. And it might get picked up by NICE and given to patients by the NHS. 

So, if the suicidality score is actually not valid, if it doesn’t relate to real-world outcomes, then it is worse than useless: it is directing resources and future research away from the places where they can do good.
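To make the trial example above concrete, here is a toy sketch of a before-and-after analysis. The scores are entirely made up, and the scale is the hypothetical one from the text; no real drug or data is involved.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Made-up "Chivers Suicidality Scale" scores for 50 subjects, before
# and after treatment, with an 11-point average drop built in.
before = rng.normal(loc=60, scale=10, size=50)
after = before - rng.normal(loc=11, scale=8, size=50)

# A standard paired t-test on the before/after scores.
t, p = stats.ttest_rel(before, after)
print(f"mean improvement: {(before - after).mean():.1f} points (p = {p:.3g})")

The arithmetic here is trivial; the catch is that it is only meaningful if the scale itself is valid. If scores don’t track real-world risk, an “11-point improvement” tells us nothing about whether the drug helps anyone.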

That means the paper establishing the validity of the new “fear of Covid-19” scale is very important. The authors seemed to take it seriously, recruiting 8,500 Bangladeshi students to test it on. And, vitally, they said that “data will be available on request” if other researchers want to look at their findings.

But — disturbingly — they haven’t gone ahead with that. Another researcher, Brittany Davidson — one of the authors of the smartphone self-report validation study I mentioned earlier — asked them for the data several months ago. She had wanted to look at the data because, she says, she was wary of psychometric scales coming out with a “lack of rigour”, and she had questions over how the authors had managed to recruit such a huge sample size in a very short time, apparently without offering to pay the subjects anything; that is, she says, very unusual.

So far, none of the data has been forthcoming. This is really out of the ordinary, and worrying, especially since — as far as we can tell — the data is not sensitive. Researchers keeping data to themselves was a significant part of why the replication crisis got so bad. It’s especially bad when they say in the paper that they’ll share it.

You may feel this is all a bit technical and unimportant, but it’s not. The original “fear of Covid-19 scale” paper has more than 120 citations already. The validation paper was carried out in Bangladesh, but the authors have also created English, Turkish, Persian, Spanish and other versions of the scale. A small but non-negligible percentage of the research into the mental health impacts of Covid-19 is carried out using this scale.

And its authors make serious claims about those impacts: they say in one paper that Covid-19 has led to people “suffering from elevated anxiety, anger, confusion, and posttraumatic symptoms”, to “unusual sadness, fear, frustration, feelings of helplessness, loneliness, and nervousness”, and that in “extreme cases, it may trigger suicidal thoughts and attempts and, in some cases, actually result in suicide”. This is important, non-trivial stuff.

More than that: the authors use the scale to propose policies for national governments, such as banning and removing websites that host “misinformation (e.g., false COVID-19 treatment remedies, COVID-19 conspiracy theories)”, or having government health agencies “make online counseling sessions available to help ease the mental health concerns and worries of the general population”. It is very possible that, somewhere in the world, a government white paper will be written citing the “fear of Covid-19” scale as support for some policy or other. (Stranger things have happened.) And yet other researchers cannot look at the data to see how the scale was validated.

(There are, I should note, other concerns with some of the research of one of the authors, some raised by the psychologist Dorothy Bishop and others by the psychologist Nick Brown.)

As I said earlier, it seems obvious that Covid-19 is dragging a mental health crisis in its wake. It certainly seems obvious to me. So you might think it doesn’t really matter if some papers cut corners a bit in order to tell us things we already know.

That’s why I wanted to bring up another piece of work, which seems to me an example of careful research in this fast-moving and scary topic. It looked at 800 adults living alone in the UK and USA, and measured their mental health on well-established, well-validated scales (the standard measures of depression, anxiety and loneliness). 

The paper measured their responses at three points: one some time before the crisis had really taken off, and two after lockdown had started. Vitally, it was also a “registered report”: that is, the authors said in advance what they were looking for, and the journal agreed to publish the study as long as it was carried out according to the stated methods, regardless of its results. That rules out a lot of the bad statistical practice that can distort science.
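What does that bad practice look like? Here is a toy simulation, my own illustration rather than anything from either paper, of what happens when researchers measure many outcomes and report whichever one happens to cross the significance threshold.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_studies, n_outcomes, n_subjects = 1000, 10, 50

false_positives = 0
for _ in range(n_studies):
    for _ in range(n_outcomes):
        # Two groups drawn from the same distribution: no real effect.
        a = rng.normal(size=n_subjects)
        b = rng.normal(size=n_subjects)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            false_positives += 1
            break  # report the first "hit" and stop looking

print(f"null studies reporting a 'finding': {false_positives / n_studies:.0%}")

With ten unregistered outcomes, roughly 40% of these entirely null studies still produce a “significant” result. Preregistering a single outcome in advance caps that rate at the advertised 5%, which is exactly the discipline a registered report imposes.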

Just as the existence of a Covid-19 mental health crisis seemed obvious to me, it seemed obvious to the authors of this study, Dr Netta Weinstein of the University of Reading and Dr Thuy-Vy Nguyen of the University of Durham. They simply assumed it was true; so much so that they measured loneliness, anxiety and depression only in order to look at other things, such as whether introverted people were coping better.

But to both authors’ surprise, they didn’t find any increase at all on any of the three scales. Average levels of loneliness, anxiety and depression did not go up. It was — to me, and to the authors — a startling result.

It’s not the last word. There could be all sorts of reasons why it didn’t find any impact. Weinstein wondered if the study, looking at people living alone, missed the stress placed on families. Or it could be that depression, anxiety and loneliness aren’t the main things that went up. “We didn’t look at stress,” she says: “I can remember that moment when I realised I was a full-time mum and a full-time academic, that sense of panic, but I don’t think I felt depressed.” 

But the point is: we would all have confidently predicted, like Nguyen and Weinstein, that lockdown would make people living alone more lonely, more anxious and more depressed. They didn’t find that. Psychological research is messy and hard; that is exactly why really good, careful science matters, especially in fast-moving situations like this. “This is a good opportunity to practise all the stuff that came out of the replication crisis to make sure that the answers we get are the right ones,” says Etchells. Open data, preregistered hypotheses, well-validated scales: we know this stuff is important.

My stress levels have returned to normal now; partly that’s routine, partly it’s the kids being back at school, partly it’s my hands being back to their usual washing-up-liquid-advert softness. But the issues surrounding Covid-19 and mental health have not gone away; if nothing else, we can expect economic hardship and widespread unemployment in the coming years, and we know those are linked to mental health problems. Research into these issues will only get more important, if it is to help mitigate the damage. But it needs to be good research. The lessons of the replication crisis need to be learned.


Tom Chivers is a science writer. His second book, How to Read Numbers, is out now.
