If you go to the App Store on your iPhone and type “depression” into the search bar, you’ll find dozens of apps purporting to screen for or help alleviate depression. One study claims there are 10,000 such apps; another puts the figure at around 350.
Depression is a problem. The World Health Organisation calls it one of the “leading causes of disability” worldwide, and says there are 260 million sufferers — about one in every 30 people globally. The number of sufferers has gone up by about 50% since 1990, according to the Global Burden of Diseases, Injuries, and Risk Factors Study 2017, although since the global population has also increased by about the same percentage in that time, the prevalence hasn’t really changed. If we look at all mental health issues together, almost a billion people — more than one person in eight — are affected.
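That arithmetic is worth checking, since it can seem paradoxical that cases rise 50% while prevalence stays flat. A back-of-the-envelope sketch (the population figures here are my own rough approximations, not from the study):

```python
# Back-of-the-envelope check of the prevalence claim above.
# Population figures are rough approximations, not from the study.
cases_2017 = 260e6              # ~260 million sufferers today
cases_1990 = cases_2017 / 1.5   # case numbers have risen ~50% since 1990
pop_1990 = 5.3e9                # approximate world population, 1990
pop_2017 = 7.6e9                # approximate world population, 2017

prev_1990 = cases_1990 / pop_1990   # roughly 3.3%
prev_2017 = cases_2017 / pop_2017   # roughly 3.4%, about one person in 30
```

Both work out at roughly one person in 30, which is why a 50% rise in raw case numbers does not translate into a rise in prevalence.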
This is a serious burden, whether or not it is getting worse. But it goes largely untreated. The same WHO article says that between 76% and 85% of people with depression in developing countries receive no help (although it bases this claim on data from 2007, so it may be outdated). In the developed West, of course, the figures are lower — but still, here in Britain we are facing an NHS funding shortfall, and not everything that we would like to fund can be funded. It’s against that background that we see the proliferation of depression-treatment apps.
In theory, at least, an app doesn’t need a doctor, or an appointment; it’s fast and scalable and cheap. So if you can farm out a decent fraction of treatment to them, then you’ll provide a huge amount of benefit to a large swath of society, and save the NHS millions as you do so.
The trouble is that there doesn’t seem to be a great deal of evidence that they actually work.
A study from November last year looked at 293 mental health apps and found that only 3.41% of them — that is, just ten (!) — even claimed to have evidence showing their effectiveness. Of those 10, only three were backed by independent research, i.e. research not carried out by the people who made the app.
The apps — things like Destressify, “a complete program for developing the practices that permanently rewire the brain”, which promises “20% less stress in four weeks” in “as little as 10 minutes per day”, or MoodMission, an app which gives you short “missions” to improve your mood, such as running on the spot — take ideas from therapy, such as CBT or mindfulness, and offer them to you in app form. Those two are in fact among the 10 with some research evidence behind them, but even that evidence comes from small, un-preregistered studies which (knowing what I know about the state of psychological research) I would be cautious about trusting.
That doesn’t mean that the apps are worthless. You could argue that you don’t need to test your app if it’s just offering well-evidenced CBT, although of course getting CBT from an actual human may be very different from getting it from a screen, in the same way that French lessons from an actual French teacher are probably more effective than Duolingo. But only 30% of the apps tested even claimed to have had experts helping with the design, meaning that “over two-thirds of apps for treating depression and/or anxiety were developed without any professional input”. Other studies have found similar shortages of expert input and evidence; one found that 89% of apps have exactly no research evidence.
A more recent study, covered in an excellent post on the mental-health blog Mental Elf, found that even the evidence that does exist is problematic. The study, a meta-analysis, looked at 18 earlier papers on mental health apps to see what percentage of users dropped out, on the grounds that drop-out rates are a good proxy for how engaged users are. It found that more than a quarter of participants dropped out before their study was completed; and once the authors accounted for publication bias, that figure went up to almost half. Disturbingly, real apps fared no better than “placebo” ones. For comparison, only about 14% of people using “non-app controls”, e.g. face-to-face therapy or some other more traditional method, drop out.
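To see how publication bias can drag a figure like that around, here is a toy illustration — every trial number in it is invented for the sake of the example, none comes from the meta-analysis:

```python
# Toy illustration of publication bias flattering the dropout picture.
# Every number here is invented; none comes from the meta-analysis.
trial_dropout = [0.52, 0.31, 0.47, 0.28, 0.55, 0.35, 0.60, 0.22, 0.44, 0.50]

# Suppose only trials with flattering retention (dropout under 40%)
# make it into the published literature.
published = [rate for rate in trial_dropout if rate < 0.40]

apparent = sum(published) / len(published)          # what the literature shows
actual = sum(trial_dropout) / len(trial_dropout)    # what really happened

print(f"apparent dropout: {apparent:.0%}")   # 29%
print(f"actual dropout:   {actual:.0%}")     # 42%
```

Correcting for the trials that never made it into print is, in essence, what pushed the meta-analysis’s dropout estimate from around a quarter up towards a half.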
It also found that the rate of drop-out wasn’t affected by what particular kind of therapy the app offered, but by whether or not there was an actual human involved. If some real therapist is looking at your app inputs and responding to them, you are less likely to give up on the whole process, which probably isn’t surprising. A psychologist I asked about this said “I would predict that these apps are only useful insofar as they connect people to actual services or actual people, or destigmatise seeking help.”
Yet there is a keenness to use them, partly for the cost-based reasons mentioned earlier: you can send out an awful lot of apps for the cost of one clinical appointment. But it is also, I think, to do with our fascination with all things tech. A lot of the time that fascination manifests as scaremongering — the obsession with “video game addiction” and “internet addiction”, for instance, neither of which really stands up to scrutiny, but which are already the subject of “boot camps” for sufferers in China, camps which may actually have killed attendees.
On the other side, though, there’s an eagerness to use technology to cure all ills. I am sincerely hopeful that one day it will: the use of big data to improve medical research, for instance, is fascinating, and is leading to profound changes in how science is carried out. AI in radiology can augment human decision-making: in many areas it is about as good as, or slightly better than, human experts at spotting problems, but, marvellously, it makes different mistakes, so when you use human and AI together, each one spots the things the other missed.
That’s all brilliant and I hope it carries on. But it can lead to overenthusiasm, such as the health secretary trying to get the NHS running on AI when large parts of the NHS are still running on Windows XP.
The possibly premature introduction of mental health apps into healthcare seems like another example of overenthusiasm. For instance, the NHS lists 20 “trusted” mental health apps, ranging from free to £30. Some offer guides to mindfulness meditation, such as Be Mindful, the £30 option. Others provide a “toolbox” of stress-reducing techniques, or CBT-like methods of managing negative thoughts.
Unlike many of the apps discussed above, most of these have some sort of evidence base: Be Mindful, for instance, proudly lists studies finding “63% depression reduction”, “40% reduction in stress” and “58% anxiety reduction”. But dig into them a little and you find that those studies are either uncontrolled (meaning it’s impossible to know whether the effect was caused by the app or simply by people getting better on their own) or small; and when the effects are as implausibly large as that, it sounds like something’s not right. Evidence of this sort is definitely better than no evidence, and £30 is a lot cheaper than a course of mindfulness therapy, but there is plenty of room for scepticism.
I don’t want to put anyone off using an app if they find it helps them with their mental health. But if you’re scrolling through an endless list on the App Store, trying to work out which one to use: be wary. It’s far from clear that they’ll do any good.