Why do so many exciting studies turn out to be bogus?

August 26, 2021

In 2002, a Harvard professor named Marc Hauser made an exciting discovery about monkeys. Cotton-top tamarins, to be specific. The monkeys, just like human infants, were able to generalise rules that they’d learned across different patterns. This was a big deal: if monkeys had this capacity, it would provide key insights into how human language evolved.

Except it was all fake: in the experiment, which relied on the monkeys looking in particular directions when shown certain patterns, Hauser had simply pretended that they were looking in the direction relevant to his language-evolution theory. They hadn’t been. When a research assistant questioned how Hauser himself kept finding the results he wanted when nobody else who looked at the data could, he turned into a browbeating bully: “I am getting a bit pissed here,” he wrote in an email. “There were no inconsistencies!”

It is just a tiny bit ironic, then, that Hauser had also written a book about morality. Moral Minds: The Nature of Right and Wrong came out 15 years ago, and described Hauser’s theory that we have an in-built, evolved morality module in our brains. Perhaps his had gone somewhat awry: not only did he fake the data in that monkey-learning paper, but there were also allegations that he’d lifted many of his book’s ideas — most notably, the idea that morality has a “universal grammar”, like language — from another academic, John Mikhail, without crediting him at all.

You might have expected better from an Ivy League university professor. But the Hauser case was a classic reminder of how even the most high-powered intellectuals from the most august institutions should never be given our implicit trust.

Sadly, we now have yet another story that underlines this lesson. Another psychology professor from a top university; another pop-science book; another set of results that appear never to have been real in the first place; another set of credible (though, I hasten to add, at this time unproven) allegations of scientific fraud. And another irony, because the potentially dishonest results were in a study about: honesty.

Duke University’s Dan Ariely has written several books that made a big splash in the world of popular psychology and “behavioural economics”. His combination of humour and what appears to be deep psychological insight made them fly off the shelves. In 2008, Predictably Irrational provided an apparently “revolutionary” argument for why economists were wrong to assume rationality on the part of the average consumer. In 2012, The (Honest) Truth About Dishonesty used some of Ariely’s own research to explain what makes people break the rules. Ariely’s slick, charismatic TED talks have racked up millions of views. One of them, titled “Our Buggy Moral Code”, explains “why we think it’s okay to cheat or steal”.

Unfortunately, it seems someone involved with Ariely’s research thought it was okay to cheat. Last week, an in-depth statistical analysis showed that a dataset from one of his 2012 papers was, essentially beyond doubt, fraudulent. The study had apparently shown that people were more honest about how much mileage their car had done if you made them sign an “I promise this information is true” statement before they reported the mileage, rather than at the bottom of the page. But it hadn’t shown that. In fact, it seems no such study ever happened, and the data was just produced using a random number generator.

Ariely responded to the claims: he said that he’d had a car insurance company collect the data, so someone there must have faked it (impressively, the faker made the results of the study line up perfectly with Ariely’s theory). In other words, his crime was one of sloppiness rather than fraud, since he didn’t double-check the data. He won’t say which insurance company it was — his responses were described by BuzzFeed News’s investigative journalist Stephanie Lee as “vague and conflicting” — nor will Duke University reveal any of the details of the investigation they claim to have made into the matter. The study with the allegedly fraudulent data — which has been cited over 400 times by other scientists — is to be retracted.

Like Hauser’s paper on monkeys (which has been cited more than 175 times), that apparently faked paper on honesty has already done damage to the scientific literature: each of those 400 citations used it, to a greater or lesser extent, to buttress some scientific argument they were making. In every case, they seem to have been misled. This is part of the tragedy of fraud in a cumulative endeavour like science. The least Ariely could do now is provide every possible detail of the provenance of the fake dataset so the scientific community can get to the bottom of it.

But, in the case of Ariely, reticence is something of a pattern. In 2010 he told an interviewer a “fact” about the extent to which dentists agree on whether a tooth has a cavity (he said it was only 50% of the time). His apparent source, Delta Dental insurance, denied this. Ariely claimed someone at Delta Dental had given him the information — but he wouldn’t reveal anything about them, other than the fact they’d definitely not want to talk to anyone else about it.

And just a few months ago, another of his papers (from 2004) was given a special editorial “expression of concern” because of over a dozen statistical impossibilities in the reported numbers. These couldn’t be checked, Ariely said, because he’d lost the original data file.

Maybe it’s worth looking at Ariely’s own theory about cheating and dishonesty. In his TED talk, he described an experiment of his in which the participants had been more likely to cheat on a “dollar-per-correct-answer” maths test if they only had to self-report their number of correct answers, having shredded the answer sheet. That is, when nobody could check the details, dishonesty kicked in. (Someone should probably check the data in that study is legitimate, though.)

Science is supposed to be all about nullius in verba — take nobody’s word for it. Everything, down to the tiniest detail, is supposed to be readily verifiable. Even if a scientist has done everything completely above board, they shouldn’t have to rely on “the dog ate my homework” or “I do have a girlfriend but she goes to a different school so you wouldn’t know her” excuses. The whole idea of having a scientific record is to, well, record things; a literature that’s full not just of fraud but also unverifiable claims is a strange contradiction in terms.

Even if this is the end of the Ariely affair and no other issues with his research are found, it’s still a perfect illustration of so many of the problems with our scientific system. A patchy literature of unclear veracity. Researchers losing track of their data, allowing error, and sometimes fraud, to slip in. Scientists building lucrative careers on a foundation of dodgy research, while the people who clean up their mess — the fraud-busters and data sleuths — go largely unsung. Bestselling popular books spreading untrue and unverifiable claims to thousands of readers.

You need only look at previous massively successful books on the topics of human biases and the importance of sleep to see how low-quality research and sloppy scientific arguments can reach enormous audiences. At best, the implications of this are that dinner-party conversations will contain even fewer solid facts than usual. At worst, patients (or their doctors) will make decisions about their health on the basis of some unproven, vaguely reported fact they read in some famous professor’s popular book.

It’s easy to get distracted by the tangles of irony in these cases: the immoral morality expert; the dishonesty expert who (at best) got duped by dishonesty. As amusing as the stories are, they also have rather grim consequences. No matter the subject area, and no matter how impressive the credentials, our trust in the experts keeps being betrayed.

And in some sense, this is a good thing. Scandals like this remind us to take nothing at face value. Nullius in verba, after all. In response to the new fraud revelations, Ariely wrote that he “did not test the data for irregularities, which after this painful lesson, I will start doing regularly”. Whether or not you trust Ariely’s research or his books any more, it’s good advice.

Stuart Ritchie is a psychologist and a Lecturer in the Social, Genetic and Developmental Psychiatry Centre at King’s College London