If you’re a parent, you’re probably familiar with threadworms. If you’re not a parent, then you may not have heard of them at all. They’re one of those secret joys of parenthood that no one mentions until they turn up — grim, pale little creatures, wriggling in a child’s poo — whereupon you learn that almost every other family has gone through the same thing. Your parents may well have gone through it, with you, and then just … never spoken of it again, because eeerrgh. I almost feel like I’m breaking some parenting omertà just by mentioning them. But I need to, because threadworms can help you understand why we need to worry about artificial intelligence.
They’re unsettling, disgusting things, threadworms: or they were to me, when I first came across them. But – luckily – they’re easily treated. A cheap, over-the-counter medicine, mebendazole, kills the lot. You have to dose the whole family, clean your linen, and dose again two weeks later, because the eggs (god, I’m shuddering just typing this) survive for a couple of weeks; but basically there is a simple, nuke-them-all-from-orbit cure.
Threadworms are just one species of intestinal worms, also known as helminths; there are hookworms, pinworms, tapeworms, whipworms, others. And they’re common — not just in north London, but around the world. The US Centers for Disease Control estimate that Ascaris lumbricoides is present in the intestinal tracts of about 1 billion people around the world. Usually, the infection has no symptoms, but in children it can lead to malnutrition, slow growth and impaired learning.
Luckily, again, these helminths can be killed with the same drug, mebendazole, or a related one, albendazole. Just as with the threadworms that cause itchy bottoms in the rich west, the Ascaris that cause childhood stunting in the developing world can be nuked, effectively and cheaply. A course of albendazole costs a few cents.
Dosing someone who’s been diagnosed with helminths, then, is a no-brainer. It’s cheap and easy and effective. But over the past 20 years or so, there’s been a row going on about doing more than that: about giving the drugs to entire schools’ worth of children, regardless of whether they’ve been diagnosed with worms or not.
In the late 1990s, two economists, Edward Miguel of Berkeley and Michael Kremer of Harvard, ran a study. They took 75 schools in Kenya, and gave albendazole to all the kids in 50 of them. Their results were impressive: absenteeism dropped by a quarter; children did better at school. The benefits also seemed to spread to nearby schools which hadn’t been treated, presumably by disrupting the spread of worms in the region. The whole thing cost $3.50 per pupil.
This is just one example of the ‘replication crisis’ that has plagued academia for nearly a decade. For those who don’t know, a huge proportion of experimental results, especially in psychology and related fields, and including some very famous findings (such as the thoroughly debunked implicit-association testing), have failed to replicate when the experiment is conducted again.
There are a number of factors involved. First and foremost, academics are incentivised only to get papers published, not for that work to be true or useful. A perfect example is Neil Ferguson, the notorious epidemiologist, who has been staggeringly wrong about every single thing he has ever done in his entire career, none of which stopped him from rising to the professoriate, and thence to his disastrous input into the lockdown debate.
Also important is that, due to the lack of political diversity in universities, academics have a strong and uniform bias in favour of non-realist woke dogma, meaning that they experience strong internal and external pressure to ensure that any woke hypotheses they test are proven, regardless of those hypotheses’ innate plausibility. There are a few other causes too, but I think these are the two most important.
(There is a parallel but not quite related phenomenon in medical research, where similar effects of false claims in studies are caused instead by the vast amount of money at stake in the pharmaceutical industry. Those of us who have worked in that industry know that, as a rule, only positive findings are published: negative findings are locked in a drawer and never seen again. I even heard from reliable sources of major international companies intentionally conducting their trials in Third World countries where reporting standards were lax, so that they could plausibly deny the existence of any negative trials.)
All of this is to say: laypeople have every right to be sceptical when they hear the grandiose claims of scientists, data experts, etc. We are not prophets, we are not magicians, we are not infallible. Certainly use scientific findings and experimental results as an input into your reasoning, but don’t let them over-ride your judgement entirely.
Good post. Yes, most of us stopped listening to the ‘experts’ years ago. And, as you say, the odious and serially incompetent Ferguson embodies this class. The lack of political diversity in universities is certainly a major factor as it means that any common sense, conservative voices will not even be raised.
I disagree. I have personally replicated one of his results, and would stand by it. But if you wish to be taken seriously, do consider making statements that are less … ambitious in their scope. Do you seriously maintain that every single one of his collaborators, editors, peer-reviewers and supervisors was incompetent or corrupt? If you want to say that he has made some big mistakes, or that he has been careless in engaging with users of his research, then many people might accept that. But this it’s-all-one-huge-conspiracy position is, to use a phrase, “staggeringly wrong”.
Please point me to where I said it was a conspiracy. Ferguson is simply incompetent, in a field where incompetence is excused.
I wish that the devotees of the Religion of Science would stop their dogmatic insistence that any disagreement with their doctrines is a “conspiracy theory”.
I agree with some of what you wrote in your first post, but I also agree with Richard Pinch that your attack on Prof Ferguson is over the top. Here’s a link to his publications:
https://www.imperial.ac.uk/…
It’s not very likely they are all wrong, and probably not even a majority, or even a large minority. Maybe you could tell us which are wrong, bearing in mind that the popular account of Imperial’s COVID-19 ‘forecast’ is very misleading, as their report made it clear that the 520k figure for deaths with no action was a baseline, not a forecast of what might happen without a lockdown. As even this ‘forecast’ could never be tested, it’s a moot point how accurate/inaccurate it was.
Thanks for that link – fascinating to look at such a long career and range of subjects. In fact, his predictions on vCJD were pretty accurate *laughs because the range was so wide*.
Neil Ferguson’s proud history of predicting pandemics:
2009, Swine Flu.
Prediction: up to 65,000 UK deaths.
Actual: 457 UK deaths.
2005, Bird Flu.
Prediction: up to 200 million global deaths.
Actual: 282 global deaths.
Note: An absolutely insane scale of failure here.
2002, CJD.
Prediction: up to 50,000 UK deaths.
Actual: 177 UK deaths.
Note: Ferguson actively insulted another team who predicted only up to 10,000 deaths as being “unjustifiably optimistic”, so he clearly thought the upper end was most likely.
2001, Foot And Mouth.
Prediction: up to 150,000 deaths.
Actual: under 200 deaths.
Note: Ferguson was fired from DEFRA for his incompetence as events proved him wrong, but this was kept typically hush-hush, and did not affect his career.
He doesn’t get to excuse it by saying that it was a range. His ranges were so vast as to be utterly ludicrous and entirely useless, and worse, they were hopelessly lop-sided in every single case. When actual results are consistently at the very bottom of an enormous predicted range, that is a total failure of prediction.
I don’t even blame Ferguson himself. I blame the academic and policy institutions that continued to support and acclaim him long after it became clear that his model was an abject failure. The real Ferguson problem is that he is merely a synecdoche for the rotten state of academia in general.
Here is a decent summary of many of the problems with epidemiological models, if you’re interested: https://lockdownsceptics.or…
Well, the estimate for deaths over 80 years from vCJD was between 50 (that’s fifty, not fifty thousand) and 150,000 as a 95% confidence interval, and to quote the paper:
It turned out to be 177 over 20 years, which is, obviously, consistent with both the wider and the narrower estimate. Of course the media reported the more exciting “up to” figure rather than “as low as 50”. I agree that an estimate covering some four orders of magnitude would in general be better expressed as “we don’t know”.
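The “four orders of magnitude” remark can be checked in a few lines of Python, using only the figures quoted above (the 50-to-150,000 interval and the 177 actual deaths):

```python
import math

low, high, actual = 50, 150_000, 177

# How many orders of magnitude (factors of ten) the 95% interval spans
span = math.log10(high / low)
print(round(span, 2))  # ~3.48, i.e. some four orders of magnitude

# The observed toll sits inside the interval, near its lower end
print(low <= actual <= high)                # True
print(round(math.log10(actual / low), 2))   # ~0.55 decades above the floor
```

Which is the point being made: an interval that wide is technically consistent with the outcome, while conveying almost no information.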
From Nature, 10 January 2002
What a brilliant exposition of that “waste of rations” Ferguson.
You deserve a Knighthood, or at the very least the title ‘Inquisitor General’.
Well done indeed sir!
From Nature, 08 September 2005 (my emphasis)
So if avian flu had become human-transmissible, then … But it didn’t, and 200 million people didn’t die. Which is good.
Reasonable Worst Case scenario, not a prediction. As David Spiegelhalter puts it RWC is “designed to be very extreme”.
If a prediction / scenario / call it whatever you will is grotesquely wrong every time, it is not “reasonable”. If I produced models like that for my commercial clients, I would not get any repeat business. In academia, I would get a promotion. And therein lies the problem.
A Reasonable Worst Case scenario isn’t a prediction of what will happen certainly, or most probably, it’s an estimate of the worst outcome that can reasonably be expected. It’s something your business clients would be interested in and indeed might well ask you for. Has that never happened? For example, a business might ask you to consider what would happen if its factory burned down. You might produce a RWC on the basis of cost of loss of stock, possibly loss of life, loss of business to competitors, and so on. That would be a reasonable basis on which to start the consideration of insurance, fire alarms, sprinkler systems and so on. You would not expect at the end of the year to be sacked because the factory did not burn down and so your “prediction” was wrong.
I dispute that. Of course in academia the “wrong” people sometimes get promoted. But I can think of no case in which some scientist’s published work has been consistently wrong, known to colleagues and workers in that field in general to be wrong (let alone “staggeringly wrong about every single thing he has ever done in his entire career” as you claimed, falsely in my view) and been consistently rewarded with promotion. It may have happened but it simply is not true that academic promotion in general occurs without reference to the validity of a scientist’s work.
So, here’s my challenge. Name five professors of science in UK universities whose published work has been “grotesquely wrong every time” and who have been promoted to their positions in spite of that being known to the generality of their field at the time of promotion.
Still waiting on your answer as to whether you are an academic, hiding your bias.
As the kids say: dis u? https://ima.org.uk/team-pro…
Yes, that’s me. But if you think that knowing something about the subject under discussion constitutes “bias” then you’re using the word in a very different sense to the rest of us.
By the way, I detect in your last two responses to me a certain degree of personal animosity. I’m sure there’s a fruitful discussion to be had here if we avoid that. Now about those names …
Aren’t you being a little unfair? Somebody told me that Dr. F. recommended circling every third lamp post three times when walking down the street – it keeps the wild tigers out of residential neighbourhoods. I have been doing this and can report there are no tigers on my street!
Lisa Simpson’s famous Tiger Rock! https://youtu.be/xSVqLHghLpw
Agreed with the other two posters here.
“Devotees of the Religion of Science” are not everyone to do with science. There is good and bad science, and there are good and bad scientists. What matters is that we are given the information to be able to make a judgment on that.
We don’t need to defend science as if it were a religion – unlike religion, the overwhelming body of evidence on its side does that. Think of that as you type into a plastic box/device made of microprocessors and technology that would make a Victorian believe in magic.
The same goes for “experts” in general. We have got too casual about anointing people as ‘experts’ without actually knowing whether they deserve that title. It doesn’t mean there aren’t experts.
But I agree: there is a huge difference between the practice of science, which is iterative and value-neutral, and the Religion of Science, which treats scientists as a priestly caste and hails their fallible predictions as prophecies.
Fair point, agreed – apologies – misunderstood you.
Not at all. I love polite disagreement. (Not quite as much as I love slavish adoration, but it’s up there.)
Well, we could argue about that too. But your emphasis on the practice of science understates the point that the object and indeed the result of that practice is to produce a body of knowledge about the way the world works. It is true that this body of knowledge is subject to revision, but it would be pointless, indeed meaningless, if we were not able to use it operationally in the world.
While it is formally that our knowledge of, say, chemistry, is subject to revision, if a professor of chemistry predicts that pouring petrol on the floor of your kitchen and lighting the gas will lead to your house burning down, you would be better advised to heed her prediction and put the top back on the petrol can than to sneer at the priestly caste and its fallible prophecies.
I think that once you adopt a position that requires you to believe hundreds of people to be incompetent or corrupt, uniformly excusing incompetence and concealing it from other scientists and the public at large, then you’re in conspiracy theory territory.
I equally wish that people who dislike the conclusions of some piece of Science would stop their dogmatic insistence that any disagreement with their doctrines is equivalent to general incompetence at best and criminality at worst.
Let me guess, you’re an academic.
No, I’m not. In case you’re interested, I spent the first half of my career as an academic mathematician and the second as a civil servant. I’m currently spending the third half of my career (popularly called retirement) as a consultant mathematician and a member of the council of my professional body. So I have some experience of the academic world and a continuing connection with it which enables me to be confident that your description of it is mistaken.
Academia AND the civil service! Wow, you’re a whole barrel of competence, aren’t you.
You raised the point, and I responded. If you want to sneer at my experience then there’s nothing more to be said in that direction. Would you like to share your experience so that we can judge whether or not you’re likely to know anything about, say, how academic promotions really work?
Part of the problem is that the results of science are presented as “science”. Science is a process, not a body of “facts.” The priesthood, as you call them, have forgotten what science is, when they claim to be presenting “truth”.
For the record, science is the process of mapping out what is false. “Truth” lies somewhere in what is not yet proven false. A scientific theory is a sketch map of the regions of truth and falsehood. But just because the map says “treasure be found here” does not mean you “dig” in the map – you still have to go to the island and dig there.
Of course some scientific theories manage to bracket the “treasure” (i.e. “truth”) pretty narrowly. QED comes to mind. Some of its predictions have been matched by experiment to within 1 part in a billion billion – a pretty high bar for any rival theory to meet – and yet…we still can’t say QED is “true” (but it is really useful!).
I just wish Science as a mapping process were more generally appreciated.
Exactly. Science is a process, and fallibility is inherent to it. Without fallibility, there could be no science, because science is iterative.
Yet we are constantly told that “the science is in” or that “we must follow the science”. A complete misunderstanding of what science is.
Trouble is that not only are many journalists scientifically and statistically illiterate, but they too are beholden to the need to create quick-impact news stories.
On top of that, they have been through the same university sausage factory that causes the issues in the first place, where scrutiny and rigour are applied unequally across departments.
Good question! My guess would be a combination of the above though.
It would take too much time to investigate, contrast and compare thoroughly; there is an inability to actually do so effectively due to scientific illiteracy; and that is capped off by some ideological reasons why they might not want to.
The end result is fewer quality scientific/analytical pieces.
It is that the twenty-first century woke left has adopted the Religion of Science. Exposure of scientific dishonesty may make for a good news story, but it would be a heresy against their dogma. John Gray, an UnHerd writer, is good on this subject btw.
Stories that go too far “off the reservation” get spiked.
I am not sure what “artificial intelligence” is.
As used at the moment, it seems that it is an affliction that has infected the politicians and their class….
…. They sound clever and competent, but scratch the surface and all is revealed.
An interesting and thought-provoking article. It’s all connected! In a mind-bogglingly old and vast universe, are we really the first sentient creatures to reach this evolutionary cusp? If not, what happened to the rest? If we aren’t the first, there seem only two logical possibilities: first, that we’re already within an AI’s orbit, probably completely immersed; second, that we’ll at some point get a hard and fast lesson in death from the single remaining apex predator. Remaining an optimist is difficult.
I have seen a speculation that there are sentient species in the universe, but they keep very quiet and don’t advertise their presence.
If you’re alone in the jungle at night, why would you make a noise?
Consideration of ET is predicated on:
1) ET being benign
2) There being no local pressures from, for example, a Galactic Fascist group keeping aspiring groups from becoming a threat.
Then we have potential slave-maker groups.
This latter threat exists in the insect world and is very successful.
I have never completely trusted statistics since being taught that with the toss of a coin there is a 50/50 chance of heads or tails.
I could believe the answer for a small number of tosses, but not when told that after 99 heads in a row, the chance of heads or tails was still 50/50.
My reaction to that was that the most likely result was heads again, probably because it was a double-headed coin.
Or in other words, extending statistics to “weird” results is most likely wrong, because the premise of the question becomes faulty at some stage.
Well, the mathematical assumption is that the coin is fair and has simply hit the statistically unlikely run of 99 heads. In the real world you would be right, though.
An interesting point. A “fair” coin is by definition one in which successive throws are independently and equally probably heads or tails. If you know, somehow, that a coin is fair, then yes, even after 99 heads the probability of a tail is still 50%. But if you don’t know, and are trying to test the “null” hypothesis that you have a fair coin, as opposed to one that is, somehow, unfair, then the event you’ve observed is staggeringly unlikely, and it is reasonable to reject the null hypothesis. Again, you may have a reason to assess the probability of being offered a fair coin: you might think that the person betting with you has a 10% chance of being a crook with a double-headed coin. You now have a Bayesian situation, where you’ve updated your prior estimate of 10% unfair to a new one which is almost 100%.
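That Bayesian update can be sketched in a few lines of Python, using the figures from the comment above (a 10% prior of a double-headed coin and a run of 99 heads); the function name is just for illustration:

```python
from fractions import Fraction

def posterior_unfair(prior_unfair, n_heads):
    """Posterior probability the coin is double-headed after
    observing n_heads consecutive heads, by Bayes' theorem."""
    p_unfair = Fraction(prior_unfair)
    p_fair = 1 - p_unfair
    # Likelihood of the observed run under each hypothesis
    like_fair = Fraction(1, 2) ** n_heads   # fair coin
    like_unfair = Fraction(1)               # double-headed coin: always heads
    num = p_unfair * like_unfair
    return num / (num + p_fair * like_fair)

p = posterior_unfair(Fraction(1, 10), 99)
print(float(p))  # indistinguishable from 1.0: the 10% prior becomes near-certainty
```

The exact `Fraction` arithmetic avoids underflow: the fair-coin likelihood, 2⁻⁹⁹, is so tiny that the posterior odds are overwhelmingly on the double-headed hypothesis.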
Nick, what you don’t trust in your example is the fairness of the coin, or perhaps probability theory itself. What you are trusting is statistics – your experience and measurement of data to inform you of what the truth might be.
I’d like to see how the thinking is inflected by a fair amount of research which shows that infection with helminths has an impact on how the immune system works, for both good and bad.
Hmmmm… squillions of humans to come, but would they be the same as us?
If they would be very different, does it matter that we are wiped out and the dolphins reach the stars instead?
What has posterity ever done for us?
Nothing
Incidentally how did your discourse with Chris Martin end? I had you at 40-15, but must have missed the last game.
I think the ball is in his court.
Slightly more seriously, though, one could imagine a Net Present Value with a discount rate of say 3%. So each future generation is “worth” half of the preceding one, and the total NPV of a human population of one billion for ever is still only equal to about two billion current-day lives.
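That back-of-envelope NPV can be checked in a few lines of Python, assuming (as the comment implies) a generation length of roughly 25 years, so that 3% a year compounds to about a factor of two per generation:

```python
def generation_weight(rate=0.03, years_per_gen=25):
    """Present value of one life a generation from now, relative to today."""
    return (1 + rate) ** -years_per_gen

def total_npv(pop_per_gen=1e9, rate=0.03, years_per_gen=25):
    """NPV of an unending population: geometric series pop * (1 + w + w^2 + ...)."""
    w = generation_weight(rate, years_per_gen)
    return pop_per_gen / (1 - w)

print(round(generation_weight(), 3))  # ~0.478: each generation worth about half the last
print(total_npv() / 1e9)              # ~1.91 billion current-day-life equivalents
```

So one billion people per generation, for ever, sums to roughly two billion present-day lives, exactly as the comment says; the whole result, of course, hinges on the ethically contentious choice of discount rate.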
I’m glad that Tom acknowledged that it “is not inconceivable that the actual impact is negative” which had occurred to me. (What if the treated kids had far higher incidences of cancer in middle age?)
But I became lost at the idea of donating to “organisations that will reduce the chance of an AI apocalypse” as the linked article doesn’t mention any, and how any organisation or government could do this remains obscure. How will they ensure benign AIs? Social media seems toxic enough and nobody consciously planned it that way, nor could have steered it differently.
The whole piece would have been better without the mention of AI; the worm wars were interesting enough.
There are bodies at work right now on a code of conduct for professional engineers, scientists, statisticians and mathematicians in data science, including AI. You might like to contribute to one of those. Declaration: I am a member and indeed trustee of one of those.
What is your position on ID2020?
An interesting question: I had not previously been aware of this initiative. My immediate reaction is that I don’t think there has been an adequate discussion about what people mean by privacy and personal identity and what choices people want to make — indeed, I don’t think we have yet developed the right sort of language for that discussion.
But as a general principle, I would like to see some evidence that there has been an informed discussion between the potential users of the technology and the developers. I didn’t see that on a cursory glance through their website.
There’s a huge flaw in the work to eradicate parasitic worms. Nowhere is the downside considered.
What downside? Humans have co-evolved with their parasites, so to suddenly remove them causes unexpected issues. Serious, life threatening and life shortening issues, namely auto-immune ailments such as asthma, food allergies and lupus.
To protect themselves from the human immune system, parasitic worms developed the ability to secrete immunosuppressant chemicals in an evolutionary arms race that had come to an armistice. Then, suddenly, the enemy is gone and its suppressant is gone too. Do you imagine an event like that will be consequence-free? Well it isn’t. We take away people’s parasite-based ailments and present them with the gift of auto-immune ailments instead. It’s much like the gift of alcoholism and drug addiction we gave to indigenous peoples around the world when we cured them of tribal warfare and scientific ignorance. Or like the day some bell end thought that introducing Cane Toads to Australia would be a great way to get rid of the Cane Beetle.
Am I saying that we shouldn’t be doing something about parasites? No. I’m saying if we’ve failed to consider this massive factor in the parasite eradication equation, imagine how badly we could make an arse of the AI equation? Think on!
Blanket dosing of children is simply wrong. Medication should be given on clinical need only. Many of these programmes happen without the consent of the child, their parent or guardian. Refusal to comply can lead to exclusion from the school, or consent is given by the school or local authority without the knowledge of the adult in the family.
No medication is 100% safe, so why expose some children to abdominal pain, diarrhoea, nausea, vomiting, liver damage or anaphylaxis (although some of these side effects are not common) when they may not even have the helminth? This is compounded in a malnourished child.
Is it altruistic to harm another person?
These programmes sing about statistical outcomes for the group, not the individual. How well did that thinking and modelling work for the folk with A Levels and Scottish Highers this year?
How would you feel if your child came home from school one day with a sore stomach, or with diarrhoea, having been given a drug which they neither required nor consented to? It just does not happen in the UK, so why fund these practices in the global south?
As mentioned by some other people in this thread, anthelmintic resistance happens already, so using a drug when not required simply adds to this. These anthelmintics are used worldwide in animal farming too, so cross species unnecessary use builds resistance in the whole system.