January 2, 2018   2 mins

When I was first prescribed a statin to control my liver’s production of cholesterol, it required a (very good-natured) verbal tussle with my primary care physician. “You have a very healthy lifestyle,” he told me; “You are not at risk of developing heart disease in the next ten years.” I don’t care, I told him. My father died of heart disease, as did my maternal grandfather. Too young, both of them, they died too young.

The physician agreed (and wrote the prescription). So why did he begin with mild reluctance? Not from an innate desire to avoid pharmacological interventions (else he would not have written the prescription).

The conflict arose from a clash of algorithms. I had taken a statistical model to the surgery with me — a very informal one, but a model nonetheless: “Given the input of my family history, I believe my innate long-term probability of heart disease to be high, and wish to minimise that risk.”

Physicians tend not to think in terms of statistical models, but the machines which sit on their desktops do. Dr P typed in various numbers (inputs) to describe me, which included family history, and various blood tests, and metabolic measurements, and lifestyle indicators… and the classification algorithm in his surgery’s software returned: “Medium risk.” The computer, in other words, said no.
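By way of illustration only, here is the sort of thing such a rule might look like if written down as code. This is a toy sketch in Python: every input, weight and threshold is invented for the example, and none of it comes from any real clinical risk calculator.

```python
# Purely illustrative: a made-up scoring rule that turns a few inputs into a
# risk category. No weight or threshold here comes from any real clinical tool.

def heart_risk_category(family_history: bool, cholesterol_mmol_l: float,
                        systolic_bp: int, smoker: bool) -> str:
    """Combine a handful of inputs into one score, then bucket the score."""
    score = 0.0
    score += 2.0 if family_history else 0.0          # hypothetical weight
    score += max(0.0, cholesterol_mmol_l - 5.0) * 1.5
    score += max(0, systolic_bp - 120) * 0.05
    score += 2.5 if smoker else 0.0

    if score < 2.0:
        return "Low risk"
    elif score < 5.0:
        return "Medium risk"
    return "High risk"

# Family history present, but otherwise healthy numbers:
print(heart_risk_category(True, 4.8, 118, False))    # prints "Medium risk"
```

In this invented rule, the family history on its own is enough to lift the score out of the lowest bucket but not into the highest — roughly the verdict the surgery’s software gave me.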

On this occasion, and with this algorithm, the human discussants over-ruled its decision, but with the rise of machine intelligence — a fancy-sounding name for “classification or prediction algorithms built on huge amounts of data” (to which Peter Franklin gives a very good introduction here) — the potential for conflict between the human users of such systems and the systems themselves increases. And it’s not clear whose rights will take precedence.

Should you qualify for parole? An algorithm might determine that.

Should you be allowed to board an aircraft? An algorithm will determine that.

How should you drive from A to B?

Which irritating adverts will decorate your internet browsing experience?

Which film does Netflix suggest you watch next?

What insurance deal will the Halifax offer you on home contents cover?

Is it worthwhile spending money on a drug to prolong your life? An algorithm, increasingly, determines all of these.

Now in a sense there’s nothing new here. Statisticians have built classification models for aeons (a slight exaggeration), but there’s a difference — morally, definitely, and, surely, legally — between me (as a statistician) showing you (as user/customer/patient) the model I’ve built, and you deciding whether or not you agree with its form and are happy with its predictive abilities — and a proprietary black-box system, whose precise structure is not made available to you, deciding (for example) whether or not you should be sent home from court or locked up in a prison. You — as “user” — won’t have a say in how the model has treated your data. You won’t know how it combines the inputs; which inputs it rates most highly; the operating characteristics of its rule (how often it makes the wrong decision).
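For the curious, “operating characteristics” amounts to nothing more exotic than counting a rule’s mistakes. A small sketch in Python, using invented decisions and outcomes rather than anything from a real system:

```python
# Illustrative only: given a yes/no rule's decisions and the true outcomes,
# count how often each kind of mistake occurs. The data below is invented.

def error_rates(predictions: list, outcomes: list) -> dict:
    """False-positive and false-negative rates of a yes/no decision rule."""
    fp = sum(1 for p, y in zip(predictions, outcomes) if p and not y)
    fn = sum(1 for p, y in zip(predictions, outcomes) if not p and y)
    negatives = sum(1 for y in outcomes if not y)
    positives = sum(1 for y in outcomes if y)
    return {
        "false_positive_rate": fp / negatives if negatives else 0.0,
        "false_negative_rate": fn / positives if positives else 0.0,
    }

# Ten invented cases: what the rule decided versus what actually happened.
decided = [True, True, False, False, True, False, False, True, False, False]
actual  = [True, False, False, True, True, False, False, True, False, False]
print(error_rates(decided, actual))
# -> {'false_positive_rate': 0.1666..., 'false_negative_rate': 0.25}
```

With a transparent model you can demand exactly these numbers before you submit to its verdict; with a proprietary black box, you simply take them on trust.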

What — morally, legally, culturally — should be done about this? It’s an open question. There’s a good discussion of the issue by Olhede and Wolfe in Significance, the Royal Statistical Society/American Statistical Association co-publication.1

Go figure, as the American statisticians might say. But figure it out quickly, before a machine-derived algorithm decides that it’s a waste of time to do so.

***

Introduction to this Under-reported series.

Summary guide to all under-reported articles in this series.

FOOTNOTES
  1.  Olhede, S. and Wolfe, P. (2017), When algorithms go wrong, who is liable? Significance, 14: 8–9. doi: 10.1111/j.1740-9713.2017.01085.x

Graeme Archer is a statistician and writer.
