Around 2014, I started to notice that something was up in academic philosophy. Geeky researchers from fancy universities, having first made their names in abstract and technical domains such as metaphysics, were now recreating themselves as public-facing ethicists. Knowing some of the personalities as I did, I found this pivot amusing. If the ideal ethicist has delicate social awareness, a rich experience of life, lots of empathy, and well-developed epistemic humility, these people had none of those things.
What they did have was a strong career incentive to produce quirky arguments in favour of the progressive norms emerging at the time, an advanced capacity to handle abstraction and technicality, and huge intellectual confidence. In real life, these would be the last people any sane individual would trust with a moral dilemma. Luckily for the outside world, they tended to have little influence, mainly because nobody could understand what the hell they were talking about.
The same cannot be said for the philosopher geeks in charge of the hugely popular and influential Effective Altruism (EA) movement, which was given new vim last week with the publication of a new book by one of its leading lights, 35-year-old William MacAskill, accompanied by a slew of interviews and puff pieces. An Associate Professor at Oxford, MacAskill apparently still lives like a student, giving away at least a tenth of his income, living in a shared house, wild swimming in freezing lakes, and eating vegan microwave meals. (Student life isn’t what it used to be.)
But his influence is huge, as is that of EA. Beloved of robotic tech bros everywhere with spare millions and allegedly twinging consciences, EA and offshoot affiliate organisations such as GiveWell, 80,000 Hours, and Giving What We Can aim to apply strictly rational methods to moral action in order to maximise the positive value of outcomes for everyone. Unlike many metaphysicians-turned-ethicists, MacAskill sells this in a style that is comprehensible, even attractive, to civilians — and especially to those with a lot of dosh to give away. Quite frankly, this worries me a bit.
The background to EA is austerely consequentialist: ultimately, the only thing that counts morally is maximising subjective wellbeing and minimising suffering, at scale. Compared to better potential outcomes, you are as much on the hook for what you fail to do as for what you do, and there is no real excuse for prioritising your own life, loved ones, or personal commitments over those of complete strangers. MacAskill’s new book, What We Owe The Future: A Million Year View, extends this approach to the generations of humans as yet unborn. As he puts it: “Impartially considered, future people should count for no less, morally, than the present generation.” This project of saving humanity’s future is dubbed “longtermism”, and it is championed by the lavishly-funded Future of Humanity Institute (FHI) at Oxford University, of which MacAskill is an affiliate.
Longtermism is an unashamedly nerdy endeavour, implicitly framed as a superhero quest that skinny, specky, brainy philosophers in Oxford are best-placed to pursue — albeit by logic-chopping not karate chopping. The probability, severity, and tractability of threats such as artificial intelligence, nuclear war, the bio-engineering of pathogens, and climate change are bloodlessly assessed by MacAskill. As is traditional for the genre, the book also contains quite a few quirky and surprising moral imperatives. For instance: assuming we can give them happy lives, we have a duty to have more children; and we should also explore the possibility of “space settlement” in order to house them all.
I’m starting to think that the single biggest mistake the progressives made was to push Kathleen Stock from her job. Such a sharp mind and a prose style that skewers her target with only the faintest hint of malice.
Such a biting sense of humour. I spat out my espresso reading one skewering.
Seems a peevish b***h.
An excellent example of what she deplores.
Peevish, eh? That’s rich coming from someone who clearly doesn’t realise that ad feminam is a fallacy… But perhaps you could tell us where she’s wrong. You know, in light of certain recent events.
Just as a postscript to your comment, I will add this short extract from a Free Speech Union email I received today:
The philosopher Kathleen Stock has won Prospect’s ‘World’s Top Thinker 2022’ prize, as chosen by the magazine’s readers. It’s a terrific accolade, thoroughly well-deserved and indicative of the extent to which a grassroots fightback against cancel culture is now underway. As Prospect notes, the persecution – let’s call it what it is – of Professor Stock over recent years has turned her “into an emblem for open inquiry and free speech”.
So clearly Kathleen Stock has fans elsewhere as well as at UnHerd.
Great conclusion that PhD holders are not the best ethical guides. My favourite example of PhDs and ethics is the SS Einsatzgruppen commanders in Russia in 1941. Three out of the four commanders had doctorates. One was known as Doctor Doctor Rasch, as he had PhDs in law and philosophy. Most of the human race do not need a philosophy degree to work out that killing tens of thousands of innocent people is immoral. Similarly, most people do not need a law degree to know that mass murder is illegal. But I guess you need advanced education to rationalise the worst atrocities.
“Most of the human race do not need a philosophy degree to work out that killing tens of thousands of innocent people is immoral.”
The trolley problem rears its head again. We thought we had this cracked after the horrors of the 20th century, but in 2020 we went collectively mad and decided that actually the principles of the Convention on Human Rights weren’t really important after all.
The trolley problem was answered institutionally by the Convention, which asserted that it does matter whether a tragedy results from deliberate action as opposed to mere inaction. It thereby gave a clear answer to the trolley problem: it is wrong to save five lives by the deliberate act of switching the trolley onto a different track, even though mere inaction would save only one life.
The foundational basis for EA similarly adopts a utilitarian model in which action and inaction are morally equivalent, and it is wrong for the same reasons that the monsters you mention were wrong.
Take Dr. Yuval Noah Harari, as near to being demonic as there is – yet he is very much on the path of this very disturbing Effective Altruism. A transhumanist, he would, I suppose, think that everyone finally being digitized and living in a state of eternal joy and productivity is the answer. (Unless Yuval did not like you, in which case I suppose the eternal joy would be rather different.)
If you take the God ‘Of the Book’ out of this altruism it will become the domain of the devil.
“Trying to get people removed from their jobs in the name of kindness”. Got it in one.
Though that isn’t about effective altruists, it’s about progressives.
I think it’s about marxists.
Fantastic article.
There is one point I can make and it’s to mention Matt Ridley’s rational optimism, described in his book The Rational Optimist. The core point of the book is that all we have to do to ensure the continuous growth of human prosperity is to maintain and expand individual liberty within the institutional framework that protects free exchange. He doesn’t just mean economic trade in that sense, but also the free exchange of ideas and the protection of rights for those who originate them.
If you are persuaded that this is the true engine of progress – as I myself and many others are – then it becomes impossible to support the idea that people alive today must suffer in the name of those yet unborn, because the plight of those yet unborn depends crucially upon the prosperity of those alive now. This really ought not to surprise anyone: everyone alive today inherits the benefit of the product of previous generations, even the majority who don’t inherit trust funds etc, because we all live in an advanced society run on systems of trade and cooperation that has eradicated hunger, doubled life expectancy, and in general offers a cornucopia of choices that would amaze any of the hundred billion or so humans who lived harsher, shorter lives prior to the modern age.
The counter-argument to all this, of course, is the rather silly neo-Malthusian finite-planet-can’t-support-infinite-growth trope, which is the basis for arguing that consumer capitalism will strip the planet of its resources and climatic stability, thus robbing future generations of the prosperity we presently enjoy. In fact this is nonsense, and, even more hilariously, it is contradicted by the IPCC itself, which projects that by 2100 the average human will be 450% wealthier than today if climate change does not occur, but 434% wealthier if climate change is both real and as dangerous as claimed under RCP8.5, the commonly cited scenario for the future effects of climate change. In other words, it is not a problem remotely close to demanding emergency deindustrialisation.
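Taking the percentages cited above at face value (a back-of-the-envelope illustration only, not numbers checked against the IPCC’s own reports), the implied gap is small:
\[
\frac{1 + 4.34}{1 + 4.50} = \frac{5.34}{5.50} \approx 0.97,
\]
that is, the claimed RCP8.5 damage works out to roughly a 3% reduction in projected 2100 wealth relative to the no-warming baseline – which is the sense in which it falls well short of justifying emergency deindustrialisation.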
Anyway, climate rant aside, the point here is that the position adopted by EA isn’t merely questionable, it is contradicted fairly strongly by what we already know about the nature of liberty, prosperity, resource use, trade and the role and purpose of institutions. The silliest thing about this bicycle-riding vegan trying to save both the planet and its future inhabitants is that future humans will be no more likely to want to surrender their comforts than we are today, yet he espouses a lifestyle that nobody actually wants.
Agree with everything you wrote. I would add that the major problem with longtermism (more than with effective altruism) is the utilitarian assumption. Utilitarianism — or any form of consequentialism — turns everyone into a moral slave. They must do the “right” actions and have no claim to their own life, their own relationships, or their own projects. It’s a kind of moral totalitarianism.
Another major aspect of longtermism not really covered in the article is the obsession with existential risk. It looks like an unfortunate outgrowth of the same view of life that led to the precautionary principle. (I’ve tried to counter that with the Proactionary Principle.) That view runs counter to the kind of rational optimism Ridley writes about so well.
Ridley has to be an Objectivist of some sort: good! Also the problem of the times is everyone suffers from an artificially enlarged or swelling consciousness: bad!
“the plight of those yet unborn depends crucially upon the prosperity of those alive now.”
The IPCC also points out the false presumption, which you make, that overall, Western comfort must prevail over world comfort.
This comfortable lifestyle you worship, while you mock the vegan bicyclist, is or will be killing millions very uncomfortably: e.g. the Pakistan flooding, and drought in so many places.
You make no sense to me, but I have solar, an electric car and a home garden, and am thus dismissible.
Well, we didn’t know about the solar, the car, the garden. It was your comments that were dismissed. Unless you think your car speaks more for you than what you say.
If you give a man a fish you will feed him for a day. If you teach him how to fish you will feed him for life. On a national scale, teaching people how to fish is a lot more complicated. Just giving money away isn’t the answer. Sometimes there is something wrong with the culture that causes the poverty in some countries. Most people want to help nations but pouring money into a country is not always the answer. We have to be responsible in how we sow. A lot of money given to countries often ends up in a dictator’s pocket.
You are merely parroting politicised misrepresentations of the IPCC’s scientific output without understanding that that’s what you’re doing.
You are also misrepresenting my arguments, which are pragmatic assessments of the likely consequences of various ways of solving the problem, as nothing more than self-interested arguments for personal privilege, when it ought to be obvious to anyone with any intelligence that this is not what I’m doing. If you think my arguments are wrong then you could have explained why they are wrong, but instead you descended into childish motive fallacies, and consequently you merely embarrass yourself.
Final point: if you want to live a Green ascetic lifestyle and grow your own vegetables, that’s fine. Nobody is stopping you. However please try and understand that this is a religious position you have adopted, not one based upon evidence, and it is impudent of you to moralise at others who do not want to make the same choices you do yourself.
I also support your and Matt’s position and I wrote a book about it. In the interest of noncommerciality, I’ll stop there. Longtermism *is* wokeism.
PS Stewart Brand’s Long Now Foundation is the opposite, and I support it.
Hi all,
I’m an “effective altruist” though in this context I wish the name didn’t sound so confident. I rarely feel effective.
I guess my main response to this piece is, “tell me something I don’t know”. Likely you’ve had someone tell you from a distance that everything you do is misguided and that you don’t understand your motivations. I guess Kathleen has had that. I guess you have. Reality is more complicated.
So, here are some articles from “effective altruists” (EA) engaging with the points Kathleen makes:
I guess my question to Kathleen would be, what would you like me to do given your article? If you could ask me, an ‘effective altruist’ to answer one of your questions or learn one thing, what would it be?
But in this article, while the problems she states are somewhat real, we are aware of them; we just don’t see them as reasons to give up. If you have solutions, I’d love to hear them, but it’s possible the problems are harder than they seem.
I hope you are all well,
Nathan (@nathanpmyoung on twitter)
EA is “rational” (long term-ism) vs “emotional” (short term-ism)? If the “ends justify the means” it is still totalitarian-ism. Does changing your “justification” for it make you “feel” better?
Rhetoric. You sound like Tucker Carlson.
I signed up to Giving What We Can, part of the EA movement, a while ago. It appealed to me as a way of outsourcing some of the analysis into which of the innumerable charitable causes out there might offer the most “bang for your buck” (in terms of human suffering alleviated). It originally started by recommending particular charities and the measures they deploy, e.g. deworming tablets and anti-mosquito nets being some of the most “cost effective” measures. I felt this was pretty useful. It later evolved into a number of themed funds one could choose to contribute to, the managers of which would decide which charities to give money to. I found it extremely weird that, in addition to the Global Health and Development Fund (the cause to which I have been and continue to be happy to contribute), a fund for Animal Welfare and a “Long-term Future Fund” – focused on things like the risks posed to humanity by AI and genetically engineered pathogens – had been added. These seemed like quite a departure from the original idea of EA, and marked the beginning of putting people yet to be born, and other species of creatures, on a par with (or above?) our fellow humans today. I think the original idea of EA was and is solid, and not particularly woke though…
I disagree about emotions being a good and healthy part of decision-making. Decisions are about the will, not emotions, although the will must be fully informed in all sorts of ways. Also, I think this global health thing has done more harm than good, especially during Covid.
For a glaring demonstration of just how morally bankrupt EA is, I would suggest watching a recent interview with Roger Hallam of Extinction Rebellion, who justified blocking roads and preventing ambulances getting the sick and injured to hospital on the grounds that climate change puts millions of lives at risk. Why save the one life right in front of you that you really could save, when you can claim to be saving millions of hypothetical people against the hypothetical perils of the future?
The holy grail for totalitarians is the legal right to kill people today in the name of ideals they themselves define and control.
Amen. They are also the Kommissars of the woke marxists, to whom we also have to defer when deciding whether an identity is authentic within a category they themselves have constructed.
The justifications may change, but the ends remain the same. Totalitarians by any other name…
Do you mean abortion or the right to die?
Because they’ve moved from hypothetical to proven? Isn’t that what the IPCC is doing?
We’re running an experiment at scale that could easily decimate the human race, and immiserate the remainder.
Isn’t that exactly the same argument every orthodoxy has ever used throughout history, from the South American Indian civilizations who carried out child sacrifices to appease the gods, to the brutal suppressions of the church of Rome for hundreds of years? The Inca, Maya and Aztecs did what they did because they believed in a causal link between keeping the gods happy and the crops not failing, and I bet you an Aztec priest would have said that not doing this would cause the world to end – how is your stance so different?
We aren’t running an experiment, we are going along with an unproven hypothesis.
Scientists don’t get their grants for research unless they agree with the great global warming deception.
“Could”
Some argument.
When the climate change people start killing off farms as they are doing in the Netherlands and Canada and have done in Sri Lanka then something is desperately wrong.
When has the weather not changed? CO2 is needed by all plant life or it will die.
I wouldn’t worry – nerdy EA-type appeals for the betterment of humanity have never yet in history come remotely close to working, or even influencing the wider public, and that is not about to change now, no matter what type of sales flummery the ideas are couched in. The proof is that I, as a tech bro type who actively looks out for the emergence of nerdy philosophical ideas around technology from academics (e.g. Bostrom), had never heard of MacAskill. This is notwithstanding a few nerdy tech bro billionaires paying lip service and extending a little patronage.
The point at which to worry would be if and when a nexus of academic ideas wins the hearts and minds of large numbers of young idealistic student types, who then subsequently have careers of low intensity but pervasive influence spanning years – primary school teachers, civil servants who progress to low level policy making, HR types who have influence over hiring and firing, local councillors who can burn millions building a cycle lane on a road which rarely sees a bicycle (as I have seen done on a couple of occasions). This happened with progressive leftism, emerging out of the lethal radical left philosophies of the early twentieth century, because those ideas were simple, in the sense that a Barbra Cartland novel is simple, something someone waiting at a bus stop can consume without it actually touching the sides of the brain.
“a nexus of academic ideas wins the hearts and minds of large numbers of young idealistic student types, who then subsequently have careers of low intensity but pervasive influence spanning years”
I suspect it’s people just like themselves living in the future who they have moral concerns about.
I believe that most of the original communists were nerdy types; and they begat some of the greatest mass murderers in history.
I knew the nerds were bad news.
I classify nerds as people who start with a fair degree of innate intelligence and talent, but who are so obsessed with ideas, typically but not always around STEM, that they gravitate first towards expertise and occasionally later display originality and innovation in the fields they are obsessed with. Chess players are an example.
Nerds who get to the point of wanting to tell other people what to do are no longer nerds in my book – they are just yet another bunch pushing an agenda, sometimes with the primary aim of gain for themselves, or sometimes that gain is welcome but a secondary consideration, or sometimes there is no material gain but the person in question is driven by psychological factors which are specific to the person. A case in point is Bill Gates, who is a nerd (but with a whole array of other high end talents besides), but who also has quietly been pushing agendas for quite a while, notwithstanding the absolutely vast amounts he has given away in charitable causes. It is then difficult to see if he is doing it for self gain or because he sincerely believes something and has the means to push for it to happen as billionaires do.
So you’re against collective purposes…
I’m against collective coercion.
I’m against the erosion of free speech.
I’m against anyone being forced into modes of speech and behaviour (pronouns, taking the knee and so on) that they don’t believe in.
I’m against collective purposes as a proxy for religion.
The collective purposes I’m for center around:
– our security as a society and what we as a group have to put in place to protect ourselves from external threats both from nature and from people who wish us harm
– our collective responsibilities as a society to children and young people
– our collective responsibilities to the old and the infirm
– our collective responsibilities to help those who fall through the cracks.
I appreciate I might be fighting a losing battle.
You must have hope or you will lose. Many of us think the same as you.
Sometimes collective purposes are wrong. Because they’re “collective” doesn’t make them right.
Outpacing Hitler easily.
More likely religious types, like “Gott Mit Uns” and ex-seminarian Stalin…
Cults of personality, not benevolence.
You better hope the nerd types program the World Controller AI for longtermism.
Nice. Could “progressivism” not be used at all, with “marxist” or “marxism” substituted instead?
No, just no.
The 1,000-year Reich was one type of appeal « for the betterment of humanity ». And although it has « never… come remotely close to working, or even influencing the wider public », it ruined the world: so you should worry, Prashant!
On being chastised about not using the latest technology in her efforts to serve the poorest of the poor, Mother Theresa said, “I took a vow of poverty, not a vow of efficiency.”
I’m confused – is this pro MT or anti? It’s a smug reply to a fairly reasonable suggestion.
Probably just a case of IOWS- Irritable Old Woman Syndrome, which is quite common these days I gather.
I’m confused, too. Whatever, it does address the substance of the article. For Mother Theresa the obvious act was the immediate one: remove the sick from the gutter, give them shelter, care and dignity. No long-term plan, just act now.
I’m afraid the problem with Mother T. was that although willing to treat the symptoms she was unwilling to treat the causes. The Catholic church is largely in favour of poverty: it’s their best recruiting sergeant.
Yeah prevention wasn’t her strong point.
Addressing a problem is not preventing a problem, prevention falling into «longtermism».
Poverty of spirit, yes; but the Catholic Church is not in favor of material poverty, nor is it their recruiting sergeant. For the poor shall never cease out of the land. . .
I don’t want to sound like a scold but if anything it is the well-off who are rejecting the Church.
Not all of them. Some use their money wisely in supporting those who are helping people, but to subsidise the decision to live on handouts is just to encourage laziness. We need to be discerning about what the real needs are.
What were the causes?
Interesting. I didn’t see it as smug, but maybe rather indirectly emphasizing the human element of caring.
No, just pointing out that she never really gave what was needed, apart from comfort in dying, which was good in itself. Having a vow of poverty didn’t deal with the practical needs of getting treatment for her sick people so that they could recover.
Why are these woke progressives all about ‘feelings’ and not about practical solutions to today’s problems? Promoting Christian ideals where one ‘loves thy neighbour’ is far more attractive than some intellectual nerds in their ivory towers talking about future human beings.
We need to be caring for the here and now.
…problem being: to whom or what does one extend the concept of «thy neighbour»? Is my great-great-great…grandchild a neighbour to me in any way?
“This shall be written for the generation to come: and the people which shall be created shall praise the Lord.”
“Who is my neighbor?”: Luke 10:29-37
Yeah, do you walk by on the other side or not. A difference from those who want money when you know it goes on drink or drugs.
Of course. We should care for our families and not just leave them to charity, but generally we will get opportunities to help others in small ways. I think love is a decision, not just a feeling. It often has a cost.
Yeah nothing to do with feelings.
Yeah but I go around all the time muttering, “Bl**dy Victorians inventing factories, bl**dy Normans razing forests, bl**dy Romans with their empire ideology, bl**dy Paleolithics deciding it was a good idea to move from warm Africa.”
Africa is too warm. Fewer bugs and diseases in the North.
Effective altruism sounds to me like an updated version of Benthamism which, like the great man himself, ought to be stuffed by a good taxidermist, seated in a big glass case and exhibited as a curiosity.
I think it’s going a bit far to suggest that charitable giving is ‘scary’ tbh. I’m also someone whose heart sinks when I get seated next to someone who wants to talk about their fintech startup (this happens with alarming frequency in my life) but honestly I admire anyone who gives 10% of their annual income to charity even if they are a tech bro. While it’s obviously right that intelligence isn’t the same as ethics, it’s a fallacy to suggest that this means that it isn’t possible or desirable to try to do a ‘rational’ cost-benefit analysis of how your funds are distributed, which is surely all effective altruists are doing.
The problem with giving 10% of your annual income to charity is that this approach is not scalable downwards. For the majority of society, giving 10% of income to charity is just not practical, and all it leaves them with is to admire/envy that minority, from whose 10% they may, sometime, get a pittance. None of these “ethical” people seem to be interested in making fundamental changes in society such that the grotesque inequality, which has been getting worse for well over half a century now, would be remedied.
I certainly agree that anyone interested in inequality should campaign on structural issues to lessen inequality. But I don’t think giving to charity precludes taking a political position as well, and I don’t see any evidence for the claim that “ethical” people are all apolitical (the couple of people I have met personally who are interested in EA are politically engaged). Also there are plenty of apolitical people who are in a privileged position to campaign against inequality but don’t do so. If you’re going to be one of these then better to be one who gives 10% of annual income (or whatever you can afford) away, no? Certainly it’s not “scary” to do this.
…Progs and EAs alike. Left brain hemisphere blinded psychopaths.
That’s a really great article, but there’s one thing that stayed on my mind.
I’m currently reading his book “Doing Good Better” and it feels like you’re leaving one of its most important points out of your analysis.
He focuses a lot on being more involved in your altruistic actions and actually measuring them. It’s not that you shouldn’t help those that are close to you (or yourself). I would put it more like this: if you WANT to help others, why not do it in the most effective way?
For example, just giving money to whatever charity you find might not be the best use of that resource, because often we don’t know what they’re doing exactly, we’re not closely involved, we don’t know about the actual results… The same logic applies to situations where you THINK you are helping but don’t follow up, so you’re not sure if your action was a good thing.
I haven’t read his new book about “longtermism”, but on the surface I have to say I probably agree with you on this one. It’s just that “effective altruism”, although related, seemed to be put in the same basket in your article – which I don’t think is fair: there are a lot of awesome outputs from his book that are not close to “wokeism”, but to seeing the truth and actually helping people, being close to them.
Another good piece from Kathleen. What we need is ‘bigpicturism’, not longtermism, which, for all its claims, remains short-sighted and dehumanising.
‘What we need is “Bigpicturism” ‘ Indeed!!
This is great. I read Scott Alexander so had heard of EA but never paid much attention to it. This seems like a highly effective takedown of a moral system that tries too hard to extrapolate logically from an incomplete set of axioms.
Classic: Perfect as enemy of the good.
Incomplete axioms indeed. You can complete the set?
“Those who can, do; those who can’t, teach”.
G.B.S.
“And those who can’t teach, teach teachers.”
Julian Farrows
Our politicians are often criticised for short-termism. Thinking about the long term is not new, however. Past generations have tried to live their lives with the future in mind. It is a weakness of our age to focus on our own immediate needs, as in the way we treat our environment and our planet (generally regarded as a moral issue) and in the debt we pile up.
It is fairly obvious to prioritise those closest to us, but there has been a trend in philosophical thinking to extend our moral obligations to humans far away (we give to disaster relief) and further to all sentient beings, as promoted by Peter Singer in The Expanding Circle (1981).
I obtained a PhD (for a study of the rocks of Ben Nevis). Should this have disqualified me from writing, some 40 years later, a book on the place of moral thinking in the human personality?
I agree with Kathleen Stock’s specific criticisms. Moral thinking about some issues can be complex, needs a combination of moral and intellectual thinking, and may need to recognise the validity of contrary thinking (which MacAskill seems not to). But long-term thinking should be encouraged.
I think the criticism is based more on the notion that there is a large subset of people (and those bearing PhDs comprise a big portion of them) who think the credentialed should have the right to unrivaled autocratic rule over others (be it by actual state-given power, or just morally).
The last two years should have convinced just about everyone how wrong that idea is, but it very strikingly has not.
I confess I had never heard of “effective altruism” or “longtermism”.
I suppose I could buy the book to know more, but on the other hand I can have a short-term approach and ignore it as I have not really understood what it is about and don’t feel invested enough to be wanting to know more.
Add “future-proofing” too, meaning that, rather than standing for anything good and noble, institutions desperately go with whatever they believe is cool and trendy in order to remain “relevant”.
That’s a very big weakness and tends to kill my confidence in the organisations who do it.
You are not alone in your ‘ignorancing’ of this subject. There are many of us so unafflicted.
Glad to hear.
It is sometimes true that ignorance is bliss.
EA just seems to be another version of Utilitarianism but put on steroids; and Utilitarianism has never been an ethical system that people find workable in their everyday life. How many people would save the two drowning kids near to them and let their own child drown because he was further away? Which they should if they were strict Utilitarians – saving two children is more moral than saving one. Ethics has an emotional element and, contrary to Dr. Stock, I believe that Professor MacAskill knows this as well as Michael Buerk did in Ethiopia in 1984.
If ‘future people should count for no less, morally, than the present generation’, does it follow from this longtermism that abortion is morally wrong?
Without doubt
Yes, but at the same time it makes suicide almost mandatory.
At what point in life? I’m still productive at 82…
Good for you.
Natalism is the word you want.
It’s a common question! I explain why the answer is ‘no’, here. (Hint: if people with organ failure count for no less, morally, than healthy individuals, does it follow that allowing people to keep both their kidneys is wrong?)
No..
I raised the same question but can’t find my comment now. Maybe it got deleted? Those in the womb would have been part of the future generation.
That is one goddamn fine review. The hilarious and biting character observations alone resonate with my own from when I was at uni back in the day.
According to the woke, if you think kindness and inclusion are important, you should seek to pursue these attitudes mechanically, not just within institutions, but also in sports teams, in sexual choices, and even in your application of the categories of the human biological sexes.
Ah, woke marxism and its iron law of projection. And of course pathos over logos.
It seems to me that ethical longtermist decision-making involves a very large number of variables. And that the number of variables increases exponentially as we look further into the future because the number of scenarios is multiplied.
So, logically, it seems to me that this would all be done best by an Artificial Intelligence machine powered by a large supercomputer from IBM. We could call it “Deep Ethicist”, after the famous “Deep Blue”. And we’ll give it a position in the cabinet of the president or prime minister, so that all our national decisions will be ethical. This, I think, must be the scientific approach.
What could possibly go wrong?
(Aside from, everything)
What could go worse than nuclear war, famine and pestilence?
This is a dashed-off response to express my relief at this post after Harrington’s yesterday (‘Am I really a threat to democracy?’), which left me with a vague sense of disquiet, wondering whether UnHerd wasn’t in fact surreptitiously gearing readers to accept the very tech-bro future this post is critiquing.
It was a bit depressing certainly, but it was also possibly a call to arms.
Not sure I agree:
I actually answered this directly in the comments to the article in question, but in short I think it misses the point that the adverse consequences from the various forms of institutional decay are already apparent and will get worse if there isn’t a change of course. Younger people are presently supporting ideologies that will immiserate themselves, and they won’t simply keep doing it.
Bicycling from your parents’ home isn’t immiseration. Flooded China and Pakistan and Kentucky are immiseration.
We get it, you made your point ten comments ago
Mm, caring about the generation still to be? Well over 9 million babies have been aborted in Britain since the 1967 Act. Wouldn’t they have been part of a new generation? So much for paving the way for the new generation. These were not even allowed to take their first breath.
A very revealing piece! Brilliant to join all the dots on this murky stuff. Kathleen, you need to write another book on all this.
It’s all very ectomorphic, phenotypically and self-denial-wise: skinny people preaching paying-forward. It doesn’t look as self-righteously arrogant as wokeism, and a little self-denial never hurt anyone. As an admirer of Elon Musk, I don’t see a lot to dislike here, so long as it is not mandated. Surely the snowflakes won’t miss a few flakes.
One fact I need to correct.
The author cites GiveWell as an affiliate organization of the effective altruism movement. She then states that the effective altruism movement had an “earlier focus on targeting malaria”, implying that this is no longer a focus of the movement.
The top two charities currently recommended on the GiveWell.org website are the Malaria Consortium and the Against Malaria Foundation.
This analysis is simultaneously brilliant and sane.
Thought provoking article – thanks! I myself live a fairly austere life – not because I want to protect the planet for future generations but because I’m a mean badrats (anag.) I may even read MacAskill’s book – if I can get a copy for nothing.
It seems to me that western culture has been seduced by the successes of Physics into trying to generalise everything. Generalisation is blindly associated with intellect and so recent pseudo science has gladly picked up this golden goose. This leads some, like MacAskill apparently, to see faces in clouds and to claim they have meaning.
We have to stop engaging with this debate on its terms. Untestable and unfalsifiable generalisations – whether they’re about the past, the present or the future – are anathema to reason. We need to treat this kind of futurology with the contempt it deserves.
Would love to hear the author debate MacAskill, if UnHerd could make that happen.
Superb. Thank you, Kathleen
This just seems like a confusing mash of utilitarianism and – I don’t know what. Stock is right I think that it doesn’t really touch on the basic issues of trying to derive a morality from science.
But it reminds me a little of something David Goodhart pointed out in “The Road to Somewhere” – that the moral landscape of the anywheres is restricted compared to that of the somewheres. The latter includes things like loyalty, community, tradition, a strong sense of place – all of which create moral obligations to those we share our lives with directly. The anywheres have none of this, recognizing only fairness, and so tend to see our obligations to the whole world as having equal weight.
There should be no surprise which group most university philosophy profs, or Elon Musk, would fall into.
Gratuitous slam on Musk.
Hardly gratuitous when his picture is at the top of the story.
Just remember that the Nazis of 1940 were utterly convinced that they were building a better world for future generations to enjoy.
Talk to any committed ‘progressive’ for 5 minutes and they will be categorising and labelling you – essentially dehumanising you – so that when you are no longer worthy of classification as a valid human – you can be marginalised, cancelled and expunged.
Just as you marginalize them collectively.
As always, an excellent article. No doubt some mangled variant of EA will soon start appearing in our schools. Now children put down your teddy bears, today we are going to learn about critical race theory and EA.
What a brilliant take. Wow.
OLD COMMENT:
Seems it is being hijacked by tech-bros and Bill Gates.
Bed nets, planting trees, ideas beyond borders (book translations), —->let’s go to Mars. In that case, it is being hijacked.
Simply put, I would hope some rational charity catches on. EA is not a glib, race-card-pulling, political-othering, purity-testing, sloganeering movement. It should not become the Gates Foundation or an offshoot of SpaceX.
Wokeness is based on problematising and witch-hunting; EA is (was) based on being effective and sustainable. Inserting Gates/coinbro/Elon-ism as longtermism ain’t it. (The Matrix, Switch over the dead partner: “not like this… not like this.”)
If EA or its affiliates still cater a decent list AND I can protest their new dumb goals by going around them, I will use it.
END OLD COMMENT
Update (GIVEWELL IS NOT ALL THEY PROFESS AND THAT’S ALL I SNOOPED ON SO FAR; I am not changing my career):
Looked at their GiveWell list and it is 4 charities about malaria in Africa. What. The. Hell. It’s just “save the whales” for a few areas of Africa, which are getting overrun by convert-or-die islamists. But we’re “saving lives”…for what?!
“Hey, bro, wanna say you’re saving lives, bro?”
Didn’t find anything about Mars or buying up all the farm land, but that’s more about the big-shots talking about the movement. Them becoming another WEF.
They are VERY transparent about all of their mistakes (issues raised) and have bowed to diversity in hiring in a, meh, typical way, whatever. But they are not a charity-rater and all that diversity hasn’t found them a more diverse or important list, so…<yeah>
Here are my criteria for best charities: encourages/actively fights corruption and human trafficking; spreads ideas and concepts like liberty, environmental responsibility, raising standard of living, freedom from religion and tyranny…
Please add your short and mid-term goals and concepts and refer the royal “us” to a good list or an actual charity reviewer of that kind (none of those “giving free popsicles” charities)
“If EA or its affiliates still cater a decent list AND I can protest their new dumb goals by going around them, I will use it.”
Giving What We Can is more like this, but I may still reference them and avoid the crap covered in this article.
Thanks for a riveting article. Though I think, with Typical empathy, we are gradually accepting that many successful academics and billionaires are A-typical. They are highly focused and on the so-called autistic spectrum. It was a bit below the belt to throw out ‘speccy’ as a slur. Though the anger is real, ASD is real too and not an affectation.
To expect such individuals to be empathetic on an emotional level is not realistic. They have a lot to give. And they sometimes find solutions and become very rich (like Steve Jobs to name another) and yes they will find like minds.
They should be listened to and understood, they represent an important side of human nature.
But it’s right that our attention is brought to it so well. Because their ideology can seem callous.
As always, this is a great article made all the better by the problems unfolding at FTX. As you say ‘we find longtermists such as Bankman-Fried worrying more about future non-existents than humans suffering today.’ We now know that he was more concerned with fleecing his customers and investors – no doubt done in the cause of future generations – perhaps?
Long Termists tell us that in the future there will be 10^1000 people occupying the whole universe. And if we stand in the way of this future it is equivalent to us murdering every one of them.
Therefore if a Long Termist Guru has to eat babies now, in order to enable this marvellous future, then we should let them do it. It’s the morally right thing to do.
It’s not a power grab at all.
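For what it’s worth, here is a minimal sketch of the expected-value arithmetic being parodied above, with entirely made-up numbers rather than anything taken from MacAskill:
\[
\mathbb{E}[\text{future lives affected}] = p \times N = 10^{-10} \times 10^{1000} = 10^{990},
\]
so on this style of reasoning even a vanishingly small probability p of influencing N future people swamps any finite present-day cost – which is precisely the licence-for-anything move the comment above is mocking.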
The longtermist Mrs. Jellybys are going to have some trouble with the Dr. Evil transhumanists, resulting in a tech nerd war consisting of angry tweets and virtual spit balls while sensible people get on in the real world.
While society lasts…
The Jellyby Evils have weighed in
Hilarious example: the word “balls” is immediately censored!
It’s an algorithm, which is a baby AI.
It’s coming…
I’m confused by the word “b***s”. The excluded word that immediately comes to mind rhymes with “itches”, but has too many letters, so it must be some other word. Frankly I would be happier if the censors didn’t bother. Whatever the word actually was I’m sure I could have coped with reading it without reaching for the smelling salts.
An excellent article. Thank you! We need people like Kathleen Stock whose intellectual courage keeps us motivated.
These people seem to prefer the abstract at the expense of the concrete, prefer what might be to what is, and to disregard what cannot be measured. This strikes me as quite unbalanced.
I’m thinking if a man’s grasp doesn’t exceed his reach, then what good is a platitude. Sounds, I don’t know, visionary.
There’s history behind this stuff. WM follows on from Derek Parfit (a great philosopher: the inheritance isn’t total), who couldn’t abide desires or dispositions as the basis of moral judgments; and many of those who emphasised those things were indeed no more impressive than their intellectual papa, Hume. Parfit argued for decisive objective reasons, and did so fascinatingly if not compellingly. The trouble is that there is opinion around the fringes of such reasons: thus there’s evidence for human effect on the climate, unsurprisingly given how many of us there are. But how that effect will play out is open to argument: and what to do is in any case political. So ethics bumps up against politics, which is another way of saying that ethics is an art. To avoid that bruising encounter, ‘science’ is wheeled on (one feels a little sorry for it) to see off any human doubt about the high-minded prescriptions of Effective A. Reminds me, all this, incidentally, of the new atheists in the noughties banging on about how frightfully unscientific Gaaaard was.
Great article.
Since my own doctoral dissertation was largely based on Parfit, I have to comment: Most of his book, Reasons and Persons, is a brilliant analysis of personal identity. The last part of the book, dealing with ethics from a utilitarian perspective, is a disaster, in my view. You don’t have to be a utilitarian to enjoy and be enlightened by the rest of the book.
Agreed. I love Reasons and Persons, which came out while I was doing my degree. He strikes me as always interesting – even ‘On What Matters’.
In the mindset of a longtermist, I wonder, does the wellbeing and survival of a foetus weigh more heavily than the freedom of its birthing person?
Woke much? The word is “woman”.
Some ideas are so stupid that only academics can believe them – a saying attributed to Orwell, paraphrased from an original likely quite different.
Brilliant; fascinating
Brilliant
“Impartially considered, future people should count for no less, morally, than the present generation.”
Whence comes that ‘should’? Certainly not from anything about human nature, nor from any pure logic. Nor is there any obvious pragmatic advantage. It can only come from the warped mind that delivered it. Peter Singer raised to the ultimate. And what might it mean to say ‘impartially considered’? The consequences of such arguments have been dealt with at length in discussions of Singer’s work and shown to be disastrous. Impartially considered, most people in the world might count for more than the hospitalised paraplegic, so turn off the life support systems. Consequentialist arguments that set aside our humanity and our natures are exceedingly dangerous. Well outed here by Stock.
And there was me thinking longtermism was something our successive governments did not do in order to safeguard our present energy supply. So many words, mental gymnastics, acronyms. There was me thinking how like it was to the experiment played on little children to see which would eat their sweeties straight away and which held out for a reward. Who wouldn’t want a world where pollution, forest and habitat destruction, cruelty to animals and abuse of power over poorer populations were wiped out? The analogy I see is that of religions being messengers from god to the populace. The middle man who at times is brilliant and so often flawed. Hence the proselytising, misplaced protests, cancellation of free speech or any form of questioning – didn’t, don’t, all forms of religion act thus?
A thought just occurred: do you think MacAskill believes in Roko’s Basilisk? In any case it is now too late for anyone who reads this post of course (•‿•) …
“They don’t know they are driven by emotions”
Well, what the author frames disdainfully and mockingly as a criticism actually sounds like a compliment to EA, given the basic facts of human psychology. For instance, one needs to think of individual lives (instead of big numbers) in order to be truly moved by the moral weight of the arguments around effectiveness and longtermism; hence MacAskill’s use of graphic descriptions of future or distant people suffering always seemed well justified to me. This looks like an oxymoron to the author, which is partly why she ends up, probably unwittingly, strawmanning EA: she doesn’t recognize that EA does not argue for being completely emotion-free. By that logic, we’d expect a psychopath devoid of emotional capacity to be a perfect match for EA, which it would be funny, if not pathetic, to assume. Emotions are the starting point: without them you can’t CARE about others, let alone ponder maximizing your impact as their influence gradually diminishes. EA having its own problems does not negate the fact that it provides an incredibly useful framework for helping others. So long as it saves human and non-human animals that could not have been saved otherwise, I don’t care much about these supposed commonalities with Woke or whatever (which rely on mischaracterizing EA anyway). Pain is experienced on an individual basis: you wouldn’t mind these criticisms if you were about to die from a preventable disease in a “distant” corner of the world, as it wouldn’t feel distant to you at all. While ideally aiming for the highest impact, we always need to consider the suffering of the individual.
Of course, one might wonder, for those who feel a general urge to help one’s fellow man, is not effective altruism better than ineffective altruism? One might take a deontological stance and argue against consequentialism and utilitarianism, but is there really an argument against those with a lot of dosh to give away trying to maximize the good they do when they actually give it away? What substitute drawn from a deontological view (or any other ethical stance not embraced by the current EA crowd) would you propose they use in place of a utilitarian measure if their goal is to maximize the good they do with their money?