J Bryant
1 year ago
A very fine essay, imo. I’m not surprised effective altruism became a favorite analytical approach in Silicon Valley. Those folks have a distinctly quantitative mindset. Everything should be measured and probabilities calculated.
I’m also not surprised the Silicon Valley set feel no more emotional attachment to the current generation than to a future generation a hundred times removed. It has been observed, and written about, that there’s a higher percentage of people on the Asperger’s spectrum in IT. I think that observation is somewhat disfavored now because there might be a hint of stigma, but I believe it is true. For the Valley crowd it may truly be as easy to empathize with hypothetical future generations as with the present one.
Of course, the most successful of the Valley folks are super wealthy. They seem to be among the “elite” who disparage the ordinary working person, the so-called deplorables. Perhaps they view future generations as somehow better, or at least having the potential of being better, than the current generation, perhaps through the operation of evolution.
Or perhaps the philanthropy of the elite seeks any high-minded, philosophical justification to disguise the self-interest at its heart.
Part of the issue is that Silicon Valley whiz-kids don’t have enough data (by a long shot) to sufficiently predict their users’ current status, let alone their future. We’re just at the tip of the iceberg in collecting and analyzing the voluminous data behind what makes each of us uniquely human.
But this, along with these whiz kids’ urgent desire for empirical, quantitative output, leads the valley geeks to false gods. EA algorithms are one example. Another is Intersectionality grid point-scoring, imported by the social-science Yale crowd that moved to the valley for tech jobs, coupled with the very rudimentary-but-quantifiable surface data collected by social media.
In addition, these social media companies survive by trying to drum up more advertising dollars. Because their data, coupled with rudimentary Intersectionality grids, you know, figures people out. Wizards behind the curtain in Oz.
Their world fits together quite nicely, until you see the real-world effects: good people losing a career they love because their intersectionality score proves they are a n4zi fascist or something.
Or a poor white male Christian kid with a brain like Einstein’s who happens to be living in a shack in the Ozarks, who will never see an ad for Harvard and will never be challenged. Their gifts for us will never be realized because they are living in a digital ghetto created by their Intersectionality and online social score.
When one is a hammer in Silicon Valley (with precious little data), everything and everyone looks like a nail.
A closed-loop pseudo-religion if there ever was one.
As someone with Asperger’s, I’d caution against stereotyping. What you say may be true of some but not all. The social dysfunction symptoms of the autism spectrum are by no means uniform.
Personally speaking, I feel almost none of the abstract altruism the article discusses. I’m only affected by suffering I can see in front of me or through a picture, sometimes very powerfully so. I’m very sympathetic to Holocaust victims because I’ve seen the pictures of liberated Auschwitz. Whenever the topic comes up I remember those pictures and it makes me angry all over again.
My personal theory is that the social instincts simply don’t work right in my brain, so any emotional chain of motivation that relies on a sense of commonality with a group, religion, culture, or all mankind is utterly meaningless to me. I simply don’t feel the urge to seek out human companionship to nearly the normal degree, and I can’t really form the bonds that make people into a tribe, a culture, a town. I can only form individual relationships with people I am in the room with, and then not as well as others. The individual one-to-one connection is the only one I’ve ever figured out how to make. I can imagine the others and understand the logic behind them, and even pretend to feel them if the situation calls for that, but I don’t really ‘get it’ in the way normal people do.
Incidentally, I’m also more powerfully affected by the suffering of animals and children, especially babies, than adults, perhaps because there is a natural instinct among all mammals, even the non-social ones, to care for small helpless creatures so that we care for our young rather than murder them.
The philosophy of effective altruism thus seems absurd on its face to me: the product of our hypersocial modern world and the hypersocial personalities who tend to find success in it, the product of an excess of social feeling rather than a deficiency.
“…so any emotional chain of motivation that relies on a sense of commonality with a group, religion, culture, or all mankind is utterly meaningless to me.”
That statement of yours reinforces JB’s remarks about the Silicon Valley crowd.
I wonder how these altruistic people feel about abortion? It would make perfect sense, in today’s twisted world, if they supported the killing of unborn babies today, but espoused policies that would protect people not yet born hundreds of years from now.
I should be more precise. I meant that I don’t consider myself part of any particular group, race, etc. nor do I understand why people feel the need to identify themselves thus. People seem to ‘need’ to be a part of some larger group or purpose, but I have no such feelings. It is alien to me. I can perhaps theorize about evolutionary benefits and reason through the psychology of it, but that’s all. I have very little emotional connection to people other than the most direct and immediate ones, or to certain individuals, but never a group. Abortion is a complicated and difficult issue for any religion or philosophy to contemplate and yes, the possible contradictions you mention are a salient point.
I admire the thoughtful path you’ve taken toward self-awareness and connect with much of what you’ve written on the larger topic of things-that-are-near and things-that-are-distant, in either a spatial or temporal sense.
But I don’t think strict long-termism, or any variety of doctrinaire utilitarianism, is likely to result from what you call an ‘excess of social feeling’. I’d estimate the sponsoring impulse to be something closer to idealism or intellectualized empathy, a kind of utopian coping strategy likelier to be found in people who are at once very intelligent and not very socially attuned.
Your caution against stereotyping is well-taken, but Jeremy Bentham was, in our current umbrella-term-happy parlance, almost surely ‘on the spectrum’, and so is MacAskill, in my non-expert opinion, which is largely based on a single long profile recently published in The New Yorker. https://www.newyorker.com/magazine/2022/08/15/the-reluctant-prophet-of-effective-altruism
I want to avoid labelling or pathologizing viewpoints I disagree with, but I’m not there yet, and my personal sense of utility joined to moral responsibility deems rigid utilitarianism (or consequentialism, long-termism, computational pragmatism, etc.) to be both insufficiently humane and, echoing Ahmed, unrealistic. Any philosophy that can announce: ‘By my calculations I must allow your unjust, preventable death in order to save ten people in the 31st Century’ is a bit too detached, and even just plain wrong.
Nice comment. I think that, on some level, these guys (from Rousseau and the German idealists on, say) think that they can imagine a moral world where human nature can be changed (or changed back) using pure or “better” Reason (remember, even Kant’s moral system was based on one true reasonable “Rule” – the categorical imperative).
This “long-termism” always presumes that humans can figure out how to avoid death and war and anger, so that everyone loves his neighbor in all places at all times. But human nature stubbornly resurfaces in all places at all times, and it’s still difficult to hoodwink God.
I also like the essay.
I am puzzled by the faulty logic in the book.
MacAskill confuses the moral claims of actual people, future people, and potential people. He says “Future people are people too”, which is not correct: actual people currently exist, whereas future people are a conjecture. More seriously, conflating actual people with potential people has troubling implications. If potential people have a moral claim on us, then abortion and birth control are morally problematic. MacAskill tries to hand-wave this away, but without any rigor. If many people have to suffer in order to create huge numbers of future people, where does that lead us? And why should I care whether there are 10 trillion people rather than 10 billion people in one million years?
Utilitarianism is the basis of EA and longtermism. It was developed in the 18th century to replace religious moral certainty with rational moral certainty, and it is no coincidence that the EA movement is quasi-religious. The idea that any moral action can be measured on a one-dimensional numeric objective function, and that addition and subtraction can be applied to a set of moral actions, is relatively easy to challenge, except to sociopaths. Compare the torture and murder of a child to a permanent cure for acne. One is a horrible moral injury and the other is a real relief of human suffering. But no matter how many people are relieved of the suffering of acne, most people would consider that it could never offset the moral injury to that murdered child. Morality is not one-dimensional. We can apply rough comparisons to moral actions – saving two human lives is better than saving one (in general! one could come up with exceptions) – but there are different categories of morals, good and bad. Morality is not economics, or accounting.
Utilitarianism, especially in its longtermist form, is capable of justifying terrible actions, particularly when used by immature, geeky, self-styled smart (and potentially sociopathic) people. It is no coincidence that SBF was a fan, and that Elon Musk liked the book. It is easy to skew the utility function, and the probabilities, to suit one’s natural predilections.
I am surprised that such a philosophically weak book was written by an Oxford philosophy professor. Standards must be slipping!
As an UnHerd fan, but also as someone who has been involved in EA at university, I feel compelled to give a more complete picture here.
I have to start with Arif’s conflation of long-termism with the general EA movement. Yes, a lot of EA was built on utilitarian ideas. Yes, utilitarian ideals can lead one to support some version of long-termism. However, this does not mean that EA membership/affiliation/identification necessitates a utilitarian worldview, nor that being EA means you have to subscribe to the long-termist view (or at least not in the way MacAskill recommends).
The core idea of Effective Altruism is that certain altruistic interventions are better than others. EA then implores individuals to think beyond their emotions when selecting a cause to intervene in. It also supports using data and statistics to ensure that they use better interventions in support of that cause.
The movement/philosophy then outlines three useful criteria for identifying causes that one should try to solve. A single problem does not need to fulfil all three criteria but should meet at least one. The problems should be:
– Large in scale (i.e., they affect a large number of people or sentient beings)
– Tractable (i.e., we can solve them)
– Neglected (i.e., so that the impact of your intervention is more significantly felt by those helped)
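For illustration, here is a toy sketch of how one might turn those three criteria into a rough comparison score. This is my own sketch, not an official EA tool, and the cause names and numbers are entirely made up:

```python
# Toy illustration of the scale/tractability/neglectedness criteria above.
# All causes and figures are invented for illustration only.

def cause_score(scale: float, tractability: float, neglectedness: float) -> float:
    """Combine the three criteria multiplicatively: a cause that is vast but
    intractable, or tractable but already crowded, ends up scoring lower."""
    return scale * tractability * neglectedness

causes = {
    # name: (people affected, tractability 0-1, neglectedness 0-1)
    "global malnutrition":   (800_000_000, 0.5, 0.4),
    "factory farming":       (1_000_000_000, 0.3, 0.7),
    "pandemic preparedness": (8_000_000_000, 0.2, 0.8),
}

for name, args in sorted(causes.items(), key=lambda kv: -cause_score(*kv[1])):
    print(f"{name:22s} rough score = {cause_score(*args):,.0f}")
```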
From this, EA comes up with a wide range of causes to support. Causes such as reducing global poverty, malnutrition and hunger are actually its more mainstream ones. Reducing animal suffering by ending factory farming is another mainstream-ish one. Woke ones, such as US criminal justice reform, have arisen. Long-termism is another cause area, popular among the tech-associated proponents of EA, though I think it is still not very popular in the movement itself. And then there are some more fringe ones, like establishing rights for sentient AIs (when they arrive; some believe they are already here) or trying to end all animal suffering in the wild.
So yes, one can see that there are some weird EA ideas floating around, and again one can clearly see the influence of utilitarianism. However, I believe the movement has a lot of viewpoint diversity. I have not mentioned all the cause areas, but I think it is clear from those mentioned that some may conflict with others. In fact, the tech long-termists often get criticised by people within the movement for choosing future non-existent people over “black and brown people in the Global South”.
There are also different ways people choose to support their selected causes: from earning-to-give (quite controversial), to advocacy, to traditional charity work, to building or working in companies or research institutions.
Personally, I was quite drawn to the idea that the causes we support should be neglected. One reason is that I do want my altruistic actions to have a significant impact. But more important for me was that neglected causes had not yet been politicised. Living on a university campus with a strict orthodoxy and a censorious atmosphere meant debating ideas was not very welcome. With neglected causes, however, the ideas were generally politically neutral. This meant debate was more welcome and easier to have, as there was much less fear about saying incorrect things.
I also think that one can care about the long-term future of humanity while still caring about current social problems. I think reducing existential risk and our exposure to certain identifiable downsides is a good idea (like trying to stop experiments that increase the virulence and transmissibility of viruses).
As a Christian (born and raised) I also felt that EA takes something that those of us who are reasonably well-off should do anyway (i.e., give to and support those in need) and just urges us to be smarter about how we do it. I think many UnHerd readers will actually support that idea even if they are not so keen on long-termism.
Lastly, I do want to say that the official EA movement has probably always been liberal-leaning (and has more recently embraced a lot of woke behaviours and phraseology). This is a consequence of it being very popular amongst students and white-collar finance and tech people. However, I still believe that anyone can and should embrace the outlook to inform their altruistic actions. If you can help people, why not help them in a better way (if one can be identified)?
The tech and finance “elite” are very cunning and have suckered millions of folks into believing they are green, they care about the climate, diversity, inclusiveness, and all of the other current PC BS slogans. They are the greediest arseholes in history and would sell ALL future generations and probably the past generations for an extra shekel.
Jeremy Bray
1 year ago
An excellent essay by a man with a good sense of ethics, in my opinion. The Bible has an important passage that warns against relying on a projection of the present into the future: “The race is not to the swift or the battle to the strong, but time and chance happeneth to them all”. It is hubris to suppose we know the future on the basis of a limited knowledge of the present.
Interestingly, the woke philosophy so prevalent among academics focuses too exclusively on minor present harms to particular groups, to the neglect of the more abstract harms involved in suppressing freedom of speech. So the hurt feelings of the rare trans woman on hearing the “wrong” pronoun applied are prioritised over the general value of the larger population being able to use the biologically correct pronoun. The particular is prioritised over the more general good of the many. This is curiously antithetical to considering the long term.
The author seems to have got tangled up in a pronoun salad of his own: “If each policeman is short-sighted and slow, each additional unit of attention might be better focused on problems that she can effectively address (those in her sector) rather than the ones that she can’t.” Or is “she” now the correct pronoun for a policeman? Surely the word policeman itself is now taboo – shouldn’t it be policeperson? And what does one call a transgender policeperson? I’m confused.
Saul D
1 year ago
The problem is that everyone thinks they are doing good in some way – even when the consequences of their actions end up being bad. Pre-war eugenicists thought they were working to improve human beings for the long term. Look where that took us. Communism was a theoretical perfection. Look where that took us. And then look at what we learnt from the ‘badness’ that followed those ideas.
The longer you look to the future the larger the error – too many unforeseen consequences. And if you don’t make the error, you don’t learn to correct for it, or to take it into account later.
The future is another world. It’s best we try to resolve what we can now for the people we are now.
I may be pummeled for being off base, but I see a connection to CRT/Frankfurt School teachings. My son was a disciple of CRT in high school while studying debate. (All the kids are enthusiastic disciples.) I only associated the word “continental” with a light breakfast, so I had no idea about any of it. To remedy my ignorance, I started listening to old lectures on the Frankfurt School. One of the best was Marcuse being interviewed by Bryan Magee: https://youtu.be/0KqC1lTAJx4. I found Marcuse to be an elitist with a disdain for women – women were only a construct, but female ways were useful in the short run to further spread the doctrine. (Sort of a riff off of Pygmalion/My Fair Lady.) I find MacAskill’s theory fits nicely within the philosophy – though I understand that there are many derivatives of it and I have only a very basic understanding. But for me CRT/Frankfurt boils down to planning – bureaucratic planning at that, with rigid long-term goals – as opposed to individuals making the best decisions for themselves with the knowledge they possess. The latter seems to allow more easily for pivoting if harm is produced.
It sometimes feels like we are moving towards a Spanish Inquisition-style system where everything is controlled and dictated ‘for the good of humanity’, and dissent must be removed. The contrast, of course, was with the English system of laissez-faire and muddling through. The difference was felt through the Enlightenment – for instance, try to name some Spanish scientists or technical breakthroughs from that age.
Whilst I agree that the Frankfurt School bears some blame for what is currently happening, it was really not until deconstructionist and post-structuralist philosophy took hold in European universities that the rot set in. In particular, this school eschewed discussion and debate, which they believed was just another form of power politics (they, of course, ignored the fact that refusing debate is itself a form of power-wielding). For some reason this tosh generally remained within universities in European countries, but when it was exported to the USA it took off, and then it spread to the UK. The danger it poses is that of no debate, which leads to the silencing of dissenting voices. I don’t care what nonsense they spout (I’ve heard it at many philosophy seminars), but I want to be able to disagree vocally and in print.
“The future is another world. It’s best we try to resolve what we can now for the people we are now.”
As Jordan Peterson cautions, tidy up your own room before trying to change the world.
jmo
1 year ago
The plans of the long-termists surely have to be based on the work of theorists and modellers. They never get anything wrong, do they?
Yes indeed. As the saying goes, “Predictions are difficult, especially about the future”.
Obviously one of the many problems with his thesis is the idea that we can weigh our future, using a formula, according to our current values. Indeed, if that formula is out a bit, it might be out rather a lot in 400,000 years or more!
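To put rough numbers on that (a toy illustration of my own, nothing from the book): a formula whose annual weighting is off by even a fraction of a percent compounds into an enormous error over long horizons.

```python
# How a small annual misestimation compounds over long time horizons.
for eps in (0.001, 0.01):                 # 0.1% and 1% error per year
    for years in (100, 1_000, 10_000):
        factor = (1 + eps) ** years       # cumulative multiplicative error
        print(f"{eps:.1%}/yr over {years:>6,} years -> off by a factor of {factor:.3g}")
```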
“Knowledge puffs up, but love edifies. And if anyone thinks that he knows anything, he knows nothing yet as he ought to know.” (1 Corinthians 8:1-2)
“As a father pities his children, so the LORD pities those who fear Him. For He knows our frame; He remembers that we are dust.” (Psalm 103:13-14)
“We need to develop genetic engineering technologies and techniques to be able to write circuitry for cells and predictably program biology in the same way in which we write software and program computers.” (US Government, Sept 2022)
“It would therefore be worth spending many centuries to ensure that we’ve really figured things out…” (William MacAskill, quoted above)
“We know so much, and we understand so little. Lord, guide us; Lord, graciously guide us.” (Me, with monotonous frequency for the last few years)
“Therefore when thou doest thine alms, do not sound a trumpet before thee, as the hypocrites do in the synagogues and in the streets, that they may have glory of men. Verily I say unto you, They have their reward.” Matthew 6
“For our light affliction, which is but for a moment, worketh for us a far more exceeding and eternal weight of glory; while we look not at the things which are seen, but at the things which are not seen: for the things which are seen are temporal; but the things which are not seen are eternal.” 2 Corinthians 4:17
“Let not mercy and truth forsake thee: bind them about thy neck; write them upon the table of thine heart: So shalt thou find favour and good understanding in the sight of God and man. Trust in the Lord with all thine heart; and lean not unto thine own understanding. In all thy ways acknowledge him, and he shall direct thy paths.” Proverbs 3:3
“It would therefore be worth spending many centuries to ensure that we’ve really figured things out…”
MacAskill has lived for 35 years, five of which he most likely can’t remember. After that small amount of time he claims to be able to calculate what to do now to beneficially affect the world hundreds of years hence. He then tells us to spend many centuries figuring it out. Well, which is it? Use his formula, developed after 35 years of sentience, or use the wisdom of centuries, i.e. what common sense tells us to do?
THIS. 10,000 stars, wholly agree. It confuses me why billionaires think going to Mars is more important than focusing on those right in front of us – the homeless, addicted, abandoned, oppressed, uneducated and sick. Improving our own weaknesses and faults. Growing in virtue. Maslow’s hierarchy of needs is tipped upside down to feed their egos. Most of humanity will never go to Mars. But their lives will be improved by clean water, good books, vaccines and safety from violence.
Let’s do the plain “boring” things well and go to Mars later. How about supporting the amazing caregivers of our Alzheimer’s population, or parents who sacrifice their lives for medically fragile or autistic children?
There’s an epidemic of mental illness and opioid addiction, and a lack of psychologists and social workers in society, and Elon and Jeff want to go to Mars. How ludicrous. How about paying every teen in society to exercise for a year and seeing if their sense of self-worth, control and mental health improves?
Anyone who thinks humans can alter the climate on earth will think they can impact human lives 1000 years from now. It most certainly is hubris.
Jeff Cunningham
1 year ago
The idea is breathtaking in the scope of its hubris.
Christopher Chantrill
1 year ago
Before we get to the long term we have to survive the short term. For instance, Europe has to get through the coming winter without freezing to death, because of the lack of Russian gas, and because the Dutch farmers have been well and truly sorted for adding to the 780,000 parts per million of nitrogen in the atmosphere.
The way things are going there may not be a long term for the human race.
I have a little education in mathematics. It’s scary: I know just enough to realize how much about mathematics I don’t know, which most people never discover.
Education should destroy hubris but sadly, it seems to do quite the opposite.
MDH 0
1 year ago
“Educated beyond all decency and common sense.”
No No
1 year ago
So refreshing to read an informed response to MacAskill’s work. I found his book very interesting, but basically disagreed with some fundamental premises – Mr Ahmed articulates some of this much better than I ever could! To twist Hume, I have a problem with people who ascribe equal weight to “is” and “may be”. There may be humans in 25,000 AD, or there may not (meteor, anyone?) – but there *are* humans now. That reality, that materialism, seems to matter to me.
The other thought I have had derives from my financial background. The applied ethicists seem to me to be using a woefully wrong discount rate in their financial models (or, on the evidence of some talks, no discount rate at all). This can end up overvaluing the welfare of future people relative to the people of today. For example, a 5% discount rate weights the utility of one person today as equal to about 3.4 people in 25 years’ time, while a 20% discount rate makes one person today worth roughly 95 people in 25 years’ time. Even at a low 5% discount rate, the model would suggest a unit of utility today is worth about 1.5 x 10^21 units of utility in 1,000 years’ time.
A similar but subtly different argument could also be made for distance, again from a perspective of materialism. Digital connectivity makes the world seem small, but in many ways our good deeds can be much more impactful for being local, and create better positive externalities via network effects (I treat local people/adjacent nodes well, who treat their adjacent nodes well, who treat their nodes well, etc.). The network effects and positive externalities are lost when you try to jump 50 nodes to a country and people thousands of miles away, ignoring those physically closest to you. Anyway, thanks UnHerd, really enjoyed this.
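For anyone who wants to check the compounding behind those figures, a minimal sketch of my own, assuming simple annual compounding:

```python
# Discounting future welfare back to the present: at annual rate r, one
# person today "equals" (1 + r) ** t people t years from now.

def future_equivalents(rate: float, years: int) -> float:
    return (1 + rate) ** years

print(f"{future_equivalents(0.05, 25):.1f}")    # ~3.4 people at 5% over 25 yrs
print(f"{future_equivalents(0.20, 25):.0f}")    # ~95 people at 20% over 25 yrs
print(f"{future_equivalents(0.05, 1000):.2g}")  # ~1.5e+21 at 5% over 1,000 yrs
```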
The work being done to protect against meteor strikes is a good example of long-termist thinking! Of course it is an example where we do have an understanding of the threat and some idea of what needs to be done to combat it.
Exactly. This article should be read as an argument against misdirected long-termism, not valid long-termism.
In our world of hyper-division of labour and specialisation, some of us go to work each day to look, accurately, billions of years into the past and into the future. But at the end of the day, we all come home to the much smaller orbits of familiar concerns dominated by those nearest and dearest to us.
Unlike Mrs Jellyby, billionaires have typically more than catered for the needs of their circle of intimates, and they rightly feel the need to work out what best to do with the rest of their burgeoning wealth. This led Bill Gates, for example, to invest much more into eradicating malaria than into cures for cancer.
We should all be glad that some of us are out there conducting experiments to divert asteroids or drilling Antarctic ice cores to determine the patterns of global climate variation. But we do need a robust institutional and political edifice to process their discoveries and knowledge into meaningful collective action, and to prevent and contain outbreaks of moral panic and fraud based on the propagation of misinformation and the mass manipulation of public sentiment. Effective Altruism clearly “needs more work” to avoid the worst but keep the best.
“I have a problem with people who ascribe equal weight to “is” and “may be”. There may be humans in 25,000AD, or there may not (meteor, anyone?) – but there *are* humans now. That reality, that materialism, seems to matter to me.”
Indeed, this is a large part of the reason why I ended up doing a PhD on modality, the study of necessity, possibility, the contingent, and the actual vs the non-actual.
Steve Elliott
1 year ago
Before I retired I was a computer programmer. One of the guidelines I was given was: when writing a piece of software, avoid adding stuff because you think it might be useful in the future. The reasons are (a) it makes the software harder to write in the first place, and (b) the thing you added almost certainly won’t be required – and, what’s more, it will make it harder to change the software to add the thing you hadn’t thought of first time around but really need now.
I’ve often thought that this rule is more generally applicable.
Also, can I recommend the book “Why Most Things Fail” by Paul Ormerod, which mostly amounts to “The best laid schemes o’ mice an’ men / Gang aft agley”. And I should say that I have no financial or other interest in recommending this book.
It’s good to plan for the future but you have to be flexible because there will surely be something you haven’t thought of.
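For the non-programmers, a deliberately simple sketch of what that guideline looks like in practice (the function and its parameters are hypothetical, purely for illustration):

```python
# Speculative version: options and hooks nobody has asked for yet.
# More to write now, and harder to reshape later around the feature
# you actually end up needing.
def export_report_speculative(rows, fmt="csv", compression=None,
                              encoding="utf-8", plugins=None):
    raise NotImplementedError("flexibility added 'just in case'")

# What the guideline recommends: solve only today's actual requirement.
def export_report(rows):
    """Write the report as CSV -- the only format currently needed."""
    return "\n".join(",".join(str(v) for v in row) for row in rows)

print(export_report([("item", "qty"), ("widgets", 3)]))
```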
Before I retired I worked in architecture firms customizing computer applications for designing buildings. I was constantly on the lookout for the one program that “did it all”. Looking back, I see that it was a fool’s errand. The programming world and the technology advanced too quickly and, more importantly, people didn’t work well – or create well – in a straitjacket designed by a few nerds like me.
Absolutely. I can only assume the US government’s stated intention to “predictably program biology in the same way in which we write software and program computers” was made by people with little to no understanding of writing software and programming computers, let alone “programming” biology.
“I have friends, who shall remain nameless right now, who were part of writing the original OS for Apple. I don’t know where we are right now in Mac OS, but they can’t help themselves and open up the OS anytime a new version comes out and go, “Wow, there are things I wrote when I was a kid, and they weren’t very good, but they’re still in there because the thing can’t run without it.” It’s so rare that a protocol doesn’t get changed.” Hans Zimmer (https://www.kvraudio.com/interviews/a-kvr-interview-with-hans-zimmer-55982).
But, hey, yeah, let’s get this programming embedded into our DNA and cellular molecular structures because it’s so predictable and nothing whatsoever could possibly go wrong.
andy young
1 year ago
Bloody brilliant article. Once again it boils down to bad science: trying to use the scientific method to solve problems that are presently insoluble by that method, and I suspect always will be.
I’m presently reading Plato’s Republic, & it’s scarier than anything M. R. James ever wrote. If it is actually (as described) the cornerstone of Western civilization then we’re screwed.
The Lord preserve us from clever people with all the answers; like it or lump it, true democracy is the only way forward. As Popper (him again) pointed out, it’s at the heart of the proper science that has improved most of our lot beyond all recognition.
Roger Farmer
1 year ago
An excellent piece that explains what is, imo, a flawed philosophy. The argument that we should be utilitarian, in the sense of weighting future human beings equally with current human beings, has an interesting implication for the abortion debate. Since every human being has the potential to produce one or more children, by aborting an unborn child the mother denies life to a potentially infinite number of future human beings. Does this mean that MacAskill would oppose abortion under all circumstances? If not, why not?
Rob N
1 year ago
The road to hell is paved with good intentions.
MacAskill is clearly an inhuman idiot, and maybe a savant as well.
laurence scaduto
1 year ago
MacAskill’s “SPC framework” is a perfect example of a common flaw in the work of those who consider themselves to be logical and scientific. All of the inputs in his formula are his own assumptions. He has no more insight into the effects of, for instance, nuclear war in the 23rd century than I do. (Let’s not forget that less than three years ago Covid was widely assumed to be the end of civilization.)
But that doesn’t stop him from spinning his assumptions into something that pretends to be data.
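As I understand it, the SPC framework scores an outcome by multiplying estimates of its significance, persistence and contingency. A toy sketch (every input an invented guess, which is exactly the commenter’s point) shows how the “data” out is just the assumptions in, multiplied together:

# Toy SPC-style calculation: significance x persistence x contingency.
# All inputs are invented guesses; the numeric output inherits every
# bit of their arbitrariness while looking like data.
def spc_value(significance, persistence_years, contingency):
    return significance * persistence_years * contingency

optimist = spc_value(significance=0.9, persistence_years=10_000, contingency=0.5)
pessimist = spc_value(significance=0.1, persistence_years=100, contingency=0.01)
print(optimist, pessimist)  # 4500.0 vs 0.1: same framework, answers ~45,000x apart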
Isabel Ward
1 year ago
Ironically, isn’t his argument rather a “pro-life” view? I doubt that is something his Silicon Valley supporters would go along with.
Andrew McDonald
1 year ago
MacAskill sounds not unlike Leibniz and his ‘Calculemus!’ –
“The only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate [calculemus], without further ado, to see who is right”.
Karl Schuldes
1 year ago
I think Effective Altruism is a disguised justification for extreme measures for environmentalism. “Sure, billions will be forced into poverty today, but look at the long term benefits.”
Brian Laidd
1 year ago
MacAskill is an intellectual moron. People like him over-intellectualise everything.
Bankman-Fried is a victim of ironic nominative determinism.
The answer is simple. Look after the present and the future will look after itself.
The writer refers to a hypothetical crowd control situation. It’s beyond the scope of my reply here to explain why hypothetical scenarios have no value in the real world.
There is a lot of emphasis on the “effective” side of the equation, but I might also point out that there is really no actual “altruism” from these financial masters of the universe since they definitely want credit for any perceived good deed.
Steve Elliott
1 year ago
Talking of killer toasters: I wish someone would invent an AI toaster that toasts bread perfectly, without burning it or leaving it underdone.
Peter Dennett
1 year ago
My Grandfather always said: “Look after the cents, and the dollars will take care of themselves.”
I think that if we apply this to ourselves and look after our own patch – raising our child(ren) to be decent, capable of looking after themselves and of raising children to do the same – the future will be in good hands.
Aphrodite Rises
1 year ago
It used to be said that if you cannot change yourself, you cannot change the world; and it was assumed that if you could change yourself (for the better), the world would be very slightly improved.
Paul Ashley
1 year ago
“As if the upshot of all this discussion would be a final, ideal system, which the statesmen-philosophers of tomorrow could impose on their unwilling subjects with a clear conscience.”
The wannabe statesmen-philosophers of today, primarily populating the World Economic Forum and similar alphabet NGOs, use just this type of thinking – i.e. the wet dream that they know the distant outcome of their preferred imposed actions – to assuage what little conscience they might have. That their nightmare to-do list for the supposed benefit of the (not so distant) future just so happens to benefit them in the present? Well, “Look! A squirrel!”
Mark epperson
1 year ago
Excellent. I have been of a mind for the last 30 years that the boomers and Xers really believe that everything can be measured, dissected, quantified, and dealt with by algorithms. Basically, pure nerdism. It ain’t really true, though. Just like Eckels accidentally crushing the butterfly, shit happens, and it defies measurement and algorithms. I also have the cynical thought that if the “elite” believe money can be made by pushing these theories, so much the better, whether they are BS or not.
Of course it’s right that not everything can be measured, and people cannot be forced to care, normatively, about this particular thing or that. But how do we fall from that into the author’s simplification, which seems to suggest there is no space for quantitative thinking in ethics?
First, it can have a use when a person, institution or society has already decided what their moral priorities are, because the means we have to address any issue we have decided is important are not only quantifiable but already quantified by us – we all live lives in which we are keenly aware of our use and limited availability of time and money. If helping blind people is the goal I have set for myself, it will then be useful to me to know how I can maximise impact within my time or money budget.
Second, without forcing anyone, I think we can assume there are certain things most people agree are a good idea – like the continuance of civilisation, for example. So trying to rank all the potential existential threats in terms of relative probability – AI, epidemics, asteroids, climate change – and the cost of doing anything about them is then, in fact, useful. I don’t understand the reductionism of saying that if X is not a silver bullet that solves everything, it must be useless.
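To be fair, the within-budget part of that argument really is mechanical once the goal is fixed. A toy sketch, with intervention names and figures entirely invented:

# Toy cost-effectiveness ranking under a fixed budget; the interventions
# and figures below are entirely invented for illustration.
interventions = [
    ("surgery_campaign", 50_000, 1_000),  # (name, cost, people helped)
    ("screening_drive", 20_000, 600),
    ("device_subsidy", 10_000, 150),
]

budget = 60_000
# Greedily fund the best people-helped-per-pound first.
for name, cost, helped in sorted(interventions,
                                 key=lambda x: x[2] / x[1], reverse=True):
    if cost <= budget:
        budget -= cost
        print(f"fund {name}: {helped} people helped for {cost}")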
Su Mac
1 year ago
Interesting essay, thanks, and a nice logical takedown of the thinking errors in long-termism etc. It’s hard to believe how many enormous dead ends human thinking can head down if you give too much attention to any one attention-grabbing set of whiz kids.
Kat Kazak
1 year ago
I like their optimism that “civilization might last millions, billions, or even trillions of years”. Humans are currently depleting the soil and flooding the world with plastic; we really need to focus all of our minds and energy on not causing our own extinction in the next 100 years, instead of thinking about what might happen thousands of years from now. I want what these guys are smoking…
Kirk B
1 year ago
One of the more specious reasons for rejecting the Yucca Mountain site for nuclear waste storage was that in 10,000 years local humans might not be able to read the warning signage.
Yes, the reason may be specious. But we have to consider that a criminal absence of maintenance (those sites, although forsaken, remain important, so you can’t forget about them) or an almost complete loss of civilisation could leave people unable to read those signs. Such things are not completely unlikely, so we have to consider events that could prevent the warning signs being updated in future.
More importantly, nuclear semiotics has to deal with a lot of variables, some of them completely incalculable. We have roughly 5,000 years of recorded history to draw on, and yet the field needs to predict what might happen to humans, or post-humans, over the next 10,000 years, and how to deal with it. Pretty scary, in my opinion, given our limited experience, and a sign of the unbridled ambition and presumption that are two of my major gripes with longtermism. Summing up: nuclear semiotics tells us more about our real present than about any hypothetical future.
For me, there are far better options than the long-term view: phasing out nuclear energy in a transition to clean renewable energy, for instance, or solving the problem of nuclear waste storage in a better way.
William Murphy
1 year ago
I see that Sam Harris is busy defending the concept of Effective Altruism on his latest podcast. He has had both William MacAskill and Sam Bankman-Fried on earlier podcasts, and it is not a good week to defend EA. The above essay is an excellent counterweight to the picture of super-bright young guys earning truckloads of money with the aim of giving it away in a very “rational” manner. What did anyone expect of a guy named “Bankman” who was being publicly praised by ultra-spivs like Bill Clinton and Tony Blair?
I could see the logic behind EA when, a few years ago, I saw a local couple raising £250,000 for experimental treatment for their little daughter. Money flooded in; the daughter got the highly controversial treatment and died shortly afterwards. I saw her funeral procession heading for the cemetery. But wishing that people would give £250,000 as quickly to save 1,000 children in Africa using proven medicines and hygiene is to wish that people were constitutionally different.
Nicky Samengo-Turner
1 year ago
And I thought silicon valley was the first thing one saw when meeting Katie Price?
Kenji Fuse
1 year ago
A little more zen common sense amongst us all would obviate this whole discussion.
Tekyo Pantzov
1 year ago
If what matters is what you have before your eyes, then helping suffering people is more important than preventing situations from arising in which people are bound to suffer. In my opinion, establishing a dense network of birth control clinics across Black Africa is more important than saving people from drowning in the Mediterranean Sea.
Jason Elias
1 year ago
It’s hubris and naivety to think we should actively implement a utopia, even as we strive to improve our lot and that of those to come. Utopia sounds great until you realize it will be populated with humans. Go back and read Seneca or Marcus Aurelius: their take on human nature rings every bit as true today. We haven’t changed, and I don’t suppose our natures will (despite the transhumanists’ attempts). “Utopians” have killed more people in recent memory than one would care to count.
Pascal Bercker
1 year ago
Consider the irony: SBF had appointed MacAskill to be in charge of his “effective altruism” fund. How many people falsely trusted SBF and invested in FTX because of the moral cover that people like MacAskill unwittingly gave him? If MacAskill can’t even predict the accidental harm that he may have helped cause in the immediate present, how can anyone possibly take him seriously when he talks about “long termism” stretching over decades and centuries, and even millions of years? The recently arrested SBF is very likely to spend at least 10 years in prison, and quite possibly up to 50 years because he defrauded so many. Should MacAskill be held at least partly morally liable for accidentally helping to give SBF moral credibility when he in fact had none whatsoever? Many actual existing people will suffer greatly because of this crypto farce. MacAskill has recently disavowed SBF. That’s too little too late. Perhaps he should rethink the very meaning of his own “long term” project when he has so obviously failed the “short term” project of doing no harm in the here and now.
Possibly because they plan to “improve” us through transhumanism.
Agreed.
The so-called ‘virtuous thinking’ and ‘long-termism’ of today are really just symptoms of decadence: high-class problems.
They are the new aristocracy, after all.
As someone with Asperger’s, I’d caution against stereotyping. What you say may be true of some but not all. The social dysfunction symptoms of the autism spectrum are by no means uniform. Personally speaking, I feel almost none of the abstract altruism the article discusses. I’m only affected by suffering I can see in front of me or through a picture, sometimes very powerfully so. I’m very sympathetic to holocaust victims because I’ve seen the pictures of liberated Auschwitz. Whenever the topic comes up I remember those pictures and it makes me angry all over again.
My personal theory is that the social instincts simply don’t work right in my brain, so any emotional chain of motivation that relies on a sense of commonality with a group, religion, culture, or all mankind is utterly meaningless to me. I simply don’t feel the urge to seek out human companionship to nearly the normal degree and I can’t really form the bonds that make people into a tribe, a culture, a town. I can only form individual relationships with people I am in the room with, and then not as well as others. The individual one to one connection is the only one I’ve ever figured out how to make. I can imagine the others and understand the logic behind them, and even pretend to feel them if the situation calls for that, but I don’t really ‘get it’ in the way normal people do.
Incidentally, I’m also more powerfully affected by the suffering of animals and children, especially babies, than adults, perhaps because there is a natural instinct among all mammals, even the non-social ones, to care for small helpless creatures so that we care for our young rather than murder them. The philosophy of effective altruism thus seems absurd on its face to me, the product of our hypersocial modern world and the hypersocial personalities who tend to find success in it, the product of an excess of social feeling rather than a deficiency.
Point taken. Thank you.
“…so any emotional chain of motivation that relies on a sense of commonality with a group, religion, culture, or all mankind is utterly meaningless to me.”
That statement of yours reinforces JB’s remarks about the Silicon Valley crowd.
I wonder how these altruistic people feel about abortion? It would make perfect sense, in today’s twisted world, if they supported the killing of unborn babies today, but espoused policies that would protect people not yet born hundreds of years from now.
I should be more precise. I meant that I don’t consider myself part of any particular group, race, etc. nor do I understand why people feel the need to identify themselves thus. People seem to ‘need’ to be a part of some larger group or purpose, but I have no such feelings. It is alien to me. I can perhaps theorize about evolutionary benefits and reason through the psychology of it, but that’s all. I have very little emotional connection to people other than the most direct and immediate ones, or to certain individuals, but never a group. Abortion is a complicated and difficult issue for any religion or philosophy to contemplate and yes, the possible contradictions you mention are a salient point.
I admire the thoughtful path you’ve taken toward self-awareness, and connect with much of what you’ve written on the larger topic of things-that-are-near and things-that-are-distant, in either a spatial or a temporal sense.
But I don’t think strict long-termism, or any variety of doctrinaire utilitarianism, is likely to result from what you call an ‘excess of social feeling’. I’d estimate the sponsoring impulse to be something closer to idealism or intellectualized empathy: a kind of utopian coping strategy likelier to be found in people who are at once very intelligent and not very socially attuned.
Your caution against stereotyping is well-taken, but Jeremy Bentham was, in our current umbrella-term-happy parlance, almost surely ‘on the spectrum’, and so is MacAskill, in my non-expert opinion, which is largely based on a single long profile lately published in The New Yorker.
https://www.newyorker.com/magazine/2022/08/15/the-reluctant-prophet-of-effective-altruism
I want to avoid labelling or pathologizing viewpoints I disagree with, but I’m not there yet, and my personal sense of utility joined to moral responsibility deems rigid utilitarianism (or consequentialism, long-termism, computational pragmatism, etc.) to be both insufficiently humane and, echoing Ahmed, unrealistic. Any philosophy that can announce: ‘By my calculations I must allow your unjust, preventable death in order to save ten people in the 31st Century’ is a bit too detached, and even just plain wrong.
Thank you for speaking up
Nice comment. I think that, on some level, these guys (from Rousseau and the German idealists on, say) think they can imagine a moral world where human nature can be changed (or changed back) using pure or “better” Reason (remember, even Kant’s moral system was based on one true reasonable “Rule”: the imperative).
This “long-termism” always presumes that humans can figure out how to avoid death and war and anger, so that everyone loves his neighbor in all places at all times. But human nature stubbornly resurfaces in all places at all times, and it’s still difficult to hoodwink God.
I also like the essay.
I am puzzled at the faulty logic in the book.
MacAskill confuses the moral claims of actual people, future people, and potential people. He says “Future people are people too”, which is not correct: actual people currently exist, whereas future people are a conjecture. More seriously, conflating people with potential people has implications. If potential people have a moral claim on us, then abortion and birth control are morally problematic. MacAskill tries to hand-wave this away, but without any rigor. If many people have to suffer in order to create huge numbers of future people, where does that lead us? And why should I care whether there are 10 trillion people rather than 10 billion people in one million years?
Utilitarianism is the basis of EA and longtermism. It was developed in the 18th century to replace religious moral certainty with rational moral certainty, and it is no coincidence that the EA movement is quasi-religious. The idea that any moral action can be measured by a one-dimensional numeric objective function, and that addition and subtraction can be applied to a set of moral actions, is relatively easy to challenge, except to sociopaths. Compare the torture and murder of a child to a permanent cure for acne. One is a horrible moral injury and the other is a real relief of human suffering. But no matter how many people are relieved of the suffering of acne, most people would consider that it could never offset the moral injury to that murdered child. Morality is not one-dimensional. We can apply rough comparisons to moral actions – saving two human lives is better than saving one (in general! one could come up with exceptions) – but there are different categories of morals, good and bad. Morality is not economics, or accounting.
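That one-dimensionality objection can be made concrete in a few lines. Under a purely additive calculus (a deliberately crude sketch; the utility numbers are invented), any finite moral injury is eventually “offset” by enough trivial benefits:

# Deliberately crude additive utility calculus; all numbers invented.
GRAVE_HARM = -1_000_000  # utility assigned to the murdered child
ACNE_CURE = 1            # utility per person permanently relieved of acne

def net_utility(people_cured: int) -> int:
    return GRAVE_HARM + ACNE_CURE * people_cured

print(net_utility(999_999))    # -1: the calculus still says "bad"
print(net_utility(1_000_001))  # +1: now it says "good", which is the objection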
Utilitarianism, especially in its longtermist form, is capable of justifying terrible actions, particularly when used by immature, geeky, self-styled smart (and potentially sociopathic) people. It is no coincidence that SBF was a fan and Elon Musk liked the book. It is easy to skew the utility function, and the probabilities, to suit one’s natural predilections.
I am surprised that such a philosophically weak book was written by a philosophy professor at Oxford. Standards must be slipping!
As an Unherd fan, but also as someone who has been involved in EA at university, I feel compelled to give a more complete picture here.
I have to start with Arif’s conflation of long-termism with the general EA movement. Yes, a lot of EA was built on utilitarian ideas. Yes, utilitarian ideals can lead one to support some version of long-termism. However, this does not mean that EA membership/affiliation/identification necessitates a utilitarian worldview, nor that being EA means you have to subscribe to the long-termist view (or at least not in the way MacAskill recommends).
The core idea of Effective Altruism is that certain altruistic interventions are better than others. EA then implores individuals to think beyond their emotions when selecting a cause to intervene in. It also supports using data and statistics to ensure that better interventions are used in support of that cause.
The movement/philosophy then outlines three useful criteria for identifying causes that one should try to solve. A single problem does not need to fulfil all three criteria but should meet at least one. The problems should be:
- Large in scale (i.e., they affect a large number of people or sentient beings)
- Tractable (i.e., we can solve them)
- Neglected (i.e., the impact of your intervention is more significantly felt by those helped)
(A toy scoring sketch follows below.)
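In practice, EA circles often turn these three criteria into a rough multiplicative score, in the style of 80,000 Hours’ scale/tractability/neglectedness comparisons. A toy sketch, with made-up causes and ratings:

# Toy cause-prioritisation score in the scale x tractability x
# neglectedness style; causes and ratings are made up for illustration.
causes = {
    "cause_a": {"scale": 9, "tractability": 6, "neglectedness": 3},
    "cause_b": {"scale": 5, "tractability": 8, "neglectedness": 9},
    "cause_c": {"scale": 8, "tractability": 3, "neglectedness": 7},
}

def score(c):
    return c["scale"] * c["tractability"] * c["neglectedness"]

for name in sorted(causes, key=lambda n: score(causes[n]), reverse=True):
    print(name, score(causes[name]))
# cause_b (360) beats cause_c (168) and cause_a (162): a smaller but more
# tractable and more neglected problem can come out on top.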
From this, EA arrives at a wide range of causes to support. Causes such as reducing global poverty, malnutrition and hunger are actually its more mainstream ones. Reducing animal suffering by ending factory farming is another mainstream-ish one. Woke ones, such as US criminal justice reform, have also arisen. Long-termism is another cause area, popular among the tech-associated proponents of EA but, I think, still not very popular in the movement itself. And then there are some more fringe ones, like establishing rights for sentient AIs (when they arrive; some believe they are already here) or trying to end all animal suffering in the wild.
So yes one can see that there are some weird EA ideas that float around and again one can clearly see the influence of utilitarianism. However, I believe that the movement has a lot of viewpoint diversity. I have not mentioned all the cause areas but I think it is clear from those mentioned that some may conflict with others. In fact, the tech long-termists often get criticised by people within the movement for choosing future non-existent people over “black and brown people in the Global South.”
There are also different ways people choose to support their selected causes: from Earning-to-Give (quite controversial), to advocacy, to traditional charity work, to building or working in companies or research institutions.
Personally, I was quite drawn to the idea that causes we support should be neglected. One, because I do want my altruistic actions to have a significant impact. But more importantly for me was that neglected ideas had not yet been politicised. Living on a university campus with a strict orthodoxy and censorious atmosphere meant debating ideas was not very welcome. However, with neglected causes, the ideas were generally politically neutral. This meant debate was more welcome and easier to have as there was much less fear about saying incorrect things.
I also think that one can care about the long-term future of humanity while still caring about current social problems. I think reducing existential risk and our exposure to certain identifiable downsides is a good idea (like trying to stop experiments that increase viruses’ virulence and transmissibility).
As a Christian (born and raised) I also felt that EA takes something that those of us who are reasonably well-off should do (i.e., give to and support those in need) and just urges us to be smarter about how we do that. I think many Unherd readers will actually support that idea even if they are not so pro long-termism.
Lastly, I do want to say that the official EA movement has probably always been liberal-leaning (and has more recently embraced a lot of woke behaviours and phraseology). This is a consequence of it being very popular amongst students and white-collar finance and tech people. However, I still believe that anyone can and should embrace the outlook to inform their altruistic actions. If you can help people, why not help them in a better way (if one can be identified)?
The tech and finance “elite” are very cunning and have suckered millions of folks into believing they are green, they care about the climate, diversity, inclusiveness, and all of the other current PC BS slogans. They are the greediest arseholes in history and would sell ALL future generations and probably the past generations for an extra shekel.
An excellent essay by a man with a good sense of ethics, in my opinion. The Bible has an important passage that warns against relying on a projection of the present into the future: “The race is not to the swift or the battle to the strong, but time and chance happeneth to them all”. It is hubris to suppose we know the future on the basis of a limited knowledge of the present.
Interestingly, the woke philosophy so prevalent among academics focuses too exclusively on minor present harms to particular groups, neglecting the more abstract harms involved in suppressing freedom of speech. So the hurt feelings of the rare trans woman on hearing the “wrong” pronoun are prioritised over the general value of the larger population being able to use the biologically correct pronoun. The particular is prioritised over the more general good of the many. This is curiously antithetical to considering the long term.
The author seems to have got tangled up in a pronoun salad of his own: “If each policeman is short-sighted and slow, each additional unit of attention might be better focused on problems that she can effectively address (those in her sector) rather than the ones that she can’t.” Or is “she” now the correct pronoun for a policeman? Surely the word policeman itself is now taboo – shouldn’t it be policeperson? And what does one call a transgender policeperson? I’m confused.
*transgender policeperoffspring.
The problem is that everyone thinks they are doing good in some way – even when the consequences of their actions end up being bad. Pre-war eugenicists thought they were working to improve human beings for the long term. Look where that took us. Communism was a theoretical perfection. Look where that took us. And then look at what we learnt from the ‘badness’ that followed those ideas.
The longer you look to the future the larger the error – too many unforeseen consequences. And if you don’t make the error, you don’t learn to correct for it, or to take it into account later.
The future is another world. It’s best we try to resolve what we can now for the people we are now.
I may be pummeled for being off base, but I see a connection to CRT/Frankfurt School teachings. My son was a disciple of CRT in high school while studying debate. (All the kids are enthusiastic disciples.) I had only associated the word “continental” with a light breakfast, so I had no idea about any of it. To remedy my ignorance, I started listening to old lectures on the Frankfurt School. One of the best was Marcuse being interviewed by Bryan Magee: https://youtu.be/0KqC1lTAJx4. I found Marcuse to be an elitist with a disdain for women – women were only a construct, but female ways were useful in the short run to further spread the doctrine. (Sort of a riff off Pygmalion/My Fair Lady.) I find MacAskill’s theory fits nicely within that philosophy – though I understand there are many derivatives of it and I have only a basic understanding. But for me, CRT/Frankfurt boils down to planning – bureaucratic planning at that, with rigid long-term goals – as opposed to individuals making the best decisions for themselves with the knowledge they possess. The latter seems to allow more easily for pivoting if harm is produced.
It sometimes feels like we are moving towards a Spanish Inquisition-style system where everything is controlled and dictated “for the good of humanity”, and dissent must be removed. The contrast, of course, was with the English system of laissez-faire and muddling through. The difference was felt through the Enlightenment – for instance, try to name some Spanish scientists or technical breakthroughs from that age.
Yes but nobody expects the Spanish Inquisition
Everyone, however, expects the liberal inquisition.
Whilst I agree that the Frankfurt School bears some blame for what is currently happening, it is really not until deconstructionist and post-structural philosophy took hold in European universities that the rot set in. In particular, this school eschewed discussion and debate, which they believed was just another form of power politics (they, of course, ignored the fact that non-debate is itself a form of power wielding). For some reason this tosh generally remained inside universities within European countries, but when it was exported to the USA it took off, and then it spread to the UK. The danger it poses is that of no debate, which leads to the silencing of dissenting voices. I don’t care what nonsense they spout (I’ve heard it at many philosophy seminars), but I want to be able to disagree vocally and in print.
“The future is another world. It’s best we try to resolve what we can now for the people we are now.”
As Jordan Peterson cautions, tidy up your own room before trying to change the world.
The plans of the long termists surely have to be based on the work of theorists and modellers. They never get anything wrong, do they?
Yes indeed. As the saying goes, “Predictions are difficult, especially about the future”.
Obviously, one of the many problems with his thesis is the idea that we can weigh our future using our current values of that future, plugged into a formula. Indeed, if that formula is out a bit now, it might be out rather a lot in 400,000 years or more!
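A back-of-the-envelope sketch of how “out a bit” becomes “out rather a lot” (my numbers, chosen only to show the compounding):

```python
# A 0.1-percentage-point error in an assumed annual rate, compounded.
# Even over 1,000 years (never mind 400,000) the projection diverges badly.

def compound(rate: float, years: int) -> float:
    return (1.0 + rate) ** years

YEARS = 1_000
assumed, actual = 0.020, 0.021  # hypothetical assumed vs. true annual rates

error_factor = compound(actual, YEARS) / compound(assumed, YEARS)
print(f"Projection off by a factor of {error_factor:.1f} after {YEARS} years")
# Projection off by a factor of ~2.7 after 1000 years
```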
I’m an admirer of Hayek and his whole point is that smartypantses should stop thinking that they know how to fix stuff.
Innit?
The idea is breathtaking in the scope of its hubris.
“Knowledge puffs up, but love edifies. And if anyone thinks that he knows anything, he knows nothing yet as he ought to know.” (1 Corinthians 8:1-2)
“As a father pities his children, so the LORD pities those who fear Him. For He knows our frame; He remembers that we are dust.” (Psalm 103:13-14)
“We need to develop genetic engineering technologies and techniques to be able to write circuitry for cells and predictably program biology in the same way in which we write software and program computers.” (US Government, Sept 2022)
“It would therefore be worth spending many centuries to ensure that we’ve really figured things out…” (William MacAskill, quoted above)
“We know so much, and we understand so little. Lord, guide us; Lord, graciously guide us.” (Me, with monotonous frequency for the last few years)
A few other verses that spring to mind:
“Therefore when thou doest thine alms, do not sound a trumpet before thee, as the hypocrites do in the synagogues and in the streets, that they may have glory of men. Verily I say unto you, They have their reward.” Matthew 6:2
“For our light affliction, which is but for a moment, worketh for us a far more exceeding and eternal weight of glory; while we look not at the things which are seen, but at the things which are not seen: for the things which are seen are temporal; but the things which are not seen are eternal.” 2 Corinthians 4:17-18
“Let not mercy and truth forsake thee: bind them about thy neck; write them upon the table of thine heart: So shalt thou find favour and good understanding in the sight of God and man. Trust in the Lord with all thine heart; and lean not unto thine own understanding. In all thy ways acknowledge him, and he shall direct thy paths.” Proverbs 3:3-6
The Bible is a fine poem and contains much wisdom.
It is too easily dismissed because we think we are clever.
“It would therefore be worth spending many centuries to ensure that we’ve really figured things out…”
MacAskill has lived for 35 years, five of which he most likely can’t remember. After that small amount of time he claims to be able to calculate what to do now to beneficially affect the world hundreds of years hence. He then tells us to spend many centuries figuring it out. Well, which is it? Use his formula, developed after 35 years of sentience, or use the wisdom of centuries, i.e. what common sense tells us to do?
THIS. 10,000 stars, wholly agree. It confuses me why billionaires think going to Mars is more important than focusing on those right in front of us: the homeless, addicted, abandoned, oppressed, uneducated and sick. Improving our own weaknesses and faults. Growing in virtue. Maslow’s hierarchy of needs is tipped upside down to feed their egos. Most of humanity will never go to Mars. But their lives will be improved by clean water, good books, vaccines and safety from violence.
Let’s do the plain “boring” things well and go to Mars later. How about supporting the amazing caregivers of our Alzheimer’s population, or parents who sacrifice their lives for medically fragile/autistic children?
An epidemic of mental illness and opioid addiction, a lack of psychologists and social workers in society, and Elon and Jeff want to go to Mars. How ludicrous. How about paying every teen in society to exercise for a year and seeing if their sense of self-worth/control/mental health improves?
Anyone who thinks humans can alter the climate on earth will think they can impact human lives 1000 years from now. It most certainly is hubris.
Before we get to the long term we have to survive the short term. For instance, Europe has to get through the coming winter without freezing to death, because of the lack of Russian gas, and because the Dutch farmers have been well and truly sorted out for adding to the 780,000 parts per million of nitrogen in the atmosphere.
The way things are going there may not be a long term for the human race.
“Educated beyond all decency and common sense.”
I have a little education in mathematics. It’s scary: I know just enough to realise how much mathematics I don’t know.
Education should destroy hubris but sadly, it seems to do quite the opposite.
So refreshing to read an informed response to MacAskill’s work. I found his book very interesting, but basically disagreed with some fundamental premises – Mr Ahmed articulates some of this much better than I ever could! To twist Hume, I have a problem with people who ascribe equal weight to “is” and “may be”. There may be humans in 25,000AD, or there may not (meteor, anyone?) – but there *are* humans now. That reality, that materialism, seems to matter to me.

The other thought I have had derives from my financial background. The applied ethicists seem to me to be using a woefully wrong discount rate in their models (or, on the evidence of some talks, no discount rate at all). This ends up overvaluing the welfare of future people relative to the people of today. For example, a 5% discount rate weights the utility of one person today as equal to about 3.4 people in 25 years’ time, while a 20% discount rate makes one person today worth roughly 95 people in 25 years’ time. Even at a low 5% discount rate, the model suggests a unit of utility today is worth around 1.5 × 10^21 units of utility in 1,000 years’ time.

A similar but subtly different argument could also be made for distance, again from a perspective of materialism. Digital connectivity makes the world seem small, but in many ways our good deeds can be much more impactful for being local, and create better positive externalities via network effects (I treat local people/adjacent nodes well, who treat their adjacent nodes well, who treat their nodes well, and so on). Those network effects and positive externalities are lost when you try to jump 50 nodes to a country and people thousands of miles away, ignoring those physically closest to you. Anyway, thanks UnHerd, really enjoyed this.
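For anyone who wants to check those figures, the discounting arithmetic is a one-liner (the rates are the comment’s own examples; standard compound discounting is assumed):

```python
# Compound discount factors: how many future people's utility one present
# person's utility equals, under a given annual discount rate.

def future_equivalent(rate: float, years: int) -> float:
    return (1.0 + rate) ** years

print(f"{future_equivalent(0.05, 25):.1f}")    # ~3.4  (5% over 25 years)
print(f"{future_equivalent(0.20, 25):.1f}")    # ~95.4 (20% over 25 years)
print(f"{future_equivalent(0.05, 1000):.2e}")  # ~1.55e+21 (5% over 1,000 years)
```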
The work being done to protect against meteor strikes is a good example of long-termist thinking! Of course it is an example where we do have an understanding of the threat and some idea of what needs to be done to combat it.
Exactly. This article should be read as an argument against misdirected long-termism, not against valid long-termism.
In our world of hyper division of labour and specialisation some of us go to work each day to look, accurately, billions of years into the past and into the future. But at the end of the day, we all come home to the much smaller orbits of familiar concerns dominated by those nearest and dearest to us.
Unlike Mrs Jellyby, billionaires have typically more than catered for the needs of their circle of intimates, and they rightly feel the need to work out what best to do with the rest of their burgeoning wealth. This led Bill Gates, for example, to invest much more into eradicating malaria than into cures for cancer.
We should all be glad that some of us are out there conducting experiments to divert asteroids or drilling Antarctic ice cores to determine the patterns of global climate variation. But we do need a robust institutional and political edifice to process their discoveries and knowledge into meaningful collective action, and to prevent and contain the outbreak of moral panics and frauds based on the propagation of misinformation and the mass manipulation of public sentiment. Effective Altruism clearly “needs more work” to avoid the worst but keep the best.
Billy Gates is looking out for himself. His so-called philanthropic efforts have wreaked much harm.
“I have a problem with people who ascribe equal weight to “is” and “may be”. There may be humans in 25,000AD, or there may not (meteor, anyone?) – but there *are* humans now. That reality, that materialism, seems to matter to me.”
Indeed, this is a large part of the reason why I ended up doing a PhD on modality, the study of necessity, possibility, the contingent, and the actual vs the non-actual.
Before I retired I was a computer programmer. One of the guidelines I was given was: when writing a piece of software, avoid adding stuff because you think it might be useful in the future. The reasons are A) it makes the software harder to write in the first place, and B) the thing you added almost certainly won’t be required and, what’s more, will make it harder to change the software later to add the thing you hadn’t thought of first time around but really need now.
I’ve often thought that this rule is more generally applicable.
Also can I recommend the book “Why most things fail” by Paul Ormerod which mostly amounts to “The best laid schemes o’ mice an men gang aft agley”. And I should say that I have no financial or other interest in recommending this book.
It’s good to plan for the future but you have to be flexible because there will surely be something you haven’t thought of.
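That guideline is sometimes called YAGNI (“you aren’t gonna need it”). A minimal illustration, with hypothetical function names of my own:

```python
# Speculative version: knobs added "because they might be useful one day".
# Each unused parameter still has to be documented, tested and maintained.
def save_report(data, fmt="csv", compression=None, cloud_backend=None,
                encryption_key=None, shard_count=1):
    ...

# YAGNI version: does today's job; extend it only when a real need appears.
def save_report_simple(data, path):
    with open(path, "w") as f:
        f.write("\n".join(",".join(map(str, row)) for row in data))
```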
Before I retired I worked in architecture firms, customizing computer applications for designing buildings. I was constantly on the lookout for the one program that “did it all”. Looking back, I see that it was a fool’s errand. The programming world and the technology advanced too quickly and, more importantly, people didn’t work well – or create well – in a straitjacket designed by a few nerds like me.
Or, as the brick that subsequently hit Father Ted on the head & had been on the accelerator of the flying milk float was labelled:
“Shit happens”
Absolutely. I can only assume the US government’s stated intention to “predictably program biology in the same way in which we write software and program computers” was made by people with little to no understanding of writing software and programming computers, let alone “programming” biology.
“I have friends, who shall remain nameless right now, who were part of writing the original OS for Apple. I don’t know where we are right now in Mac OS, but they can’t help themselves and open up the OS anytime a new version comes out and go, “Wow, there are things I wrote when I was a kid, and they weren’t very good, but they’re still in there because the thing can’t run without it.” It’s so rare that a protocol doesn’t get changed.” Hans Zimmer (https://www.kvraudio.com/interviews/a-kvr-interview-with-hans-zimmer-55982).
But, hey, yeah, let’s get this programming embedded into our DNA and cellular molecular structures because it’s so predictable and nothing whatsoever could possibly go wrong.
Bloody brilliant article. Once again it boils down to bad science. Trying to use scientific method to solve problems that are presently insoluble by that method & I suspect never will be.
I’m presently reading Plato’s Republic, & it’s scarier than anything M. R. James ever wrote. If it is actually (as described) the cornerstone of Western civilization then we’re screwed.
The Lord preserve us from clever people with all the answers; like it or lump it, true democracy is the only way forward – as Popper (him again) pointed out it’s at the heart of the proper science that has improved most of our lot beyond all recognition.
An excellent piece that explains what is, imo, a flawed philosophy. The argument that we should be utilitarian, in the sense of weighting future human beings equally with current human beings, has an interesting implication for the abortion debate. Since every human being has the potential to produce one or more children, by aborting an unborn child the mother denies life to a potentially infinite number of future human beings. Does this mean that MacAskill would oppose abortion under all circumstances? If not, why not?
The road to hell is paved with good intentions.
MacAskill is clearly an inhuman idiot, and maybe a savant as well.
MacAskill’s “SPC framework” is a perfect example of a common flaw in the work of those who consider themselves to be logical and scientific. All of the inputs in his formula are his own assumptions. He has no more insight into the effects of, for instance, nuclear war in the 23rd century than I do. (Let’s not forget that less than three years ago Covid was widely assumed to be the end of civilization.)
But that doesn’t stop him from spinning his assumptions into something that pretends to be data.
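A sketch of that objection in code. The multiplication mirrors the significance–persistence–contingency structure as I understand it from the essay; every input below is an arbitrary assumption, which is exactly the commenter’s point:

```python
# SPC-style back-of-the-envelope: value = significance x persistence x
# contingency. Nudge the "plausible" assumptions and the verdict swings
# by orders of magnitude -- assumptions in, assumptions out.

def spc_value(significance: float, persistence_years: float,
              contingency: float) -> float:
    return significance * persistence_years * contingency

optimist = spc_value(significance=10.0, persistence_years=1e6, contingency=0.5)
pessimist = spc_value(significance=0.1, persistence_years=1e3, contingency=0.01)
print(f"Same question, answers {optimist / pessimist:,.0f}x apart")  # 5,000,000x
```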
Ironically, isn’t his argument rather a “pro-life” view? I doubt that is something his Silicon Valley supporters would go along with.
MacAskill sounds not unlike Leibniz and his ‘Calculemus!’ –
“The only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate [calculemus], without further ado, to see who is right”.
I think Effective Altruism is a disguised justification for extreme measures for environmentalism. “Sure, billions will be forced into poverty today, but look at the long term benefits.”
MacAskill is an intellectual moron. People like him over-intellectualise everything.
Bankman-Fried is a victim of ironic nominative determinism.
The answer is simple. Look after the present and the future will look after itself.
The writer refers to a hypothetical crowd control situation. It’s beyond the scope of my reply here to explain why hypothetical scenarios have no value in the real world.
There is a lot of emphasis on the “effective” side of the equation, but I might also point out that there is really no actual “altruism” from these financial masters of the universe since they definitely want credit for any perceived good deed.
Talking of killer toasters: I wish someone would invent an AI toaster that toasts bread perfectly, without burning it or leaving it underdone.
Now that would be something worthwhile.
My grandfather always said, “Look after the cents, and the dollars will take care of themselves.”
I think that if we bring this analogy to ourselves and look after our own patch, raise our child(ren) to be decent, capable of looking after themselves and raising children to do the same, the future will be in good hands.
It used to be said that if you cannot change yourself, you cannot change the world; and it was assumed that if you could change yourself (for the better), the world would be very slightly improved.
“As if the upshot of all this discussion would be a final, ideal system, which the statesmen-philosophers of tomorrow could impose on their unwilling subjects with a clear conscience.”
The wannabe statesmen-philosophers of today, primarily populating the World Economic Forum and similar alphabet NGOs, use just this type of thinking – i.e. the wet dream that they know the distant outcome of their preferred imposed actions – to assuage what little conscience they might have. That their nightmare to-do list for the supposed benefit of the (not so distant) future just so happens to benefit them in the present? Well, “Look! A squirrel!”.
Excellent. I have been of a mind for the last 30 years that the Boomers and Xers really believe that everything can be measured, dissected, quantified, and dealt with by algorithms. Basically, pure nerdism. Ain’t really true, though. Just like Eckels accidentally crushing the butterfly, shit happens, and it defies measurement and algorithms. Also, I have a cynical thought that if the “elite” believe money can be made by pushing these theories, so much the better, whether they are BS or not.
Of course it’s right that not everything can be measured, and people cannot be forced to care, normatively, about this particular thing or that. But how do we fall from that into the author’s simplification, which seems to suggest there is no space for quantitative thinking in ethics at all?

First, it can be useful once a person, institution or society has already decided what their moral priorities are, because the means we have to address any issue we have decided is important are not only quantifiable but already quantified by us – we all live lives in which we are keenly aware of our limited time and money. If helping blind people is the goal I have set for myself, it is then useful to know how I can maximise impact within my time or money budget.

Second, without forcing anyone, I think we can assume there are certain things most people agree are a good idea – like the continuance of civilisation, for example. So trying to rank all the potential existential threats – AI, epidemics, asteroids, climate change – in terms of relative probability, and the cost of doing anything about them, is in fact useful. I don’t understand the reductionism of saying that if X is not a silver bullet that solves everything, then it must be useless.
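For what it’s worth, the ranking exercise this comment describes is mechanically trivial; the hard part is the inputs. A sketch in which every figure is invented purely to show the mechanics:

```python
# Crude expected-value triage of hypothetical existential threats:
# probability-this-century x harm, per unit of mitigation cost.
# All numbers below are made up solely for illustration.

threats = {
    "asteroid": {"prob": 1e-6, "harm": 1e10, "cost": 1e9},
    "pandemic": {"prob": 1e-2, "harm": 1e8,  "cost": 1e10},
    "rogue AI": {"prob": 1e-3, "harm": 1e10, "cost": 1e11},
}

def score(t: dict) -> float:
    return t["prob"] * t["harm"] / t["cost"]

for name, t in sorted(threats.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name:8s}  harm averted per unit cost: {score(t):.1e}")
```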
Interesting essay, thanks, and a nice logical takedown of the thinking errors in longtermism etc. Hard to believe how many enormous dead ends human thinking can head down if you give too much attention to any one attention-grabbing set of whiz kids.
I like their optimism that “civilization might last millions, billions, or even trillions of years”. Humans are currently depleting soil and flooding the world with plastic; we really need to focus all of our minds and energy on not causing our own extinction in the next 100 years, instead of thinking about what might happen thousands of years from now. I want what these guys are smoking…
One of the more specious reasons for rejecting the Yucca Mountain site for nuclear waste storage was that in 10,000 years local humans might not be able to read the warning signage.
Yes, the reason may be specious. But we have to consider that a criminal absence of maintenance – those sites, although forsaken, remain important, so you can’t simply forget about them – or an almost complete collapse of civilisation could leave people unable to read those signs. Such events are not completely unlikely, so we do have to consider what could prevent the warning signs from being updated in the future.
More importantly, nuclear semiotics has to deal with a lot of variables, some of them completely incalculable. We can draw on roughly 5,000 years of recorded history, and yet we are asked to predict what might happen to humans, or post-humans, over the next 10,000 years, and how to deal with it. Pretty scary, in my opinion, considering our limited experience: a sign of the unbridled ambition and presumption that are two of my major gripes with longtermism. Summing up, nuclear semiotics tells us more about our real present than about a hypothetical future.
For me, there are far better options than taking such a long-term view: for instance, phasing out nuclear energy and transitioning to clean renewable energy, or solving the problem of nuclear waste storage in a better way.
I see that Sam Harris is busy defending the concept of Effective Altruism on his latest podcast. He has had both William MacAskill and Sam Bankman-Fried on earlier podcasts, and it is not a good week to defend EA. The above essay is an excellent counterweight to the picture of super-bright young guys earning truckloads of money with the aim of giving it away in a very “rational” manner. What did anyone expect of a guy named “Bankman” who was being publicly praised by ultra-spivs like Bill Clinton and Tony Blair?
I can see the logic behind EA. A few years ago, I saw a local couple raising £250,000 for experimental treatment for their little daughter. Money flooded in; the daughter got the highly controversial treatment and died shortly afterwards. I saw her funeral procession heading for the cemetery. But wishing that people would give £250,000 as quickly to save 1,000 children in Africa using proven medicines and hygiene is to wish that people were constitutionally different.
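The arithmetic behind that comparison is exactly the kind of calculation EA runs on (the £250-per-child figure is merely what the comment’s own numbers imply, not a real charity’s price):

```python
# Cost-effectiveness contrast implicit in the comment above.
# All figures are illustrative.
donation = 250_000           # pounds raised for one experimental treatment
cost_per_child_proven = 250  # hypothetical cost of proven medicine/hygiene

print(donation // cost_per_child_proven)  # 1000 children for the same money
```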
and I thought silicon valley was the first thing one saw when meeting Katie Price?
A little more zen common sense amongst us all would obviate this whole discussion.
If what matters is what you have before your eyes, then helping suffering people is more important than preventing situations from arising in which people are bound to suffer. In my opinion, establishing a dense network of birth control clinics across Black Africa is more important than saving people from drowning in the Mediterranean Sea.
It’s hubris and naivety to think we should actively implement a utopia, even as we strive to improve our lot and that of those to come. Utopia sounds great, until you realize it will be populated with humans. Go back and read Seneca or Marcus Aurelius: their take on human nature rings every bit as true today. We haven’t changed, and I don’t suppose our natures will (despite the transhumanists’ attempts). “Utopians” have killed more people in recent memory than one would care to count.
Consider the irony: SBF had appointed MacAskill to be in charge of his “effective altruism” fund. How many people falsely trusted SBF and invested in FTX because of the moral cover that people like MacAskill unwittingly gave him? If MacAskill can’t even predict the accidental harm that he may have helped cause in the immediate present, how can anyone possibly take him seriously when he talks about “long termism” stretching over decades and centuries, and even millions of years? The recently arrested SBF is very likely to spend at least 10 years in prison, and quite possibly up to 50 years because he defrauded so many. Should MacAskill be held at least partly morally liable for accidentally helping to give SBF moral credibility when he in fact had none whatsoever? Many actual existing people will suffer greatly because of this crypto farce. MacAskill has recently disavowed SBF. That’s too little too late. Perhaps he should rethink the very meaning of his own “long term” project when he has so obviously failed the “short term” project of doing no harm in the here and now.