
Below you will find some critiques of longtermism (and a few of Effective Altruism more generally), along with excerpts from those critiques. All titles are hyperlinked. More will be added as they are published. If we have missed anything, please email philosophytorres@gmail.com. (24 links below.)

The Toxic Ideology of Longtermism

By Alice Crary. Radical Philosophy. Published in April 2023.

Excerpt: Longtermism’s sins are different and more ominous, but there are points of convergence. Longtermism deflects from EA’s wonted attention to current human and animal suffering. It defends in its place a concern for the wellbeing of the potentially trillions of humans who will live in the long-term future, and, taking the sheer number of prospective people to drown out current moral problems, exhorts us to regard threats to humanity’s continuation as a moral priority, if not the moral priority. This makes longtermists shockingly dismissive of "non-existential" hazards that may result in the suffering and death of huge numbers in the short term if, as they see it, there is a reasonable probability that the hazards are consistent with the possibility of a far greater number of humans going on to flourish in the long term.

Against Longtermism

By Émile P. Torres. Aeon. Published on October 19, 2021.

 

Excerpt: We can now begin to see how longtermism might be self-defeating. Not only could its ‘fanatical’ emphasis on fulfilling our longterm potential lead people to, eg, neglect non-existential climate change, prioritise the rich over the poor and perhaps even ‘justify’ pre-emptive violence and atrocities for the ‘greater cosmic good’ but it also contains within it the very tendencies—Baconianism, capitalism and value-neutrality—that have driven humanity inches away from the precipice of destruction. Longtermism tells us to maximise economic productivity, our control over nature, our presence in the Universe, the number of (simulated) people who exist in the future, the total amount of impersonal ‘value’ and so on. But to maximise, we must develop increasingly powerful—and dangerous—technologies; failing to do this would itself be an existential catastrophe. Not to worry, though, because technology is not responsible for our worsening predicament, and hence the fact that most risks stem directly from technology is no reason to stop creating more technology. Rather, the problem lies with us, which means only that we must create even more technology to transform ourselves into cognitively and morally enhanced posthumans.

Understanding "Longtermism": Why this Suddenly Influential Philosophy Is So Toxic

By Émile P. Torres. Salon. Published on August 20, 2022.

Excerpt: It makes sense that such individuals would buy into the quasi-religious worldview of longtermism, according to which the West is the pinnacle of human development, the only solution to our problems is more technology and morality is reduced to a computational exercise ("Shut up and multiply!"). One must wonder, when MacAskill implicitly asks "What do we owe the future?" whose future he's talking about. The future of indigenous peoples? The future of the world's nearly 2 billion Muslims? The future of the Global South? The future of the environment, ecosystems and our fellow living creatures here on Earth? I don't think I need to answer those questions for you. ... If the future that longtermists envision reflects the community this movement has cultivated over the past two decades, who would actually want to live in it?

The Hinge of History

By Peter Singer. Project Syndicate. Published on October 8, 2021.

 

Excerpt: The dangers of treating extinction risk as humanity’s overriding concern should be obvious. Viewing current problems through the lens of existential risk to our species can shrink those problems to almost nothing, while justifying almost anything that increases our odds of surviving long enough to spread beyond Earth. ... But, as [Émile P.] Torres has pointed out, viewing current problems—other than our species’ extinction—through the lens of "longtermism" and "existential risk" can shrink those problems to almost nothing, while providing a rationale for doing almost anything to increase our odds of surviving long enough to spread beyond Earth. Marx’s vision of communism as the goal of all human history provided Lenin and Stalin with a justification for their crimes, and the goal of a "Thousand-Year Reich" was, in the eyes of the Nazis, sufficient reason for exterminating or enslaving those deemed racially inferior.

 

 

The Heavy Price of Longtermism

By Alexander Zaitchik. New Republic. Published on August 24, 2022.

 

Excerpt: Midway through What We Owe the Future, MacAskill acknowledges that all theories of population ethics have “some unintuitive or unappealing implications.” But he does not go into quite the detail that other longtermist thinkers—found throughout the book’s body and footnotes—have in their own publications and interviews. Bostrom has concluded that, given a 1 percent chance of quadrillions of people existing in the theoretical future, “the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth one hundred billion times as much as a billion human lives.” Nick Beckstead, in an influential 2013 longtermist dissertation, discusses how this fact calls for reexamining “ordinary enlightened humanitarian standards.” If future beings contain exponentially more “value” than living ones, reasons Beckstead, and if rich countries drive the innovation needed to bring about their existence, “it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.” Hilary Greaves has likewise acknowledged that longtermist logic clearly, if sometimes unfortunately, points away from things that once seemed ethically advisable, such as “transferring resources from the affluent western world to the global poor.”

The Dangerous Ideas of "Longtermism" and "Existential Risk"

By Émile P. Torres. Current Affairs. Published on July 28, 2021.

 

Excerpt: In the same paper, Bostrom declares that even "a non-existential disaster causing the breakdown of global civilization is, from the perspective of humanity as a whole, a potentially recoverable setback," describing this as "a giant massacre for man, a small misstep for mankind." That’s of course cold comfort for those in the crosshairs of climate change—the residents of the Maldives who will lose their homeland, the South Asians facing lethal heat waves above the 95-degree F wet-bulb threshold of survivability, and the 18 million people in Bangladesh who may be displaced by 2050. But, once again, when these losses are juxtaposed with the apparent immensity of our longterm "potential," this suffering will hardly be a footnote to a footnote within humanity’s epic biography.

 

Climate Crisis and the Dangers of Tech-Obsessed "Long-Termism"

By Rupert Read. The Conversation. Published on February 17, 2022.

 

Excerpt: If you think that Torres and I are exaggerating, here is an example. Oxford academic and leading "long-termist" Nick Bostrom proposes that everyone should permanently wear an Orwellianly-named "freedom tag": a device that would monitor everything that you do, 24/7 for the remainder of your life to guard against the minuscule possibility that you might be part of a plot to destroy humanity. ... This might sound like satire. When I first read Bostrom’s piece, I assumed he was proposing the "freedom tag" idea for rhetorical effect only, or something like that. But no – he means it quite seriously. ... And here’s the real trouble: these long-termists, in backing to the hilt the idea of a big-tech, industry-heavy future appear to be calling for much more of the very things that have brought us to this desperate ecological situation.

Selling "Longtermism": How PR and Marketing Drive a Controversial New Movement

By Émile P. Torres. Salon. Published on September 10, 2022.

Excerpt: Although the longtermists do not, so far as I know, describe what they're doing this way, we might identify two phases of spreading their ideology: Phase One involved infiltrating governments, encouraging people to pursue high-paying jobs to donate more for the cause and wooing billionaires like Elon Musk — and this has been wildly successful. ... Phase Two is what we're seeing right now with the recent media blitz promoting longtermism, with articles written by or about William MacAskill, longtermism's poster boy, in outlets like the New York Times, the New Yorker, the Guardian, BBC and TIME. Having spread their influence behind the scenes over the many years, members and supporters are now working overtime to sell longtermism to the broader public in hopes of building their movement, as "movement building" is one of the central aims of the community. ... But buyer beware: The EA community, including its longtermist offshoot, places a huge emphasis on marketing, public relations and "brand-management," and hence one should be very cautious about how MacAskill and his longtermist colleagues present their views to the public.

Defective Altruism

By Nathan Robinson. Current Affairs. Published on September 19, 2022.

Excerpt: [T]he biggest difference between “longtermism” and old-fashioned “caring about what happens in the future” is that longtermism is associated with truly strange ideas about human priorities that very few people could accept. Longtermists have argued that because we are (on a utilitarian theory of morality) supposed to maximize the amount of well-being in the universe, we should not just try to make life good for our descendants, but should try to produce as many descendants as possible. This means couples with children produce more moral value than childless couples, but MacAskill also says in his new book What We Owe The Future that “the practical upshot of this is a moral case for space settlement.” And not just space settlement. Nick Bostrom, another EA-aligned Oxford philosopher whose other bad ideas I have criticized before, says that truly maximizing the amount of well-being would involve the “colonization of the universe,” and using the resulting Lebensraum to run colossal numbers of digital simulations of human beings. You know, to produce the best of all possible worlds. 

Why I Am Not a Longtermist

By Boaz Barak. Windows on Theory. Published on May 23, 2022.

Excerpt: Physicists know that there is no point in writing a measurement up to 3 significant digits if your measurement device has only one-digit accuracy. Our ability to reason about events that are decades or more into the future is severely limited. At best, we could estimate probabilities up to an order of magnitude, and even that may be optimistic. Thus, claims such as Nick Bostrom’s, that "the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives" make no sense to me.  This is especially the case since these "probabilities" are Bayesian, i.e., correspond to degrees of belief. If, for example, you evaluate the existential-risk probability by aggregating the responses of 1000 experts, then what one of these experts had for breakfast is likely to have an impact larger than 0.001 percent (which, according to Bostrom, would correspond to much more than 10²⁰ human lives). To the extent we can quantify existential risks in the far future, we can only say something like "extremely likely," "possible," or "can’t be ruled out." Assigning numbers to such qualitative assessments is an exercise in futility. … I cannot justify sacrificing current living humans for abstract probabilities.
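
A note on the arithmetic implied by the Bostrom claim quoted above, reconstructed from the numbers in the excerpt alone (a rough sketch; the implied future-population figure is an inference, not something stated in the quote):

One billionth of one billionth of one percentage point = 10^-9 × 10^-9 × 10^-2 = 10^-20 as a probability.
A hundred billion times a billion lives = 10^11 × 10^9 = 10^20 lives.
Implied expected future population ≥ 10^20 ÷ 10^-20 = 10^40 lives.
So the 0.001 percent (10^-5) shift Barak mentions would correspond to at least 10^-5 × 10^40 = 10^35 expected lives, far more than the 10^20 he cites.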

Why "Longtermism" Isn't Ethically Sound

By Christine Emba. Washington Post. Published on September 5, 2022.

Excerpt: As much as the effective altruist community prides itself on evidence, reason and morality, there’s more than a whiff of selective rigor here. The turn to longtermism appears to be a projection of a hubris common to those in tech and finance, based on an unwarranted confidence in its adherents’ ability to predict the future and shape it to their liking. It suggests that playing games with probability (what is the expected value calculus of taming a speculative robot overlord?) is more important than helping those in the here-and-now, and that top-down solutions trump collective systems that respond to real people’s preferences. ... To be even more cynical: Longtermism seems tailor-made to allow tech, finance and philosophy elites to indulge their anti-humanistic tendencies while patting themselves on the back for their intelligence and superior IQs. The future becomes a clean slate onto which longtermists can project their moral certitude and pursue their techno-utopian fantasies, while flattering themselves that they are still "doing good."

Towards Ineffective Altruism

By Hal Triedman, edited by Archana Ahlawat. Reboot. Published on May 22, 2022.

Excerpt: Longtermism and existential risk are particularly influential ideologies among those who made fortunes in technology and in elite institutions. Elon Musk has cited the work of Nick Bostrom (who coined the term existential risk in 2002) and has donated millions to the Future of Humanity Institute and Future of Life Institute, sister organizations based out of Oxford. Jaan Tallinn, a founder of Skype worth an estimated $900 million in 2019, also cofounded the Centre for the Study of Existential Risk at Cambridge, and has donated more than a million dollars to the Machine Intelligence Research Institute (MIRI). Vitalik Buterin, a cofounder of the Ethereum cryptocurrency, has donated extensively to MIRI as well. Peter Thiel, the radical libertarian donor, early Trump supporter, and funder of JD Vance’s Ohio Senate campaign, delivered the keynote address at the 2013 Effective Altruism summit.

The Washout Argument Against Longtermism

By Eric Schwitzgebel. The Splintered Mind. Published on August 23, 2022.

Excerpt: Here's another argument: Longtermists like MacAskill and Toby Ord typically think that these next few centuries are an unusually crucial time for our species -- a period of unusual existential risk, after which, if we safely get through, the odds of extinction fall precipitously. (This assumption is necessary for their longtermist views to work, since if every century carries an independent risk of extinction of, say, 10%, the chance is vanishingly small that our species will survive for millions of years.) What's the best way to tide us through these next few especially dangerous centuries? Well, one possibility is a catastrophic nuclear war that kills 99% of the population. The remaining 1% might learn the lesson of existential risk so well that they will be far more careful with future technology than we are now. If we avoid nuclear war now, we might soon develop even more dangerous technologies that would increase the risk of total extinction, such as engineered pandemics, rogue superintelligent AI, out-of-control nanotech replicators, or even more destructive warheads. So perhaps it's best from the longterm perspective to let us nearly destroy ourselves as soon as possible, setting our technology back and teaching us a hard lesson, rather than blithely letting technology advance far enough that a catastrophe is more likely to be 100% fatal.
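
For scale, the arithmetic behind the parenthetical above: one million years is 10,000 centuries, and the probability of surviving all of them at an independent 10 percent risk per century is 0.9^10,000, roughly 10^-458.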

Against Strong Longtermism: A Response to Greaves and MacAskill

By Ben Chugg. Curious. Published on December 18, 2020.

 

Excerpt: If you wanted to implement a belief structure which justified unimaginable horrors, what sort of views would it espouse? A good starting point would be to disable our critical capacities from evaluating the consequences of our actions, most likely by appealing to some vague and distant glorious future lying in wait. And indeed, this tool has been used by many horrific ideologies in the past:

"Definitely and beyond all doubt, our future or maximum program is to carry China forward to socialism and communism. Both the name of our Party and our Marxist world outlook unequivocally point to this supreme ideal of the future, a future of incomparable brightness and splendor." —Mao Tse-tung, "On Coalition Government"

… Inadvertently, however, longtermism is almost tailor-made to disable the mechanisms by which we make progress.


 

A Case Against Strong Longtermism

By Vaden Masrani. Vaden Masrani blog. Published on December 15, 2020 (this date may be wrong).

 

Excerpt: Longtermism has been described as one of the most important discoveries of effective altruism so far and William MacAskill is currently writing an entire book on the subject. I think, however, that longtermism has the potential to destroy the effective altruism movement entirely, because by fiddling with the numbers, the above reasoning can be used to squash funding for any charitable cause whatsoever. The stakes are really high here.

 

Is "Longtermism" the Cure or the Sickness?

By Rupert Read. ABC. Published on July 19, 2022.

 

Excerpt: A kind of Big-Tech/industrial/academic complex has sprung into existence, which is sucking up money and attention that could be directed toward thinking about how we could become genuinely long-termist, and is instead preoccupied with the idea that the best way to prevent us from destroying ourselves is to have much more tech, much more growth, much more government — much more of the very things that, one might naïvely point out, brought us to the dire straits in which we find ourselves.

How Elon Musk Sees the Future

By Émile P. Torres. Salon. Published on July 17, 2022.

 

Excerpt: But the even more important conclusion that Bostrom draws from his calculations is that we must reduce "existential risks," a term that refers, basically, to any event that would prevent us from maximizing the total amount of value in the universe. … It's for this reason that "dysgenic pressures" is an existential risk: If less "intellectually talented individuals," in Bostrom's words, outbreed smarter people, then we might not be able to create the advanced technologies needed to colonize space and create unfathomably large populations of "happy" individuals in massive computer simulations.

 

Elon Musk, Twitter, and the Future

By Émile P. Torres. Salon. Published on April 30, 2022.

 

Excerpt: In brief, the longtermists claim that if humanity can survive the next few centuries and successfully colonize outer space, the number of people who could exist in the future is absolutely enormous. According to the "father of Longtermism," Nick Bostrom, there could be something like 10^58 human beings in the future, although most of them would be living "happy lives" inside vast computer simulations powered by nanotechnological systems designed to capture all or most of the energy output of stars. (Why Bostrom feels confident that all these people would be "happy" in their simulated lives is not clear. Maybe they would take digital Prozac or something?) Other longtermists, such as Hilary Greaves and Will MacAskill, calculate that there could be 10^45 happy people in computer simulations within our Milky Way galaxy alone. That's a whole lot of people, and longtermists think you should be very impressed.

 

Democratising Risk: In Search of a Methodology to Study Existential Risk

By Zoe Cremer and Luke Kemp. SSRN. Published on December 28, 2021.

 

Excerpt: Studying potential global catastrophes is vital. The high stakes of existential risk studies (ERS) necessitate serious scrutiny and self-reflection. We argue that existing approaches to studying existential risk are not yet fit for purpose, and perhaps even run the risk of increasing harm. We highlight general challenges in ERS: accommodating value pluralism, crafting precise definitions, developing comprehensive tools for risk assessment, dealing with uncertainty, and accounting for the dangers associated with taking exceptional actions to mitigate or prevent catastrophes. The most influential framework for ERS, the "techno-utopian approach" (TUA), struggles with these issues and has a unique set of additional problems: it unnecessarily combines the study of longtermism and longtermist ethics with the study of extinction, relies on a non-representative moral worldview, uses ambiguous and inadequate definitions, fails to incorporate insights from risk assessment in relevant fields, chooses arbitrary categorisations of risk, and advocates for dangerous mitigation strategies. Its moral and empirical assumptions might be particularly vulnerable to securitisation and misuse. We suggest several key improvements: separating the study of extinction ethics (ethical implications of extinction) and existential ethics (the ethical implications of different societal forms) from the analysis of human extinction and global catastrophe; drawing on the latest developments in risk assessment literature; diversifying the field; and democratising its policy recommendations.

 

Calamity Theory: Three Critiques of Existential Risk

By Joshua Schuster and Derek Woods, University of Minnesota Press, 2021.

Excerpt: A new philosophical field has emerged. "Existential risk" studies any real or hypothetical human extinction event in the near or distant future. This movement examines catastrophes ranging from runaway global warming to nuclear warfare to malevolent artificial intelligence, deploying a curious mix of utilitarian ethics, statistical risk analysis, and, controversially, a transhuman advocacy that would aim to supersede almost all extinction scenarios. The proponents of existential risk thinking, led by Oxford philosopher Nick Bostrom, have seen their work gain immense popularity, attracting endorsement from Bill Gates and Elon Musk, millions of dollars, and millions of views. Calamity Theory is the first book to examine the rise of this thinking and its failures to acknowledge the ways some communities and lifeways are more at risk than others and what it implies about human extinction.

 

Democratising Risk—Or How EA Deals with Critics

By Zoe Cremer. EA Forum. Published on December 28, 2021.

Excerpt: We lost sleep, time, friends, collaborators, and mentors because we disagreed on: whether this work should be published, whether potential EA funders would decide against funding us and the institutions we're affiliated with, and whether the authors whose work we critique would be upset. ... We believe that critique is vital to academic progress. Academics should never have to worry about future career prospects just because they might disagree with funders. We take the prominent authors whose work we discuss here to be adults interested in truth and positive impact. Those who believe that this paper is meant as an attack against those scholars have fundamentally misunderstood what this paper is about and what is at stake. The responsibility of finding the right approach to existential risk is overwhelming. This is not a game. Fucking it up could end really badly.

Why I Am Not an Effective Altruist

By Erik Hoel. The Intrinsic Perspective. Published on August 15, 2022.

Excerpt: This poison, which originates directly from utilitarianism (which then trickles down to effective altruism), is not a quirk, or a bug, but rather a feature of utilitarian philosophy, and can be found in even the smallest drop. And why I am not an effective altruist is that to deal with it one must dilute or swallow, swallow or dilute, always and forever. ... The end result is like using Aldous Huxley’s Brave New World as a how-to manual rather than a warning. Following this reasoning, all happiness should be arbitraged perfectly, and the earth ends as a squalid factory farm for humans living in the closest-to-intolerable conditions possible, perhaps drugged to the gills. And here is where I think most devoted utilitarians, or even those merely sympathetic to the philosophy, go wrong. What happens is that they think Parfit’s repugnant conclusion (often referred to as the repugnant conclusion) is some super-specific academic thought experiment from so-called “population ethics” that only happens at extremes. It’s not. It’s just one very clear example of how utilitarianism is constantly forced into violating obvious moral principles (like not murdering random people for their organs) by detailing the “end state” of a world governed under strict utilitarianism. But really it is just one of an astronomical number of such repugnancies. Utilitarianism actually leads to repugnant conclusions everywhere, and you can find repugnancy in even the smallest drop.

The New Moral Mathematics

By Kieran Setiya. Boston Review. Published on August 15, 2022.

 

Excerpt: Longtermists deny neutrality: they argue that it’s always better, other things equal, if another person exists, provided their life is good enough. That’s why human extinction looms so large. A world in which we have trillions of descendants living good enough lives is better than a world in which humanity goes extinct in a thousand years—better by a vast, huge, mind-boggling margin. A chance to reduce the risk of human extinction by 0.01 percent, say, is a chance to make the world an inconceivably better place. It’s a greater contribution to the good, by several orders of magnitude, than saving a million lives today.

Against "Effective Altruism"

By Alice Crary. Radical Philosophy. Published in Summer 2021.

Excerpt: Initially attractive as such gestures are, there is every reason to be sceptical about their significance. They come unaccompanied by any acknowledgment of how the framework of EA constrains available moral and political outlooks. That framing excludes views of social thought on which it is irretrievably perspectival – views associated with central strands of feminist theory, critical disability studies, critical race theory, and anti-colonial theory. Despite its signaling towards diversity of ideas, EA as it stands cannot make room for individuals who discover in these traditions the things they believe most need to be said. For EA to accommodate their voices, it would have to allow that their moral and political beliefs are in conflict with its guiding principles and that these principles themselves need to be given up. To allow for this would be to reject EA in its current form as fatally flawed, finally a step towards doing a bit of good. 
