
Below you will find some critiques of longtermism, along with excerpts from those critiques. All titles are hyperlinked. More will be added as they are published. If we have missed anything, please email us.

Against Longtermism

By Émile P. Torres. Published on October 19, 2021.


We can now begin to see how longtermism might be self-defeating. Not only could its ‘fanatical’ emphasis on fulfilling our longterm potential lead people to, eg, neglect non-existential climate change, prioritise the rich over the poor and perhaps even ‘justify’ pre-emptive violence and atrocities for the ‘greater cosmic good’ but it also contains within it the very tendencies – Baconianism, capitalism and value-neutrality – that have driven humanity inches away from the precipice of destruction. Longtermism tells us to maximise economic productivity, our control over nature, our presence in the Universe, the number of (simulated) people who exist in the future, the total amount of impersonal ‘value’ and so on. But to maximise, we must develop increasingly powerful – and dangerous – technologies; failing to do this would itself be an existential catastrophe. Not to worry, though, because technology is not responsible for our worsening predicament, and hence the fact that most risks stem directly from technology is no reason to stop creating more technology. Rather, the problem lies with us, which means only that we must create even more technology to transform ourselves into cognitively and morally enhanced posthumans.

The Hinge of History

By Peter Singer. Published on October 8, 2021.


The dangers of treating extinction risk as humanity’s overriding concern should be obvious. But, as [Émile P.] Torres has pointed out, viewing current problems—other than our species’ extinction—through the lens of “longtermism” and “existential risk” can shrink those problems to almost nothing, while providing a rationale for doing almost anything to increase our odds of surviving long enough to spread beyond Earth. Marx’s vision of communism as the goal of all human history provided Lenin and Stalin with a justification for their crimes, and the goal of a “Thousand-Year Reich” was, in the eyes of the Nazis, sufficient reason for exterminating or enslaving those deemed racially inferior.

The Dangerous Ideas of "Longtermism" and "Existential Risk"

By Émile P. Torres. Published July 28, 2021.


In the same paper, Bostrom declares that even “a non-existential disaster causing the breakdown of global civilization is, from the perspective of humanity as a whole, a potentially recoverable setback,” describing this as “a giant massacre for man, a small misstep for mankind.” That’s of course cold comfort for those in the crosshairs of climate change—the residents of the Maldives who will lose their homeland, the South Asians facing lethal heat waves above the 95-degree F wet-bulb threshold of survivability, and the 18 million people in Bangladesh who may be displaced by 2050. But, once again, when these losses are juxtaposed with the apparent immensity of our longterm “potential,” this suffering will hardly be a footnote to a footnote within humanity’s epic biography.

Against Strong Longtermism: A Response to Greaves and MacAskill

By Ben Chugg. Published December 18, 2020.


If you wanted to implement a belief structure which justified unimaginable horrors, what sort of views would it espouse? A good starting point would be to disable our critical capacities from evaluating the consequences of our actions, most likely by appealing to some vague and distant glorious future lying in wait. And indeed, this tool has been used by many horrific ideologies in the past.

Definitely and beyond all doubt, our future or maximum program is to carry China forward to socialism and communism. Both the name of our Party and our Marxist world outlook unequivocally point to this supreme ideal of the future, a future of incomparable brightness and splendor.

— Mao Tse Tung, “On Coalition Government”. Selected Works, Vol. III, p. 282. (emphasis mine)

… Inadvertently, however, longtermism is almost tailor-made to disable the mechanisms by which we make progress.


A Case Against Strong Longtermism

By Vaden Masrani. Published on December 15, 2020.


Longtermism has been described as one of the most important discoveries of effective altruism so far and William MacAskill is currently writing an entire book on the subject. I think, however, that longtermism has the potential to destroy the effective altruism movement entirely, because by fiddling with the numbers, the above reasoning can be used to squash funding for any charitable cause whatsoever. The stakes are really high here.


Why I Am Not a Longtermist

By Boaz Barak. Published on May 23, 2022.


Physicists know that there is no point in reporting a measurement to three significant digits if your measurement device has only one-digit accuracy. Our ability to reason about events that are decades or more into the future is severely limited. At best, we could estimate probabilities up to an order of magnitude, and even that may be optimistic. Thus, claims such as Nick Bostrom’s, that “the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives,” make no sense to me. This is especially the case since these “probabilities” are Bayesian, i.e., correspond to degrees of belief. If, for example, you evaluate the existential-risk probability by aggregating the responses of 1000 experts, then what one of these experts had for breakfast is likely to have an impact larger than 0.001 percent (which, according to Bostrom, would correspond to much more than 10²⁰ human lives). To the extent we can quantify existential risks in the far future, we can only say something like “extremely likely,” “possible,” or “can’t be ruled out.” Assigning numbers to such qualitative assessments is an exercise in futility. … I cannot justify sacrificing current living humans for abstract probabilities.


How Elon Musk Sees the Future

By Émile P. Torres. Published on July 17, 2022.


But the even more important conclusion that Bostrom draws from his calculations is that we must reduce "existential risks," a term that refers, basically, to any event that would prevent us from maximizing the total amount of value in the universe. … It's for this reason that "dysgenic pressures" is an existential risk: If less "intellectually talented individuals," in Bostrom's words, outbreed smarter people, then we might not be able to create the advanced technologies needed to colonize space and create unfathomably large populations of "happy" individuals in massive computer simulations.


Elon Musk, Twitter and the Future

By Émile P. Torres. Published on April 30, 2022.


In brief, the longtermists claim that if humanity can survive the next few centuries and successfully colonize outer space, the number of people who could exist in the future is absolutely enormous. According to the "father of Longtermism," Nick Bostrom, there could be something like 10^58 human beings in the future, although most of them would be living "happy lives" inside vast computer simulations powered by nanotechnological systems designed to capture all or most of the energy output of stars. (Why Bostrom feels confident that all these people would be "happy" in their simulated lives is not clear. Maybe they would take digital Prozac or something?) Other longtermists, such as Hilary Greaves and Will MacAskill, calculate that there could be 10^45 happy people in computer simulations within our Milky Way galaxy alone. That's a whole lot of people, and longtermists think you should be very impressed.


Democratising Risk: In Search of a Methodology to Study Existential Risk

By Zoe Cremer and Luke Kemp. Published on December 28, 2021.


Studying potential global catastrophes is vital. The high stakes of existential risk studies (ERS) necessitate serious scrutiny and self-reflection. We argue that existing approaches to studying existential risk are not yet fit for purpose, and perhaps even run the risk of increasing harm. We highlight general challenges in ERS: accommodating value pluralism, crafting precise definitions, developing comprehensive tools for risk assessment, dealing with uncertainty, and accounting for the dangers associated with taking exceptional actions to mitigate or prevent catastrophes. The most influential framework for ERS, the “techno-utopian approach” (TUA), struggles with these issues and has a unique set of additional problems: it unnecessarily combines the study of longtermism and longtermist ethics with the study of extinction, relies on a non-representative moral worldview, uses ambiguous and inadequate definitions, fails to incorporate insights from risk assessment in relevant fields, chooses arbitrary categorisations of risk, and advocates for dangerous mitigation strategies. Its moral and empirical assumptions might be particularly vulnerable to securitisation and misuse. We suggest several key improvements: separating the study of extinction ethics (the ethical implications of extinction) and existential ethics (the ethical implications of different societal forms) from the analysis of human extinction and global catastrophe; drawing on the latest developments in the risk assessment literature; diversifying the field; and democratising its policy recommendations.


Calamity Theory: Three Critiques of Existential Risk

By Joshua Schuster and Derek Woods, University of Minnesota Press, 2021.

A new philosophical field has emerged. “Existential risk” studies any real or hypothetical human extinction event in the near or distant future. This movement examines catastrophes ranging from runaway global warming to nuclear warfare to malevolent artificial intelligence, deploying a curious mix of utilitarian ethics, statistical risk analysis, and, controversially, a transhuman advocacy that would aim to supersede almost all extinction scenarios. The proponents of existential risk thinking, led by Oxford philosopher Nick Bostrom, have seen their work gain immense popularity, attracting endorsement from Bill Gates and Elon Musk, millions of dollars, and millions of views. Calamity Theory is the first book to examine the rise of this thinking, its failure to acknowledge the ways some communities and lifeways are more at risk than others, and what that failure implies for how we think about human extinction.


Democratising Risk—Or How EA Deals with Critics

By Zoe Cremer. Published on December 28, 2021.

We lost sleep, time, friends, collaborators, and mentors because we disagreed on: whether this work should be published, whether potential EA funders would decide against funding us and the institutions we're affiliated with, and whether the authors whose work we critique would be upset. ... We believe that critique is vital to academic progress. Academics should never have to worry about future career prospects just because they might disagree with funders. We take the prominent authors whose work we discuss here to be adults interested in truth and positive impact. Those who believe that this paper is meant as an attack against those scholars have fundamentally misunderstood what this paper is about and what is at stake. The responsibility of finding the right approach to existential risk is overwhelming. This is not a game. Fucking it up could end really badly.
