THE COLLEGE HILL INDEPENDENT


Thinking In Big History

Nick Bostrom, transhumanist visions, and the rise of macrostrategy

by Leo Stevenson

Illustration by Alex Hanesworth

published October 26, 2018


now everything is happening

faster than you can think

at the speed of genius

at the speed of a thousand geniuses competing

at the speed of a civilization-powered light beam

 

this special time, maybe to be revisited later

but not really experienced as it unfolds

we can see what should be done

the vase falling slo-mo to the ground

but we cannot help it

the signals take too long from brain to muscle

-Nick Bostrom, from “Göttingen”

 

Transhumanism is hard to define. Humanity+, which Swedish philosopher Nick Bostrom co-founded in 1998 under the name “World Transhumanist Association,” introduces its long definition of the term by describing transhumanism as “a way of thinking about the future that is based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase.” Transhumanist thinking is often fantastical, dreaming up futures of a later, better phase of humanity that veer into sci-fi. There are many shades of transhumanism—ones with political currents like libertarian “extropianism,” democratic “technoprogressivism,” and environmentalist “technogaianism”; strands that focus on reversing aging, bodily augmentation, or cognitive enhancement; critical-theory-driven “cyborg feminism” and “postgenderism.” While they vary in their priorities, obsessions, and assumptions, these currents are united by the ideal of moving beyond what we currently know as the human condition. They all see through the lens of what Bostrom calls “big history”—zooming out to place humanity in a longer timeline, and trying to plan ahead.

Bostrom, who runs the Future of Humanity Institute (which he founded at Oxford in 2005), is arguably the most prominent face of transhumanism, but most of the attention he gets is about the apocalypse. Bostrom became famous outside of the transhumanist community for his thinking on what he calls “existential risks to humanity,” and specifically for his 2014 book Superintelligence: Paths, Dangers, Strategies, which became a New York Times bestseller. Superintelligence argued that an “intelligence explosion,” an event in which AI (artificial intelligence) learns to improve itself and starts growing exponentially, has the single highest potential—over and above candidates like climate change and nuclear war—to make humanity go extinct. Bostrom has explained that possibility with the simple thought experiment of a “Paperclip Maximizer”:

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. If humans did so, there would be fewer paper clips. It would realize as well, that human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

Point being, a superintelligent AI, given one specific goal, would pursue that goal to the exclusion of everything else, which could have apocalyptic consequences. Bostrom’s logic is that since human extinction would be infinitely worse than even the worst non-extinction catastrophe, avoiding extinction should be our highest strategic priority; and since he ranks runaway AI as the likeliest route to extinction, addressing that risk is currently more important than any other ethical cause. Bostrom calls planning about humanity’s long-term outcomes “macrostrategy.” A profile in the New Yorker called him “The Philosopher of Doomsday.”

In his writing on transhumanism, Bostrom envisions potential futures where humanity evolves past our current physical and intellectual limitations, conquers aging and death, colonizes the universe (invoking the violent imagination of an ‘open frontier’), and merges with computer intelligence to create forms of life both quantitatively and qualitatively so far beyond what we are now that we literally cannot imagine it. That unimaginable scope of wellbeing is the central message of Bostrom’s “Letter From Utopia,” a 2010 essay in which he, speaking as a hypothetical future-dweller, points to the flourishing that a transhuman future could reach: “what you had in your best moment [i.e. the current heights of human happiness] is not close to what I have now—a beckoning scintilla at most. If the distance between base and apex for you is eight kilometers, then to reach my dwellings would take a million light-year ascent. The altitude is outside moon and planets and all the stars your eyes can see. Beyond dreams. Beyond imagination.” Bostrom thinks it’s terribly short-sighted to be attached to the kind of “human condition” that we’ve been used to for this long first act of human history—he calls that “status quo bias.” His hope, as expressed in “Letter From Utopia,” is that we overcome the evils we take as inevitable (from war and oppression to sickness and age) and step into a “life that is truly humane.”

But the first step to that future is to secure life. That’s how Bostrom’s positive vision got him concerned that humans might not live long enough to get there, and how macrostrategy became his priority. In his view of big history, we stand at a turning point much like the agricultural or industrial revolution, where a technological leap (the development of computers) has upped the rate of our technological change. Everything is moving faster, which presents both special opportunities and special threats. That bonfire of blinding wellbeing is only possible if humanity’s small flame of consciousness doesn’t get snuffed out, by AI or some other threat.

 

+++

 

People are listening to Bostrom. Especially a lot of developers, researchers, and academics who see themselves as actively designing the AI future, with all the possibility for human good (or at least massive wealth creation, in the short term) that that entails. As the feeling has mounted that “move fast and break things” might break too many things, a voice in the room warning that the entirety of human existence is at stake has made a lot of ears perk up.

Bostrom isn’t the only voice currently warning of the dangers of AI, but he’s certainly the loudest. Just about all discussions on the topic include or at least allude to his arguments, so his thinking can basically be taken as representative of the broader intellectual current. He’s the one who gets called before the UK Parliament or the UN to speak about the dangers and governance of AI (both of which he did in 2017).

Since the publication of Superintelligence, funding has poured into organizations and research focused on AI safety, AI governance, and ethical AI. This research ranges from drafting international policy on controlling AI to designing AI architectures that could build in values and control functions. Organizations that already focus on existential threats, like the Future of Life Institute and Cambridge University’s Centre for the Study of Existential Risk, have moved AI to the center of their programs. Open Philanthropy, a tech-world-driven organization that tries to maximize the decision-theoretical “expected value” of the good it can do with its money, lists “Potential Risks from Advanced Artificial Intelligence” among its priority causes. Open Philanthropy has given about $71.5 million in that category of grants since 2015, including a pledge this fall to Bostrom’s Future of Humanity Institute of about $17.5 million.

The grant is striking. It is a massive vote of confidence in the reality of Bostrom’s vision: it means that, at least in the tech world, the possibility of thinking in “big history” (and of planning for it) is being taken seriously. The worldview that treats macrostrategy, far-future thinking, and existential threats from AI as realities, rather than as mere extensions of transhumanist sci-fi, is gaining a serious measure of legitimacy.

And though Bostrom’s current work on AI and macrostrategy may be what brings in his funding, those aren’t his only ideas that have crept toward the mainstream. Remember that moment in 2016 when Elon Musk publicly declared that he believes it’s nearly certain that we’re living in a simulation? That was a direct reference to a Bostrom essay. Musk also donated $10 million in 2015 to the above-mentioned Future of Life Institute’s program on existential risk from AI. Macrostrategy isn’t fringe—it’s getting the tech world to talk about ethics.

 

+++

 

Bostrom’s discourse sets him apart from other strands of transhumanist and speculative futurist thinking. Body-focused transhumanism, for instance, is excitable to the point of pure sci-fi speculation, thinking far ahead to full-body prostheses and mind-uploading. Critical-theory transhumanism, like the work of Donna Haraway, is written in dense academic language that’s deeply meaningful to scholars, but tends not to reach too far outside such circles. Hardly any futuristic thinking provides suggestions that are actionable enough to be anything past “interesting.” Bostrom, on the other hand, uses a language that the powers that be (whether deep-pocketed tech donors or members of parliament) can understand, and that they respect. He backs up his points with the rigor of analytic philosophical logic, statistical risk assessment, and Bayesian decision theory. His texts are full of quantitative justifications, thought experiments (such as parables in which humanity is pictured as sparrows trying to breed an owl—the AI—to do their work for them), and symbolic logic, using ‘ethical frames of aggregative utility’ (roughly, the position that good can be quantified and tallied) to estimate comparative expected values. He speaks in terms of optimizations, maximizations, and rational agents. This is the language of capital-R Rationalism, and it’s very good at setting up arguments in a way that, with the arguer controlling the terms, makes them seem unobjectionable. Even the parables make the argument’s terms especially invulnerable: how do you formally object that you don’t think AI is much like an owl at all (or a paper clip)?

 

That difficulty may simply be a side effect of Bostrom’s deeply rigorous thinking; I don’t mean to mount an epistemological critique, or to say that he’s wrong. But Bostrom’s Rationalist thinking is a highly effective way to get people (especially people in power) to listen, and to take the ideas born out of his transhumanism seriously. His Rationality builds a hard, convincing shell around his visionary core, and it’s compelling enough to justify anything from his position that AI strategy should be the world’s top ethical priority to his argument that we may well be living in a simulation. Bostrom is brilliant enough to communicate why everyone else should treat his vision as real, and as pressing enough to be a priority. None of those other transhumanists are getting $17.5 million grants.

I don’t mean to imply that Bostrom is insincere, either: a recurring strand in his thinking is a concern with human bias and shortsightedness, the worry that, left to our own devices, we’re liable not to plan ahead. His faith in Rationality lies in its ability to break through those biases and get us to see what’s too distant or counterintuitive to notice. But his Rationalist methods, whether they’re for convincing others or for thinking more clearly, aren’t where his ideas come from—they come from the sci-fi-inflected transhumanist discussions in which he came to intellectual maturity.

 

+++

 

I want to understand Bostrom’s vision because I want to know what worldviews are slipping in at the core of his well-reasoned macrostrategy. Bostrom doesn’t hide that core. It’s clearer in his more literary writings, like “Letter From Utopia,” where he expresses himself without the shell of technically legitimizing language. It’s clearest in his poetry, which is nestled at the bottom of his website, a short selected stanza and a link to a subsection of NickBostrom.com. Bostrom describes his recent poetic endeavors as “relapses” into a previous stage of his life—presumably the part where he was expressing his vision raw, before he started packaging it to convince the world to prioritize macrostrategy. Some of the poems are personal, like his musings on settling into his career. Others show the clearest glimpses of his cosmology, his view of where we stand in the scope of long history. For example, one poem from 2002 shows his vision of human history at a precipice: the promise of crossing triumphantly to the other side, and the abyss of the eternal night.

 

On the Bank

On the bank at the end

Of what was there before us

Gazing over to the other side

On what we can become

Veiled in the mist of naïve speculation

We are busy here preparing

Rafts to carry us across

Before the light goes out leaving us

In the eternal night of could-have-been

 

There’s so much hope in Bostrom’s poetic view for what humanity could be, and so much fear, and precarity. You can see his attempts at strategic planning hanging in uncertainty, as small as the old “pale blue dot” photo of Earth hanging in the void. But that precipice also comes with serious urgency: given where humanity stands at present, there’s no time to waste. Another of his poems, “Göttingen” (excerpted in the epigraph above), articulates that urgency, opening:

 

the rush the rush the rush

the fuse that’s burning down

 

information glitters

rain of idea sparks

the thing is sprinting

sipping, taking off

to the waiting black powder

 

That poem is his most recent, from 2017, and it carries the rush of his current work: it’s serious now, there’s more at stake, and the risk of apocalypse in a potentially impending “intelligence explosion” is neatly symbolized by actual explosives. And out of that comes the sense of moral urgency: you’d better run to deal with the problems this precipice presents if you have any kind of resources to do so. If our historical moment is so precarious, at this place where our acceleration could take us to utopia or the abyss, it’s no time to be sitting around. That urgency shades into outrage in an excerpt from “Juicy Exceptions”:

 

the young ones glimmer briefly

like fourth of july firework [sic]

then fall to dust

[…]

strut on you arrogant pricks

shine on you daughters of ivy

occupy your privilege like a desert garden

 

fig-nude amongst almonds and apricots

let us feast our eyes on your impudence

as you slurp that rough-shelled coconut with a pastel straw

 

After meditating on the finitude of youthful pleasure, this poem breaks into pure moral outrage at the very thought of elite students enjoying their vacations. It only makes full sense in the context of the previous two poems: this is no time to be lounging around. If you’ve been granted the power, the privilege, of that kind of education, given the precipice we stand on, how could you use it for yourself, even for a moment? When our species might be at stake?

At the end of the bio on his website, Bostrom writes, “I am in a very fortunate position—having no teaching duties, being supported by a staff of brilliant research colleagues and assistants, and facing no restrictions on what I can work on. Must try hard to be worthy of such privilege!” It’s “Juicy Exceptions” that shows how deeply Bostrom means that. He’s not just humbly saying he’s lucky—he’s saying that privilege gives him a duty to pull his weight for the greater good, hurrying to keep up with the urgency of this strange historical moment. Which makes sense of how Bostrom is leading his life: basically locked in a room coming up with the best arguments he can for why people should listen to his apocalyptic/utopian message, publishing, running between speaking engagements and parliamentary panels, doing his best to right the course of history before it’s too late.

Bostrom’s worldview, as you dive into the visionary corners of his mind, is compelling: it draws you in, makes you think in big history, makes you and your concerns start to look very, very small. I trust Bostrom’s intentions more after reading his poetry; his vision of the long future seems like a genuine hope for what he calls (in the last line of Superintelligence) “a compassionate and jubilant use of humanity’s cosmic endowment.”

I’m also worried by the implications of his ideas. If you follow his logic completely, then putting our energies towards the highest-priority issue of our age (AI strategy) should mean dropping our other, immediate projects of world-fixing and world-making, as any suffering we might alleviate now is secondary to the possibility of extinction. I have too many reservations about Bostrom’s strategic ideas, his thoughts on how change gets made, and his ability to consistently distinguish sci-fi from reality, to buy that fully.

Big-history thinking raises questions, though, that go beyond Bostrom’s exclusive focus on AI scenarios. What if, to have any hope of building the world we want, the first step is to prioritize catastrophic threats, including ones beyond AI, like climate change or threats to biosecurity? Maybe the state of tech does place us at a unique moment in history, with the unique urgency Bostrom feels. Which would mean we ought to think more strategically about where to apply that urgency. Terrifying as it is, Bostrom might have a point.

 

LEO STEVENSON B’20 wishes more theorists wrote poetry.