Each summer, I find myself serving as a philosopher-in-residence somewhere interesting.
Last summer, I held that role at Daylight Computers (see here and here), a humane tech company that created the world’s first blue-light-free computer (imagine if a Kindle and an iPad made a baby), where I wrote the manifesto with their founder.
Now, this summer, I’m serving as the philosopher-in-residence for the largest AI group in the world, a group with significant reach and potential influence on the broader AI conversation. Once again, I’m writing a manifesto with one of the founders. I am also hosting both online and in-person series in Toronto with the collective, bringing philosophical and ethical inquiry into the larger AI community. One series we’re launching next Wednesday, July 16 (the anniversary of the Trinity test, 🤯), is called After Intelligence (AI).
The essence of the series is this: if or when artificial intelligence surpasses human intelligence (we’ll have to define what “intelligence” actually means, by the way), it reveals what’s truly important…
Wisdom.
I define wisdom as “existential wayfinding,” or, more simply: “Know what to do, and when.” (See more definitions here.) As Joe Hudson, executive coach to Sam Altman (CEO of OpenAI, the company behind ChatGPT), argues, we are moving away from knowledge work and toward wisdom work. This shift requires emotional clarity, discernment, and attunement to what matters most, rather than just raw intelligence and agentic hustling. Basically, it’s time to get some wisdom.
One understanding of wisdom cultivation involves what psychological researchers call “perspectival meta-cognition” (PMC): the ability to take multiple perspectives on what is and what ought to be, without prematurely collapsing into any single one.
My first foray online was a 2018 white paper exploring “memetic tribes”: perspectives that people become tribal around, which shape the collective conversation. I am ready to return to that kind of perspectival cartography within the AI conversation, but this time in a way that helps shape it.
Lately, I’ve been working with this broad dialectic:
AI-Optimism: AI good
AI-Pessimism: AI bad
AI-Realism: AI can be good or bad, and we need to become wise.
We can understand optimism and pessimism as having something in common: both involve certainty that something specific will happen—one sees it as positive, the other as negative.1 A realistic position, like wisdom more generally, lacks that certainty and requires the courage to stay with the ambiguity that complexity brings.
In my opening presentation, I’ll be expanding this dialectic into more positions. On the optimism side, I see the following perspectives active:
AI Solutionism: The belief that AI will solve all major societal problems and that the “metacrisis” will be no more.
AI Evangelism: Advocacy of AI that carries an explicit or implicit spiritual dimension. In essence, the envisioned ASI (Artificial Superintelligence) is seen as a God or godlike being, with a redemptive quality for humanity, especially for those who midwife it into existence.
AI Accelerationism: Advocates for rapid AI development without regulation, often aligned with a post-humanist agenda (e.g., e/acc), which holds that intelligence will, and should, evolve beyond the biological substrate of human bodies.
Now, some optimists hold all of these positions, while others hold a combination or just one. The most common position is AI Solutionism, which is often found among regular people working with AI in some capacity, whether as employees or founders.
On the pessimist side, I see the following positions:
AI Criticism: A political or sociological critique of how AI contributes to the concentration and centralization of power, often through a leftist lens. This perspective critiques phenomena like technofeudalism and the TESCREAL bundle that philosophically underpins much of AI's development.
AI Doomerism: The belief that AI will lead to catastrophic outcomes and existential risks for humanity. “Decels” (or decelerationists) in this group advocate for regulation and safety measures, focusing on concerns like misalignment, AI arms races, and “world-in-chains” scenarios.
AI Luddism: A rejection or resistance to AI adoption, with a preference for human-scale or analog alternatives. In essence: touch grass, get a dumbphone, go on a digital detox, and make sure you “stay human.”
Again, some pessimists hold all of these positions, while others hold a combination or focus on just one. Delineating these positions will help us see their strengths and weaknesses, what they get right and wrong, and how they can make us wiser, or at least less foolish.
During the presentation, I’ll be asking participants to rank their top three positions from the list above, in order of which resonates most. It’s an exercise I invite you, the reader, to try right now. I’ll wait.
…
Here’s mine:
AI Luddism
AI Solutionism
AI Criticism
I am a mix: actively attempting to become more sovereign with my devices in general (see “The Pull” series) and to be on them less, yet I feel the tremendous potential of AI and am excited by it. Given my interest in power literacy, I can also see that those who have power over AI will have power more generally.
Relatedly, one of my favourite models of power comes from sociologist Steven Lukes, called the “three faces of power.”
The first face is decision-making power: those who can make decisions have greater power than those who cannot. The second face is agenda-setting power: those who set the agenda hold greater power than those who merely make decisions. The third face is ideological power: those who shape worldviews and philosophies hold the greatest power of all.
Hence, the motto of After Intelligence is this:
Philosophy shapes AI and AI shapes philosophy.
What philosophies are actively shaping AI, and how will AI shape our philosophies? And, in a meta sense, another question: how will engaging in philosophical inquiry, as a collective, empower us with wisdom?
Philosophical inquiry, done collectively, as we have been doing at The Stoa for the last five and a half years, will be at the heart of this new series.
Will you join us?
The link to the Toronto event on July 16 can be found here. Use STOA as the promo code. More online and in-person events will be announced throughout the year.
“There is no fundamental difference between optimism and pessimism. One mirrors the other. For the pessimist, time is also closed. Pessimists are locked in ‘time as a prison’. Pessimists simply reject everything, without striving for renewal or being open towards possible worlds. They are just as stubborn as optimists. Optimists and pessimists are both blind to the possible.” - Byung-Chul Han, The Spirit of Hope