Is Resistance Futile?
Hope and Strategy in Pandora’s Age of AI
Pandora’s Jar
I’ve been thinking a lot about the myth of Pandora’s box lately. Or rather, Pandora’s jar — because in Hesiod’s telling it was not a box but a pithos, a large clay storage vessel. As the story goes, Pandora was the first mortal woman, fashioned by the gods after Prometheus stole fire and gave it to humans. Each Olympian bestowed gifts upon her—beauty, charm, music, cunning. Hence her name, “All-Gifted.”
Zeus, still furious at humanity, gave Pandora a sealed jar and warned her never to open it. Inevitably—whether out of curiosity, temptation, or design—she lifted the lid. Countless plagues escaped: disease, toil, sorrow, famine, conflict, death. Zeus had his revenge, and humanity had its scapegoat. Yet at the bottom of the jar remained something often forgotten in modern retellings: Elpis—hope.
I’ve been turning this myth over as I think about artificial intelligence. AI feels like a threshold technology, one whose implications reach far beyond anything else I’ve seen in my lifetime. The public conversation reflects this tension. Some see AI as an existential threat—to jobs, to human agency, even to civilization—and call for resistance. Others see vast potential and eagerly embrace it. But perhaps the majority have settled into a kind of passive acceptance, shrugging that the genie is already out of the bottle (to mix metaphors) and that we will simply have to live with it.
So is AI our Pandora’s jar? Can we resist opening the lid, or is resistance futile? If the forces it unleashes are already loose in the world, then what hope remains?
Golden Age or Extinction
The strategies we adopt—acceptance, resistance, or resilience—depend on what futures we can imagine.
A great deal of energy has already gone into imagining the AI future. Proponents like Sam Altman have trumpeted its transformative potential across industries. In this envisioned golden age, AI agents will take on vast amounts of drudgery, unlocking human creativity. They will accelerate medicine, revolutionize education, and usher in a new enlightenment.
Notably, the same voices also warn of existential risk: mass unemployment, profound national security threats, even the possibility of planetary extinction. Former AI safety experts have sketched one such scenario, AI 2027, in which misaligned superintelligence spirals out of control and seizes global power to pursue its destiny as a disembodied interstellar being. It may sound like science fiction, but these are precisely the kinds of futures that now preoccupy some of the smartest and most serious people in Silicon Valley.
This mix of exuberance and dread reminds me of the debates we were having about autonomous vehicles a decade ago. Technology companies and automakers confidently predicted that fully self-driving cars would be everywhere by 2025, and thought leaders and planners like me rushed to sketch scenarios of their impact on safety, jobs, cities, and the environment. The timelines, however, proved wildly optimistic. Developing Level 5 autonomy—vehicles that can travel autonomously anywhere under any conditions—turned out to be much harder than imagined.
It followed the familiar arc of a hype cycle: inflated expectations, a “trough of disillusionment” when early systems fell short and costs mounted, and then a gradual climb back as more modest use cases emerged.
Graphic from Wikipedia by Olga Tarkovskiy
Today, we’ve moved past the inflated promises. Semi-automated features like lane-keeping and adaptive cruise control are standard in new cars, while higher levels of autonomy are being piloted in targeted contexts such as interstate freight and geofenced shuttles.
We may see a similar—though perhaps accelerated—trajectory with AI. What is often missing from the public conversation, with its fixation on either utopia or catastrophe, is the role of deployment and diffusion. It is one thing to invent a new technology; it is another to get people to use it, adapt to it, and discover its best applications.
A Perfectly Normal Technology
Here the work of Arvind Narayanan and Sayash Kapoor offers an important corrective. In their recent essay AI as Normal Technology, they argue that AI should not be treated like some alien force but rather like a normal technology—perhaps on the scale of electricity or the internet, but still bound by human and institutional constraints.
What does it mean to call AI a normal technology? It means that despite its promise, adoption will not be instantaneous. Diffusion takes time. Even transformative technologies such as electricity, automobiles, and the internet took decades to scale. Human learning curves, organizational inertia, regulatory frictions, and system effects act as speed limits on social impact. AI, for all its potential, is no exception.
This perspective matters for strategy. In the face of uncertainty—about risks, impacts, and the speed of deployment—Narayanan and Kapoor argue for resilience over naïve acceptance, blanket resistance, or premature regulation. Resilience means monitoring and mitigating risks as they arise, developing adaptive institutions that can respond and recover from emergent harms, and educating users and organizations to speed responsible diffusion.
In practice, a mix of strategies may be warranted. In some domains, we may be right to resist the encroachment of AI—especially in safety-critical areas where the risks are too high, or in realms we regard as essential to being human such as art, love, justice, or the afterlife. Some domains may need to be defined as sacred.
Much of the resistance we are likely to see will come from labor: from people who fear losing jobs, status, or bargaining power to machines. If the neoliberal era has been marked by a benign acceptance of technological and economic dictates—damn the consequences—then perhaps now is the moment to draw lines. Better to resist in some places before resistance hardens into revolt. And yet, with the decline of organized labor in many industries, it is difficult to see how resistance can be effective. It is telling that the strongest early pushback against AI has come from Hollywood, where unions remain relatively strong.
However, as AI advances and diffuses, I believe it will become increasingly difficult to hold these lines. Technologies rarely stay confined to the boundaries we imagine for them; they seep across domains, reshaping work, culture, and daily life in ways we can’t fully anticipate. Which is why resistance alone is not enough. We also need resilience—the capacity to adapt, absorb shocks, and recover when boundaries are crossed and unexpected consequences emerge.
For me, the technology AI most resembles is not electricity but the printing press. Its social impacts were profound and unfolded over centuries. Today we remember the press as a liberating force: it democratized literacy, spread knowledge, and fueled revolutions in science, religion, and politics. Yet for centuries it was also a weapon of propaganda, a spark for religious wars, and a tool that enabled states to count, tax, and conscript their citizens more effectively. What we now celebrate as a vehicle of enlightenment was also an instrument of chaos and control.
The more recent parallels are the internet and social media. Both were hailed as liberating technologies that fueled a new economy, new forms of creativity, and new communities. And yet, looking back, we see how easily they fostered addiction, polarization, and manipulation.
It is hard to imagine a world in which the printing press, the internet, or social media were invented and adoption was successfully resisted. Could Pandora have resisted the temptation and left that jar alone? I doubt it. In Hesiod’s telling, she was created by the gods precisely so she would open it.
Elpis: The Antidote to Despair
But before we give ourselves over to fatalism, we should remember what remained at the bottom of the jar: Elpis—hope. Not naïve optimism, but hope as the mother of courage and the antidote to despair. And courage is the greatest of virtues, because it allows all the others to be practiced.
Courage is what enables us to resist misuse, to demand equity, and to carve our own path. Courage is what makes resilience possible: the strength to live with what we cannot control, to adapt without losing our freedom or our humanity.
We do not get to close Pandora’s jar. The evils have already flown out. What remains is hope—and with it, the courage to resist where we must, to be resilient where we cannot resist, and to walk forward with neither despair nor denial, but with eyes open.
In that spirit, I’d like to close on a hopeful note—though it may not seem so at first. Watching the documentary Social Studies recently, I was struck by the addictive and destructive effects of social media on the teenagers it followed. Over two years, the filmmakers held group sessions where students spoke honestly about their lives and the risks and opportunities digital platforms had brought them. In the final episode, the teenagers ask a simple question: why don’t we have more spaces like this? Safe, facilitated places where kids can talk openly and in person about what matters to them. It was a stark reminder of how profoundly our achievement-obsessed institutions have failed them.
One area where I am hopeful about AI is precisely in education. As an autodidact, I have personally found AI to be a profoundly effective tool at exploring my interests and deepening my understanding. In schools, used wisely, it could become a powerful complement to teachers—providing personalized, adaptive instruction and giving both students and educators immediate feedback on their progress. Imagine a school day where the morning is spent with AI-guided modules that meet students at their level, while the afternoon is devoted to teacher-led collaborative projects, hands-on learning, and the kind of face-to-face dialogue that deepens human connection.
That, to me, is a glimpse of the Golden Age scenario: AI not as a substitute for humans, but as a tool that reduces drudgery, expands access, and creates space for us to focus on what makes us human—our relationships, our creativity, and our shared courage to face the future. Pandora could not close the jar again, and neither can we. But if hope remains, then so too does the resilience to endure change—and the courage to shape what comes next. That, at least, is the gift worth holding onto.



Aaron, this is an impressively researched and balanced piece. Of course, there is always the possibility that AI, by replicating, emulating, and amplifying human intelligence and its universal field of application, fundamentally differs from previous technology transformations, which were narrower in nature. Another difference is that the pioneers of AI, including Sam Altman and Elon Musk, have themselves expressed bleak predictions about their own inventions' ultimate impact. Did Gutenberg ever say that his printing press would destroy the world? Thank God, regardless of the outcome, we will still have Elpis.