So I walked into the Cowell Theater at Fort Mason in San Francisco one night recently, eagerly awaiting another optimistic vision for the potential future of humanity, or some new concept to blow my mind like Sara Walker’s assembly theory or Benjamin Bratton’s vision of emerging planetary intelligence. The night’s speaker was named Indy Johar, and his talk had an odd title about Civilizational Optioneering.
I’ll take the option that guarantees an extended lifetime in a sci fi utopia please!
Unfortunately, I got something a little more akin to Hari Seldon’s defense of his psychohistorical predictions before the dour Ling Chen and his Commission of Public Safety upon Trantor. Not so much a guarantee of the imminent end of human civilization, but something not too far from it. He talked of a structural inadequacy of governance, the instability of economic and monetary systems misaligned with ecological limits, deepening environmental and climate stress reflecting a systemic externalization of cost, the erosion of social cohesion and shared meaning that undermines collective action, and a worldview crisis in which outdated paradigms blind us to the relational complexity of the challenges ahead.
In short, I found myself ready to support banishing Johar to the distant edge of the galaxy to stop the barrage of unwelcome news emanating from the stage. But wait… this is the moment when our contemporary Hari Seldon should begin to describe his efforts to gather the collective wisdom of the human race and ask to be exiled to a colony on Mars, there to begin a new Foundation far from the inevitable collapse of Earth’s apparently thriving technological society before it fell headlong into the abyss. This was not, however, what followed. Instead, Johar began to describe a way to engage productively with these issues without abandoning our home world.
Johar spoke of something entirely different and more grounded — literally. He emphasized that the damage from these changes lies not in the end game, but in the volatility of the whiplash as we enter a world in which such changes are occurring faster than ever before. We are facing them already; the problems are real and need immediate solutions. We need to rethink the very structure of our society in light of the long emergency before us, and this is where he brought in the jargony term “optioneering.”
Essentially, optioneering refers not to a single “grand plan” as Seldon would have devised, but to a stance — a design discipline that increases society’s freedom to maneuver as conditions destabilize. Rather than predicting one specific path forward with a high degree of probability, the aim is to keep an array of possible futures viable. Practically, that means building slack, redundancy, and adaptability into the systems our societies depend upon: energy, food, housing, finance, public health, and governance itself.
It also means shifting from closed projects with brittle points of failure to what Johar calls “open gardens” — systems that can learn, evolve, and reorganize as the inevitable shocks arrive. Optioneering is the opposite of the fortress fantasy of walling off sectors of affluence and power while leaving large portions of Earth and its inhabitants to chaos and misery. The ultimate goal is to build a shared capacity for resilience so that we still have choices when the big crises arrive. This seems less like the philosophy of inevitable doom and despair, and more a strategy for keeping alive a concept that is too little regarded in these fraught and perilous times — hope.
Hope. That brings to mind another thinker I’ve come to respect, Rebecca Solnit, and her usage of that term. In her essay “Hope is the embrace of the unknown,” she echoes Johar’s description of optionality:
“It is important to say what hope is not: it is not the belief that everything was, is or will be fine. The evidence is all around us of tremendous suffering and destruction. The hope I am interested in is about broad perspectives with specific possibilities, ones that invite or demand that we act. It is also not a sunny everything-is-getting-better narrative, though it may be a counter to the everything-is-getting-worse one. You could call it an account of complexities and uncertainties, with openings.”
I think we can agree that Hari Seldon was in the “everything is getting worse” camp — at least for the empire and its quadrillions of citizens — though he was also capable of imagining a path forward for a select few. I can’t help wondering what he might have done if he’d taken up a “failure is not an option” mindset and dedicated his life and his science to a design discipline that would increase the empire’s options as it devolved into chaos, rather than jetting off to Terminus and letting the empire collapse.
Now I want to talk a little about how AI fits into all of this. These days you tend to hear AI discussed in one of two ways: AI is going to solve all of our problems and usher in a golden age for humanity, or AI is going to cause the collapse of modern civilization — if it doesn’t kill us all. The second is becoming a lot more common, and it’s a lot more like Seldon’s view of the empire. Rapid extinction is right around the corner, and you’d better get on board the Deliverance and head to Terminus or you’re screwed. Our Deliverance is becoming one of the few people who can master prompting and integration with AI, and our Terminus is a world in which AI lets us live with some dignity — or maybe just live.
The Solnit/Johar alternative is not claiming that we’re on the doorstep of utopia, but that we have the capacity to understand the risks, reject any claims of inevitability, and focus on collective optionality consisting of productive regulation, alignment research, monitoring, incentives, and public capacity. This is the path of hope as Solnit describes it. Utopia and doom are seen as inevitable endpoints, whereas optionality and hope are the realistic and pragmatic paths forward to get the most out of technological breakthroughs for the vast majority rather than a select few.
I’ve been an early adopter of LLMs like ChatGPT and have tried to integrate them into my life by understanding how they can help me with some of my most persistent cognitive flaws, largely stemming from ADHD. It helps me a lot in trying to tackle complex topics and to organize my many interests as productively as possible. I’m genuinely excited and enthusiastic about this technology and want to make the most of it.
That said, I’ve become intimately aware of the downsides. ChatGPT initially blew my mind with its capabilities. I could feed a few scattered thoughts into a prompt and it would gather them all up for me, put them into a coherent structure, and output an essay with perfect grammatical form. I was astonished and taken aback. Why should I even try to write for myself when it could do it better than I ever could?
One of the things I started using it for was letters to the editor for my climate group, Citizens’ Climate Lobby. I even “vibe-coded” an app to help write LTEs on climate topics. It had all of the CCL talking points already baked in; you could point to an article in the news or just mention a current topic being discussed, and it would create a beautiful letter of the right length targeted for your specific region and newspaper. At first I had a lot of success with it, until I noticed how bland the results tended to be. It did a lousy job of capturing my actual perspective — what made me want to write the letter in the first place. It tended to genericize. When I’d re-read something it had produced after a couple of days, I was much less impressed.
The big buzzword these days is agency. The big AI companies are selling the vision of an automatic companion to do all of our tedious work for us, to become our effective and highly competent agents in the real world. This sounds nice, until you realize that agency is a big part of being human. If AIs take away all of our agency, we become mere spectators to life — riders on the big bus — no longer capable of taking our own turns at the wheel. We might not even get to tell the driver where we want the bus to go or where we want to be let off. Maybe getting off the bus ceases to be an option.
If AI can be regulated so that human agency is enhanced, however, things could look very different. In the wild-west situation we’re in right now, this is still something we have control over if we wish. I can use my LLM to empower my agency by lifting me up over my weak spots, organizing and retrieving the information I need, and helping me reach my goals effectively. It takes continuous effort, though. The products don’t make human agency inevitable, and they seem to be selling us on disposing of it.
Similar pathways are emerging for other AI-related technologies. In San Francisco, near where I live, the streets are dominated by driverless cars. As an occasional Uber driver, I drive among them as part of what looks like a vanishing breed, finding fewer and fewer rides as the Waymos, Cybercabs, and Zoox take over the market. I really enjoy driving and helping people get from place to place, and I wonder if someday I’ll need a special license to operate my own vehicle among the machines. That kind of future doesn’t appeal to me, and I’ll fight it any way I can. I want my agency, driving my own bus where and when I want.
And that is where Johar’s “optioneering” becomes more than a climate framework: it becomes a way to evaluate technology itself. Not gauged on capability, profitability, or even safety, but on whether it increases the range of futures in which ordinary people can retain dignity and truly thrive as human beings.
And that loops me back to the Foundation metaphor one last time. Seldon’s genius is compelling, but his certainty is also a kind of trap: once you accept inevitability, your role becomes management, not transformation. You become the technician of decline. Maybe that’s why Johar’s talk ultimately landed for me, not because it denied the scale of the crisis, but because it refused to let catastrophe become the only story available. It insisted that the work is to keep options alive: ecological, institutional, cultural, technological, human.
I know, I know. Asimov was writing a story, and psychohistory was his real protagonist. He wanted a science of prediction to take center stage, and for that it was necessary to have the scientist who created it abandon any hope for the empire. Real life wouldn’t fit that drama so neatly, and I don’t expect our future here on Earth to play out in any simplified narrative of inevitable collapse or ascension to utopia. I expect it to be messy as hell, with some real thrills along the way, along with some incredibly sorrowful tragedies. Some of those have already been felt by many, and there will be many more to come.
We can’t afford to give in to despair and expect everything to go bad. We have to have the courage — like Johar — to see how bad things could be and imagine — like he and Solnit — how we could get past them with our planet’s health and our species’ dignity intact.
What would it look like to build one small “open garden” where a closed project is failing? In climate, in AI, in a neighborhood, in a workplace, in your own habits?
Because if we’re going to live through the whiplash, we’ll need more than predictions. We’ll need practice. We’ll need coordination. We’ll need new definitions of value. And we’ll need to defend the most fragile resource of all: the human capacity to participate in shaping the future.