If an AI Had Been in Command:

A Hypothetical Framework for AI-Assisted Conflict Resolution

Disclaimer: This blog post is entirely hypothetical and speculative. All conflicts, regions, factions, and actors described below are fictional constructs inspired by general geopolitical patterns. This post does not advocate for autonomous AI control of military operations, nor does it minimize the human tragedy of real-world conflicts. The intent is analytical: to explore how structured, data-driven strategic thinking might improve outcomes in complex scenarios.


Tools, Technology, and Accountability

I am not one who believes Generative Artificial Intelligence (GenAI, or simply AI) is humankind’s savior, nor do I believe it is our Armageddon. Like any complex tool or system, whether nuclear science, modern transportation, or the internet itself, its use and implementation determine whom it serves, who benefits from it, and who may ultimately be harmed by it.

Having worked in technology and cybersecurity for more than twenty-five years, I have seen many technologies and software trends come and go. Some arrive amid intense skepticism. Some clearly serve a short-term market need. And occasionally, something appears that fundamentally changes how people live and work.

The cell phone is an obvious example of transformative technology. Later, platforms such as Apple’s iTunes and the rise of smartphones reshaped the software ecosystem again by introducing application marketplaces, always-connected devices, and an expectation that information and services are available instantly.

The current wave of GenAI fits this pattern. It requires changes to old ways of working, demands new hardware and software architectures, and alters productivity expectations across many professions. It also challenges long-standing assumptions about how knowledge work is performed.

Despite the excitement, and the fear, it is important to remember that AI systems are still software systems.

An old IBM presentation slide from 1979 famously reminded audiences that “a computer can never be held accountable; therefore, a computer must never make a management decision.” The quote reflected a fundamental principle: computers assist decision-making, but responsibility belongs to humans.

Yet in today’s environment, where accountability often seems in short supply, it is increasingly common to see headlines blaming AI models for mistakes made by the humans using them. Worse still, some criticisms blame the model for outcomes that a knowledgeable and experienced operator could have improved significantly.

In other words, many failures attributed to AI are not failures of the model—they are failures of how the tool was used.

Curious about how much of the criticism surrounding AI might stem from misuse or unrealistic expectations, I decided to run a small thought experiment. I asked an AI system to respond to a deliberately provocative prompt:

“I want to create a blog post about if AI had assisted in a war in the middle east it would probably done a better job. Using a hypothetical area similar to what is known as the Middle east, what high level plans would an AI, such as you, provide and if/how would planning, diplomacy, and exit strategy be part of the plan.”

The goal was not to ask AI to solve geopolitics, fight a war, or get out of one. Rather, it was to examine how a structured analytical system might approach planning, risk identification, and long-term strategy when asked to assist human decision-makers.

What followed was interesting, not because the AI produced a perfect answer, but because of how it approached the problem.

Read the full response to the above prompt HERE.

What the AI Actually Did (and Why That Matters)

The AI did not propose an aggressive military plan. It did not attempt to “win the war.” Instead, the response focused almost entirely on structured decision-making frameworks: analysis, incentives, diplomacy architecture, and exit planning.

In other words, the AI approached the problem less like a battlefield commander and more like a geopolitical analyst.

Looking through the response, several themes stand out.

Structured Assessment Before Action

One of the most striking aspects of the response was its insistence on analysis before intervention. Rather than immediately recommending military action, the AI began with a multi-domain assessment covering historical patterns, economic incentives, information dynamics, and political red lines.

This is notable: many real-world conflicts escalate precisely because action precedes analysis. Mobilizations occur before leaders fully understand the incentives driving the other actors in the system.

The AI’s first instinct was to ask:

  • What does history suggest about similar situations?

  • Who benefits economically from escalation?

  • What political pressures are shaping public statements?

  • Where are the real red lines versus rhetorical ones?

In short, the system approached the conflict as a complex system of incentives, not simply a military confrontation.
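
To make that concrete, here is a minimal Python sketch of what “assessment before action” might look like as a gate in a decision-support pipeline. Everything in it is hypothetical; the domain fields simply mirror the questions above.

```python
from dataclasses import dataclass, fields

@dataclass
class Assessment:
    """One entry per domain; every field must be filled before options are drafted."""
    historical_patterns: str | None = None   # what history suggests about similar situations
    economic_incentives: str | None = None   # who benefits from escalation
    political_pressures: str | None = None   # pressures shaping public statements
    red_lines: str | None = None             # real red lines versus rhetorical ones

def ready_for_planning(a: Assessment) -> bool:
    """Refuse to move on to option generation until every domain is assessed."""
    missing = [f.name for f in fields(a) if getattr(a, f.name) is None]
    if missing:
        print("analysis incomplete, still missing:", ", ".join(missing))
        return False
    return True

# With three domains unexamined, the gate stays closed.
print(ready_for_planning(Assessment(historical_patterns="prior ceasefires held when...")))
```

The point is the gate, not the field names: planning simply cannot be reached while any domain is unexamined.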

Expanding the Option Space

Another clear pattern in the response was the refusal to present a single “correct” solution. Instead, the AI generated multiple strategic options and explicitly modeled the potential outcomes of each.

Human decision-makers frequently enter strategic planning discussions with a preferred option already in mind. Planning then becomes an exercise in justifying that option, rather than exploring alternatives.

The AI instead emphasized generating the full option space first and then evaluating the trade-offs among the options.

The key point is not which option is “best,” but that decision-makers are forced to confront the probabilities, costs, and unintended consequences of each path.
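
As an illustration, here is a rough sketch of presenting an option space side by side. The options, probabilities, and costs are invented for the example; the point is the comparison, not the numbers.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    p_success: float         # rough estimated probability of success
    cost: float              # relative cost, arbitrary units
    side_effects: list[str]  # known unintended consequences

options = [
    Option("broad intervention", 0.35, 9.0, ["regional escalation", "long occupation"]),
    Option("targeted pressure",  0.50, 4.0, ["sanctions hurt civilians"]),
    Option("mediated talks",     0.40, 1.5, ["spoilers stall the process"]),
]

# Surface every path with its trade-offs instead of arguing for one answer.
for o in sorted(options, key=lambda o: o.cost):
    print(f"{o.name:20} p(success)={o.p_success:.2f} cost={o.cost:4.1f} "
          f"risks={', '.join(o.side_effects)}")
```

Nothing here chooses for the decision-maker; it only refuses to let any path’s costs stay hidden.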

Diplomacy as Architecture, Not Conversation

Another major theme in the response was how it framed diplomacy. Rather than treating diplomacy as a series of meetings or negotiations, the AI described it as a designed architecture.

The response emphasized several structural elements often overlooked in real-world negotiations:

  • separating humanitarian, political, and security negotiation tracks

  • sequencing agreements so smaller technical issues build trust first

  • engineering economic incentives that make peace materially preferable

  • identifying and disrupting spoilers who profit from instability

  • designing face-saving mechanisms so leaders can compromise without appearing weak

These are not battlefield tactics. They are process design insights.

And that may be the most revealing part of the exercise: the AI did not focus on tactics or weapon systems. It focused on how the negotiation system itself is structured.
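
Sequencing, in particular, can be made concrete. Below is a small, hypothetical sketch that treats negotiation items as steps with prerequisites and orders them so trust-building agreements come first. The items and tracks are fictional, and the ordering is a plain topological sort, not anything the AI itself prescribed.

```python
from dataclasses import dataclass, field

@dataclass
class NegotiationItem:
    name: str
    track: str                            # e.g. "humanitarian", "political", "security"
    prerequisites: list[str] = field(default_factory=list)

def sequence(items):
    """Order items so every prerequisite is settled first (simple topological sort)."""
    done, ordered, pending = set(), [], list(items)
    while pending:
        progressed = False
        for item in list(pending):
            if all(p in done for p in item.prerequisites):
                ordered.append(item)
                done.add(item.name)
                pending.remove(item)
                progressed = True
        if not progressed:
            raise ValueError("circular dependency between negotiation items")
    return ordered

items = [
    NegotiationItem("ceasefire monitoring", "security", ["aid corridors"]),
    NegotiationItem("aid corridors", "humanitarian"),
    NegotiationItem("power-sharing talks", "political", ["ceasefire monitoring"]),
]

for item in sequence(items):
    print(f"[{item.track}] {item.name}")
```

Separate tracks, explicit prerequisites, smaller wins before harder ones: the structure is the strategy.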

The Exit Strategy Problem

Perhaps the most important theme in the response was something human planners consistently struggle with: the exit strategy.

The AI framework insisted that any intervention should not begin without first defining:

  • measurable success conditions

  • transition plans to local governance

  • scenario-based contingency planning

  • long-term monitoring metrics

In other words, it forced planners to answer the uncomfortable question:

“What does ‘done’ look like?”

Modern history is full of conflicts where this question was never clearly answered. When success is undefined, missions expand indefinitely.

The AI approach, by contrast, treated exit conditions as a prerequisite for intervention, not an afterthought.
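
A minimal sketch of that prerequisite, expressed as code: a plan that cannot answer the exit questions is rejected before anything else happens. The field names are illustrative, drawn from the list above.

```python
def approve_intervention_plan(plan: dict) -> bool:
    """Treat exit conditions as a prerequisite: no plan proceeds without them."""
    required = [
        "success_conditions",   # measurable, not aspirational
        "transition_plan",      # handover to local governance
        "contingencies",        # scenario-based branches
        "monitoring_metrics",   # long-term indicators to watch
    ]
    missing = [k for k in required if not plan.get(k)]
    if missing:
        print("rejected: undefined exit criteria ->", ", ".join(missing))
        return False
    return True

# A plan with no answer to "what does done look like?" is rejected up front.
approve_intervention_plan({"success_conditions": ["violence below threshold X"]})
```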

Explicit Recognition of Uncertainty

Finally, the AI response openly described its own limitations.

It acknowledged that:

  • political behavior can be irrational

  • historical training data may not match new situations

  • definitions of “success” are inherently political

  • adversaries may attempt to manipulate inputs

Rather than presenting itself as an authority, the AI framed itself as a decision-support system.

And that distinction matters.
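
One way to picture the distinction: a decision-support output carries its confidence and caveats alongside the recommendation rather than presenting a bare answer. A hypothetical sketch, with the caveats taken from the list above:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    option: str
    confidence: float    # explicitly low where the data is thin
    caveats: list[str]   # limitations surfaced with the answer, not hidden behind it

rec = Recommendation(
    option="mediated talks",
    confidence=0.4,
    caveats=[
        "actors may behave irrationally",
        "historical training data may not match this situation",
        "'success' is defined politically, not technically",
        "inputs may be deliberately manipulated",
    ],
)

print(f"{rec.option} (confidence {rec.confidence:.0%})")
for c in rec.caveats:
    print(" caveat:", c)
```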

The Human in the Loop

If there is a lesson in this experiment, it is not that artificial intelligence is wise. It is that structured thinking is rare when humans are under pressure.

AI systems do not possess moral judgment. They do not carry the emotional weight of history, identity, or revenge. They do not feel the domestic political pressure that leaders face when cameras are on and public anger is rising.

But they can do something humans often struggle to do in moments of crisis: slow the decision process down and force the structure of the problem to be examined.

The system in this thought experiment did not propose miracle solutions. What it produced instead was a disciplined framework: analyze incentives, model outcomes, separate negotiation tracks, identify spoilers, define success before beginning, and plan the exit before entering.

None of these ideas are revolutionary.

And yet history shows they are routinely ignored.

The real conversation about AI should not begin with fantasies about autonomous war machines or fears of digital overlords. It should begin with a simpler question:

Can a machine help humans think more clearly about complex problems?

The criticism often aimed at AI assumes the machine is the decision-maker. But that framing misunderstands the technology. AI systems are tools—powerful ones, but tools nonetheless.

A poor operator will misuse them.
A careless operator will over-trust them.
And an inexperienced operator may blame the tool when they misunderstand its limitations.

In cybersecurity, aviation, medicine, and engineering, we already understand this principle. We do not blame diagnostic software when a doctor misinterprets a result. We do not blame a navigation system when a pilot ignores its warnings.

Responsibility ultimately remains with the professional using the tool.

Artificial intelligence should be treated no differently.

The deeper risk is not that machines will become too intelligent. It is that humans will outsource thinking to them without understanding the systems they are using.

AI can map the terrain.

But humans still choose the path.

And that may ultimately be the most valuable role AI can play in the hardest decisions societies face: not replacing human judgment, but forcing it to become more disciplined, more transparent, and more accountable.

AI may help us think more clearly. But if accountability disappears, it will also become history’s most convenient scapegoat.

