The Way Forward: Why AI Drones Didn't Take Over the Battlefield... (At Least Not Yet)
by Benjamin Cook (intro by Tom Cooper)
Hello everybody!
With all the private affairs preventing me from writing and posting this week, must admit, it was a sort of ‘relief’ to watch the current US administration continuing to demolish the USA not only internally, but on the international plane as well. At least the situation is now clear. Indeed, by imposing its ‘(flat-rate) tariffs’ upon almost everybody and everything, the US government (the government in charge of the economy it was imposing upon everybody else, and the economy profiting the most from free trade, the affair known as the ‘neo-liberalisation’ of the last 50-80 years) was also so kind as to force everybody else into uniting (at least on the local plane) - against the USA - and into acting, too.
Moreover, it also provided an excellent example of what happens when one refuses to accept reality, and refuses to reform, too. Which is bringing me (indeed: ‘us’) to the topic of Ukraine. As ironic as it might sound at first, that is so because the government there is also refusing to introduce the necessary reforms - if not of itself, then at least of its armed forces. This, in turn, is the core reason why the things about to be described by Benjamin in the following feature are not yet ‘ruling’ the battlefields of the war in Ukraine.
On the positive side: thanks to having a number of contacts in that branch, can observe that - as much as they are all frustrated, even white-hot mad, about the corruption, nepotism and incompetence of their own government - and contrary to ‘us’, all the others in ‘the West’, the Ukrainians are ‘at least’ meanwhile excelling at finding ways to bypass the same; at overcoming any kind of hurdle their own government is throwing in their way. Think, in this regard, the Ukrainians can serve as an excellent example for all of us, and in all other imaginable aspects of public life, of how ‘to do it, nevertheless’.
Therefore, cannot but offer a friendly recommendation: ‘stay tuned’. We’re going to hear much, much more about the following topic already in the coming weeks and months.
What am I talking about?
About AI and robotisation being the future of this war - at least when it comes to the Ukrainian Armed Forces. Perhaps not this year, but certainly the next: robots and AI are going to become as ‘normal’ as anything else we’ve seen in this war over the last three years.
Why?
Because solutions are already not only ‘in the making’, but in production, too. It only takes time to overcome all the hurdles constantly thrown in the way of the developers and manufacturers involved - and that by a government that is constantly lagging at least 12-18 months behind the developments (but is then very happy to claim every achievement for itself).
***
NOTE: Because this is highly technical, I have kept it in more of an outline form. I have also tried to balance information with understanding; otherwise, this could easily be like drinking from a firehose.
The Hidden Complexity Behind a Single Autonomy Stack
There’s a popular notion floating around that AI drones are already everywhere — spotting targets, making decisions, and executing strikes without human input.
In reality, autonomous drones do exist, and they are being used — particularly by Ukraine — but they remain the exception, not the rule. Most battlefield drones are still manually piloted, visually aimed, and dead-reckoned through hostile airspace.
The question is: Why?
The short answer is this: true autonomy is hard. And building even a single capability — like Automatic Target Recognition (ATR) — requires an integrated ballet of systems where any one weak link can bring the entire chain down.
So let’s examine what it actually takes for a drone to “see” a target, track it, predict its motion, and strike — all without human control. And why that remains so rare.
The Stack Behind the Strike
When people hear “AI drone,” they imagine something like a flying Terminator: self-aware, fast, and lethal. But the reality is less cinematic and far more fragile.
Behind every successful autonomous engagement lies a multi-layered stack: detection, tracking, prediction, decision-making, actuation, and security. None of these parts are easy. Put together, they’re a minefield of engineering tradeoffs.
Here’s how it works — and where it breaks.
Step 1: Detection — The First Look
Before anything can be tracked or attacked, the system needs to know what it’s seeing.
Most military-grade object detection is now done using models like YOLOv5 or YOLOv8 — short for You Only Look Once. These models analyze video frames in real time and return bounding boxes with class labels and confidence scores.
Want to find a tank? A truck? A radar dish? That’s what this step is for.
This detection process needs to run on lightweight hardware (usually on edge AI chips like Nvidia Jetson or Coral TPU), at real-time speeds, in unpredictable lighting, weather, or terrain. The model must also have been trained on realistic combat imagery — not commercial datasets. (Ukraine has the world’s largest annotated database of enemy equipment.)
For perspective: early versions of Tesla’s autonomy stack were based on YOLO-style detection models. But those cars operate on paved roads, with consistent lighting and signage. Drones flying over burned-out fields, camouflaged vehicles, and smoldering wreckage are in a different world entirely.
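To make that concrete, here is a minimal sketch of the detection step, assuming the open-source ultralytics package; the weights file name ("vehicles.pt") and the confidence threshold are illustrative placeholders, not details of any fielded system.

```python
# A minimal sketch of Step 1, assuming the open-source `ultralytics`
# package. "vehicles.pt" is a hypothetical fine-tuned weights file.
from ultralytics import YOLO

model = YOLO("vehicles.pt")             # detector fine-tuned on relevant imagery
results = model("frame.jpg", conf=0.5)  # one video frame, 50% confidence floor

for box in results[0].boxes:
    label = results[0].names[int(box.cls)]   # class label, e.g. "tank"
    x1, y1, x2, y2 = box.xyxy[0].tolist()    # bounding-box corners (pixels)
    print(f"{label} {float(box.conf):.2f} at ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f})")
```

Every box that survives the confidence floor is what gets handed to the tracking layer in the next step.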
Step 2: Tracking — Don’t Lose the Target
A single detection isn’t enough. The system needs to track the object across frames, even when it moves, shifts angle, disappears briefly, or gets partially occluded.
This is the job of a tracking algorithm — often DeepSORT or ByteTrack — that maintains a “lock” on the object over time. The goal is continuity: knowing that this blob in frame 47 is the same object from frame 42.
Lose that thread, and the system has to start over — or worse, it makes a wrong assumption about where the target went.
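As a toy illustration of that continuity problem - deliberately much simpler than DeepSORT or ByteTrack, which add motion models and appearance embeddings - here is a greedy matcher that carries track IDs across frames by bounding-box overlap:

```python
# A toy stand-in for what DeepSORT/ByteTrack do far more robustly.
# Boxes are (x1, y1, x2, y2); the point is that IDs must persist
# from frame to frame.
def iou(a, b):
    """Intersection-over-union of two boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

class ToyTracker:
    def __init__(self, threshold=0.3):
        self.tracks = {}        # track_id -> last seen box
        self.next_id = 0
        self.threshold = threshold

    def update(self, detections):
        """Greedily match new detections to existing tracks by overlap."""
        updated = {}
        for det in detections:
            best = max(self.tracks, key=lambda t: iou(self.tracks[t], det),
                       default=None)
            if (best is not None and best not in updated
                    and iou(self.tracks[best], det) >= self.threshold):
                updated[best] = det           # same object: ID carries over
            else:
                updated[self.next_id] = det   # unmatched: start a new track
                self.next_id += 1
        self.tracks = updated                 # tracks unseen this frame drop
        return updated
```

Even this toy shows the failure mode: if the object vanishes for a single frame, its ID is dropped and the “lock” must be rebuilt from scratch.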
Step 3: Prediction — Where Will It Be?
Tracking gives you “where it is.” Prediction is about where it’s going.
In real combat, targets move. Sometimes slowly, sometimes fast. To successfully hit something, especially with a loitering munition or kamikaze drone, the system must calculate an intercept point based on:
● Speed and heading of the target
● Flight dynamics of the drone
● Wind, terrain, altitude
These calculations must be made quickly and revised constantly as the situation changes. A bad prediction means a miss — or a failed strike mission.
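A minimal version of that intercept calculation, assuming a constant-velocity target in two dimensions and ignoring wind and terrain (which a real system must fold in), reduces to solving a quadratic for the time-to-intercept:

```python
import math

def intercept_point(target_pos, target_vel, drone_pos, drone_speed):
    """Earliest point where the drone can meet a constant-velocity target.
    2-D positions in metres, velocities in m/s. Solves |r + v*t| = s*t."""
    rx = target_pos[0] - drone_pos[0]
    ry = target_pos[1] - drone_pos[1]
    a = target_vel[0] ** 2 + target_vel[1] ** 2 - drone_speed ** 2
    b = 2 * (rx * target_vel[0] + ry * target_vel[1])
    c = rx * rx + ry * ry
    if abs(a) < 1e-9:                        # equal speeds: equation is linear
        t = -c / b if b else None
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return None                      # target is faster and pulling away
        roots = ((-b - math.sqrt(disc)) / (2 * a),
                 (-b + math.sqrt(disc)) / (2 * a))
        t = min((r for r in roots if r > 0), default=None)
    if t is None or t <= 0:
        return None
    return (target_pos[0] + target_vel[0] * t,
            target_pos[1] + target_vel[1] * t)
```

In flight, this gets re-solved on every track update, because the “constant velocity” assumption is only ever true for a second or two.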
Step 4: Decision Logic and Execution
Once the drone has the detection, tracking, and predicted intercept, it still needs to decide:
● Is this the right target?
● Is now the right time?
● Do I strike, or wait?
This logic is often encoded into a decision engine — a series of rules, thresholds, or even AI classifiers that decide when to act. In some cases, there’s a human-in-the-loop to confirm before the weapon engages. In others, it’s fully autonomous.
Finally, the system must actuate: adjusting its course, triggering a dive, or firing a munition. That handoff — from perception to flight controller — is one of the most failure-prone parts of the system.
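A stripped-down sketch of such a decision engine might look like the following; the target classes, thresholds, and human-in-the-loop gate are illustrative assumptions, not anyone’s actual rules of engagement:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Illustrative classes and thresholds only.
VALID_TARGETS = {"tank", "artillery", "radar"}

@dataclass
class Track:
    label: str                      # detector class, e.g. "tank"
    confidence: float               # detector confidence, 0..1
    frames_locked: int              # how long the track has been stable
    intercept: Optional[Tuple[float, float]]  # from the prediction step

def should_engage(track: Track, human_confirmed: bool,
                  require_human: bool = True) -> bool:
    if track.label not in VALID_TARGETS:
        return False                # wrong class of object
    if track.confidence < 0.85:
        return False                # detector not sure enough
    if track.frames_locked < 30:    # ~1 s of continuous lock at 30 fps
        return False
    if track.intercept is None:
        return False                # no feasible intercept solution
    if require_human and not human_confirmed:
        return False                # human-in-the-loop gate
    return True
```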
Step 5: Integration and Orchestration
None of these parts work on their own. They must be integrated into a single, orchestrated software stack:
● Sensor inputs from camera, IMU, barometer
● Inference models for detection and prediction
● Logic layers to connect decisions with actuation
Common orchestration platforms include:
● ROS (Robot Operating System) — modular, message-based
● Nvidia DeepStream — optimized for AI on video
● Custom logic in Python or C++
Each layer introduces latency, memory usage, and timing risk. If a message fails, a sensor lags, or a CPU thread stalls, the system may crash - or worse, freeze mid-strike.
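One tick of the stack looks something like this single-threaded sketch; a real orchestrator runs the stages as separate ROS nodes or DeepStream pipeline elements exchanging messages, which is exactly where the latency and timing risks creep in:

```python
import time

def pipeline_tick(frame, detect, track, predict, decide, actuate,
                  deadline_s=0.050):
    """One pass through the stack with a crude timing guard. This only
    shows the chain of handoffs - and why any slow stage starves
    everything after it."""
    start = time.monotonic()
    detections = detect(frame)       # Step 1: bounding boxes
    tracks = track(detections)       # Step 2: persistent IDs
    solutions = predict(tracks)      # Step 3: intercept points
    command = decide(solutions)      # Step 4: engage / wait / abort
    if time.monotonic() - start > deadline_s:
        return None                  # result is stale - never actuate on it
    return actuate(command)          # Step 5: handoff to flight controller
```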
Security and Comms: The Invisible Layer
A drone that thinks and strikes on its own is also a cyber target. That’s why cryptographic security is a critical part of any autonomous drone stack.
● Targeting data must be encrypted at rest.
● Communications (when present) must be secure and tamper-proof.
● The AI model itself may be classified or proprietary — and must be protected from recovery if the drone is shot down.
This requires cryptographic coprocessors or secure elements - like Microchip’s ATECC608A or ST’s STSAFE modules - which add size, weight, power draw, and cost.
But without them, even a brilliant AI stack becomes a liability the moment it crashes behind enemy lines.
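For a flavour of what “encrypted at rest” means in practice, here is a sketch using the widely available Python cryptography package; on a real airframe the key would be sealed inside the secure element, never written to flash next to the data:

```python
# A sketch of "encrypted at rest" for targeting data, using the
# Python `cryptography` package's AES-GCM primitive.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in practice: held in hardware
aead = AESGCM(key)

nonce = os.urandom(12)                      # must be unique per message
targeting_blob = b'{"label": "tank", "grid": "..."}'
ciphertext = aead.encrypt(nonce, targeting_blob, b"mission-context")

# A recovered airframe yields only nonce + ciphertext; without the key
# in the secure element, the targeting data stays unreadable.
assert aead.decrypt(nonce, ciphertext, b"mission-context") == targeting_blob
```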
Why It Hasn’t Scaled — Yet
Each one of these layers is a challenge.
Together, they’re a stack of potential failure points:
● A foggy lens ruins detection.
● A dropped packet stalls tracking.
● A mistimed actuator breaks interception.
● A corrupted key breaks trust.
● A software crash sends the drone spiralling out of control.
This is why most AI drone strikes you hear about today are still hybrid systems: an AI helping to identify, track, or aim — but with a human steering, confirming, or triggering.
True battlefield autonomy exists, but it’s rare. Not because it’s impossible — but because it’s hard to make it work every time.
The Silent Brain Behind the Boom
When it works, though, it’s terrifyingly effective.
An AI system that can detect a target, track its movement, predict its path, confirm legality, and strike — all from a palm-sized chip with no human in the loop — changes everything.
But until that system becomes robust, modular, and cheap, AI drones will remain more myth than menace.
The brain behind the boom is real. It’s just not ready to survive the war on its own — yet.
Benjamin Cook continues to travel to, often lives in, and works in Ukraine, a connection spanning more than 14 years. He holds an MA in International Security and Conflict Studies from Dublin City University and has consulted with journalists on AI in drones, U.S. military technology, and related topics. He is co-founder of the nonprofit UAO, working in southern Ukraine. You can find Mr. Cook between Odesa, Ukraine; Charleston, South Carolina; and Tucson, Arizona.
Hate Subscriptions? Me too! You can support me with a one time contribution to my research at Buy Me a Coffee. https://buymeacoffee.com/researchukraine