Note: I spend a lot of my time on Substack and Telegram, but lately I've been forced onto LinkedIn. I hate it. It's full of snake-oil salesmen and every manner of UAS manufacturer claiming their drone is "AI." Their marketing materials are a cacophony of jargon and nomenclature, with no substance to be found. That experience inspired this checklist. Oh… vaporware = software or hardware that has been advertised but is not yet available to buy, either because it is only a concept or because it is still being written or designed.
Overuse of the Term “AI”
In today’s defense marketplace, “AI” is the most abused acronym in circulation. Any product that uses automation, statistics, or a pre-trained model is now sold as “AI-powered.” But what’s actually under the hood?
Often, the reality is far more mundane:
● A basic rules engine.
● Simple automation with no adaptive learning.
● Off-the-shelf APIs like OpenAI or computer vision libraries, rebranded with a slick UI.
In short: not intelligence, and not autonomous in any meaningful sense.
In plain terms, AI refers to performing tasks that would normally require human intelligence. That means tools we've had for years, built purely on traditional computing, aren't truly AI. What makes something "artificially intelligent" isn't the hardware itself, but the models and the training behind it that lead to a human-like conclusion. Given enough time, electricity, and compute (say, by chaining together enough home PCs), you could eventually reach the same output, assuming identical inputs and model architecture. The result isn't unique to specialized hardware; what changes is how fast and efficiently you get there. Speed enhances utility, especially in real-time applications, but speed alone doesn't make something AI. I can mine Bitcoin on an AI chip like a Jetson Nano. That doesn't make the mining AI.
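To make that concrete, here is a minimal sketch in Python using a toy two-layer network (the architecture, sizes, and seed are my inventions for illustration, not any vendor's model). Identical weights plus identical inputs yield identical outputs; only the wall-clock time changes with the hardware.

```python
# Toy deterministic "model": the output is fixed by weights + input,
# not by the chip doing the multiplication.
import numpy as np

rng = np.random.default_rng(42)        # fixed seed: the same "model" anywhere
W1 = rng.standard_normal((16, 8))      # layer-1 weights
W2 = rng.standard_normal((8, 2))       # layer-2 weights
x = rng.standard_normal(16)            # one fixed input

hidden = np.tanh(x @ W1)               # hidden activation
output = hidden @ W2                   # raw output scores

# A laptop, a Jetson, or a rack of chained home PCs prints the same numbers
# (up to floating-point noise); faster hardware only shortens the wait.
print(output)
```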
Hate Subscriptions? Me too! You can support me with a one-time contribution to my research at Buy Me a Coffee. https://buymeacoffee.com/researchukraine
Evaluating AI Claims in UxS and C4ISR Systems
The problem gets worse in the defense and national security space, where secrecy, complexity, and marketing hype intersect. Many companies claim their UxS (uncrewed systems) or C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance) platforms are “AI-powered,” “smart,” or “autonomous.” But what are they really selling?
Below is a skeptical framework for evaluating battlefield AI claims. These eight categories serve as a checklist—each one a filter for detecting vaporware, exaggeration, or unproven capability.
1. What hardware does the AI run on?
Purpose: Does the compute architecture match the performance being promised?
Ask:
● What onboard processor is being used? Jetson Nano? Xavier NX? Raspberry Pi with Coral? ARM SoC?
● What is the power draw, thermal profile, and battery impact?
● Is inference done on-device, at the edge, or remotely in the cloud?
Red Flags:
● Vague answers like “custom chip” or “proprietary board.”
● No acknowledgment of SWaP (Size, Weight, and Power) constraints.
● Claims of “real-time” performance, but the AI runs only in simulation or offline environments.
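One way to pressure-test a "real-time" claim is to measure tail latency on the deployed board itself. Below is a minimal sketch; `run_inference` is a hypothetical stand-in for the vendor's actual model call, and the 30 FPS frame budget is my assumption.

```python
# Sanity-check "real-time": does 95th-percentile inference latency fit the
# sensor's frame budget on the actual SWaP-constrained hardware?
import time

FRAME_BUDGET_MS = 1000.0 / 30.0        # assumption: a 30 FPS video feed

def p95_latency_ms(run_inference, frame, trials=100):
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        run_inference(frame)
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return samples[int(0.95 * len(samples)) - 1]   # 95th percentile

if __name__ == "__main__":
    placeholder = lambda frame: sum(frame)          # stand-in workload
    p95 = p95_latency_ms(placeholder, list(range(10_000)))
    print(f"p95 = {p95:.2f} ms, real-time: {p95 < FRAME_BUDGET_MS}")
```

If the vendor can only demonstrate this on a lab GPU, or only in simulation, "real-time" is a marketing adjective.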
2. Who verified the model’s performance, and how?
Purpose: Is performance real or just marketing?
Ask:
● Has the system been tested in GPS-denied or EW-contested environments?
● What are the false positive rates, confidence intervals, or confusion matrix results? (See the sketch after this list.)
● Who did the validation? A government partner? Independent lab? Operational end-users?
Red Flags:
● All testing done in-house or on synthetic data.
● No battlefield or contested-environment evaluation.
● No third-party audit or verification.
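None of these are exotic statistics. Anyone who has actually validated a detector can produce them from labeled trials; here is a minimal sketch in plain Python, with invented placeholder labels standing in for ground truth and detections.

```python
# Confusion-matrix counts and false positive rate from labeled trials.
def confusion(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # ground truth from independent labelers
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]   # the system's detections
tp, fp, fn, tn = confusion(y_true, y_pred)
print("false positive rate:", fp / (fp + tn))   # 0.25 on this toy data
print("recall:", tp / (tp + fn))                # 0.75 on this toy data
```

A vendor who cannot hand you numbers like these, computed by someone other than their own team, has not validated anything.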
3. What happens when the AI fails?
Purpose: Systems will fail. What happens next?
Ask:
● Is there a human-in-the-loop (HITL) or on-the-loop oversight?
● How does the system behave when confidence is low? (See the gating sketch after this list.)
● Can operators override AI decisions in real-time?
Red Flags:
● Full autonomy claims with no safety nets.
● No explainability, logging, or feedback loops.
● Black box behavior in high-stakes scenarios.
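For illustration, here is a minimal sketch of confidence-gated oversight: below a threshold, the system defers to the operator and logs the handoff. The `Detection` type and the 0.85 floor are my assumptions, not any fielded system's design.

```python
# Human-on-the-loop gating: low confidence -> defer to operator, always log.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
CONFIDENCE_FLOOR = 0.85                 # assumption: tune per mission and ROE

@dataclass
class Detection:
    label: str
    confidence: float

def decide(det: Detection) -> str:
    if det.confidence < CONFIDENCE_FLOOR:
        logging.info("low confidence %.2f on %s: deferring to operator",
                     det.confidence, det.label)
        return "DEFER_TO_HUMAN"
    logging.info("confidence %.2f on %s: flagged for operator review",
                 det.confidence, det.label)
    return "FLAG_FOR_REVIEW"

print(decide(Detection("vehicle", 0.61)))   # -> DEFER_TO_HUMAN
```

Note that even the high-confidence path ends at a human review step; full autonomy with no such seam is the red flag.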
4. How is the training data sourced and updated?
Purpose: Good AI depends on good data.
Ask:
● Was the model trained on real-world data or synthetic approximations?
● How was the data labeled and validated?
● Is the model retrained as threats evolve? (See the drift-check sketch after this list.)
Red Flags:
● “Proprietary dataset” with no further details.
● No adversarial training or defense against spoofing/poisoning.
● No stated update cycle or tuning process.
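One honest answer to "is the model retrained as threats evolve" is a monitored drift metric that triggers retraining. Here is a minimal sketch using a PSI-style (population stability index) binning heuristic; it is one common approach, not the only valid one, and every number below is a placeholder.

```python
# Flag distribution drift between training data and field data.
import math

def psi(train, field, bins=10):
    lo, hi = min(train), max(train)
    width = (hi - lo) / bins or 1.0
    score = 0.0
    for b in range(bins):
        left, right = lo + b * width, lo + (b + 1) * width
        p_train = max(sum(left <= v < right for v in train) / len(train), 1e-6)
        p_field = max(sum(left <= v < right for v in field) / len(field), 1e-6)
        score += (p_field - p_train) * math.log(p_field / p_train)
    return score

train = [0.1 * i for i in range(100)]         # stand-in training feature values
field = [0.1 * i + 3.0 for i in range(100)]   # the field distribution has shifted
print("retrain" if psi(train, field) > 0.25 else "ok")   # 0.25: a common PSI cutoff
```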
5. What sensors feed the AI, and how are they fused?
Purpose: Sensor fusion is hard. Most vendors don't do it well.
Ask:
● What sensors are involved—EO, IR, LiDAR, RF, acoustic?
● Is fusion performed at the edge or backhauled to a command node?
● How does the system handle conflicting sensor inputs? (See the fusion sketch after this list.)
Red Flags:
● Claims of “full fusion” with only 1–2 sensor types.
● No scoring or prioritization between inputs.
● Fusion logic is non-transparent or operator-inaccessible.
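At minimum, "fusion" should mean inspectable logic like the sketch below: a weighted combination plus an explicit conflict check. The sensor pair, weights, and threshold are invented for illustration.

```python
# Transparent two-sensor fusion with a disagreement check.
def fuse(eo_conf: float, ir_conf: float, w_eo: float = 0.6, w_ir: float = 0.4):
    fused = w_eo * eo_conf + w_ir * ir_conf       # weighted confidence
    conflict = abs(eo_conf - ir_conf) > 0.5       # sensors disagree strongly
    return fused, conflict

fused, conflict = fuse(eo_conf=0.9, ir_conf=0.2)  # EO sees a target, IR does not
print(f"fused={fused:.2f}, conflict={conflict}")  # fused=0.62, conflict=True
```

If a vendor cannot show even this level of scoring between inputs, "full fusion" means "we draw both feeds on the same screen."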
6. What domain or mission is the AI optimized for?
Purpose: There is no such thing as general-purpose AI on the battlefield.
Ask:
● Is this system tuned for urban ISR? Rural convoy detection? Maritime search?
● Has it been customized for the operational theater—Ukraine, Taiwan Strait, U.S. border?
● Are there adversary-specific modules?
Red Flags:
● Claims of plug-and-play utility in any theater.
● No mention of localization or fine-tuning.
● Buzzwords like “multi-domain” with no substance.
7. What bandwidth is required to operate the system?
Purpose: Bandwidth kills. Real AI must survive disconnection.
Ask:
● Can it run on low-bit-rate tactical radios or LTE fallback?
● Is inference done locally or in the cloud?
● What happens when comms go dark? (See the fallback sketch after this list.)
Red Flags:
● Requires constant uplink or “AI-as-a-service.”
● No fallback or degraded mode.
● Cannot operate in denied environments.
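Here is a minimal sketch of what a degraded mode can look like: prefer the remote model while the uplink is healthy, fall back to a smaller onboard model when comms go dark. `remote_infer`, `onboard_infer`, and `link_up` are hypothetical stand-ins.

```python
# Degraded-mode inference: the system must still answer with no uplink.
def classify(frame, link_up, remote_infer, onboard_infer):
    if link_up:
        try:
            return remote_infer(frame), "remote"
        except TimeoutError:
            pass                                   # uplink died mid-call
    return onboard_infer(frame), "onboard-degraded"

# Dummy models show the offline path still produces a (coarser) answer.
result, mode = classify([0.1, 0.2], link_up=False,
                        remote_infer=lambda f: "tank",
                        onboard_infer=lambda f: "vehicle (coarse)")
print(result, mode)   # -> vehicle (coarse) onboard-degraded
```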
8. What regulatory or legal frameworks have been applied?
Purpose: Especially for NATO and democratic allies, operational law matters.
Ask:
● Has the system been reviewed for ROE, civilian harm thresholds, or autonomy constraints?
● Can decisions be audited after the fact? (See the logging sketch after this list.)
● How is escalation avoided?
Red Flags:
● Vendor punts to the operator: “That’s your problem.”
● No audit trail.
● No explainability for how lethal decisions are made.
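An audit trail does not need to be elaborate to be real. Here is a minimal sketch of an append-only, hash-chained decision log that can be replayed after the fact; the field names are my assumptions, and a real schema would follow the applicable legal review.

```python
# Append-only decision log: each entry hashes the previous one, so after-the-fact
# tampering breaks the chain and audits can replay every recommendation.
import hashlib, json, time

def log_decision(log, model_version, inputs, output, operator_ack):
    prev = log[-1]["hash"] if log else "genesis"
    entry = {"ts": time.time(), "model": model_version, "inputs": inputs,
             "output": output, "operator_ack": operator_ack, "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

log = []
log_decision(log, "v1.3", {"sensor": "EO", "conf": 0.91}, "flag target", True)
print(log[0]["hash"][:16], "chained to:", log[0]["prev"])
```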
Benjamin Cook continues to travel to, often lives in, and works in Ukraine, a connection spanning more than 14 years. He holds an MA in International Security and Conflict Studies from Dublin City University and has consulted with journalists and intelligence professionals on AI in drones, U.S. military technology, and open-source intelligence (OSINT) related to the war in Ukraine. He is co-founder of the nonprofit UAO, working in southern Ukraine. You can find Mr. Cook between Odesa, Ukraine; Charleston, South Carolina; and Tucson, Arizona.
Agreed that "AI" is branding. But the rest of the piece is driven by criticism of that branding.
The reality is that if a device has bandwidth, even a low-quality cell signal, it can implement very complex actions: 100-drone swarms, F-16 blind kills, and eventually even coordinated 100-robot assault "dog swarms."
Heck, even the fact that a drone can fly like a hummingbird is astounding.
Similarly, if someone (like me) wants a lengthy research briefing on the mathematics of calibration of operators for representing solutions to systems of stochastic differential equations, an AI model will give me 70+ references and high-quality summaries (with equations).
Skepticism is good, but it is also good to appreciate important new technical achievements.
Well said. PLA writers use “intelligentization” to describe how AI shortens the kill chain, but their manuals still give artillery and combined-arms brigades the job of taking and holding ground. In my Substack on Chinese military tech I keep seeing that same blend: algorithms to find targets faster, massed fires and infantry to finish the fight. https://ordersandobservations.substack.com