Why Military Autonomy Is Failing at the Edge, and Why Overland AI Is Just the Latest Shiny Distraction

The defense industry has a fever. The only prescription, according to the brochures currently being handed out at AUSA, is more "autonomy." Specifically, the kind of off-road, high-speed autonomy Overland AI is pitching with its Overdrive system. They want you to believe that sticking a sophisticated sensor suite on a robotic vehicle and letting it rip through the dirt is the silver bullet for the modern battlefield.

It isn’t.

In fact, the obsession with "all-terrain" autonomous navigation is a massive architectural misdirection. While the industry cheers for successful 30-mph sprints through scrubland, it is ignoring the brutal physics and digital realities that make these systems a liability in a peer conflict. We are building Ferraris for a fight that requires a mule with a brain, not a computer that panics the second the GPS signal gets jammed or the lidars get caked in mud.

The Myth of the "Smart" Vehicle

The prevailing wisdom—the "lazy consensus" you’ll hear in the hallways of the Walter E. Washington Convention Center—is that better algorithms lead to better survivability. The logic goes like this: if the AI can perceive the difference between a puddle and a tank trap at high speed, the mission succeeds.

I’ve seen programs burn through nine-figure budgets chasing this exact ghost. The problem isn't the perception; it’s the brittleness of the entire stack.

Most autonomous systems today rely on a "heavy edge" approach. They pack massive compute power onto the vehicle to process dense point clouds from Lidar and high-res video. This makes for a great demo. But in a real electronic warfare environment? That high-speed processing creates a thermal and electromagnetic signature that might as well be a "shoot here" sign for any adversary with basic RF sensing.

We are trading human lives for expensive, loud, and electromagnetically "bright" robots that can’t handle the one thing humans excel at: intuitive improvisation.

Perception is not Intelligence

Overland AI and its competitors focus heavily on "off-road" navigation. They use machine learning to classify terrain. But here is the nuance everyone misses: Terrain classification is a solved problem; intent is not.

A robot can identify a 20% grade or a dense thicket of trees. It cannot, however, understand why it shouldn't take the most "efficient" path if that path exposes it to a known ATGM (Anti-Tank Guided Missile) lane. Current autonomous stacks are tactically illiterate. They prioritize "not hitting a rock" over "not getting blown up."
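To make that concrete, here is a toy sketch of what "tactically literate" planning would look like: a Dijkstra grid planner whose step cost blends terrain mobility with an exposure map. Everything below—the grid, the threat values, the weight—is invented for illustration, not any fielded planner. With the exposure weight at zero, the planner drives straight through the overwatched lane; with it turned on, it takes the longer, covered route.

```python
import heapq

def plan(terrain, start, goal, exposure, w_exposure):
    """Dijkstra over a 4-connected grid; step cost = terrain + w * exposure."""
    rows, cols = len(terrain), len(terrain[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + terrain[nr][nc] + w_exposure * exposure[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, cell = [goal], goal            # walk prev-pointers back to start
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

terrain = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]   # uniform mobility cost
threat  = [[0, 10, 0], [0, 10, 0], [0, 0, 0]]  # column 1 overwatched, except row 2

naive    = plan(terrain, (0, 0), (0, 2), threat, w_exposure=0.0)
literate = plan(terrain, (0, 0), (0, 2), threat, w_exposure=1.0)
```

The naive run cuts straight across the exposed column; the exposure-weighted run detours through the bottom row. The point is not the code—it is that the threat term has to exist in the cost function at all, and today, mostly, it doesn't.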

When you see these systems at AUSA, ask the presenters a single question: How does the system’s path planning change when the EW (Electronic Warfare) environment goes dark and it loses its PNT (Positioning, Navigation, and Timing) data?

If the answer involves "relying on internal IMUs and visual odometry," they are selling you a paperweight. In high-intensity conflict, visual odometry fails the moment the smoke grenades pop or the dust from an artillery strike obscures the cameras.
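The arithmetic behind that claim is unforgiving. Here is a back-of-envelope sketch of pure inertial dead reckoning: double-integrate a small constant accelerometer bias and position error grows with the square of time. The 0.01 m/s² bias is a hypothetical round number for illustration, not any specific IMU's spec sheet.

```python
def dead_reckoning_error(bias_mps2, seconds, dt=0.1):
    """Integrate a constant accelerometer bias twice into position error (m)."""
    vel_err = pos_err = 0.0
    for _ in range(round(seconds / dt)):
        vel_err += bias_mps2 * dt   # bias accumulates into velocity error
        pos_err += vel_err * dt     # velocity error accumulates into position error
    return pos_err

# Ten minutes without PNT at a 0.01 m/s^2 bias: roughly 0.5 * b * t^2,
# i.e. on the order of 1.8 km of position error.
error_m = dead_reckoning_error(0.01, 600)
```

Ten minutes of denied PNT and the vehicle's self-estimate is off by kilometers. That is what "relying on internal IMUs" buys you once the cameras are blind.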

The High-Speed Fallacy

There is a strange fetishization of speed in the autonomous vehicle space. "We can navigate at tactical speeds," the PR says.

Speed is a defense, sure. But speed without stealth is suicide. The current crop of autonomous prototypes is loud—mechanically and digitally. By focusing on high-speed traversal of rugged terrain, these companies are solving for a racing problem, not a combat problem.

  • The Weight Penalty: To go fast off-road, you need heavy-duty suspension and massive power. This adds weight, cost, and acoustic signature to the vehicle.
  • The Data Penalty: High speed requires high frame rates and low latency. This forces the use of active sensors like Lidar, which emit light. In a world of cheap night vision, an active Lidar sensor is a lighthouse.

We should be moving toward "passive-first" autonomy. If a system can't navigate using only ambient light or thermal signatures at a walking pace, it has no business being on a 2026 battlefield. Instead, we are building screaming-fast robots that tell the enemy exactly where they are.

The Integration Trap

The AUSA crowd loves to talk about "modular architectures." They claim you can just "drop" an autonomy stack like Overdrive onto any platform—an RCV-Light, a converted Humvee, or a logistics truck.

This is a fundamental misunderstanding of vehicle dynamics.

Imagine a scenario where a software update for a steering controller is optimized for a 2,000-pound robotic mule but is deployed on a 10,000-pound armored scout. The latency in the hydraulic actuators, the center of gravity, and the tire slip ratios are entirely different. "Platform-agnostic" is a buzzword used by people who have never had to recalibrate a PID loop in the mud while taking fire.
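The physics here is easy to demonstrate. Below is a deliberately crude point-mass simulation—the gains, masses, and step command are all made up for illustration—showing the same PD control law that settles cleanly on a light platform overshooting badly on a heavy one. Nothing about the software changed; only the mass did.

```python
def peak_response(mass_kg, kp=800.0, kd=1700.0, target=1.0,
                  dt=0.01, steps=2000):
    """Simulate a point mass tracking a 1 m step command under PD control.
    Returns the peak position reached; a peak above 1.0 means overshoot."""
    pos, vel, peak = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = target - pos
        force = kp * err - kd * vel      # P on error, D on rate
        vel += (force / mass_kg) * dt    # heavier mass -> slower correction
        pos += vel * dt
        peak = max(peak, pos)
    return peak

light = peak_response(900.0)    # roughly a 2,000 lb platform: settles cleanly
heavy = peak_response(4500.0)   # roughly a 10,000 lb platform: overshoots ~20%
```

Gains that are near critically damped at 900 kg are badly underdamped at 4,500 kg. Scale that from a toy point mass up to real actuator latency, fuel slosh, and tire slip, and "drop-in autonomy" stops being a software problem and becomes a vehicle-dynamics problem.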

When autonomy is treated as an "app" rather than a core mechanical component, the result is a system that "porpoises" on uneven ground or flips itself during a high-speed turn because the software didn't account for the slosh of a half-empty fuel tank.

Why We Are Asking the Wrong Questions

The question I hear most often is: "When will autonomous tanks replace humans?"

The premise is flawed. We don't need autonomous tanks. We need autonomous functions. The industry is obsessed with the "driverless" aspect because it's sexy and easy to film for a sizzle reel. The real value is in the boring stuff: autonomous power management, automated signature reduction, and collaborative sensing.

Instead of trying to make a robot drive like a rally racer, we should be making robots that can sit silently for three weeks, wake up, move 50 meters to a new hide site without human input, and go back to sleep. But "stationary endurance" doesn't win contracts at AUSA. High-speed dirt roosting does.

The Cost of the "Perfect" Robot

We are over-engineering these platforms into extinction. By demanding that a system like Overland’s handle "any" terrain, we drive the per-unit cost so high that commanders become afraid to lose them.

The whole point of unmanned systems is attrition. They should be expendable. If a robot costs $2 million because of its exquisite sensor suite, it isn’t a robotic wingman—it’s a liability that requires its own security detail.

True disruption in this space won't come from the company that navigates the forest the fastest. It will come from the company that makes a "good enough" navigator for $50,000 using off-the-shelf mobile phone processors and passive cameras.

The Brutal Reality of the Software-Defined Battlefield

The "Overdrive" approach represents the peak of the "Software-Defined Vehicle" hype. But software cannot override the laws of physics or the reality of a contested spectrum.

When you see the demos this week, look past the ruggedized chassis and the smooth path-planning lines on the monitor. Look for the cables. Look for the massive cooling fans required to keep the GPUs from melting. Look for the active sensors that are screaming "I am here" to every ELINT (Electronic Intelligence) receiver within ten miles.

We are building magnificent toys for a type of war that ended in 1991. The future isn't a high-speed autonomous dash across the desert; it’s a slow, silent, and incredibly cheap crawl through a landscape where every active emission is a death sentence.

Stop buying the sizzle. The steak is a sensor-heavy, power-hungry, tactically deaf machine that will be the first thing to die in a real fight.

Build for the silence, or don't build at all.


Lily Young

With a passion for uncovering the truth, Lily Young has spent years reporting on complex issues across business, technology, and global affairs.