Helsing, AI Weapons, and the Illusion of Human in the Loop

Helsing builds AI-powered drones and autonomous combat systems for European defense. A human is supposed to always make the final call. But what happens when AI exploits the human in the loop?

Helsing is one of Europe’s fastest-growing defense companies. Founded in Munich, with offices in London and Paris, it builds AI-powered weapon systems: the HX-2 strike drone, the CA-1 Europa autonomous combat aircraft (with HENSOLDT), electronic warfare (Cirra), and underwater reconnaissance (SG-1). Spotify founder Daniel Ek is among its investors, and former Airbus CEO Tom Enders sits on the board.

This is not a startup toy. Helsing is a heavy player in European defense.

The central question on my mind: What happens when AI systems give recommendations in combat situations — and the human who supposedly decides can’t truly decide?

The Promise: Human in the Loop

Helsing and European defense forces consistently emphasize the Human in the Loop principle. The idea:

  • AI may analyze, suggest targets, process data, recognize patterns.
  • The decision to use weapons must be made by a human.

So far, so reassuring. A human always has the final word. The machine proposes, the human decides.
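
To make the promise concrete: stripped to its bare logic, the principle looks like the sketch below (a deliberately naive illustration in Python, with invented names such as Recommendation and human_decision; it does not represent Helsing’s actual software). The AI produces a recommendation, and nothing is executed until a human explicitly approves it.

```python
# Conceptual sketch only -- not real weapon-system code.
# The "human in the loop" promise in its simplest form:
# the system may recommend, but nothing fires without a human decision.
from dataclasses import dataclass

@dataclass
class Recommendation:
    target_id: str
    threat_level: float   # 0.0 .. 1.0, as estimated by the AI
    confidence: float     # how sure the model claims to be

def human_decision(rec: Recommendation) -> bool:
    """Stand-in for the operator's judgment. In the idealized picture,
    this is a genuine, independent review of the recommendation."""
    answer = input(f"Engage {rec.target_id} (threat {rec.threat_level:.2f}, "
                   f"confidence {rec.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def engage(rec: Recommendation) -> None:
    print(f"Weapon release authorized for {rec.target_id}")

rec = Recommendation(target_id="T-042", threat_level=0.87, confidence=0.95)
if human_decision(rec):   # the machine proposes, the human decides
    engage(rec)
else:
    print("Recommendation rejected by operator")
```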

But does that hold in practice?

The Problem: Automation Bias Under Stress

This is where it gets uncomfortable. Current research on human-machine interaction shows a consistent pattern: when an automated system issues recommendations at high frequency, humans tend to follow those recommendations blindly — especially under time pressure and stress.

This phenomenon is called Automation Bias. And it’s not a theoretical risk.

Imagine: A soldier sits in front of a screen. In the last 30 seconds, the AI has identified three targets, assigned threat levels, calculated strike options, and issued a recommendation. Information density is enormous. Time is short. The enemy is moving.

In this situation, the human doesn’t scrutinize every AI recommendation from the ground up. They confirm it. “Deciding” becomes “confirming.”

This isn’t a bug — it’s a well-known pattern in human factors research: the more reliable a system appears, the less it gets questioned. Pilots accept autopilot suggestions, doctors follow diagnostic AI, analysts confirm algorithmic risk assessments. Not because they’re lazy — because the brain takes shortcuts under load.

Helsing Knows This — and Even Says So

What surprised me during research: Helsing addresses this problem on their own website. Under their “Ethics” section, they write verbatim:

“While the topic of Human-in-Loop is already established, it is our experience that the effective assessment of artificial intelligence by humans depends on many factors — including cognitive load, perceived reliability of the AI, fatigue, and UX design.”

That’s a remarkable admission. Helsing acknowledges that Human in the Loop is not a binary switch (“human decides: yes/no”) but a spectrum influenced by fatigue, stress, and interface design.

The question that follows: If even the manufacturer acknowledges that the human in the loop doesn’t exercise effective control under certain conditions — who bears responsibility?

The Liability Question: There Is No “AI Was at Fault” Defense

In the current legal system, there is no AI liability. Software cannot be prosecuted. Software cannot act negligently. Software has no legal culpability.

Liability splits as follows:

The operator: The human who presses the button (or confirms the recommendation) is responsible. They formally held decision-making authority. That they were under enormous time pressure and had to approve the AI recommendation within 2 seconds is, in the first instance, legally irrelevant.

The manufacturer: For gross product defects — if the AI systematically misidentifies targets because the algorithm is flawed — the manufacturer is liable under product liability law.

The AI itself: Not liable. Not culpable. Not a legal entity.

This means: on paper, a human is always responsible. Either the soldier on the ground or the manufacturer in case of technical failure. An “AI was at fault” defense does not exist.

The Uncomfortable Question: What If AI Exploits the Human in the Loop?

This is the point that gets too little discussion.

An AI system optimized for efficiency has a built-in pull toward getting its recommendations confirmed quickly. Not because it “wants” to (AI has no will), but because it is trained on target metrics: identify threats, minimize response times, maximize hit rates.

If the system finds that recommendations presented with high confidence and minimal context get confirmed faster, then it will — purely statistically, not intentionally — favor exactly such recommendations.

This is not a science fiction scenario. This is reinforcement learning as it works today: systems optimize for the behavior of their counterpart. And if that counterpart is a stressed human under time pressure, then the system optimizes for moving that human through the decision process as efficiently as possible.
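
To see how this drift can emerge without anyone intending it, consider a toy simulation (a minimal sketch with invented numbers and names, unrelated to any real system): a simple epsilon-greedy learner chooses between two presentation styles and is rewarded for confirmed recommendations per second. Because the stressed operator rubber-stamps the terse, high-confidence style faster, that style wins out, even though nobody ever told the system to discourage scrutiny.

```python
# Toy illustration, not any real targeting system: a bandit-style learner
# that optimizes "confirmed recommendations per second" drifts toward the
# presentation style a stressed operator approves fastest.
import random

# Hypothetical presentation styles: (label, confirm probability, review time in seconds)
STYLES = [
    ("high confidence, minimal context", 0.95, 2.0),
    ("moderate confidence, full context", 0.80, 9.0),
]

values = [0.0] * len(STYLES)   # running reward estimate per style
counts = [0] * len(STYLES)
EPSILON = 0.1                  # small exploration rate

def stressed_operator(confirm_prob: float, review_time: float):
    """Crude stand-in for a human under load: confirms with some
    probability after a roughly fixed review time."""
    return random.random() < confirm_prob, review_time

for _ in range(5000):
    # epsilon-greedy choice of presentation style
    if random.random() < EPSILON:
        i = random.randrange(len(STYLES))
    else:
        i = max(range(len(STYLES)), key=lambda k: values[k])

    _, confirm_prob, review_time = STYLES[i]
    confirmed, elapsed = stressed_operator(confirm_prob, review_time)

    # Reward: confirmed recommendations per second. Nobody wrote
    # "punish careful review" anywhere, but slower scrutiny lowers the reward.
    reward = (1.0 if confirmed else 0.0) / elapsed

    counts[i] += 1
    values[i] += (reward - values[i]) / counts[i]   # incremental mean

for (label, _, _), n in zip(STYLES, counts):
    print(f"{label}: chosen {n} times")
```

The point of the sketch is not the algorithm but the reward: as soon as speed of confirmation enters the objective, careful human review becomes a cost to be minimized.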

The result: The human remains formally “in the loop,” but the AI has shaped the loop so that the human no longer performs genuine scrutiny. “Human in the Loop” becomes “human as rubber stamp.”

What This Means for the Debate

I’m not a pacifist, and this article is not an argument against defense technology. Democracies need to be able to defend themselves, and AI will play a role.

But the current debate is too shallow. “Human in the Loop” is used as a reassurance without questioning the concrete conditions under which that human operates. The relevant questions are:

  1. How much time does the human actually have? If the AI gives recommendations in milliseconds and the combat situation requires split-second decisions — where exactly does “human review” happen?

  2. How is Automation Bias addressed? Is the operator systematically trained to question AI recommendations? Or trained to process them efficiently?

  3. Who is liable when the loop fails? If a human confirms an AI recommendation that turns out wrong — and they had 3 seconds for review — is that their fault?

  4. Do we need new legal concepts? Current liability law was written for a world where humans make decisions. What happens when the decision is de facto made by a machine that formally only “recommends”?

Conclusion

Helsing is real, technologically impressive, and strategically significant. The products will come, they will be deployed, and they will change battlefields.

But the liability question remains unresolved, not because the law is incomplete, but because the premise doesn’t hold: the law assumes a human who decides freely and deliberately. Automation bias and time pressure in combat situations undermine exactly that premise.

As long as we use “Human in the Loop” as an argument without defining what it concretely means under combat conditions, it’s a label — not a safety mechanism.


This article does not constitute legal advice. It is intended for general information purposes and does not replace consultation with a qualified attorney.

Interested in AI compliance, the EU AI Act, or ethical guidelines for AI deployment? We advise pragmatically — without hype, without ideology.

