Defense Briefing

Kinetic Autonomy

The transition from human-commanded weapons to algorithmic decision-making in kinetic conflict requires a new statutory baseline for war.

The Geneva AI Protocol

By early 2026, states negotiating in Geneva under the UN Convention on Certain Conventional Weapons (CCW) are expected to formalize the prohibition of fully autonomous systems capable of selecting and engaging targets without a meaningful human command link. The challenge lies in defining how much latency a "meaningful" human command link can tolerate before control becomes nominal.

LAWS: Lethal Autonomous Weapons Systems

The international community distinguishes between automated weapons (which follow pre-programmed rules) and autonomous weapons (which use machine learning to adapt target selection in real time). The latter are the subject of intense regulatory debate.

Supply Chain as Enforcement

Unlike traditional treaties, AI arms control is being enforced at the hardware level. Export controls on NVIDIA H200-class accelerators effectively limit the deployment of edge inference models in tactical environments.

US Executive Order 14110 and the EU dual-use regulation (Regulation (EU) 2021/821) treat frontier AI models as strategic materials, similar to uranium enrichment technology. Any model capable of real-time battlefield decision-making falls under the Wassenaar Arrangement's Category 4 (computers) controls.

The Three-Tier Framework

Tier 1: Prohibited

Fully autonomous systems with no human override. Banned under the proposed Geneva Protocol.

Tier 2: Restricted

Human-on-the-loop systems, in which a supervising operator can intervene or veto. Allowed with safety certification and international audit.

Tier 3: Permitted

Human-in-the-loop systems, in which each engagement requires explicit human approval. Defensive applications (e.g., anti-missile systems) are approved.
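The three-tier framework can be sketched as a simple lookup from a system's human-control mode to its proposed regulatory tier. This is purely illustrative: the enum names and `classify_tier` function are hypothetical, not part of any actual regulatory tooling.

```python
from enum import Enum

class ControlMode(Enum):
    """Degree of human control over target engagement (illustrative)."""
    NONE = "no human override"          # fully autonomous
    ON_THE_LOOP = "human-on-the-loop"   # operator can intervene or veto
    IN_THE_LOOP = "human-in-the-loop"   # operator must approve each engagement

def classify_tier(mode: ControlMode) -> str:
    """Map a control mode to its tier under the proposed framework."""
    return {
        ControlMode.NONE: "Tier 1: Prohibited",
        ControlMode.ON_THE_LOOP: "Tier 2: Restricted",
        ControlMode.IN_THE_LOOP: "Tier 3: Permitted",
    }[mode]

print(classify_tier(ControlMode.IN_THE_LOOP))  # prints "Tier 3: Permitted"
```

The point of encoding it this way is that the tier depends only on the control mode, not on the weapon's sensors or payload, which mirrors how the proposed framework draws its lines.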

Corporate Compliance

Defense contractors developing AI-powered targeting systems must now demonstrate 'meaningful human control' at every stage of the kill chain. This includes:

  • Mandatory kill-switch mechanisms with <200ms latency
  • Adversarial testing against autonomous evasion tactics
  • Detailed audit logs for post-deployment review
  • Compliance with ISO 23482 (Autonomous Systems Safety)
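Two of the requirements above, the sub-200 ms kill switch and the audit log, can be combined in one sketch: invoke the abort path, measure its latency against the budget, and record the outcome for post-deployment review. Everything here (`trigger_kill_switch`, the in-memory `audit_log`, the flag-based abort handler) is a hypothetical illustration of the pattern, not a real compliance implementation.

```python
import time

KILL_SWITCH_BUDGET_MS = 200  # latency ceiling from the compliance list

audit_log = []  # a real system would use an append-only, tamper-evident store

def record(event: str, **fields) -> None:
    """Append a timestamped entry for post-deployment review."""
    audit_log.append({"t": time.time(), "event": event, **fields})

def trigger_kill_switch(abort_fn) -> bool:
    """Invoke the abort path and verify it completed within budget."""
    start = time.monotonic()
    abort_fn()
    elapsed_ms = (time.monotonic() - start) * 1000
    within_budget = elapsed_ms <= KILL_SWITCH_BUDGET_MS
    record("kill_switch", latency_ms=round(elapsed_ms, 2), ok=within_budget)
    return within_budget

# Usage: an abort handler that merely clears a flag is well within budget.
engaged = {"active": True}
ok = trigger_kill_switch(lambda: engaged.update(active=False))
```

Measuring latency at the point of invocation, rather than trusting a vendor figure, is what makes the <200ms requirement auditable after deployment.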

The Loitering Munition Debate

Loitering munitions like the Israeli IAI Harpy and the Turkish STM Kargu blur the line between autonomous and automated. They can identify targets using onboard sensors and computer vision, and in most configurations final engagement requires human approval. In practice, however, communication latency means that approval is often a rubber stamp, raising questions about what "meaningful" control actually requires.