How Europe’s new AI rulebook would (and wouldn’t) touch autonomous combat aircraft—and what the defence carve-outs really mean

6 November 2025

By Richard Ryan, barrister and drone lawyer



In Brief…

  • Purely military AI systems are out of scope of the EU AI Act. If an AI system is developed or used exclusively for military/defence or national-security purposes, the Act does not apply. (EUR-Lex)
  • Dual-use is different. If the same autonomy stack, sensors or models are marketed or used for civilian purposes in the EU (for example, civil UAS, border or law-enforcement tasks), the Act can apply — with stringent duties for “high-risk” systems. (EUR-Lex)
  • Real-world testing is regulated. Pre-market R&D is generally excluded, but real-world testing isn’t — it requires specific safeguards and registration. (EUR-Lex)
  • Foundation models (GPAI) have their own rules from 2 Aug 2025; the defence carve-out in the Act is written for AI systems, not explicitly for models. If a model is placed on the EU market generally, the provider’s GPAI obligations can still bite. (EUR-Lex)

Context: sUAS News reports that GA-ASI is showcasing its autonomous fighter portfolio (for example, YFQ-42A CCA, MQ-20 Avenger) at the International Fighter Conference in Rome, 4–6 Nov 2025. This post overlays that scenario with the EU AI Act’s rules.


1) First principles: When does the EU AI Act apply?

The Act has extraterritorial reach. It covers (i) providers and deployers in the EU, (ii) providers placing on the EU market or putting systems into service in the EU — even if they are not established in the Union — and (iii) providers/deployers in third countries where the AI system’s output is used in the EU. (EUR-Lex)

However, Article 2(3) draws a bright line: the Act does not apply to AI systems used exclusively for military, defence or national security. It also does not apply where a system is not placed on the EU market but its output is used in the EU exclusively for those purposes. Recital 24 reiterates this and clarifies that non-defence use falls back under the Act. (EUR-Lex)
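
By way of illustration only, that scope logic can be sketched as a short decision function. Everything below (the Purpose enum, act_applies_to_system) is invented for this post; real scope analysis is a legal judgment, not a boolean test.

```python
from enum import Enum, auto

class Purpose(Enum):
    MILITARY_DEFENCE_NATSEC = auto()
    CIVIL = auto()
    LAW_ENFORCEMENT = auto()

def act_applies_to_system(purposes, placed_on_eu_market, output_used_in_eu):
    """Rough sketch of Article 2 scope for AI *systems* (not GPAI models)."""
    # Article 2(3): exclusivity is the hinge of the defence carve-out.
    if purposes == {Purpose.MILITARY_DEFENCE_NATSEC}:
        return False
    # Otherwise the Act can reach providers/deployers even outside the EU.
    return placed_on_eu_market or output_used_in_eu

# Dual-use loses the shield for the non-defence uses:
print(act_applies_to_system(
    {Purpose.MILITARY_DEFENCE_NATSEC, Purpose.CIVIL}, True, False))  # True
```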

What this means in Rome:

  • A closed, defence-only showcase for European militaries: out of scope.
  • A civil-use pitch, civil flight trials, or plans to sell autonomy modules to EU civilian buyers: in scope (see the high-risk section below). (EUR-Lex)

2) The key defence carve-outs (and their limits)

Carve-out #1 — Defence/military:

“This Regulation shall not apply to AI systems … used exclusively for military, defence or national security purposes.” (Article 2(3))

Two important nuances:

  • Exclusivity matters. The moment an autonomy stack or sensor suite is also marketed or used for civilian or law-enforcement tasks, the defence exclusion no longer shields those non-defence uses. (EUR-Lex)
  • Models vs systems. The text explicitly excludes AI systems for defence; it does not create an explicit defence exclusion for general-purpose AI models. If a GPAI model is placed on the EU market, Chapter V obligations for model providers can still apply — even if one downstream customer is a defence user. (More on GPAI below.) (EUR-Lex)

Carve-out #2 — Pre-market R&D:
R&D before placing on the market is generally outside scope, but real-world testing is not. Testing in real-world conditions triggers a dedicated regime (for example, registration, time limits, informed consent or special conditions for law enforcement, incident reporting). (EUR-Lex)

Carve-out #3 — Emergency derogations (non-defence):
For exceptional public-security reasons (or imminent threats to life/health), market surveillance authorities can authorise temporary use of a high-risk AI system before full conformity assessment — subject to strict conditions. Law-enforcement or civil-protection bodies can also begin use in urgent cases and then seek authorisation without undue delay. This is not a defence-specific carve-out, but it explains emergency deployments outside the military context. (EUR-Lex)


3) If the defence exclusion doesn’t apply, would autonomous-fighter tech be “high-risk”?

Very likely yes — for civil variants or dual-use spin-outs:

  • Annex I (product-safety route). AI that is a safety component of products covered by sectoral EU safety laws is high-risk where those products need third-party conformity assessment. That list explicitly includes EU civil aviation law (Reg. 2018/1139) — covering unmanned aircraft and their remotely controllable equipment. In a civil-UAS configuration, an autonomy stack acting as a safety component would be regulated as high-risk. (EUR-Lex)
  • Annex III (stand-alone uses). Separate “high-risk” buckets also capture, for example, remote biometric identification and other sensitive functions (if and where permitted by Union/national law), critical infrastructure safety components, and more. If a fighter-borne sensing suite were repurposed for civil border surveillance or public-space identification, you quickly hit these Annex III categories. (EUR-Lex)

What “high-risk” demands in practice
Providers must implement a risk-management system, data governance, technical documentation, logging, transparency/instructions, human oversight, and accuracy/robustness/cybersecurity — then pass conformity assessment, issue an EU Declaration of Conformity, and affix CE marking. Deployers also carry duties (for example, monitoring, data relevance, user notification in some cases). (EUR-Lex)
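
As a planning aid, those duties read naturally as a checklist. The sketch below paraphrases the Chapter III requirements in informal labels of my own; it is not a restatement of the Act.

```python
# Paraphrased provider duties for a high-risk AI system (Chapter III);
# illustrative labels, not official Act headings.
HIGH_RISK_PROVIDER_DUTIES = (
    "risk-management system",
    "data governance",
    "technical documentation",
    "automatic logging",
    "transparency / instructions for use",
    "human oversight",
    "accuracy, robustness, cybersecurity",
    "conformity assessment",
    "EU Declaration of Conformity",
    "CE marking",
)

def outstanding(completed):
    """Return duties still open before placing on the EU market."""
    return [d for d in HIGH_RISK_PROVIDER_DUTIES if d not in completed]

print(outstanding({"risk-management system", "data governance"}))
```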


4) Sensors on show: what about face recognition and other “red lines”?

The EU bans several AI practices outright (from 2 Feb 2025), including:

  • Untargeted scraping of facial images to build recognition databases.
  • Biometric categorisation inferring sensitive traits (for example, race, political opinions, religion).
  • Emotion recognition in workplaces or schools (with narrow safety/medical exceptions).
  • Predictive “risk assessments” of criminality based solely on personality traits/profiling.
  • Real-time remote biometric identification (RBI) in public spaces for law enforcement, unless strictly authorised and necessary for narrowly defined objectives (for example, locating a specific suspect in serious crimes, preventing a specific imminent threat, finding missing persons), with prior judicial/independent approval and registration. (EUR-Lex)

Implication for a trade-show demo: training a camera on attendees to test real-time RBI in a public venue would likely be unlawful unless those strict law-enforcement exceptions and procedural safeguards apply — which they typically will not at a commercial defence conference. (EUR-Lex)
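
The point is that the Article 5 conditions are cumulative: all of them must hold. A toy sketch, with parameter names invented for this post:

```python
def rbi_demo_lawful(law_enforcement_purpose, narrowly_defined_objective,
                    prior_independent_authorisation, registered):
    """The conditions for real-time public-space RBI are cumulative."""
    return all((law_enforcement_purpose, narrowly_defined_objective,
                prior_independent_authorisation, registered))

# A commercial trade-show demo fails at the first gate:
print(rbi_demo_lawful(False, False, False, False))  # False
```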


5) Real-world testing in the EU (civil or dual-use variants)

If a provider runs real-world flight tests in the EU (outside the defence exclusion), the Act requires — among other things — registration, an EU-established entity or EU legal representative, limits on duration (normally up to six months, extendable once), rules on informed consent (with special handling for law-enforcement tests), qualified oversight, and the ability to reverse/ignore the system’s outputs. Serious incidents must be reported promptly. (EUR-Lex)
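
A civil test campaign could sanity-check itself against those headline constraints. The dataclass below is invented shorthand for this post (including the 183-day approximation of “six months”), not terminology from the Act:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RealWorldTestPlan:
    """Invented shorthand for the headline testing conditions above."""
    registered: bool
    eu_legal_representative: bool
    start: date
    end: date
    extended_once: bool          # normally up to six months, extendable once
    informed_consent_handled: bool
    qualified_oversight: bool
    outputs_reversible: bool

    def duration_ok(self):
        # Approximating "six months" as 183 days for illustration.
        limit = timedelta(days=183) * (2 if self.extended_once else 1)
        return self.end - self.start <= limit

    def compliant(self):
        return all((self.registered, self.eu_legal_representative,
                    self.duration_ok(), self.informed_consent_handled,
                    self.qualified_oversight, self.outputs_reversible))

plan = RealWorldTestPlan(True, True, date(2026, 3, 1), date(2026, 8, 1),
                         False, True, True, True)
print(plan.compliant())  # True: five months, all safeguards in place
```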


6) Foundation models (GPAI): obligations can still attach

From 2 Aug 2025, Chapter V sets baseline transparency and copyright-policy duties for providers of general-purpose AI models (with extra obligations if the model presents systemic risks). The defence exclusion in Article 2(3) is framed for AI systems, not models. So, if a foundation model is placed on the EU market, the model provider can have obligations even if a downstream customer is a defence prime. (Open-source specifics and systemic-risk thresholds also apply.) (EUR-Lex)


7) Timelines you need in Rome (as of 6 Nov 2025)

  • Entry into force: 1 Aug 2024 (20 days after OJ publication).
  • Prohibited practices + core chapters (I–II): apply from 2 Feb 2025.
  • GPAI rules (Chapter V), plus Chapter III Section 4, Chapters VII and XII, and Article 78: apply from 2 Aug 2025.
  • General application: 2 Aug 2026 (high-risk regime starts to bite broadly).
  • Article 6(1) (Annex I product-safety route) classification trigger & related obligations: 2 Aug 2027. (EUR-Lex)
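
For quick reference, those dates can be expressed as a lookup. The labels below are my shorthand paraphrases of Article 113, not official provision names:

```python
from datetime import date

# Shorthand paraphrase of the Article 113 application dates.
AI_ACT_MILESTONES = {
    date(2024, 8, 1): "entry into force",
    date(2025, 2, 2): "prohibitions (Art. 5) + Chapters I-II",
    date(2025, 8, 2): "GPAI (Ch. V) + Ch. III s.4, Ch. VII, Ch. XII, Art. 78",
    date(2026, 8, 2): "general application (high-risk regime broadly)",
    date(2027, 8, 2): "Art. 6(1) Annex I trigger + related obligations",
}

def in_force(today):
    """Milestones already applicable on a given date."""
    return [label for d, label in sorted(AI_ACT_MILESTONES.items()) if d <= today]

print(in_force(date(2025, 11, 6)))  # state of play during IFC Rome
```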

8) Enforcement and penalties

  • Violating prohibited practices (Article 5) can draw fines up to €35m or 7% of worldwide annual turnover, whichever is higher.
  • Other operator obligations can reach €15m or 3%; supplying misleading information can reach €7.5m or 1% (SMEs benefit from the lower of the two caps). Separate fine scales apply to EU institutions. (EUR-Lex)
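
The “whichever is higher” mechanics are simple arithmetic. The tier labels below are informal names of my own, not Act terminology:

```python
# Ceilings are the higher of a fixed sum or a percentage of
# worldwide annual turnover (Arts. 99-101).
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # Art. 5 violations
    "operator_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine_eur(tier, worldwide_turnover_eur):
    fixed, share = FINE_TIERS[tier]
    return max(fixed, share * worldwide_turnover_eur)

# EUR 2bn turnover: 7% = EUR 140m, well above the EUR 35m floor.
print(max_fine_eur("prohibited_practices", 2_000_000_000))  # 140000000.0
```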

9) Practical playbook for IFC attendees

If you are a defence OEM showing autonomy stacks:

  1. Map uses: Defence-only (excluded) vs any civil or law-enforcement pathways (potentially in scope). Document the exclusivity of defence deployments if you rely on the carve-out. (EUR-Lex)
  2. GPAI suppliers: If you place a foundation model on the EU market, expect Chapter V duties regardless of defence customers. (EUR-Lex)
  3. No RBI demos on the show floor. Those prohibitions already apply in 2025. (EUR-Lex)
  4. Planning EU flight tests for civil variants? Prepare for real-world testing conditions (registration, oversight, incident reporting). (EUR-Lex)
  5. For civil UAS commercialisation, treat your autonomy stack as high-risk (EASA product-safety route) and budget time for conformity assessment and CE marking. (EUR-Lex)

If you are a European ministry or agency:

  • Distinguish military operations (out of scope) from law-enforcement or border uses (in scope; watch RBI limits and high-risk duties). Consider Article 46 emergency derogations only in exceptional and documented cases. (EUR-Lex)

If you are a civil UAS integrator:

  • Expect the full high-risk package (risk management, data governance, human oversight, cybersecurity, logs, conformity assessment, CE). Build compliance into your system architecture, ML pipelines, safety cases, and ops manuals from day one. (EUR-Lex)

10) Quick decision pathway

  1. Is the use exclusively defence or national security?
    Yes: AI system is out of scope.
    No: continue. (EUR-Lex)
  2. Is it a civil product or law-enforcement/border use?
    Civil product with safety function (for example, civil UAS): High-risk via Annex I → conformity assessment + CE. (EUR-Lex)
    Stand-alone sensitive use (for example, RBI, critical infrastructure): Annex III high-risk or Article 5 prohibition applies. (EUR-Lex)
  3. Is there a GPAI model being placed on the EU market?
    Yes: Chapter V duties for model providers from 2 Aug 2025, separate from the defence carve-out for systems. (EUR-Lex)
  4. Is this pre-market testing?
    Real-world testing rules apply (registration, oversight, incident reporting). (EUR-Lex)
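
Read as pseudocode, the pathway might look like the sketch below; the five booleans are a deliberate oversimplification of what is, in practice, a case-by-case legal analysis:

```python
def ai_act_pathway(exclusively_defence, civil_safety_component,
                   annex_iii_or_art5_use, gpai_model_on_eu_market,
                   real_world_testing):
    """Illustrative sketch of the decision pathway above."""
    if exclusively_defence:
        return ["out of scope (Article 2(3))"]
    findings = []
    if civil_safety_component:
        findings.append("high-risk via Annex I: conformity assessment + CE")
    if annex_iii_or_art5_use:
        findings.append("Annex III high-risk, or an Article 5 prohibition")
    if gpai_model_on_eu_market:
        findings.append("Chapter V duties for the model provider")
    if real_world_testing:
        findings.append("real-world testing regime (registration, oversight)")
    return findings or ["assess scope under Article 2 case by case"]

print(ai_act_pathway(False, True, False, True, True))
```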

Bottom line for “Autonomous Fighters in Rome”

  • A military-only display of GA-ASI’s autonomous fighters is outside the AI Act.
  • Any civil spin-off (cargo drones, civil surveillance, airport ops) or law-enforcement application in the EU will trigger the Act — often at the high-risk level — together with tight prohibitions around biometric uses in public spaces. Plan your compliance architecture accordingly. (EUR-Lex)

This article is informational and not legal advice. Citations are to the Official Journal text of the Artificial Intelligence Act (Regulation (EU) 2024/1689) for scope (Art. 2), prohibitions (Art. 5), high-risk regime (Ch. III), real-world testing (Arts. 57–61), GPAI (Ch. V incl. Art. 53), timelines (Art. 113), and penalties (Arts. 99–101).


About the author — Richard Ryan

Richard Ryan is a UK barrister (Direct Access), mediator and Chartered Arbitrator (FCIArb), and a Bencher of Gray’s Inn. He practises across defence, aerospace, construction, engineering and commodities, with a leading specialism in drone and counter-drone law, unmanned aviation regulation, and AI-enabled safety and compliance. Richard advises government, primes and operators on EU/UK UAS frameworks, BVLOS, U-space/UTM and the EU AI Act. He leads Blakiston’s Chambers and contributes regularly to industry guidance and policy consultations.