AI Systems Under the AI Act

What Constitutes an AI System?

When the EU AI Act introduced requirements for AI systems, it seemed like a clear path forward for governance. But there's a fundamental question that remains surprisingly unresolved: What exactly is an AI system?

This isn't just academic hair-splitting. The answer determines which parts of your products need to comply with stringent regulatory requirements, how you conduct risk assessments, and ultimately whether you're meeting your legal obligations.

The Problem: AI Systems vs. Products

The EU AI Act regulates "AI systems" rather than complete products. This creates an immediate challenge: modern products often contain multiple AI components alongside conventional software and hardware. Consider an automated guided vehicle (AGV) in a factory warehouse. It might have:

  • An AI-powered perception system that detects obstacles and people
  • An AI-based route optimization algorithm
  • Conventional motor controls and safety systems
  • Non-AI navigation components

Which of these constitute "the AI system" that needs to comply with the AI Act? Is each AI component a separate system? Are they combined into one? Where does the AI system end and the rest of the product begin?

Why This Matters for High-Risk Classification

According to Article 6(1) of the AI Act, an AI system is considered high-risk if it serves as a safety component of a product or if its failure could endanger health and safety. But determining this requires understanding what you're evaluating in the first place.

A recent expert contribution to the CEN/CLC standardization working group highlights this dilemma through a compelling example: an aircraft with both a navigation system (high-risk) and a passenger entertainment recommendation system (low-risk). How do we know which is which?

The answer requires product-level risk management. You can't determine if an AI component endangers health and safety without understanding:

  • The complete product's intended purpose
  • How components interact with each other
  • How the product interacts with its environment
  • The full chain of events that could lead to harm

The Holistic Risk Management Imperative

The standardization debate reveals something crucial: trying to manage risk at the individual AI component level is both insufficient and inefficient. Effective compliance requires understanding how AI fits into the broader product ecosystem.

Consider the aircraft autopilot example from the CEN/CLC document. The navigation system might brake too hard in certain situations: a robustness issue. The proposed risk control? Mechanical damping in the engines to smooth sudden speed changes. But here's the problem: if you're only looking at the navigation system in isolation, how do you:

  • Identify that engine damping is even needed?
  • Determine acceptable acceleration and braking thresholds for safe flight?
  • Verify that your risk control measure actually works?
  • Assess whether mechanical damping (which may have low reliability) is appropriate?

You can't. These decisions require understanding the complete aircraft, its operational environment, and how components interact.

This principle extends across every aspect of high-risk AI compliance:

Risk Assessment: The Chain of Events Matters

Real-world harm doesn't occur because an AI component fails in isolation. It occurs through a chain of events:

A sensor provides unexpected input → the AI makes a suboptimal decision → an actuator executes that decision → the product behaves dangerously → harm results in the environment.

Product-level risk management can trace this entire chain, identify where risks emerge, and determine which interventions are most effective. Component-level risk management cannot. The probability of harm at the product level depends on all the events that could lead to a hazardous situation, not just the AI's behavior in isolation.
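
To make this chain concrete, here's a minimal, purely illustrative sketch in Python (the component names and probabilities are invented for the example, not real failure rates) of how product-level tracing might estimate the probability of harm from every link in the chain, rather than from the AI component's error rate in isolation:

```python
from dataclasses import dataclass

@dataclass
class ChainEvent:
    """One link in the chain of events leading from a trigger to harm."""
    description: str
    probability: float  # conditional probability that this link occurs, given the previous one

# Hypothetical chain for a factory AGV; the numbers are illustrative only.
chain = [
    ChainEvent("Sensor provides unexpected input", 0.01),
    ChainEvent("AI perception misclassifies the obstacle", 0.05),
    ChainEvent("Actuator executes the unsafe command", 0.9),
    ChainEvent("Product behaves dangerously near a person", 0.2),
]

def probability_of_harm(chain: list[ChainEvent]) -> float:
    """Product-level probability: every link in the chain must occur for harm to result."""
    p = 1.0
    for event in chain:
        p *= event.probability
    return p

print(f"Estimated probability of harm: {probability_of_harm(chain):.1e}")
# The AI's error rate alone (0.05) says little here; the product-level chain is what matters.
```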

Human Oversight: Humans See Products, Not Components

Here's a fundamental insight from the standardization debate: pilots don't monitor the control commands flowing from the navigation system to the engines. They observe the aircraft's behavior: climbing, descending, banking, turning. They interact with the complete product through its observable behavior.

Human oversight must therefore be designed around what humans can actually perceive and the interventions they can actually make. This is inherently defined at the product level. A pilot needs to understand "the autopilot is descending too aggressively" and have the ability to override it, not to debug why the AI model generated a specific thrust command.

The same principle applies whether you're overseeing an automated vehicle, a medical diagnostic system, or a credit decision algorithm. Effective human oversight requires understanding the complete system's behavior in context.

This is the core of Velatir's value proposition: human oversight must operate at the right level of abstraction, and we enable humans to see meaningful, contextual information about system behavior.

Critical Unanswered Questions

The standardization debate has crystallized several questions that the AI Act and existing Commission guidelines don't adequately address. These aren't theoretical puzzles; they're practical challenges that organizations face right now:

Defining System Boundaries: Without clear boundaries, you cannot definitively identify the object of compliance. Where does your AI system end and the rest of your product begin? The Act provides no clear answer.

  • Does the AI system include sensors that provide input data?
  • Does it include actuators that execute AI decisions?
  • What about preprocessing steps that format data before AI analysis?
  • What about post-processing that interprets AI outputs for downstream use?

Handling Multiple AI Components: Modern products often integrate multiple AI components. Consider an autonomous vehicle perception system with separate neural networks for object detection in each camera feed, followed by sensor fusion, then trajectory prediction, then path planning. Some of these stages use AI techniques; others rely on conventional algorithms.

  • Is each neural network a separate "AI system" requiring individual compliance?
  • Should they be combined into one "perception AI system"?
  • If components are connected through non-AI processing, does that separate them into distinct systems?
  • How do you avoid duplicating risk management across interdependent components?

Determining High-Risk Classification: Article 6(1) says an AI system is high-risk if it serves as a safety component whose failure endangers health and safety. But determining this requires product-level risk assessment:

  • What level of risk triggers high-risk classification? Any theoretical risk, or only risks above a certain severity/probability threshold?
  • For Annex III applications without existing product regulations, how do you assess whether an AI component poses "significant risk" without holistic risk management?
  • How do you interpret vague criteria like "narrow procedural task" that might exempt systems from high-risk classification?

These questions aren't edge cases. They're fundamental to compliance for anyone building products with embedded AI.

Why This Matters: Practical Implications for Organizations

This ambiguity creates real operational challenges:

1. Compliance Scope Uncertainty

Without knowing what constitutes your "AI system," you cannot definitively answer:

  • What needs technical documentation?
  • What requires conformity assessment?
  • What needs risk management?
  • What requires human oversight measures?

Different interpretations of "AI system" boundaries lead to vastly different compliance efforts and costs. Define too narrowly, and you may miss safety-critical components. Define too broadly, and you may impose unnecessary requirements on low-risk elements.

2. Third-Party Component Integration

If you're integrating AI components from external providers, the boundary question becomes even more relevant:

  • Is each component a separate AI system requiring individual compliance verification?
  • Should you treat them as parts of a larger system under your responsibility?
  • How do you ensure component-level compliance documentation is sufficient for your product-level needs?
  • When a third-party AI component fails, whose risk management should have caught it?

The AI Act contemplates a value chain with distinct roles (provider, deployer, importer), but this assumes clear boundaries around what "the AI system" is at each stage.

The Path Forward: Product-Level Risk Management

While we await clearer guidance from the European Commission, organizations cannot afford to sit in limbo. The most defensible approach, and the one consistent with existing safety standards, is to adopt product-level risk management that naturally identifies the AI components requiring specific controls.

Here's what this means in practice:

Start with Product Purpose and Context

Begin with the intended purpose and reasonably foreseeable misuse, not with the AI component's narrow function. An automated vehicle's purpose is "safe transportation from A to B," not "object detection in camera feeds." The former provides the context needed to understand what risks matter.

Identify Risk-Relevant Components Through Hazard Analysis

Systematic hazard analysis identifies scenarios where product behavior could cause harm. For each scenario, trace backward to identify which components (AI or otherwise) could contribute:

  • Whose failure could trigger the hazardous situation?
  • Whose malfunction could prevent risk controls from working?
  • Whose degraded performance could increase the likelihood or severity of harm?

This naturally identifies which AI components are safety-critical.
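
As a rough illustration of this backward trace, the sketch below (all component and hazard names are hypothetical) maps hazard scenarios to the components that could contribute to them, then flags any AI component implicated in at least one scenario as safety-critical:

```python
# Hypothetical hazard register: each hazardous scenario lists the components
# (AI or conventional) whose failure or degraded performance could contribute to it.
hazard_register = {
    "AGV collides with a person in an aisle": ["perception_ai", "emergency_brake", "speed_controller"],
    "AGV takes an unsafe route near a loading dock": ["route_optimizer_ai", "map_service"],
    "AGV fails to stop at a blocked exit": ["perception_ai", "emergency_brake"],
}

# Which components use AI techniques (an assumption for this example).
ai_components = {"perception_ai", "route_optimizer_ai"}

# Backward trace: any AI component that appears in at least one hazard scenario
# is treated as safety-critical and gets specific risk-control requirements.
safety_critical_ai = {
    component
    for components in hazard_register.values()
    for component in components
    if component in ai_components
}

print(sorted(safety_critical_ai))  # ['perception_ai', 'route_optimizer_ai']
```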

Define Requirements as Risk Control Measures

Once you've identified risk-relevant components, define specific requirements for them as part of your risk mitigation strategy:

  • Performance requirements (accuracy, reliability, robustness)
  • Development requirements (data quality, validation methods, testing procedures)
  • Operational requirements (monitoring, human oversight, fallback mechanisms)

These requirements flow from the product-level risk assessment, ensuring they're proportionate to actual risks rather than applied uniformly to all AI components.
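
One way to keep that traceability explicit is to record each requirement against the component it constrains and the product-level hazard it mitigates. A minimal sketch, with hypothetical names and thresholds:

```python
from dataclasses import dataclass

@dataclass
class RiskControlRequirement:
    component: str         # the risk-relevant component the requirement applies to
    category: str          # "performance", "development", or "operational"
    requirement: str       # the requirement itself
    mitigates_hazard: str  # the product-level hazard this requirement traces back to

requirements = [
    RiskControlRequirement(
        component="perception_ai",
        category="performance",
        requirement="Detect persons within 5 m at >= 99.9% recall under warehouse lighting",
        mitigates_hazard="AGV collides with a person in an aisle",
    ),
    RiskControlRequirement(
        component="perception_ai",
        category="operational",
        requirement="Fall back to low-speed mode when detection confidence drops",
        mitigates_hazard="AGV collides with a person in an aisle",
    ),
]
```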

Document Decisions and Rationales

Given the regulatory ambiguity, documenting your boundary and scope decisions is crucial:

  • What do you consider your "AI system(s)" and why?
  • How did you identify them as high-risk?
  • What requirements flow from this risk assessment?
  • How do you verify these requirements are met?

This documentation demonstrates thoughtful compliance efforts even if future guidance suggests different approaches. It also provides an audit trail showing that your AI governance is risk-based rather than arbitrary.
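
A lightweight way to keep that audit trail is a structured decision record. The sketch below is illustrative only; the field names and values are assumptions, not a prescribed schema:

```python
# Illustrative boundary-decision record for the AGV example used earlier.
boundary_decision = {
    "decision_id": "AIS-BOUNDARY-001",
    "ai_system": "AGV perception system (obstacle and person detection)",
    "in_scope": "Camera preprocessing, detection networks, detection post-processing",
    "out_of_scope": "Motor controls and the hard-wired emergency stop (conventional safety systems)",
    "classification": "High-risk: safety component whose failure could endanger warehouse staff",
    "rationale": "Product-level hazard analysis links perception failures to collision scenarios",
    "verification": "Performance and operational requirements traced to those hazards, tested per release",
    "review_trigger": "New Commission guidance on system boundaries, or any change to the product",
}
```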

Practical Steps for Your Organization

If you're developing or deploying AI systems that may fall under the AI Act, here's how to navigate this definitional ambiguity while building defensible compliance:

1. Conduct Product-Level Risk Assessment First

Before diving into AI-specific compliance activities, establish the product-level risk context:

  • What is your product's intended purpose and operational environment?
  • What hazardous situations could occur in the product-environment interaction?
  • Which components (AI or otherwise) are involved in risk scenarios?
  • What risk controls are needed, and where do they apply?

This gives you a risk-based rationale for scope decisions rather than arbitrary boundaries. It also ensures your compliance efforts focus on components that actually matter for safety.

2. Make and Document Your Boundary Decisions

You need to decide what constitutes your "AI system(s)" even if regulations don't provide clear criteria. Make these decisions deliberately:

  • Define boundaries based on functional coherence and risk relevance
  • Document why you drew boundaries where you did
  • Explain how these boundaries align with product risk management
  • Be prepared to adjust if regulatory guidance emerges

The goal isn't to guess what future guidance will say—it's to show you made thoughtful, risk-based decisions using available information.

3. Build Governance That Spans Boundaries

Don't let definitional ambiguity paralyze your governance efforts. Instead, build layered governance that works regardless of where boundaries ultimately fall:

Governance Layer 1: Strategic risk and compliance context

  • Product-level risk management identifying risk-relevant components
  • Regulatory mapping showing which requirements apply
  • Documentation framework covering both components and integration

Governance Layer 2: AI-specific technical controls

  • Data quality and validation for AI training
  • Model testing including robustness and bias evaluation
  • AI-specific security considerations
  • Continuous monitoring for model drift

Governance Layer 3: Integrated operational oversight

  • Human oversight operating at the product level
  • Logging that captures full context, not just AI decisions (see the sketch after this list)
  • Incident response that considers complete event chains
  • Continuous learning that feeds back to layers 1 and 2
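
To make layer 3 concrete, here's a minimal sketch of a context-rich oversight log entry, as opposed to logging only the raw model output. All field names and values are hypothetical:

```python
import json
from datetime import datetime, timezone

# A context-rich oversight log entry: it captures product-level state and the
# human-facing situation, not just the AI component's raw decision.
log_entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "product": "AGV-17",
    "operational_context": {"zone": "aisle 4", "speed_mps": 1.2, "persons_detected_nearby": 1},
    "ai_decision": {"component": "route_optimizer_ai", "action": "reroute", "confidence": 0.78},
    "product_behavior": "Slowed to 0.5 m/s and requested an alternative route",
    "human_oversight": {"escalated": True, "reason": "person inside safety envelope", "operator_action": "approved reroute"},
}

print(json.dumps(log_entry, indent=2))
```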

This three-layer model is exactly what Velatir's approach provides. Organizations often implement layer 2 (AI-specific controls) in isolation, because that's what AI governance frameworks focus on. But without layer 1 context and layer 3 integration, you get pseudo-governance rather than effective coverage. Velatir's solution enables all three layers to work together, with human oversight that spans from strategic risk context through AI-specific monitoring to integrated operational response.

Conclusion: From Definitional Debates to Practical Governance

The question "what constitutes an AI system?" reveals a deeper truth about AI regulation: effective governance cannot be reduced to component-level compliance checklists. The standardization debate makes clear that AI embedded in products must be governed holistically, with clear line of sight from business purpose through product function to component behavior and back again.

This insight has three important implications:

First, regulatory ambiguity shouldn't paralyze action.

While we wait for clearer guidance on system boundaries, organizations can build defensible compliance through product-level risk management that identifies which AI components are safety-critical and defines proportionate requirements. This approach aligns with established safety standards and provides a risk-based rationale for scope decisions.

Second, AI governance must be layered and integrated.

You cannot effectively govern AI components in isolation from the products they're embedded in, the data that feeds them, or the humans who oversee them. Governance frameworks must span from strategic risk context through AI-specific technical controls to integrated operational oversight. Single-layer approaches—whether focused only on model testing, only on documentation, or only on human review—miss the systemic nature of AI risk.

Third, context is everything.

The same AI component may be high-risk in one product context and low-risk in another. The same AI decision may be safe in one operational scenario and dangerous in another. Effective governance must maintain and leverage this context, not strip it away in pursuit of standardized component evaluation.

This is fundamentally about moving from a compliance mindset ("check the boxes for our AI system") to a holistic mindset ("ensure our product operates safely and our organization maintains meaningful control"). The former gets stuck on definitional questions. The latter focuses on the real goal: protecting health, safety, and fundamental rights while enabling beneficial AI deployment.

The standardization debate continues, and clearer regulatory guidance will eventually emerge. But organizations deploying AI today cannot wait. By adopting product-level thinking, building layered governance, and maintaining rigorous documentation of decisions and rationales, you can navigate the current ambiguity while building foundations for long-term responsible AI deployment.

The goal isn't to predict exactly how regulators will resolve these definitional questions; it's to build governance robust enough to adapt when they do.

Navigating AI regulation requires more than just reading the rules; it requires understanding how they apply to your specific products and contexts. Want to discuss how Velatir's layered governance approach can help your organization build defensible, effective AI oversight? Get in touch with our team.