
The EDPS Report Reveals: Nobody Knows What AI Systems They're Actually Using
On December 4, 2025, the European Data Protection Supervisor (“EDPS”) published its High-Risk AI Systems Mapping Report for EU institutions, agencies, and bodies (“EUIs”). As one of the first mapping exercises conducted by a market surveillance authority under the AI Act, it offers a revealing glimpse into how even well-resourced, compliance-focused organisations struggle with a fundamental question: What AI systems are we actually using?
The findings should concern every organisation preparing for AI Act compliance. Not because EU institutions are failing, but because they're experiencing the same challenge facing companies across Europe. And the EDPS report inadvertently highlights why traditional inventory approaches are insufficient for the AI era.
The Discovery Problem: Harder Than It Looks
The EDPS report notes a critical challenge: "At this early stage, EUIs often face difficulties when defining an AI system as high-risk." But there's a problem that comes even before classification: discovery itself.
Traditional IT asset management assumes you know what systems exist in your environment. Software is procured through IT, deployed through controlled processes, and tracked in configuration management databases. But AI doesn't work that way.
Consider these scenarios playing out right now across organisations:
Marketing uses Jasper for content generation. They signed up with a corporate credit card, never involving IT. Is it high-risk under the AI Act? Maybe not. But it's processing customer data and brand messaging. You should probably know about it.
Your development team uses GitHub Copilot. Productivity soars, but proprietary code patterns are being shared with an external AI system. Did anyone assess the risks? Did anyone even report it?
HR discovered an AI-powered candidate screening tool. They're piloting it to handle the flood of applications. Under the AI Act, this is almost certainly high-risk. But it's not in your official AI inventory because it's still "just a pilot."
Finance uses ChatGPT to analyse quarterly reports. Confidential financial data, competitive strategy, forward-looking statements. All uploaded to an unapproved platform because it speeds up analysis by hours.
This is the shadow AI problem. And it's not unique to EU institutions; it's happening in your organisation right now.
Why Traditional Discovery Methods Fall Short
The EDPS report describes a "voluntary mapping exercise" where institutions self-reported their AI usage. This approach, asking departments to tell you what they're using, is the standard first step. It also rarely yields usable results.
The challenge starts with knowledge itself. Business users across organisations simply can't reliably identify which tools fall under the AI Act's scope without technical expertise. When someone in the finance department uses a system with machine learning for fraud detection, do they recognize it as AI? What about the HR team's automated resume parsing tool, or customer service's chatbot, or the maintenance team's predictive algorithms?
Beyond the knowledge gap, there's the visibility problem. The EDPS report emphasizes the importance of identifying systems that are "in use or planned," yet this framing already misses a critical category: systems being quietly tested or tools that individual employees have adopted without formal approval. These shadow implementations don't appear on official roadmaps or in procurement records, but they're processing organisational data right now. The pilot project that someone in marketing is experimenting with, the productivity tool a developer discovered and started using last week. These are invisible to traditional discovery methods that rely on formal channels.
The scale of the challenge compounds across organisational complexity. The EDPS found that EU institutions are already using AI for recruitment, operational efficiency, and specialized domain tasks. Now multiply this pattern across every department in a mid-sized company. You're not searching for a handful of enterprise AI platforms that went through formal procurement. You're trying to locate tools across dozens of use cases, many adopted at the individual or team level without central visibility. Each department has unique workflows, each team has discovered different productivity enhancers, and the proliferation continues daily.
Perhaps most problematically, the target keeps moving. Static surveys and periodic assessment cycles simply can't keep pace with both the rapid adoption of new tools and the evolving regulatory interpretation of what constitutes a regulated AI system.
The "Map My Stack" Imperative
This is where Velatir's approach fundamentally differs from traditional asset management. Instead of asking users what they're using, we observe what they're actually accessing.
Automatic, Continuous Discovery
Velatir's browser extension operates at the layer where AI usage most often happens: the browser. When an employee visits ChatGPT, Claude, Midjourney, or any of hundreds of AI platforms, Velatir captures that interaction. Not the content, but the trace: evidence that contact occurred.
This creates a living, automatically updated inventory of AI tool usage across your organisation. You gain visibility into every AI system your employees touch, whether IT approved them or not. Frequency data distinguishes between one-time experiments and regular operational use, helping you understand what's truly embedded in workflows versus what's casual exploration. Finally, our platform automatically categorizes systems by use case, identifying generative AI, code assistants, image generation, data analysis tools, and other categories relevant to risk assessment.
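To make the idea of browser-level discovery concrete, here is a minimal, purely illustrative sketch of what a WebExtension background script for this kind of tracing could look like. The hostname list, category labels, and reporting endpoint are placeholders invented for the example, not Velatir's actual implementation, and the sketch deliberately records only the fact of contact, never page content or prompts.

```typescript
// Purely illustrative MV3 background service worker.
// Assumes the WebExtensions `chrome` API (e.g. via @types/chrome) and the
// "webNavigation" permission in the manifest. The domain list, category
// labels, and reporting endpoint are placeholders, not Velatir's configuration.

// Map known AI platform hostnames to a coarse use-case category.
const AI_PLATFORMS: Record<string, string> = {
  "chatgpt.com": "generative-ai",
  "claude.ai": "generative-ai",
  "gemini.google.com": "generative-ai",
  "www.midjourney.com": "image-generation",
};

interface UsageTrace {
  domain: string;    // which platform was contacted
  category: string;  // coarse use-case label for later risk triage
  visitedAt: string; // ISO timestamp of the navigation event
}

// Record only the fact of contact: no page content, no prompt text.
function toTrace(url: URL): UsageTrace | null {
  const category = AI_PLATFORMS[url.hostname];
  if (!category) return null;
  return { domain: url.hostname, category, visitedAt: new Date().toISOString() };
}

chrome.webNavigation.onCompleted.addListener(({ url, frameId }) => {
  if (frameId !== 0) return; // top-level navigations only
  const trace = toTrace(new URL(url));
  if (trace) {
    // Forward the trace to a governance backend (hypothetical endpoint).
    void fetch("https://governance.example.com/api/traces", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(trace),
    });
  }
});
```

The point of the sketch is the design choice it embodies: the evidence of AI usage is captured where the usage happens, without depending on anyone remembering to report it.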
Unlike the EDPS's voluntary survey approach, this discovery mechanism isn't dependent on user self-reporting or technical knowledge. It's systematic, continuous, and comprehensive. We’re capturing the full reality of AI adoption rather than the subset that users recognize and report.
From Discovery to Understanding
But Velatir doesn't stop at listing URLs. The "Map My Stack" functionality provides the context you need for AI Act compliance.
The platform can be set up to provide granular visibility at the department level, revealing which teams are heavy AI adopters and which tools are proliferating organically through the organisation. This visibility informs both risk assessment and policy development. If you discover that your sales team has independently adopted three different AI writing assistants, that tells you something about unmet needs, potential redundancy, and where to focus governance efforts.
Equally important is the long-term perspective. Velatir tracks how AI adoption changes over time, surfacing trends that periodic surveys would miss. Are new tools appearing in your environment weekly? Are approved alternatives being bypassed in favor of shadow solutions? This trend analysis reveals whether your governance approach is working or whether employees are routing around controls to get their work done.
When regulators ask how you maintain oversight of AI systems, Velatir provides the evidence. Instead of presenting a point-in-time spreadsheet that was accurate three months ago, you can demonstrate continuous monitoring with comprehensive audit trails showing systematic, ongoing oversight of AI usage across the organisation.
The Human-in-the-Loop: Making Discovery Actionable
Discovery alone isn't enough. The EDPS report emphasizes the need to "de-mystify high-risk AI systems" and help organisations understand that high-risk classification doesn't mean prohibition. It means appropriate safeguards.
This is where human oversight becomes essential. Velatir's approach combines automated discovery with human judgment at the point of decision.
Real-Time Intervention
When an employee accesses an AI system, Velatir can provide immediate, contextual guidance tailored to the specific situation. Where needed, sensitive input can be sanitised before it ever reaches the AI system.
In our next iteration, the system will be able to alert the employee that the AI platform they're accessing hasn't been approved for company use, while offering constructive alternatives. Rather than simply blocking access, it suggests approved tools that serve similar purposes and provides clear pathways to request new capabilities if the approved alternatives don't meet their needs. This approach respects employee autonomy while steering them toward compliant choices.
When employees access tools that may fall into high-risk categories, the guidance becomes more specific. An employee navigating to an AI recruitment platform, for instance, receives immediate notification that they're entering a potentially high-risk context under the AI Act. The system reminds them to ensure required assessments are completed before processing candidate data, linking directly to the relevant approval processes and documentation requirements. This just-in-time reminder is far more effective than expecting them to recall training from months earlier.
For generative AI tools that could process sensitive information, Velatir provides crucial data handling reminders at the moment of use. Employees receive clear guidance about what types of information should never be uploaded to these platforms: confidential customer data, proprietary code, personal data subject to GDPR. These contextual warnings appear exactly when they're most relevant, not buried in an acceptable use policy document.
This isn't just notification; it's education at the moment it matters most. Rather than hoping employees remember a training session from three months ago, you're providing relevant guidance precisely when they need it, in the context where they need it.
Building Organisational Capability
Over time, this approach does something traditional compliance programs struggle with: it builds genuine understanding. Employees learn to recognize AI systems, understand risk categories, and make better choices, not because they memorized a policy document, but because they received practical guidance in real situations.
Meanwhile, compliance teams gain visibility into which types of AI usage are most common, where employees encounter friction with approved tools, and what additional guidance or approved alternatives might be needed.
What the EDPS Report Really Tells Us
The EDPS mapping exercise is valuable not just for its findings about EU institutions, but for what it reveals about AI governance more broadly. Even organisations with dedicated compliance functions, regulatory expertise, and a culture of adherence struggle to answer basic questions about their AI footprint.
The report's emphasis on "early inventories" and encouraging institutions to "understand the AI systems they are using" acknowledges a hard truth: we're in the early stages of figuring this out. The AI Act provides the regulatory framework, but operationalizing compliance requires new capabilities.
The Path Forward: Visibility First, Everything Else Second
You can't classify AI systems as high-risk or not-high-risk if you don't know they exist. You can't conduct data protection impact assessments on tools you haven't discovered. You can't provide adequate human oversight of systems you're unaware of.
The EDPS report demonstrates that even with the best intentions, voluntary reporting and periodic surveys leave gaps. Those gaps represent risk.
Velatir's approach fills those gaps. At its foundation is continuous, automatic discovery that doesn't depend on user reporting: the system observes actual behavior rather than relying on self-assessment. This creates visibility across all AI platforms, not just the approved tools that went through formal procurement. The browser-based approach captures everything from major SaaS platforms to niche productivity tools that individual employees discover and adopt.
Layered on top of this discovery mechanism is real-time human-in-the-loop guidance that educates while protecting. Rather than creating friction through blanket restrictions, the system provides context-aware information that helps employees make better decisions in the moment. Over time, this builds genuine capability rather than grudging compliance.
Finally, the entire system generates compliance evidence that demonstrates systematic oversight. When auditors or regulators ask about your AI governance processes, you can show continuous monitoring, comprehensive inventories, and documented interventions, not aspirational policies and periodic spot checks.
The EU AI Act is forcing a reckoning with AI governance. The EDPS report shows that even public organisations are grappling with the fundamentals. The question for every organisation preparing for compliance is: How will you discover and manage AI systems you don't yet know about?
Because one thing is clear: asking nicely isn't enough.
Ready to map your organisation's AI landscape? See how Velatir's browser extension provides continuous AI system discovery and human oversight at the point of use.
Want to understand the EDPS findings in depth? Read the full High-Risk AI Systems Mapping Report and European Data Protection Supervisor Wojciech Wiewiórowski's accompanying blog post.