Shadow AI has become a governance priority for organisations in 2026. While security teams focus on perimeter defences and endpoint protection, employees are accessing hundreds of AI tools through their browsers, often without a trace in traditional IT systems.
According to IBM's 2025 Cost of a Data Breach Report, one in five organisations experienced a breach due to shadow AI, with these incidents costing an average of $670,000 more than standard breaches. The gap between AI adoption and AI oversight has become a measurable financial risk.
What Shadow AI Actually Means in 2026
Shadow AI refers to the use of artificial intelligence tools and applications by employees without IT approval, security review, or organisational oversight. Unlike traditional shadow IT, which typically involves unapproved software installations or cloud storage, shadow AI introduces unique concerns around data exposure, model outputs, and automated decision-making that can affect customers, operations, and compliance.
This matters because AI tools process and retain data differently than conventional applications. When an employee pastes sensitive information into a public AI service, that data may enter training pipelines, inform future model responses, and potentially appear in outputs to other users. The data leaves your controlled environment in ways that traditional shadow IT rarely does.
The scale of shadow AI has grown rapidly over the past year. Netskope now tracks more than 1,550 distinct generative AI SaaS applications, up from 317 in February 2025. This proliferation means your employees are discovering and using AI tools faster than any policy or approval process can keep up.
According to an UpGuard report, more than 80% of workers use unapproved AI tools in their jobs, including nearly 90% of security professionals themselves. Half of workers report using unapproved AI tools regularly.
Real Examples of Shadow AI
The most visible shadow AI tools are browser-based general-purpose assistants like ChatGPT, Claude, Gemini, and Perplexity. These require no installation, no IT ticket, and no approval: an employee can open any of them in seconds and begin working straight away. The productivity gain is immediate and substantial, which is precisely why adoption spreads faster than governance can follow.
Beyond the obvious names, shadow AI includes AI features embedded in productivity tools that employees may not even recognise as AI. Grammarly suggests rewrites. Notion AI summarises documents. Otter.ai transcribes meetings. These tools often operate in the background of legitimate workflows, processing sensitive content without triggering traditional security controls.
Specialised AI tools add another layer. Code assistants help developers debug and optimise. AI-powered research assistants can condense lengthy reports and pull out key findings. Design tools generate images and presentations. Meeting assistants join calls and produce transcripts. Each solves a specific problem, each processes potentially sensitive data, and each may be invisible to IT.
The consequences of unmanaged usage have already played out publicly. In March 2023, Samsung engineers leaked semiconductor source code and internal meeting notes through ChatGPT within three weeks of the company lifting its ban on the tool.
According to CIO Dive, one employee copied the entire source code of a semiconductor database program into ChatGPT to find a bug fix. Another entered code for identifying defective equipment to get optimisation suggestions. A third converted a smartphone recording of a company meeting into a document and fed it to ChatGPT to generate minutes. Samsung's warning to employees was direct: the data cannot be retrieved because it now resides on OpenAI's servers. The company subsequently developed its own internal AI, called Gauss, to prevent further exposure.
The CISA incident I covered in a previous post shows that even cybersecurity professionals are not immune. The acting director of the agency responsible for federal cybersecurity uploaded sensitive contracting documents to public ChatGPT, triggering security alerts. This happened to someone with explicit authority and awareness of the risks involved.
Why Shadow AI Exists Even in Companies with AI Policies
Employees use AI tools because the tools are useful. This applies equally to senior leadership and frontline staff. The productivity gains are real and immediate, which is why adoption consistently outpaces policy.
A WalkMe survey from 2025 found that 78% of employees have used AI tools their employer never approved. Nearly half of workers say their employer does not pay for any AI tools, so they pay out of pocket or use free tiers. The combination of high demand and low institutional support creates the shadow IT phenomenon at scale.
There is a gap between what organisations provide and what is available. Approved enterprise AI tools often require procurement cycles, security reviews, and training programmes. Meanwhile, an employee can access ChatGPT or Claude in seconds and start working immediately. When someone needs to summarise a document, draft an email, or debug code, they are not going to wait three months for procurement to evaluate options.
This friction asymmetry explains much of shadow AI's growth. Every hurdle in your approval process, every form to fill out, and every week of delay is an incentive for employees to find their own solutions. They are not acting maliciously. They are trying to do their jobs effectively with the tools available to them.
The Security and Compliance Risks Most Teams Miss
Shadow AI creates risk across multiple dimensions that traditional security frameworks were not designed to address.
Data exposure to third-party AI providers is the most immediate concern. When employees paste customer data, source code, financial information, or strategic documents into AI tools, that data leaves your controlled environment. According to TELUS Digital research, 57% of employees admit to entering sensitive information into AI tools, including personal data (31%), unreleased product details (29%), customer information (21%), and confidential financial information (11%). Public AI services may use this input for model training, which means your proprietary information could inform responses to competitors, researchers, or anyone else using the service.
Regulatory violations compound the data exposure problem. GDPR Article 30 requires maintaining records of all data processing activities, which becomes impossible when you cannot track AI uploads. CCPA mandates the ability to delete personal information upon request, but you do not know which AI systems contain your data. HIPAA requires comprehensive audit trails that shadow AI makes unachievable. According to Kiteworks analysis of IBM's data, 32% of breached organisations paid regulatory fines, with 48% of those fines exceeding $100,000. The EU AI Act adds another layer of requirements for high-risk AI applications that organisations cannot meet if they do not know what AI is being used.
Intellectual property leakage represents a strategic risk. Source code, product designs, research data, and competitive intelligence can all flow into AI tools without detection. Cyberhaven's research found that in March 2024, 27.4% of the corporate data employees put into AI tools was sensitive, up from 10.7% a year earlier. Source code accounts for 12.7% of the sensitive data flowing to AI tools, and customer support data for 16.3%.
Audit and documentation gaps create governance blind spots. When auditors or regulators ask what AI systems you are using, how they process data, and what controls you have implemented, "we do not know" is not an acceptable answer. According to IBM, 63% of breached organisations either do not have an AI governance policy or are still developing one. Among those that do have policies, only 34% perform regular audits to detect unsanctioned AI.
Liability for AI-assisted decisions extends beyond data security. If unvetted AI tools are making or influencing business decisions (screening candidates, pricing products, assessing credit risk, generating customer communications), you are accepting liability for those decisions without implementing appropriate oversight.
Why Traditional IT Tools Can't See AI Usage Properly
Most organisations discover shadow AI the same way CISA did: an alert fires after sensitive data has already been transmitted. By that point, the question shifts from prevention to damage assessment.
Traditional endpoint detection systems were designed for installed software that appears in asset inventories and system registries. Browser-based AI tools leave minimal footprints in these systems. Someone accessing ChatGPT through Chrome generates no installation event, no executable file, no entry in your software inventory.
Network monitoring faces similar limitations. When an employee submits a prompt to an AI service, that traffic is encrypted HTTPS to a legitimate cloud service. It looks identical to any other web traffic. Your network tools see a connection to api.openai.com but cannot distinguish between someone asking for weather information and someone uploading your product roadmap.
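To make that limitation concrete, here is a minimal TypeScript sketch of what a proxy or firewall log can actually reveal. The log fields and hostnames are illustrative assumptions rather than any particular vendor's format: the destination host and the payload size are visible, but the TLS-encrypted content is not.

```typescript
// Minimal sketch: what a proxy log can and cannot tell you about AI usage.
// The log format and hostnames are illustrative assumptions, not a real product's output.

interface ProxyLogEntry {
  timestamp: string;
  user: string;
  host: string;      // destination hostname is visible on the wire
  bytesSent: number; // payload size is visible, but the payload itself is TLS-encrypted
}

// Hostnames associated with common AI services (illustrative, incomplete)
const AI_HOSTS = new Set(["chatgpt.com", "chat.openai.com", "claude.ai", "gemini.google.com"]);

function flagAiConnections(entries: ProxyLogEntry[]): ProxyLogEntry[] {
  // All we can say is that a user connected to an AI host and sent some bytes.
  // Whether those bytes were a weather question or a product roadmap is invisible.
  return entries.filter((e) => AI_HOSTS.has(e.host));
}
```

That is the whole story from the network's point of view: a hostname, a timestamp, and a byte count.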
Cloud access security brokers (CASBs) can identify known AI services, but the landscape changes too quickly. With more than 1,550 generative AI applications now available, and new ones launching constantly, maintaining comprehensive coverage requires continuous updates. Meanwhile, employees are finding tools your CASB does not recognise yet.
The fundamental architecture gap is that AI usage happens in the browser, not in installed applications or network infrastructure. Traditional IT monitoring was not built for this, and retrofitting it only partially addresses the problem.
Why Banning AI Doesn't Work
Many organisations respond to AI risk by blocking tools entirely. This creates more problems than it solves.
Blocking pushes usage to personal accounts and unmonitored channels. If employees cannot access ChatGPT on the corporate network, they will use their phone on mobile data, a personal laptop at home, or a personal account on coffee shop Wi-Fi. The AI usage continues. Your visibility disappears completely.
Blocking concentrates risk with those who can override the restrictions. The CISA incident happened after the acting director obtained special permission to use ChatGPT while other staff remained blocked. Exceptions create the very exposure you were trying to prevent, often with higher-ranking employees who have access to more sensitive information.
Blocking removes the visibility that governance requires. When AI usage happens through sanctioned channels, you can implement controls, monitor activity, and enforce policies. When it is driven underground, you lose the ability to see what is happening at all.
The productivity reality also matters. WalkMe found that 80% of employees believe AI improves their productivity. Blocking these tools entirely means either accepting the productivity loss or accepting that employees will find workarounds. Neither outcome serves the organisation well.
How Companies Are Detecting Shadow AI
The shift is toward visibility and governance. Detection becomes the foundation that enables proportionate controls.
Browser-level discovery addresses the fundamental architecture gap. Because AI usage happens in browsers, detection must happen in browsers. This means monitoring identifies AI services as employees access them, in real time, without blocking productivity or requiring behaviour changes. The detection system sees when someone navigates to ChatGPT, Claude, Gemini, or any of hundreds of other AI tools, and builds a comprehensive inventory automatically.
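As an illustration of what browser-level detection can look like, here is a minimal sketch of a background script for a Chrome (Manifest V3) extension that records top-level navigations to known AI domains. The domain list, reporting endpoint, and payload shape are assumptions for the example, not a description of any specific product.

```typescript
// Minimal sketch of browser-level discovery as an extension background script.
// Assumes a Chrome MV3 extension with the "webNavigation" permission and @types/chrome installed;
// the domain list and reporting endpoint below are illustrative placeholders.

const KNOWN_AI_DOMAINS = ["chatgpt.com", "claude.ai", "gemini.google.com", "perplexity.ai"];

function matchAiDomain(url: string): string | undefined {
  const host = new URL(url).hostname;
  return KNOWN_AI_DOMAINS.find((d) => host === d || host.endsWith("." + d));
}

chrome.webNavigation.onCommitted.addListener((details) => {
  // Only consider top-level navigations, not iframes or subresources.
  if (details.frameId !== 0) return;

  const service = matchAiDomain(details.url);
  if (service) {
    // Report the visit so the inventory stays current; the endpoint is hypothetical.
    void fetch("https://governance.example.com/api/ai-usage", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ service, observedAt: new Date().toISOString() }),
    });
  }
});
```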
Real-time visibility turns uncertainty about AI usage into specific, actionable information. Instead of wondering whether employees are using AI tools, you know which tools are in use, who is using them, how frequently, and for what purposes. This visibility becomes the foundation for every subsequent governance decision.
Policy-based controls allow different responses for different risk levels. Someone using Claude to draft a marketing email carries different risk than someone uploading customer data to an unvetted AI service. Smart guardrails can allow the former while flagging or blocking the latter, based on your policies rather than blanket restrictions.
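A policy engine of this kind can be surprisingly small. The sketch below is a hypothetical decision table, not a prescribed ruleset: the risk tiers, data classifications, and the specific allow/flag/block choices would come from your own policies.

```typescript
// Minimal sketch of policy-based controls: different responses for different risk levels.
// Risk tiers, data classifications, and the decision table are illustrative assumptions.

type Action = "allow" | "flag" | "block";
type DataClass = "public" | "internal" | "confidential";
type ToolTier = "approved" | "known-unapproved" | "unknown";

interface UsageEvent {
  tool: string;
  toolTier: ToolTier;
  dataClass: DataClass;
}

function decide(event: UsageEvent): Action {
  // Drafting marketing copy in an approved tool passes through untouched.
  if (event.toolTier === "approved" && event.dataClass !== "confidential") return "allow";
  // Confidential data heading to an unvetted service is stopped outright.
  if (event.dataClass === "confidential" && event.toolTier !== "approved") return "block";
  // Everything else is permitted but surfaced for review.
  return "flag";
}
```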
Living AI inventories solve the documentation challenge. Manual surveys and static inventories are outdated the moment they are completed. Automated discovery maintains a current, comprehensive map of your AI landscape, which becomes the foundation for compliance documentation, risk assessments, and audit responses.
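Conceptually, a living inventory is just a record per AI service that every new observation updates. The field names in this sketch are illustrative; a real inventory would also track owners, vendor reviews, and data flows.

```typescript
// Minimal sketch of a living AI inventory: one record per service, updated on every observation.
// Field names are illustrative assumptions, not a prescribed schema.

interface InventoryEntry {
  service: string;             // e.g. "chatgpt.com"
  firstSeen: string;           // ISO timestamps
  lastSeen: string;
  distinctUsers: Set<string>;
  accessCount: number;
}

const inventory = new Map<string, InventoryEntry>();

function recordObservation(service: string, user: string, at: string): void {
  const entry = inventory.get(service);
  if (!entry) {
    inventory.set(service, {
      service,
      firstSeen: at,
      lastSeen: at,
      distinctUsers: new Set([user]),
      accessCount: 1,
    });
    return;
  }
  entry.lastSeen = at;
  entry.distinctUsers.add(user);
  entry.accessCount += 1;
}
```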
This enables adoption while maintaining oversight. Employees get access to tools that make them more productive. Security and compliance teams get the visibility they need to manage risk. The organisation captures the benefits of AI without accepting unmonitored exposure.
Moving Forward
You already have shadow AI in your organisation. The question is whether you can see it and whether you have controls in place.
The organisations managing this well are not blocking AI outright. They have visibility into which AI tools are being used across their environment. They apply proportionate controls based on risk rather than blanket restrictions that drive usage underground. They maintain documentation that supports compliance and audit requirements. They have shifted from asking "how do we stop AI usage" to "how do we enable AI usage safely."
Governance requires visibility. Discovery is the prerequisite for everything else.
Ready to understand what AI tools are running across your organisation? Get in touch to see how Velatir provides the visibility and governance you need.