Five worthy reads: From shadow IT to shadow AI, history repeats itself

Five worthy reads is a regular column on five noteworthy items we’ve discovered while researching trending and timeless topics. This week, we are exploring shadow AI as it rewrites workplace norms through the unsanctioned use of AI tools inside organizations—much like shadow IT once did.

When shadow IT appeared in organizations more than a decade ago, it caused alarm in boardrooms and IT departments. Employees bypassed official systems to use personal apps and services like Dropbox or Slack. At the time, business leaders viewed shadow IT as reckless behavior, but over the years, companies realized it was less about disobedience and more about employees seeking solutions for unmet needs: they wanted more agile tools to collaborate and work efficiently. That realization paved the way for structured cloud governance, approved SaaS adoption, and the modern IT stack.

History is now repeating itself with artificial intelligence (AI). This widespread, unsanctioned use of AI tools, known as shadow AI, is already taking root inside organizations of every size. A marketer might turn to ChatGPT or Gemini to draft a presentation. An analyst could lean on Copilot to crunch numbers faster. Others might install browser plug-ins or try out small AI apps that make everyday tasks less painful.

But there's an unseen issue: behind the speed and convenience, shadow AI can quietly create risks far bigger than they appear.

A financial analyst pastes confidential spreadsheets into a generative AI tool to summarize quarterly results. A legal assistant uploads parts of a client contract to draft a quick review. A software engineer relies on a personal AI coding tool that stores prompts on external servers. In each case, these hidden activities create invisible data trails and expose organizations to risks that leadership teams may not be prepared for. Moreover, when sensitive data is uploaded to public systems, compliance regulations may be breached without anyone realizing it, and business decisions may start to rely on outputs that leadership cannot manage or control.

Learning from the past

The parallels with shadow IT are striking. In the early days of cloud adoption, IT teams tried to block unapproved apps, but because adoption was already widespread, that approach failed. Over time, companies shifted to building policies, procurement models, and governance frameworks that allowed safe cloud usage while still meeting compliance requirements. We are now at a similar stage with AI. Some organizations are still debating whether employees should be allowed to use tools such as ChatGPT at all, while workers have already integrated them into daily workflows. The lesson from shadow IT is clear: banning technology does not work. Organizations need to formalize governance models for AI and provide safe, sanctioned alternatives before shadow usage grows beyond control.

Banning shadow AI outright will only push it further underground. Studies show that more than half of global workers already use some form of AI without approval. Instead of prohibition, companies should focus on illumination: build clear policies on what data can and cannot be shared, provide approved enterprise AI tools, train employees on responsible usage, and monitor activity transparently. Just as shadow IT eventually pushed companies to adopt sanctioned cloud solutions, shadow AI could become the catalyst for enterprise AI maturity. The real task is not stopping shadow AI, but bringing it to light.

This edition of Five Worthy Reads explores the rise of shadow AI: its risks, why employees turn to it, and what leaders can do to turn this hidden trend into a strategic advantage.

1. What is shadow AI?

Shadow AI refers to employees using AI tools without IT approval or governance. IBM explains that while this can boost productivity, it introduces blind spots, data leaks, compliance gaps, and reputational risks that CIOs and CISOs cannot afford to ignore.

2. Shadow AI: The Silent Security Risk Lurking in Your Enterprise

Shadow AI is a growing security and compliance threat emerging from unsanctioned AI use. Even as businesses embrace powerful AI tools, unauthorized use can leave them exposed to serious vulnerabilities if not managed. Learn the risks of shadow AI in this article.

3. AI at Work Is Here. Now Comes the Hard Part

If you’ve ever wondered why shadow AI is spreading so quickly, the answer is simple: employees want to get work done better and faster. They turn to tools like ChatGPT, Gemini, or Copilot when official systems feel too slow or limited. Explore this report on AI and work trends from Microsoft and LinkedIn.

4. The American Trust in AI Paradox: Adoption Outpaces Governance 

Shadow AI often looks harmless on the surface. But beneath the productivity gains lie hidden risks, from data leaks to compliance gaps. Learn how unchecked AI use can quietly undermine trust, governance, and business resilience.

5. Shadow AI Explained: Causes, Consequences, and Best Practices for Control

Core risks like regulatory non-compliance creep into the system through shadow AI when people bypass slow or inadequate official tools. This blog explains why managing shadow AI should be less about stopping innovation and more about adding smart guardrails.

Shadow AI is quickly becoming a part of everyday work. Most employees turn to these tools with good intentions, looking for faster ways to get things done. But the risks around data, compliance, and governance are real and cannot be ignored. The goal for leaders should not be to ban AI tools, but rather to understand why people use them and set the right guidelines. With the right approach, shadow AI can move from being a risk to becoming a driver of innovation and trust within your organization.