Top tips: Managing the risks of BYOAI at work

Top tips is a weekly column where we highlight what’s trending in the tech world today and list ways to explore these trends. This week, we’re discussing the rise of AI tools in the workplace—and the growing risks around their unregulated use.

It started quietly. A few employees using ChatGPT to rewrite emails. A project manager testing Notion AI to summarize meetings. A developer relying on GitHub Copilot to speed up code. Now? These AI tools are everywhere in the workplace—and most of them aren’t being tracked or approved by IT.

Welcome to the rise of bring your own AI (BYOAI): the intersection where GenAI meets shadow IT, introducing a new layer of risk and complexity for businesses.

What is BYOAI?

BYOAI refers to the unsanctioned use of consumer-grade or third-party AI tools by employees in their day-to-day work. Just like BYOD disrupted IT security policies a decade ago, BYOAI is now raising new questions around data governance, compliance, and visibility.

Why it’s a growing risk

AI tools offer undeniable advantages: faster workflows, improved efficiency, and enhanced creativity. However, when adopted without oversight, they can introduce serious security, compliance, and operational risks that organizations must address before it's too late.

Data leakage: Employees may unintentionally enter sensitive business or customer data into AI platforms that store, log, or reuse user inputs, especially free or consumer-grade versions. Without clear visibility into how a tool handles that input, organizations risk exposing proprietary information.

Compliance violations: In regulated industries, unsanctioned use of GenAI tools can result in non-compliance. This includes unauthorized data transfers, missing audit trails, or breaches of data protection laws such as the GDPR and HIPAA.

Misinformation from AI hallucinations: GenAI models can produce inaccurate, misleading, or biased content that looks credible. If such output goes unchecked into reporting, analysis, or external communication, it can seriously damage your organization's reputation and operations.

Lack of accountability: When AI-generated content is used in emails, reports, or source code, it becomes difficult to verify authorship or trace decision-making processes. This lack of transparency complicates governance and quality control.

5 practical strategies to manage BYOAI without hindering innovation

Unregulated use of AI tools doesn’t have to become a liability. With the right approach, organizations can support innovation while maintaining control and compliance. Here are five strategies to help manage BYOAI effectively:

1. Acknowledge, don’t prohibit 

Blocking all AI tools outright rarely works. Instead, begin by identifying how AI is already being used across teams. Open, non-punitive discussions build awareness and create opportunities to shape responsible usage patterns.

2. Evaluate AI tools based on risk 

Not all AI tools carry the same level of risk. Develop a classification framework that categorizes tools based on potential impact. For example, writing assistants may pose minimal concern, whereas code-generation platforms or AI bots interacting with clients could present higher risks.
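
To make that concrete, here's a minimal sketch of what such a framework might look like in code. The tiers, tool profiles, and rules below are illustrative assumptions, not a standard; a real framework would be driven by your own compliance and data-classification requirements.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"        # e.g., grammar and writing assistants
    MEDIUM = "medium"  # e.g., summarizers that see internal documents
    HIGH = "high"      # e.g., code generation or client-facing bots


@dataclass
class AIToolProfile:
    name: str
    sees_internal_docs: bool
    touches_source_code: bool
    faces_clients_or_customer_data: bool


def classify(tool: AIToolProfile) -> RiskTier:
    # Illustrative rules: escalate on the most sensitive exposure first.
    if tool.faces_clients_or_customer_data or tool.touches_source_code:
        return RiskTier.HIGH
    if tool.sees_internal_docs:
        return RiskTier.MEDIUM
    return RiskTier.LOW


# A writing assistant vs. a code-generation platform:
print(classify(AIToolProfile("writing-assistant", False, False, False)))  # RiskTier.LOW
print(classify(AIToolProfile("code-copilot", False, True, False)))        # RiskTier.HIGH
```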

3. Establish an approved AI toolkit 

Provide employees with access to approved, secure AI platforms that meet your organization’s standards for data protection, compliance, and auditability. Solutions such as Microsoft Copilot, Zoho Zia, or other enterprise-grade AI tools can serve as trusted alternatives to unvetted consumer apps.

Additionally, your organization can choose to license professional or business-tier versions of AI tools, such as ChatGPT Enterprise or Notion AI, and issue access keys to employees. These tiers often come with stricter data handling policies, including assurances that user data is not retained or used for model training.

By standardizing access through approved, enterprise-level accounts, organizations reduce exposure and keep tighter control over data flows, while giving teams access to more capable features under IT oversight.
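
If your gateway or proxy supports custom rules, even a simple allowlist check can back this up technically. The sketch below assumes a hypothetical set of approved domains; the real list would live in your IT policy, not hardcoded like this.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of IT-approved, enterprise-grade AI endpoints.
APPROVED_AI_DOMAINS = {
    "copilot.microsoft.com",
    "api.openai.com",   # assuming an enterprise agreement is in place
    "zia.zoho.com",     # placeholder hostname for an approved tool
}


def is_request_allowed(url: str) -> bool:
    """Allow outbound requests only to approved AI services."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in APPROVED_AI_DOMAINS)


print(is_request_allowed("https://api.openai.com/v1/chat/completions"))  # True
print(is_request_allowed("https://random-ai-notes.app/summarize"))       # False
```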

4. Build AI literacy across the organization 

Equip teams with the knowledge to use AI responsibly. Training should include guidance on:

  • Structuring effective and safe prompts.

  • Avoiding the input of sensitive or regulated data.

  • Verifying and fact-checking AI-generated content.

Improving AI literacy reduces misuse and ensures that employees understand both the potential and limitations of these tools.
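
Training can also be reinforced with lightweight guardrails. Here's a rough sketch of a prompt pre-check that flags obviously sensitive input before it leaves the building; the patterns (including the project tag scheme) are illustrative stand-ins for your organization's actual DLP rules.

```python
import re

# Illustrative patterns only; a real deployment would use your
# organization's DLP rules and data classification policy.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal project tag": re.compile(r"\bPROJ-\d{4}\b"),  # hypothetical naming scheme
}


def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


prompt = "Summarize the PROJ-1234 roadmap and email it to jane.doe@example.com"
findings = check_prompt(prompt)
if findings:
    print("Do not send. Prompt contains:", ", ".join(findings))
```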

5. Monitor usage transparently 

Implement endpoint and browser-level monitoring to understand how AI tools are being accessed and used. This should be framed as a governance initiative rather than surveillance, with the goal of developing informed policies and mitigating risk without disrupting workflows.
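
As a sketch of what that visibility could look like in practice, the snippet below aggregates hypothetical proxy log lines into a per-user, per-domain usage count. The log format and domain list are assumptions; adapt them to whatever your gateway or endpoint agent actually emits.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical proxy log lines in "timestamp user url" form.
LOG_LINES = [
    "2025-06-02T09:14:11Z alice https://chat.openai.com/backend/conversation",
    "2025-06-02T09:15:02Z bob https://api.openai.com/v1/chat/completions",
    "2025-06-02T09:16:45Z carol https://notion.so/ai/summarize",
]

# Domains to watch for; an illustrative subset, not an exhaustive list.
AI_DOMAINS = ("openai.com", "notion.so", "claude.ai")


def ai_usage_report(lines: list[str]) -> Counter:
    """Count accesses to known AI domains per user, to inform policy."""
    usage: Counter = Counter()
    for line in lines:
        _, user, url = line.split(maxsplit=2)
        host = urlparse(url).hostname or ""
        if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
            usage[(user, host)] += 1
    return usage


for (user, host), count in ai_usage_report(LOG_LINES).items():
    print(f"{user} accessed {host} {count} time(s)")
```

Reporting at this aggregate level supports informed policy-making without turning monitoring into individual surveillance.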

Ignore it now, regret it later

AI adoption in the workplace isn’t a question of if—it’s already happening. Employees are using GenAI tools to work faster and smarter, often without formal approval or oversight. And while the benefits are real, so are the risks.

Organizations that take a proactive approach—by understanding usage patterns, defining clear policies, and enabling responsible adoption—will help employees unlock AI’s full potential. Those that delay action may soon find themselves dealing with compliance issues, security gaps, and a workforce that has already moved on without them.

The AI era is here, and the time to manage it is now.