
Study Reveals Widespread Use of Unapproved AI at Work

Spoiler alert: If you think your team isn’t using AI at work, they probably just aren’t telling you.

A growing stack of studies is painting the same picture: employees all over the world are quietly using unapproved AI tools to get their work done faster, better, and with fewer late-night panic sessions. At the same time, companies are scrambling to catch up with policies, security rules, and governance frameworks that feel like they’re always one step behind.

In other words, we’re living through the rise of “shadow AI”: the unsanctioned, off-the-books use of AI tools in the workplace. IBM describes shadow AI as any AI tool used without the approval or oversight of IT, similar to “shadow IT” but with higher stakes, because AI tools don’t just move data, they interpret and transform it. Zendesk and other workplace surveys now estimate that nearly half of employees in some groups are using unsanctioned AI tools at work, particularly in customer service and knowledge roles.

This article breaks down what the latest research actually says about unapproved AI at work, why employees are doing it anyway, and how smart organizations can move from “Don’t you dare use that chatbot” to “Here’s how to use AI safely and effectively.”

What the Latest Study Reveals About Unapproved AI

A 2024 workplace study spotlighted by HubSpot found that a large share of workers are using unapproved AI tools at work, often in ways that fall outside official company policies, or that happen in the many companies that still don’t have AI policies at all.

The headline findings from this and related surveys are remarkably consistent across multiple sources:

  • Unapproved AI is now normal, not fringe. Recent “shadow AI” research suggests that 98% of organizations have employees using unsanctioned apps, including AI tools, and about three-quarters have active “bring your own AI” (BYOAI) behavior in the workforce.
  • Workers are doing it in secret. Surveys of workers using ChatGPT and similar tools show that around 45–70% of employees who use AI at work don’t tell their managers.
  • They’re sharing more sensitive data than anyone is comfortable admitting. One 2024 cybersecurity-focused study found that 38% of workers had shared sensitive data in AI tools without employer knowledge.
  • In many companies, policy just hasn’t caught up. Research summarized by TechRadar and other outlets suggests that fewer than one in three organizations have clear, formal AI policies, even as most report shadow AI usage.

Put simply, AI tools are already embedded in day-to-day work (emails, reports, code, slide decks, customer replies), whether or not your IT or legal team has blessed them.

Who’s Using AI at Work (And Why They’re Hiding It)

AI use is rising across the workforce

According to recent Pew Research Center data, about one in five U.S. workers (21%) now say at least some of their job tasks involve AI, up from 16% just a year earlier. And that’s just counting people who admit it in a survey.

Other workplace polls show even higher numbers when you ask about “ever used AI at work” or include things like grammar tools, autocomplete, and summarizers. One 2025 survey found that four in five workers use AI at work in some capacity, and over a third say it’s now “essential” to their job.

The secrecy problem: Shadow AI in action

So why are so many people keeping their AI use a secret? Studies highlight a few recurring themes:

  • Fear of looking lazy or replaceable. Workers worry that if managers see how much AI helps with their tasks, they’ll think, “If the bot can do it, why do we need you?” Surveys repeatedly show that over half of workers are worried AI will reduce job opportunities.
  • Lack of clear rules. In multiple studies, nearly two in five workers reported that their company has no clear AI use guidelines, so people wing it and hope for the best.
  • Official tools are clunky or nonexistent. Research on shadow AI consistently shows that many employees turn to unapproved tools because they either don’t have sanctioned ones or find them too limited compared to consumer tools they use at home.
  • Culture and trust issues. When employees assume that management’s default reaction will be “No,” they’re more likely to experiment in private than ask for permission.

All of this creates a strange paradox: organizations want innovation and productivity, workers want to be efficient, and yet both sides are quietly nervous about the very tools that could bridge the gap.

The Risks of Unapproved AI at Work

Let’s be honest: unapproved AI isn’t just a cute rogue productivity hack. It can be a serious risk vector.

1. Data leaks and confidentiality problems

Cybersecurity researchers are increasingly worried about what happens when employees paste confidential information into public AI tools. Studies estimate that a majority of shadow AI users have entered some form of sensitive data, including customer records, financial details, internal strategy documents, and proprietary code.

Once that data is in an external system, companies lose visibility, and sometimes control, over where it lives, how it’s stored, and who might use it in the future.

2. Compliance, privacy, and regulatory headaches

Regulations around privacy, data protection, and AI transparency are tightening worldwide. Shadow AI usage can put organizations at risk of violating laws like GDPR, HIPAA, and industry-specific regulations, especially in finance and healthcare. Reports show that companies with high shadow AI use tend to suffer more severe and costly data breaches.

3. Accuracy and “hallucinations”

Generative AI is powerful, but not infallible. Studies and expert commentary highlight that many employees don’t fully understand how to evaluate AI output. When workers copy-and-paste AI-generated content into emails, contracts, or analysis without proper fact-checking, they risk introducing subtle (or spectacular) errors into business decisions.

4. Erosion of trust between workers and leadership

When leaders discover that a significant portion of their workforce has been quietly using AI behind the scenes, it can trigger a trust crisis. On the flip side, workers often feel that leadership is out of touch if the official stance is “just don’t use it,” while deadlines keep shrinking.

That’s why progressive organizations are shifting to a more nuanced stance: AI is allowed; here’s how to use it safely.

The Upside: Why Workers Use AI Anyway

The positive side of these studies is that AI isn’t just a toy. It’s delivering real benefits that organizations should want to harness safely.

  • Productivity boosts. Surveys of desk workers report that over 80% of AI users see improved productivity, faster completion of repetitive tasks, and more time for higher-value work.
  • Reduced burnout. Workers cite AI’s ability to handle “boring admin stuff” so they can focus on creative, strategic, or interpersonal tasks that actually require human judgment.
  • Better quality in some tasks. AI helps catch typos, simplify complex language, generate alternate phrasings, or outline ideas. It’s like having a tireless junior assistant who never gets offended if you rewrite everything.

The real challenge isn’t whether employees should use AI; it’s how to channel that usage into something safe, transparent, and aligned with company goals.

How Organizations Can Respond to Shadow AI

1. Acknowledge reality

Step one: assume people are already using AI. The data is overwhelming: shadow AI is the norm, not the exception. Leaders who cling to “We banned it; therefore it’s not happening” are basically the security version of “If I close my eyes, the problem disappears.”

2. Create clear, simple AI policies

Effective AI policies don’t need to be 40-page PDFs no one reads. The best ones are written in plain language and answer basic questions:

  • Which AI tools are approved for use, and for what kinds of tasks?
  • What types of data can never be entered into public AI tools?
  • How should employees fact-check and label AI-generated content?
  • Who can employees ask when they’re unsure about what’s allowed?

Studies show that when companies provide approved tools and clear guidance, workers are far more likely to stay within guardrails and less likely to experiment with risky consumer tools.
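To make this concrete, here is a minimal sketch of what such a policy could look like if it were also written down as data, in Python. Every tool name, task, and data category below is hypothetical; the point is that a policy simple enough to encode this way is usually also simple enough for employees to actually follow.

```python
# A minimal sketch, assuming hypothetical tool names and data categories:
# the policy is expressed as plain data so it can be checked by tooling
# as well as read by people.

APPROVED_TOOLS = {
    "internal-copilot": {"drafting", "summarization", "code-boilerplate"},
    "vendor-chat-enterprise": {"drafting", "summarization"},
}

# Data categories that must never be entered into any AI tool, approved or not.
PROHIBITED_DATA = {"customer_pii", "unreleased_financials", "proprietary_source_code"}


def is_allowed(tool: str, task: str, data_categories: set[str]) -> bool:
    """Return True if this tool/task/data combination fits the policy."""
    if data_categories & PROHIBITED_DATA:
        return False  # sensitive data is out of scope for every tool
    return task in APPROVED_TOOLS.get(tool, set())


# Drafting marketing copy in an approved tool is fine; anything touching
# customer PII is not, no matter which tool is asked.
print(is_allowed("vendor-chat-enterprise", "drafting", {"public_marketing"}))  # True
print(is_allowed("vendor-chat-enterprise", "drafting", {"customer_pii"}))      # False
```

However a company chooses to express it, the test of a good policy is the same: an employee should be able to answer “Can I use this tool for this task with this data?” in seconds, not after reading a 40-page PDF.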

3. Provide good, sanctioned AI tools

Workers use shadow AI because it works. If the only official option is an outdated internal tool that’s slow, limited, or hard to access, employees will default back to their favorite public chatbot.

Forward-thinking companies are rolling out vetted AI tools embedded in the tools people already use (email, docs, CRM, help desks) and layering security controls like data loss prevention (DLP), logging, and access management on top.
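As a rough illustration of the DLP idea, here is a small Python sketch that screens a prompt for obviously sensitive patterns before it ever reaches a sanctioned AI endpoint. The patterns and the send_to_approved_ai() gateway are assumptions made up for this example, not any real product’s API or a complete detection ruleset.

```python
import re

# Illustrative patterns only; a real DLP ruleset would be broader and tuned
# to the organization's own data.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # SSN-like numbers
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),        # card-number-like digit runs
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),      # email addresses
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # strings that look like API keys
]


def screen_prompt(prompt: str) -> str:
    """Block prompts that look like they contain sensitive data."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            raise ValueError(f"Prompt blocked: matched {pattern.pattern!r}")
    return prompt


def send_to_approved_ai(prompt: str) -> None:
    # Stand-in for a call to a sanctioned, logged AI gateway.
    print("Forwarding to approved AI gateway:", prompt[:60])


send_to_approved_ai(screen_prompt("Summarize the Q3 planning notes in three bullets."))
```

A screen like this will never catch everything, which is exactly why it belongs alongside logging and access management rather than in place of them.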

4. Train people (and not just once)

AI literacy is quickly becoming as important as basic digital literacy. Practical training should cover:

  • How AI tools work and where they can go wrong
  • What “good prompts” look like for specific job roles
  • How to review, edit, and fact-check AI responses
  • What data is safe, and unsafe, to use

Surveys show that when employees understand the risks and the rules, they’re more confident using AI and less likely to hide it.

5. Build a culture of trust instead of fear

Shadow AI is ultimately a trust problem. If workers believe they’ll be punished for trying new tools, or that every AI experiment is “cheating,” they’ll use AI in secret or not at all.

On the other hand, when leaders say, “We expect you to use AI, and we’ll help you do it safely,” AI usage becomes something employees can talk about instead of hide. That’s when organizations can start measuring impact, improving workflows, and catching risks early.

Conclusion: Shadow AI Is a Signal, Not Just a Threat

The widespread use of unapproved AI at work isn’t just a security nightmare; it’s also a very loud signal. It tells us that workers are hungry for better tools, faster workflows, and more support in handling the endless stream of emails, documents, and digital noise that defines modern work.

Organizations that respond by banning AI outright will likely see more secrecy and more risk. Those that respond with thoughtful policies, strong governance, and usable approved tools will tap into AI’s benefits and reduce the chaos of shadow AI.

The genie is not only out of the bottle; it’s rewriting your slide deck and rephrasing your emails. The question now is whether your organization is ready to collaborate with it, or keep pretending it’s not there.

SEO Summary & Metadata

Summary: A new wave of workplace studies reveals what many managers suspected but couldn’t prove: employees are quietly using unapproved AI tools to get their work done faster, often without telling IT or leadership. This article unpacks what “shadow AI” looks like in real offices, how common it really is, what risks it introduces for data security and compliance, and why workers keep turning to AI anyway. You’ll also learn practical steps organizations can take to move from secret experimentation to safe, transparent, and productive AI use across the business.


Real-World Experiences with Unapproved AI at Work

Statistics tell one story, but the lived experiences around unapproved AI at work are where things get really interesting. If you’ve ever quietly dropped a prompt into a chatbot between Zoom calls, you’ll probably recognize at least one of these situations.

The overworked analyst and the “secret intern”

Picture a mid-level analyst at a large company. She’s juggling dashboards, reports, and endless “quick updates” for leadership. One night, after a particularly soul-crushing spreadsheet session, she tries a generative AI tool “just to see what happens.” It summarizes a 40-page PDF in a few seconds and drafts a coherent email explaining the highlights. She double-checks the numbers, tweaks the wording, and sends it off.

The next morning, her manager replies, “This is super clear, thank you!” There’s no mention of AI. The analyst quietly decides, “Okay, you’re my secret intern now.” From then on, AI helps with outlines, first drafts, and recaps, while she focuses on the nuance and judgment AI can’t provide.

Is she breaking policy? Maybe. Is she trying to do a better job? Absolutely.

The developer who’d rather ask AI than file a ticket

In many engineering teams, the unwritten rule is: if you can fix it yourself in five minutes, do it. That’s part of why developers are frequently among the heaviest shadow AI users. Instead of filing an internal ticket or waiting for a code review, some quietly paste snippets into AI coding assistants, asking for suggestions, refactors, or explanations.

When this happens in a vacuum, with no guidance about what code can be safely shared, risk creeps in fast. But from the developer’s perspective, it often feels like the most efficient way to solve a simple problem without slowing the whole team down.

The companies that handle this best acknowledge the tradeoff and respond with clear guidelines: for example, no confidential algorithms or client-specific logic in public tools, but approved in-house AI copilots for routine boilerplate code, tests, and documentation.
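One way to express that tradeoff in tooling is a simple routing rule that keeps anything under confidential paths on the in-house copilot. The Python sketch below is purely illustrative: the path prefixes and both assistant functions are hypothetical placeholders for whatever tools a team has actually approved.

```python
# A sketch of a path-based routing rule for AI coding help. Everything here
# is a made-up placeholder, not a real team's configuration.
CONFIDENTIAL_PREFIXES = ("src/pricing/", "src/clients/", "secrets/")


def ask_inhouse_copilot(snippet: str) -> str:
    return f"[in-house copilot] reviewing {len(snippet)} characters"


def ask_public_assistant(snippet: str) -> str:
    return f"[public assistant] reviewing {len(snippet)} characters"


def choose_assistant(file_path: str):
    """Route questions about confidential paths to the in-house copilot only."""
    if file_path.startswith(CONFIDENTIAL_PREFIXES):
        return ask_inhouse_copilot
    return ask_public_assistant


# Boilerplate test code can go to the public tool; client-specific logic cannot.
print(choose_assistant("tests/test_utils.py")("def test_add(): ..."))
print(choose_assistant("src/clients/acme_rules.py")("def apply_discount(): ..."))
```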

The manager who suspects AI but can’t talk about it

Managers are increasingly seeing “AI fingerprints” on their team’s work: suddenly more polished emails, consistent formatting, or summaries that sound just a little too structured. They’re pretty sure AI is involved, but they don’t want to accuse anyone unfairly, or open a can of worms they don’t know how to manage.

So, instead of bringing it up, they quietly accept the results. The work gets done, deadlines are met, and the conversation never happens. The downside? No shared best practices, no alignment on what’s allowed, and no opportunity to systematically reduce risk.

When organizations encourage managers to say, “If you’re using AI, let’s talk about how to use it well,” the tone shifts. AI becomes part of the workflow discussion, not a secret tool shoved under the rug.

The employee who doesn’t use AI and feels left behind

There’s another side of the story: workers who don’t use AI at all, either because they’re nervous, skeptical, or just overwhelmed. As colleagues quietly use AI to draft faster, reply quicker, and crank out more polished deliverables, these workers can start to feel like they’re running the same race with heavier shoes.

Some surveys show that while many workers worry AI will hurt their job prospects, others worry they’ll be left behind if they don’t learn it. That tension is real, and it’s one more reason leadership should offer training and support instead of letting AI skills develop in the shadows.

What these experiences have in common

Across these scenarios, a few themes repeat:

  • AI is filling real gaps. People use unapproved tools because they feel pressure to deliver more work, faster, with fewer resources.
  • The line between “helpful assistant” and “risky shortcut” is blurry. Without guidance, employees are making judgment calls about data, accuracy, and ethics on their own.
  • Silence makes everything harder. When AI use is taboo, no one shares lessons learned, and organizations lose the chance to turn scattered experiments into shared advantages.

The big takeaway from both research and lived experience is this: unapproved AI use at work isn’t just about rule-breaking. It’s about workers trying to survive, and sometimes thrive, in a demanding digital workplace. The question for employers isn’t “How do we stop this?” so much as “How do we bring it into the open, make it safer, and help everyone get the benefits without the landmines?”
