47% of employees using generative AI tools are accessing them through personal accounts.
Not company-sanctioned platforms. Personal accounts. With personal credentials. Using company data.
That’s the shadow AI problem.
What shadow AI looks like
A rep is preparing for a key account meeting. They need talking points customised for the customer’s situation. So they open ChatGPT on their phone (their personal account) and paste in the customer’s purchasing history, competitive situation, and strategic challenges.
The AI generates excellent talking points.
Also: company data just left the building. It’s now sitting on servers the company doesn’t control, subject to terms of service the company never agreed to, potentially training models that competitors might access.
Nobody authorised this. Nobody tracked it. Nobody knows it happened.
That’s shadow AI. And it’s happening at scale in pharmaceutical commercial operations.
The governance vacuum
Most organisations have policies for data security. Access controls. Encryption requirements. Approved vendor lists.
Those policies assumed data would stay within systems the organisation controls.
Generative AI broke that assumption. Employees can now extract data, process it through external AI, and get results, all without triggering any security alert.
The controls weren’t designed for this. They’re not catching it.
The R12.1M cost difference
Research on data breach costs reveals that breaches involving shadow IT (including shadow AI) cost organisations R12.1M more on average than breaches that don’t involve it.
Why? Because shadow systems aren’t monitored. Breaches take longer to detect. When they’re detected, the data exposure is harder to scope. Remediation is more complex because you don’t know what went where.
Shadow AI amplifies this. Every prompt containing company data is a potential leak. Every response might be stored, trained on, or accessed by the AI provider.
The pharmaceutical-specific risk
In pharmaceutical commercial operations, the data at risk is particularly sensitive.
Customer intelligence. Prescriber relationships. Prescribing patterns. Competitive positioning. This is the commercial foundation of the business.
Pricing information. Discounting strategies. Tender responses. Margin structures. Competitively devastating if exposed.
Patient data. In some contexts, call reports and customer records contain patient-adjacent information. Regulatory exposure compounds commercial risk.
A rep pasting customer data into a personal AI account might think they’re just getting help with a presentation. They’re actually creating a compliance incident that regulators would take seriously.
What governance requires
Visibility first. You can’t govern what you can’t see. Before policy, organisations need to understand the extent of shadow AI usage. Anonymous surveys. Network monitoring. Honest conversations.
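What does network monitoring look like in practice? A minimal sketch, assuming a CSV proxy-log export with a “host” column and an illustrative (not exhaustive) list of generative AI domains; adapt both to your environment:

```python
import csv
from collections import Counter

# Illustrative list only. A real deployment would maintain a curated,
# regularly updated catalogue of generative AI domains.
GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def summarise_genai_traffic(proxy_log_path):
    """Count requests per generative AI domain in a proxy-log export.

    Assumes a CSV with a 'host' column; adapt the parsing to whatever
    your proxy actually produces.
    """
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = (row.get("host") or "").lower()
            # Match the domain itself or any subdomain of it.
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                hits[host] += 1
    return hits

if __name__ == "__main__":
    for host, count in summarise_genai_traffic("proxy_export.csv").most_common():
        print(f"{host}: {count} requests")
```

A sketch like this won’t show what data left. It shows that traffic exists, and roughly how much. That’s the visibility baseline everything else builds on.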
Sanctioned alternatives. Employees use shadow AI because it helps them work. Prohibition without alternative just drives behaviour underground. Provide approved AI tools with appropriate guardrails.
Clear policies. What data can be used with AI? What can’t? Where are the lines? Vague guidance produces inconsistent behaviour. Specific rules produce compliance.
Technical controls. Where possible, implement controls that prevent data from reaching unapproved AI tools. Data loss prevention systems can be configured for generative AI endpoints.
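As a sketch of what “configured for generative AI endpoints” can mean: real DLP products (forward proxies, CASBs, endpoint agents) express this in their own policy languages, and the host list and content patterns below are purely illustrative. But the shape of the rule is the point:

```python
import re

# Hypothetical rule shapes, for illustration only.
UNAPPROVED_AI_HOSTS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

SENSITIVE_PATTERNS = [
    re.compile(r"\btender\b", re.IGNORECASE),      # tender responses
    re.compile(r"\bdiscount\b", re.IGNORECASE),    # discounting strategy
    re.compile(r"\bR\s?\d[\d\s,.]*[mMkK]?\b"),     # Rand amounts
]

def should_block(host, payload):
    """Block outbound payloads to unapproved AI endpoints when they
    contain content matching sensitive-data patterns."""
    if host.lower() not in UNAPPROVED_AI_HOSTS:
        return False
    return any(p.search(payload) for p in SENSITIVE_PATTERNS)

# A prompt mentioning a tender and a Rand figure, headed to a personal
# account, is stopped at the egress point; approved tools pass through.
assert should_block("chat.openai.com", "Draft talking points on the R4.2M tender")
assert not should_block("approved-internal-ai.example.com", "Summarise this policy.")
```

The design point: the rule keys on destination and content together. Traffic to sanctioned tools flows; sensitive content headed for personal AI accounts doesn’t.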
Training that matters. Not checkbox compliance training. Genuine education about why the rules exist and what’s at stake. Employees who understand the risk behave differently from employees who just know there’s a policy.
The uncomfortable conversation
Many organisations don’t want to know about shadow AI. Knowing creates obligation. Ignorance provides deniability.
That’s a dangerous position. Regulators don’t accept “we didn’t know” when the organisation could have known. And the breach cost research suggests that finding out late is much more expensive than finding out early.
The conversation about shadow AI is uncomfortable. It reveals behaviour that’s been happening without oversight. It exposes gaps in controls that were assumed to be sufficient.
But the alternative, pretending the problem doesn’t exist until a breach forces awareness, is worse.
47% of GenAI users on personal accounts. In your organisation, that’s not a hypothetical. It’s happening now.
The question is whether you’ll govern it proactively or discover it during incident response.
Written by
Dieter Herbst
CEO & Founder at Herbst Group. Working with pharmaceutical commercial leaders across South Africa, Kenya, and Brazil to transform sales force effectiveness through evidence-based approaches.
Connect on LinkedIn