AI Governance: Your Reps Are Using AI. Just Not Yours.

81% of employees use unapproved AI tools. Here's how pharma leaders can channel that energy into governed systems that enable speed.

4 min read
Dieter Herbst

CEO & Founder

AI Governance, Data Compliance, Commercial Excellence, Leadership

Your reps are using AI. Just not yours.

They’re copying territory lists into ChatGPT on their phones. Pasting customer segments into Google’s AI. Asking WhatsApp bots to help analyse sales figures.

Not because they’re careless. Because they’re trying to do good work faster.

The Reality Most Leaders Are Navigating

This is the reality most pharmaceutical commercial leaders are navigating right now. Research from late 2024 found 81% of employees use AI tools that haven’t been formally approved. And 59% don’t mention it to their managers.

The driver is understandable: only 52% of companies provide approved AI tools, and just a third of employees say those tools actually meet their daily work requirements. People are finding ways to be more productive. The challenge is channelling that energy into governed systems.

Our 18-Month Preparation

We’ve been working through this ourselves.

We spent 18 months preparing for AI. Not experimenting. Preparing. When we started evaluating platforms for processing client commercial data, we asked the questions that kept us up at night:

  • Where does the data sit?
  • Who can access it?
  • What happens to it after processing?
  • Can we prove what happened if something goes wrong?

I suspect many of you are asking the same questions.

Lessons from Samsung and Others

Samsung faced this directly. Within 20 days of allowing employees to use ChatGPT, three separate data incidents occurred. Their response was instructive: rather than tightening restrictions, they built a governed in-house alternative that met the same needs safely.

The lesson? Restrictions without alternatives push usage underground. Governed enablement brings it into the light.

Governance Enables Speed

What’s encouraging is that the companies getting this right are proving governance doesn’t slow things down.

Novartis updated their Code of Ethics in 2024 to explicitly address responsible AI use. Their CEO-chaired ESG committee approves AI governance frameworks for strategic decisions. The result? An AI-generated drug candidate moved from computer to clinic in 17 months.

Pfizer established AI governance back in 2014. Last year, their AI implementation saved scientists 16,000 hours of search time annually and cut infrastructure costs by 55%.

Governance enabled their speed. It’s enabling ours too.

Our Four Principles

For our practice, we settled on four principles:

1. Humans Lead, Data Informs

AI generates options and surfaces patterns. Humans make decisions. Every insight we deliver comes with clear reasoning a Commercial Director can interrogate and own.

2. Transparency Over Opacity

If we can’t explain how we reached a conclusion, we don’t present it.

3. Client Data Stays Client Data

No commingling. No training. No enrichment.

4. Governance Enables Speed

The controls remove the friction of uncertainty. When everyone knows the boundaries, they move confidently within them.

We selected a platform with isolated environments, audit-grade change logs, and a contractual guarantee that client data never trains AI models. Not because we’re risk-averse. Because we wanted to be confident.

That confidence is what allows us to move quickly where hesitation would otherwise slow us down.

The Talent Dimension

The talent dimension adds urgency. 93% of Gen Z workers use two or more AI tools weekly. 75% are using AI to upskill. For pharmaceutical companies competing for analytics and commercial talent, this is the new baseline expectation.

Five Questions to Frame the Conversation

If your organisation is working through similar questions, here are five that might help frame the conversation:

  1. Who owns accountability when AI-assisted decisions need to be explained?

  2. What customer and patient data is being processed by AI systems, and can you demonstrate where it sits?

  3. Which AI tools are your teams finding useful, and how might you bring those into a governed framework?

  4. Do you have an incident response approach for AI-specific scenarios?

  5. Are your current tools meeting the work requirements your people actually face?

The Window Is Now

Only 27% of boards have incorporated AI governance into their committee charters. That number will shift quickly over the next year.

The companies implementing governance frameworks now will be well-positioned to capture value while managing risk thoughtfully. It’s not about moving first. It’s about moving with intent.


What’s been most useful in your organisation’s approach to AI governance? I’m genuinely curious what’s working for others.

If you’re navigating similar questions, I’d welcome the conversation.

Written by

Dieter Herbst

CEO & Founder at Herbst Group. Working with pharmaceutical commercial leaders across South Africa, Kenya, and Brazil to transform sales force effectiveness through evidence-based approaches.

Connect on LinkedIn
