The Quiet Revolution: When a Pharmaceutical Team Chose AI for Good
AI Governance

Nine people. A Monday briefing. Five classification levels. This is what responsible AI looks like in pharma consulting.

8 min read
Dieter Herbst

CEO & Founder

AI Governance · Leadership · Compliance · Ethics · Pharmaceutical

It’s Monday. 9 February 2026. A video call. Nine people.

Not a crisis meeting. Not a deadline scramble. A training session. On data classification. Voluntary attendance.

Every single person showed up.

That fact alone tells you something. Not about the topic. About the team.

The Conversation Nobody Is Having

Most of the public discourse about AI in business follows a predictable path. Risk. Harm. Bias. Hallucination. Data exposure. Job displacement. Regulation.

All of it valid. All of it necessary.

But somewhere in the noise, a quieter conversation has gone missing. The one about what AI can actually do when used with intention, with structure, with care.

Not the breathless futurism. Not the vendor pitch about productivity gains. The real, grounded, specific good that becomes possible when a team decides to use these tools responsibly.

That’s the conversation we chose to have on a Monday morning.

What Happened in the Room

I walked the team through five data classification levels. Public data at one end. Strictly Confidential at the other. For each level, a clear set of rules: where it can go, who can access it, which AI systems are permitted to process it.

Then a four-question decision tree. Every time someone handles client data, four questions. That’s it. Four questions that determine whether the data touches a consumer AI product, a commercial API with contractual protections, or nothing at all.
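As a rough illustration of how a tree like that can be made mechanical, here is a minimal TypeScript sketch. The intermediate level names, the field names, and the four checks themselves are placeholders for this article, not our internal wording.

```typescript
// Illustrative data classification and routing sketch (not the firm's actual policy code).

type ClassificationLevel =
  | "Public"
  | "Internal"              // assumed intermediate levels
  | "Confidential"
  | "Restricted"
  | "Strictly Confidential";

type Destination = "consumer-ai" | "commercial-api" | "no-ai";

interface DecisionInput {
  level: ClassificationLevel;
  containsPersonalData: boolean;         // e.g. prescriber or patient-level detail
  containsClientCommercialData: boolean; // e.g. CRM exports, territory breakdowns
  coveredByCommercialContract: boolean;  // zero-training clauses, audit rights, etc.
}

// Four yes/no questions, answered every time client data is handled.
export function routeData(input: DecisionInput): Destination {
  // Q1 (illustrative): is the data public and free of personal information?
  if (input.level === "Public" && !input.containsPersonalData) {
    return "consumer-ai";
  }
  // Q2 (illustrative): is it Strictly Confidential? Then no AI system touches it.
  if (input.level === "Strictly Confidential") {
    return "no-ai";
  }
  // Q3 (illustrative): does it contain personal or client commercial data?
  // Q4 (illustrative): is a commercial contract with protections in place?
  if (
    (input.containsPersonalData || input.containsClientCommercialData) &&
    input.coveredByCommercialContract
  ) {
    return "commercial-api";
  }
  return "no-ai"; // conservative default: when in doubt, keep it out of AI
}
```

In that frame, a public-word look-up lands on the consumer tool, while a client CRM export either goes through the contracted commercial API or stays out of AI entirely.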

Prishen did a live demonstration. He classified a real scenario: looking up the word “attestation” using a consumer-tier AI tool. Harmless. Public information. No client data involved. The point was to show the team that the classification system isn’t about prohibition. It’s about clarity. Some things are fine. Some things need a commercial contract. Some things don’t touch AI at all.

Then Anisha and Prishen practised together. A client CRM export. Personally identifiable information. Commercial relationships. The kind of data that, without clear classification, could end up in a system that trains on it, stores it indefinitely, or exposes it to third parties.

Malan asked questions. The team engaged. It was 40 minutes of people genuinely caring about getting this right.

"Without clear classification, we risk sending client data to systems that may train on it, store it indefinitely, or expose it to third parties."

Dieter Herbst - CEO, Herbst Group

[Figure: Data classification decision framework. The four-question decision tree that determines how every piece of data is handled.]

All nine team members signed attestations via DocuSign before the session closed. Not because we forced compliance. Because they understood why it matters.

Why “AI for Good” Is Not a Slogan

The phrase “AI for Good” has been co-opted by marketing departments and conference keynotes until it barely means anything. Let me reclaim it with specifics.

In pharmaceutical consulting, “AI for Good” means:

Better data classification protects patient privacy. When a pharmaceutical company shares sales data with us, that data sometimes includes prescriber information, territory breakdowns, and patient volume indicators. Knowing exactly which AI systems can process that data, and which cannot, is not a compliance checkbox. It’s a direct line to patient trust.

Faster market analysis gets the right medicines to the right patients sooner. We use AI to analyse market dynamics, competitive positioning, and territory intelligence. When done under proper governance, this work compresses timelines from weeks to days. That speed matters when the outcome is a medicine reaching a pharmacy shelf where someone needs it.

Automated compliance reduces human error in regulated environments. Our systems run 872 automated tests. Not because automation is fashionable. Because a human reviewing 872 scenarios will miss things. A machine won’t. And in pharmaceutical work, what gets missed can have real consequences.

Territory intelligence improves healthcare access mapping. Our Her-Zone platform maps over 14,000 healthcare facilities across South Africa. AI helps us identify gaps: areas where pharmacies are under-served, where doctors are concentrated in some regions and absent in others. That intelligence shapes how pharmaceutical companies deploy their field forces. Done well, it means better access. Done carelessly, it means nothing.

AI-powered training reaches more healthcare professionals. Our training systems have delivered content to pharmaceutical sales teams across the country. AI helps generate, verify, and personalise that content. Every module that helps a rep understand a medicine better is a module that helps a patient receive better care.

This is not abstract. This is Tuesday.

The Helsinki Frame

The University of Helsinki runs an internationally recognised programme on the Ethics of AI. It covers the expected ground: bias, fairness, transparency, accountability. But it also covers something that gets less attention: AI as a tool for human flourishing.

That phrase stopped me when I first encountered it. Human flourishing. Not human replacement. Not human surveillance. Flourishing.

I mentioned the Helsinki programme to the team during the session. We are pursuing the certification as a team. Not because a client demanded it. Because the framework aligns with how we already think about this work.

Ethics as a starting position, not an afterthought. Classification as the first step, not the last. Governance as something that enables speed, not something that slows it down.

The Commercial Advantage of Compliance

Very few companies in South Africa take this specific step: signing a commercial contract with an AI provider that includes data processing guarantees, zero-training clauses, and audit rights. It is a commercial advantage, and it is what allows client data to be processed inside a governed, compliant environment.

The Numbers Behind the Words

Words are easy. Here’s what sits behind ours.

872 automated tests (Vitest + Playwright)
A+ security rating (110/100 on Mozilla Observatory)
36 critical controls (zero-exception enforcement)
9/9 team attestations (signed via DocuSign)

36 critical controls govern how we handle data, deploy code, process client information, and interact with AI systems. Each control has a specific owner, a specific enforcement mechanism, and a specific consequence for violation.
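To make "owner, enforcement mechanism, consequence" concrete, a single control record could be modelled roughly like this; the field names are assumptions for illustration, not our internal schema.

```typescript
// Illustrative shape of one critical control record (field names assumed).
interface CriticalControl {
  id: string;                                 // e.g. "CC-014" (hypothetical identifier)
  description: string;                        // what the control governs
  owner: string;                              // the named person accountable for it
  enforcement: "automated" | "manual-review"; // how compliance is checked
  consequenceOnViolation: string;             // what happens when the control is breached
}
```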

Our security posture scored 110 out of 100 on the Mozilla Observatory. That’s not a typo. The scoring system awards bonus points for headers and configurations that go beyond the baseline. We implemented all of them.
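For readers curious what "beyond the baseline" looks like in practice, the bonus-scoring items are ordinary HTTP response headers. The sketch below shows the general idea with Express-style middleware; it is illustrative, and the exact policy values on our site differ.

```typescript
// Minimal sketch: attach strict security headers to every response (illustrative values).
import express from "express";

const app = express();

app.use((_req, res, next) => {
  res.set({
    "Strict-Transport-Security": "max-age=63072000; includeSubDomains; preload",
    "Content-Security-Policy": "default-src 'none'; frame-ancestors 'none'",
    "X-Content-Type-Options": "nosniff",
    "Referrer-Policy": "no-referrer",
    "Cross-Origin-Opener-Policy": "same-origin",
    "Cross-Origin-Resource-Policy": "same-origin",
  });
  next();
});

app.get("/", (_req, res) => res.send("ok"));
app.listen(3000);
```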

We are working toward ISO 27001 certification. Our POPIA compliance sits at 80%, with a registered Information Officer (Registration No. 2026-001668). The remaining 20% is active work, not aspiration.

872 automated tests run on every deployment. If a single test fails, the code does not ship. No exceptions. No overrides.
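To show what one of those gatekeeping tests might look like, here is a minimal Vitest sketch. The test names and the imported routeData helper refer to the illustrative classification sketch earlier in this article, not to our actual suite.

```typescript
// Illustrative Vitest example (not one of the 872 production tests).
import { describe, expect, it } from "vitest";
import { routeData } from "./classification"; // assumed module path for the sketch above

describe("data classification routing", () => {
  it("never routes Strictly Confidential data to any AI system", () => {
    expect(
      routeData({
        level: "Strictly Confidential",
        containsPersonalData: true,
        containsClientCommercialData: true,
        coveredByCommercialContract: true,
      }),
    ).toBe("no-ai");
  });

  it("routes public, non-personal look-ups to consumer tools", () => {
    expect(
      routeData({
        level: "Public",
        containsPersonalData: false,
        containsClientCommercialData: false,
        coveredByCommercialContract: false,
      }),
    ).toBe("consumer-ai");
  });
});
```

If any test in the suite fails, the pipeline refuses to deploy. That is the "no exceptions, no overrides" rule expressed as tooling.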

What This Means for Clients

If you’re a pharmaceutical company evaluating partners for commercial intelligence, territory analysis, or training, here’s what this means for you:

Your data will be classified before it touches any system. Every AI interaction involving your information will occur within a commercially contracted environment with zero-training guarantees. Every team member who handles your data has been trained on classification protocols and has signed an attestation confirming their understanding.

This isn’t a future state. This is today.

For the Industry

Compliance and governance have become a key focus for us. Part of that is doing the due diligence to make sure we stay aligned with our clients, our regulators, and our own standards.

Most pharmaceutical consulting firms will eventually get here. Regulation will require it. Client procurement will demand it. Insurance will incentivise it.

The question is whether you arrive because you were forced to, or because you chose to.

We chose Monday morning.

"Nine people. A Monday. No mandate. Just a shared understanding that responsible AI is not a department or a document. It's a decision made by every person who touches the work."


A Final Note

I want to be careful here. This article is not a claim that we’ve solved AI ethics. We haven’t. Nobody has. The field is moving faster than any governance framework can fully contain.

What I can say is this: we started. We classified. We trained. We signed. We built the infrastructure. We wrote the tests. We measured the results.

And then we showed up on a Monday to make sure everyone understood why.

That’s not everything. But it’s not nothing either.


Dieter Herbst is CEO of Herbst Group, a pharmaceutical consulting firm based in South Africa. Herbst Group provides sales force effectiveness, commercial intelligence, and AI-governed data services to pharmaceutical companies across the region.

Written by

Dieter Herbst

CEO & Founder at Herbst Group. Working with pharmaceutical commercial leaders across South Africa, Kenya, and Brazil to transform sales force effectiveness through evidence-based approaches.
