Common Sense vs. Nonsense: A Teardown of “Artificial Expert Intelligence”

By JJS Ahdan

I recently read an article by Shabbir Dahod titled “Could Artificial Expert Intelligence Fix Pharma’s Supply Chain Gaps?”. I was genuinely interested to read his perspective on the role of AI.

However, I fundamentally disagree with Shabbir — not just on the framing of Artificial Expert Intelligence, but on almost everything the article represents.

What I found instead was a polished narrative that sounds ambitious, yet ultimately rests on a flawed premise. From the perspective of someone who has spent years aligning operations, service management, business processes, and technology, this is not a new intelligence model. It is a familiar collection of practices, re-labelled and positioned as something more profound than it actually is.

This pattern isn’t new. A few years ago, similar claims were wrapped in blockchain language. Today, it’s AI, or more precisely, a rebranding of AI as AExI.

Let’s Acknowledge What is True

To be fair, the article does make several objectively correct statements:

  1. Automation has improved efficiency in supply-chain operations
  2. Pharmaceutical supply chains are fragmented
  3. DSCSA compliance is complex

Where the article goes wrong is not in identifying challenges, but in how it frames both the cause and the solution.

Claim 1: “General AI creates risk in pharmaceutical supply chains.”

Nonsense:
General-purpose AI or automation inherently introduces risk in DSCSA-regulated environments.

Common Sense:
Risk does not come from whether technology is labelled “general” or “expert.”
Risk comes from poor alignment between process, standards, data ownership, software design, governance, and accountability.

DSCSA compliance is fundamentally deterministic. Serialization, verification, validation, and traceability are rule-based operational disciplines – not intelligence problems. Rebranding the tooling does not change that reality.
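To make "deterministic" concrete: the GS1 check digit that closes every GTIN is computed by a fixed mod-10 rule. The sketch below shows that rule for a GTIN-14 (real DSCSA serialization validation covers far more than this, but every step is equally rule-based):

```python
def gtin14_is_valid(gtin: str) -> bool:
    """Verify a GTIN-14 using the standard GS1 mod-10 check digit.

    Fully deterministic: weight the first 13 digits alternately
    3 and 1 (leftmost digit of a 14-digit code gets weight 3),
    sum them, and compare against the final check digit.
    There is nothing to infer or predict.
    """
    if len(gtin) != 14 or not gtin.isdigit():
        return False
    digits = [int(c) for c in gtin]
    total = sum(d * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(digits[:13]))
    return (10 - total % 10) % 10 == digits[13]
```

A code either passes or it doesn’t; no model, however it is branded, can improve on that.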

Claim 2: “Artificial Expert Intelligence is the next revolution.”

Nonsense:
AExI represents a fundamental leap beyond existing approaches.

Common Sense:
Everything described under AExI already exists in well-understood enterprise patterns:

  • domain-specific systems
  • constrained automation
  • governed workflows

These are sensible design choices. Calling them a revolution may be good positioning, but it does not make them new.

If I rename my toaster “Thermal Bread Optimization Intelligence,” it still just makes toast.

Claim 3: “Five defining attributes set AExI apart.”

Let’s translate them.

1. Domain-specific sub-agents

Nonsense:
Expert agents coordinating across ecosystems.

Common Sense:
Different components doing different jobs. That’s system design, not a new form of intelligence.

2. Greater precision and accuracy

Nonsense:
AExI decisions are more trustworthy because they are “expert.”

Common Sense:
Precision comes from constraints, clean data, and clear rules, not from what the intelligence is called.

3. Lower cost and faster reasoning

Nonsense:
AExI is inherently more efficient than “general AI.”

Common Sense:
Efficiency is an architectural choice. Any serious implementation avoids unnecessary computation. This comparison relies on a strawman rather than real-world practice.

4. Fine-grained responsibility

Nonsense:
Agents mirror organizational design.

Common Sense:
Clear responsibility is a business requirement, not an AI innovation.

5. Humans in the loop

Nonsense:
Human oversight is presented as a differentiator.

Common Sense:
In regulated industries, human oversight is mandatory. Period.

Claim 4: “AExI agents deliver advanced capabilities.”

Nonsense:
AExI agents validate EPCIS events, detect exceptions, and coordinate partners.

Common Sense:
So do rule engines, BPM tools, EDI validators, and deterministic automation systems.

If “intelligence” is required to decide whether data conforms to a standard, the problem is not a lack of intelligence. It is a lack of awareness and discipline, often compounded by proprietary canonical data models that undermine interoperability and standards.
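The kind of "validation" and "exception detection" being credited to expert agents is exactly what a plain rule engine does. Here is a minimal sketch, with field names loosely modelled on EPCIS ObjectEvents (a real validator would check the full GS1 EPCIS schema, not four predicates):

```python
# Deterministic, rule-based validation: each rule is a named predicate.
# Field names loosely follow EPCIS 2.0 ObjectEvents; this is a sketch,
# not a full schema validator.
RULES = [
    ("eventTime present", lambda e: "eventTime" in e),
    ("epcList non-empty", lambda e: bool(e.get("epcList"))),
    ("action is valid",   lambda e: e.get("action") in {"ADD", "OBSERVE", "DELETE"}),
    ("epcs are EPC URNs", lambda e: all(str(epc).startswith("urn:epc:id:")
                                        for epc in e.get("epcList", []))),
]

def validate_event(event: dict) -> list[str]:
    """Return the names of all failed rules (empty list = pass)."""
    return [name for name, rule in RULES if not rule(event)]
```

Every failure is traceable to a named rule, which is precisely the auditability a regulated environment demands, and precisely what probabilistic inference cannot guarantee.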

Claim 5: “Governance by design ensures trust.”

Nonsense:
Governance is framed as an AExI innovation.

Common Sense:
You’re describing enterprise workflow systems that have existed for decades. Governance is essential; it just isn’t new.

Calling this innovation is like saying your bicycle comes standard with wheels.

Claim 6: “Generic chatbots undermine accuracy.”

Nonsense:
Critical supply-chain decisions are contrasted with hypothetical chatbot misuse.

Common Sense:
This is another strawman. No serious organization is running DSCSA compliance through unguided chat interfaces.

The Missing Foundation: Standards First, Intelligence Second

What the article largely sidesteps is the role of industry standards.

GS1 identifiers and the EPCIS framework already define:

  • what data must exist
  • how it must be structured
  • how it must be exchanged
  • how it must be validated or verified

When systems adhere to these standards, most DSCSA “decisions” disappear. There is nothing to infer, predict, or generate – only to verify.
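“Verify” here means checking structure against a published standard. As one illustration, the shape of an SGTIN EPC URN can be checked mechanically (a simplified sketch; the full GS1 Tag Data Standard also constrains the serial alphabet and partition rules):

```python
import re

# Simplified shape check for an SGTIN EPC URN:
#   urn:epc:id:sgtin:CompanyPrefix.IndicatorItemRef.Serial
# This sketch checks structure only; the full GS1 Tag Data Standard
# adds further constraints on the serial component.
SGTIN_URN = re.compile(
    r"^urn:epc:id:sgtin:(\d{6,12})\.(\d{1,7})\.([!-~]{1,20})$"
)

def is_sgtin_urn(epc: str) -> bool:
    m = SGTIN_URN.match(epc)
    # Company prefix plus indicator/item reference must total 13 digits.
    return bool(m) and len(m.group(1)) + len(m.group(2)) == 13
```

The answer is yes or no. No judgement call, no model, no “expert intelligence.”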

Serialization and supply-chain traceability are rule-based businesses. Introducing AI into core validation flows does not increase confidence. It introduces ambiguity.

Standards exist precisely to remove interpretation.

So What’s Really Going On?

Strip away the language and AExI becomes:

  • domain-aligned automation
  • governed workflows
  • selective use of AI where helpful
  • all of it positioned as a paradigm shift

That’s not nonsense.

What is nonsense is presenting this as a new category of intelligence, while diverting attention away from interoperability, standards adoption, and execution discipline.

Final Word & An Open Invitation

From a supply-chain and business perspective, progress comes from alignment, not acronyms.

Technology succeeds when it fits the process, the regulations, the standards, and the incentives. When it doesn’t, no amount of rebranding will fix it.

Shabbir’s article makes repeated references to current, real-world applications of Artificial Expert Intelligence delivering measurable outcomes across pharmaceutical supply chains.

If this is already happening at scale, the industry would benefit enormously from seeing it.

So I’d like to extend an open invitation to Shabbir Dahod to join us on our podcast to demonstrate AExI operating in a live, real-world context — not as a concept, not as a roadmap, but as something customers can observe, compare, and validate.

This wouldn’t be a debate. It would be a demonstration.

Supply chains don’t run on narratives – they run on evidence.
If AExI is delivering the outcomes claimed, the industry deserves to see it.