Should You Stop Treating AI Like a Teammate? with Simon Hodgkins Ep 277 - The Global Discussion

Simon Hodgkins, Founder of The Global Discussion and international business leader, shares a thought-provoking solo perspective in this episode. Speaking directly to executives, marketers, and decision-makers navigating the rapid rise of artificial intelligence, Simon challenges one of the most common narratives shaping today’s technology conversations: the idea that AI behaves like a person.

Instead, he argues for something far more practical and important: conceptual clarity.

Drawing from a recent article and his own reflections, Simon explores why the language we use to describe AI influences how leaders govern, invest in, and deploy it across organizations. The result is a powerful reminder that understanding what AI truly is, and what it is not, may be one of the most important strategic disciplines for modern leadership.

AI Isn’t People, And That Distinction Matters

In conversations about artificial intelligence, it’s remarkably easy to slip into human language.

We say systems “hallucinate.”
We say they “lie.”
We ask whether they “understand.”

But as Simon highlights, this framing is misleading. AI systems are not thinking entities. They are probabilistic engines producing outputs based on patterns learned from training data.

This distinction is not philosophical nit-picking; it has real implications for leadership.

When executives believe an AI system has agency, they often respond emotionally to errors. When they understand it as a statistical system, the response becomes structural: examine the prompts, the training data, the evaluation process, and the constraints around the system.

One framing invites blame.
The other invites design.

The Human Instinct to Anthropomorphize Technology

Humans are wired to interpret fluent language as evidence of intelligence. When a system communicates in complete sentences, remembers context, and mirrors tone, it naturally triggers our social instincts.

But fluency is not consciousness.

Today’s AI models operate at a massive scale through pattern completion. They map input sequences to likely output sequences. That capability can be extraordinarily powerful, but it does not represent understanding or intent.

Simon calls this the “interface illusion.”

Because AI is presented through conversational interfaces, we instinctively assume there is a mind behind the words. If the same systems returned probability matrices instead of paragraphs, we would likely judge them very differently.

Strategy, Governance, and the Leadership Lens

Why does this framing matter?

Because how leaders describe AI directly affects how they deploy it.

If AI is treated as a co-worker or a junior employee, organizations may expect judgment where none exists. If it is treated as an oracle, they risk suspending critical thinking when it matters most.

The correct perspective is far more grounded:

AI is infrastructure.

It is a system that must be engineered, evaluated, monitored, and governed.

That means leaders should focus on:

  • Benchmarking performance

  • Defining acceptable error margins

  • Building review processes

  • Establishing feedback loops

  • Investing in data quality and model monitoring

Trust, Simon notes, is a human attribute. Reliability is a system property.

Understanding that difference leads to better governance.

Accountability Cannot Be Outsourced

One of the most important insights in this episode concerns responsibility.

If AI is not a person, it cannot be a scapegoat.

When an AI system produces biased or misleading output, the root cause lies in its design, data, oversight, or deployment, not in the algorithm's moral failure.

Organizations sometimes distance themselves from outcomes by saying, “The algorithm did it.”

But systems are built and deployed by people.

Accountability flows upward.

For CEOs, CMOs, and leadership teams, that means governance frameworks must accompany AI adoption. The technology may be powerful, but responsibility remains human.

AI in Marketing: Tool, Not Teammate

The marketing world offers a clear example of this confusion.

Generative AI can produce convincing campaign copy, brainstorm creative ideas, and generate marketing assets in seconds. At first glance, it can appear as though the system itself is acting creatively.

But Simon reframes this capability in practical terms.

AI is not a creative agent.
It is a pattern synthesizer.

That distinction matters because while AI can dramatically expand ideation, accelerate testing, and enable personalization at scale, it cannot:

  • Hold brand judgment

  • Understand culture through lived experience

  • Accept ethical accountability

Those responsibilities remain firmly human.

The real strategic question, therefore, becomes:

How should organizations redesign workflows around probabilistic systems?

Precision Creates Better Investment

As AI adoption accelerates, organizations face a choice.

They can treat AI as something mysterious and quasi-human, or they can treat it as infrastructure that requires disciplined engineering.

When companies misunderstand AI, they often:

  • Overinvest in hype cycles

  • Underinvest in evaluation frameworks

  • Oscillate between fear and overconfidence

But when AI is classified correctly as infrastructure, leaders focus on architecture instead:

  • Data quality

  • Model monitoring

  • Team AI literacy

  • Governance protocols

  • Measurable business outcomes

This shift from narrative to structure is where real value emerges.

A Strategic Discipline for the AI Era

Simon closes the episode with a simple but powerful reminder.

AI isn’t people.

Recognizing that truth sharpens strategy, strengthens governance, and improves decision-making. In a technological shift as significant as this one, clarity itself becomes a competitive advantage.

For leaders navigating the AI era, the discipline lies in understanding both the power and the limitations of these systems and ensuring that human judgment remains at the center of their deployment.

About The Global Discussion

The podcast features carefully curated guests from an exciting cross-section of creatives, leaders, and thinkers. New episodes are available on Apple Podcasts, Google Podcasts, Spotify, and several other leading podcast platforms. You can also listen to and watch episodes on our dedicated YouTube channel and on the website.

To learn more about The Global Discussion, please visit:
https://www.theglobaldiscussion.com

Audio
Spotify: https://open.spotify.com/show/3QdMqfzyvca6EVlEJ80I4n
Apple: https://podcasts.apple.com/ie/podcast/the-global-discussion/id1668702566

Video
YouTube: https://www.youtube.com/@theglobaldiscussion
Website: https://www.theglobaldiscussion.com

Follow us on Social Media
LinkedIn: https://www.linkedin.com/company/theglobaldiscussion
Others: X, Instagram, and Facebook