How Publishing Failure Became a Foundation for Trust
Listen to the full podcast episode on YouTube, Spotify, and Apple Podcasts.
Scene and context
As artificial intelligence moves into the centre of organisational life, trust is becoming a design problem. Decisions once made quietly by managers are now mediated by systems that prioritise, score, and recommend at scale.
Much of the current debate focuses on adoption speed and productivity gains.
Less attention is paid to how trust is built when machines act on behalf of organisations.
That question sits at the heart of this episode of The Responsible Edge, where the discussion turns repeatedly to a counter-intuitive idea: that trust is not built by hiding failure, but by making it visible.
A career shaped by things going wrong
For Steve Garnett, the mythology of seamless growth has never rung true. His career spans senior leadership roles at Oracle and Salesforce, both organisations that experienced moments of severe stress behind the scenes.
At Oracle in the early 1990s, weak discipline and misaligned incentives pushed the company close to collapse.
Survival depended on confronting uncomfortable truths rather than protecting appearances.
Those experiences shaped Steve’s instinct that systems fail, people make mistakes, and organisations reveal their values not when things work, but when they break.
The Salesforce decision
The most telling example came from Salesforce’s early cloud years. As customers moved critical data off-premise, system outages carried real consequences. When the platform went down, entire businesses felt it.
Leadership debated how much to disclose. The safer option was concealment. Instead, they chose exposure.
“We published all of it,” Steve said.
Every outage, every performance issue, every failure was made public. Not as a crisis response, but as a standing practice. Customers could see exactly when systems failed and for how long.
The decision was not framed as bravery. It was framed as consistency. Trust was a stated value. Publishing failure was how that value was operationalised.
Why this matters for AI
The article discussed during the episode, published by Cerkl, argues that AI is increasingly shaping company culture by filtering information and determining relevance.
Steve’s experience adds a sharper edge. When systems decide what people see, what they are measured on, or how they are prioritised, transparency becomes non-negotiable.
AI agents do not feel embarrassment. They do not intuit when silence erodes trust. If their decisions are hidden, confidence drains quietly.
What Salesforce learned through public failure now applies to AI systems operating inside organisations. If employees and customers cannot see how decisions are made, trust is replaced by suspicion.
Trust must be engineered
Steve argues that AI cannot be trusted on intention alone. It must be governed through what he describes as trust layers: clear rules, visibility, and constraints that mirror human judgement.
A human sales leader knows not to upsell a customer whose system has just failed. An AI agent does not. That restraint must be designed.
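To make that idea concrete, here is a minimal sketch, in Python, of what such a designed restraint might look like. The field names, the 14-day cool-off window, and the guardrail function are illustrative assumptions, not drawn from Salesforce or any vendor system; the point is only that a rule a human applies instinctively has to be written down and checked before an agent acts.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical guardrail: block upsell recommendations for customers
# who have suffered a recent outage. The cool-off window and the data
# fields are illustrative assumptions, not a real vendor policy.
OUTAGE_COOL_OFF = timedelta(days=14)

@dataclass
class Customer:
    name: str
    last_outage: Optional[datetime]  # None if no recorded outage

def upsell_allowed(customer: Customer, now: Optional[datetime] = None) -> bool:
    """Allow an upsell only if no outage falls inside the cool-off window."""
    now = now or datetime.now(timezone.utc)
    if customer.last_outage is None:
        return True
    return now - customer.last_outage > OUTAGE_COOL_OFF

# Example: the agent checks the guardrail before recommending an upgrade.
acme = Customer("Acme Ltd", last_outage=datetime.now(timezone.utc) - timedelta(days=3))
if upsell_allowed(acme):
    print(f"Recommend upgrade to {acme.name}")
else:
    print(f"Hold the upsell: {acme.name} had a recent outage")
```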
Publishing system performance was one way Salesforce encoded values into operations. With AI, leaders must decide what transparency looks like when decisions are automated.
Dashboards, explanations, audit trails, and visibility into failure are not optional extras. They are how trust survives scale.
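An audit trail can be just as plain in principle. The sketch below, under the same hypothetical assumptions, shows one way an agent's decisions might be recorded with a timestamp, the subject, the outcome, and a plain-language reason, so that someone accountable can later explain what happened and why. It illustrates the idea rather than describing any particular product.

```python
import json
from datetime import datetime, timezone

# Hypothetical append-only audit log for automated decisions.
# One JSON line per decision keeps the record easy to inspect later.
def log_decision(action: str, subject: str, allowed: bool, reason: str,
                 path: str = "agent_audit.log") -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "subject": subject,
        "allowed": allowed,
        "reason": reason,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record the outcome of the guardrail check above.
log_decision(
    action="upsell_recommendation",
    subject="Acme Ltd",
    allowed=False,
    reason="Customer had an outage 3 days ago; inside the 14-day cool-off window",
)
```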
The tension leaders avoid
Many organisations fear transparency because it exposes imperfection. Steve’s experience suggests the opposite. Concealment magnifies risk.
AI will make more decisions faster, with greater distance from human judgement.
Without deliberate openness, leaders lose the ability to explain outcomes they are still accountable for.
The temptation will be to smooth results, protect confidence, and manage perception. The harder choice is to let people see where systems fall short.
That choice, Steve suggests, is where values become real.
Closing reflection
Publishing failure did not weaken Salesforce’s credibility. It strengthened it. Customers stayed because honesty replaced uncertainty.
As AI systems increasingly act on behalf of organisations, the same logic applies.
Trust will not be earned by perfection, but by visibility.
The leaders who understand this will not ask whether AI works. They will ask whether people can see it fail, and still choose to trust it.
Sponsored by...
truMRK: Communications You Can Trust
👉 Learn how truMRK helps organisations strengthen the credibility of their communications.
Want to be a guest on our show?
Contact Us.