Episode 159 | 11.5.2026

OpenAI’s PBC Status and the Governance Gap It Does Not Close

Asher Jay on why a public benefit corporation structure leaves the accountability problem in frontier AI structurally intact.

Listen to the full podcast episode on YouTube, Spotify, and Apple Podcasts.

The Label and What It Does Not Require

When OpenAI completed its restructuring as a Delaware Public Benefit Corporation on 28 October 2025, it did not create any new enforceable obligation to publish safety metrics. It did not require disclosure of how outputs are generated. It did not give civil society organisations formal standing in governance decisions. It did not mandate independent verification of whether the stated public benefit is being achieved.

A public benefit corporation is a legal structure that requires directors to consider the interests of stakeholders beyond shareholders. It does not require them to demonstrate that they have done so.

The distinction is the subject of a recent Financial Times article explored in this episode of The Responsible Edge. The article’s central question is whether PBCs can solve AI governance challenges. Asher Jay’s answer is clear.

“Just making it about intention and not having tangible ways to translate that into practice is a cop-out.”

Formation

Asher did not arrive at this argument through law or policy. She arrived through coastlines.

She grew up travelling, and the water told the same story wherever she went. Bloated dolphins. Turtles in nets. Plastic on the shoreline. She describes these encounters as affecting her “on a very cellular and emotional level.” They produced not a career path, but a preoccupation.

She studied biochemistry and environmental science, volunteered across conservation nonprofits, and then moved into fashion design, branding, and modelling. She describes that period honestly.

“That’s a way to sort of deflect true responsibility and be in alignment with my own calling,” she said. “It was me listening to what the outside world thought would be a safer way.”

The recalibration came through creative conservation work: campaigns against wildlife trafficking for WWF, the Rainforest Action Network, and National Geographic. In 2014, the National Geographic Society named Asher an Emerging Explorer for that work. She later founded Henoscene, a platform built to surface discrepancies in corporate impact commitments around net zero claims, carbon offsets, and water positive pledges. It raised one million dollars in seed funding before entering a strategic pivot. She now serves as Chief Network Architect at the Shareholder Democracy Network, a bipartisan nonprofit that redirects retail proxy votes through civil society organisations, and as an impact consultant to the Mountain Lion Foundation.

The question she has been asking across all of it is the same: whether stated commitments can be made legible to anyone outside the organisation making them.

 

The Governance Gap

Asher’s critique of OpenAI is structural, not personal. She tracks the company’s trajectory: nonprofit, then capped-profit entity, then public benefit corporation, with purpose-oriented language accompanying each transition. None created an external verification mechanism that would allow an independent observer to determine whether the mission remained intact.

Her characterisation of the implicit logic is direct.

“Let’s get away with what we can,” she said, “make as much money as we possibly can while we’re getting away with it and then wait to be tapped on the wrist.”

On Anthropic, which also operates as a public benefit corporation, she is conditional. There is an opportunity, she says, to learn from OpenAI’s trajectory. But the conditional is load-bearing.

“It’s always an aftermath, afterthought,” she said. Intention stated in founding documents does not constitute a governance mechanism.

The structural absence she returns to most consistently is civil society. The boards of the major AI labs are composed primarily of people with a financial stake in the company’s commercial performance. Organisations representing ecological, social, and democratic interests, the constituency that the PBC structure is nominally meant to serve, have no formal standing in how these companies are governed.

What Verifiable Accountability Would Require

Asher is specific about remedies. She would mandate that AI labs publish safety metrics and accountability standards. She would require disclosure at the level of every output: where information was sourced, how an image was generated. She would fund AI literacy globally, including for populations with no current access to the technology.

“AI is also a privilege,” she said. “I don’t think it reaches a vast majority that we don’t even converse about because they may not even have access to food, let alone a computer.”

Her most structurally significant proposal is civil society representation at board level. Not advisory boards. Voting seats. If the benefit being served is public, the organisations that represent the public should have a direct role in governance decisions. Market-facing board members cannot, she argues, adequately represent interests that do not appear in price signals.

“We should have greater representation of the diversity of people, of democratic representation being afforded,” she said. “That can only be done through civil society.”

Her work at the Shareholder Democracy Network operates on a related logic. Most retail shareholders receive proxy vote notifications and disregard them, lacking time or expertise to evaluate board nominees or governance resolutions. The network routes those votes through civil society organisations with established positions on corporate conduct. A shareholder aligned with the Sierra Club elects to have their proxy cast in accordance with the Sierra Club’s recommendations. One decision. One click. Retail influence redirected through organisations already trusted to represent public interest.

 

Unresolved

Regulatory frameworks for AI governance are being constructed across multiple jurisdictions, at different speeds, with different assumptions about what accountability requires. The major AI labs are participating in those processes while continuing to scale.

OpenAI’s PBC structure formally requires the company to advance its stated mission and consider the broader interests of all stakeholders.

It does not require it to prove that it has.

Whether governance architecture takes shape before these systems become too embedded to constrain is genuinely open.

The PBC label describes an aspiration. Governance requires a mechanism.

Sponsored by...

 

truMRK: Sustainability Reporting and Communications You Can Trust


👉 Learn how truMRK helps organisations strengthen the credibility of their reporting and communications.

Want to be a guest on our show?

Contact Us.

The Responsible Edge Podcast
Queensgate House
48 Queen Street
Exeter
Devon
EX4 3SR

Join 2,500+ professionals.

Exploring how to build trust, lead responsibly, and grow with integrity. Get the latest episodes and exclusive insights direct to your inbox.


© 2026. The Responsible Edge Podcast. All rights reserved. The Responsible Edge Podcast® is a registered trademark.
