Barry Diller trusts Sam Altman. But ‘trust is irrelevant’ as AGI nears, he says.
IAC Chairman Barry Diller discusses the necessity of legal guardrails for AGI and the limits of personal trust in shaping the tech industry's future.
This article is original editorial commentary written with AI assistance, based on publicly available reporting by TechCrunch AI. It is reviewed for accuracy and clarity before publication. See the original source linked below.
The intersection of legacy media and the vanguard of artificial intelligence has often been a theater of conflict, yet IAC Chairman Barry Diller recently offered a more nuanced take on the industry’s trajectory. While expressing profound personal confidence in OpenAI CEO Sam Altman, Diller simultaneously sounded a systemic alarm: as the pursuit of Artificial General Intelligence (AGI) accelerates, the personal integrity of individual leaders becomes an insufficient safeguard for society. This paradox—trusting the architect while fearing the architecture—underscores a growing consensus among institutional power brokers that the AI revolution is outstripping the efficacy of traditional corporate governance.
Diller’s perspective is rooted in a history of navigating seismic shifts in distribution and content. As a media mogul who transitioned from film and television to the digital frontier with IAC, he has witnessed how disruptive technologies can erode intellectual property and market stability if left unchecked. His recent engagement with OpenAI, particularly regarding licensing agreements for IAC publications like People and The Daily Beast, highlights a pragmatic shift. Unlike some of his contemporaries who have opted solely for litigation, Diller has balanced legal defense with collaborative deal-making, positioning himself as a bridge between the old guard and the new silicon elite.
The technical and business mechanics of this relationship are evolving through negotiated licensing models. By striking deals with OpenAI, publishers are attempting to move past the "scraping" era, in which data was harvested without consent, toward a structured value exchange. These agreements provide OpenAI with high-quality, human-verified training data while funneling much-needed revenue back into journalism. However, Diller frames these as tactical victories in a strategic war over the nature of AGI. The "black box" problem of AI means that even the creators may eventually lose the ability to predict or control the derivative outputs of their models, rendering current contractual obligations potentially obsolete.
The industry implications of Diller's stance are significant, signaling a move toward "trust-less" systems of accountability. If personal character is indeed "irrelevant" in the face of AGI, the burden of safety shifts entirely to regulatory frameworks and algorithmic transparency. This creates a competitive landscape where the primary differentiator may not be the potency of the model, but the robustness of its guardrails. For market incumbents, the risk is no longer just losing market share to an AI startup, but the total dissolution of the copyright and fair-use standards that have underpinned the information economy for decades.
Furthermore, the tension between the altruistic missions of AI labs and their commercial imperatives remains a central friction point. Diller’s commentary hints at a skepticism regarding "capped profit" models and non-profit oversight boards, suggesting that these structures are flimsy barriers against the eventual pressure of hyper-intelligent systems. As OpenAI moves closer to a traditional for-profit structure to attract the billions in capital required for compute power, the gap between Altman’s vision and the reality of corporate interests continues to narrow, reinforcing Diller’s point that institutional guardrails must be external and enforceable.
Moving forward, the focus will shift from high-level "safety summits" to the granular implementation of federal and international law. Observers should watch for how lawmakers interpret the liability of AI developers for "hallucinated" or harmful outputs as these systems become autonomous. Diller’s pragmatism suggests that while the industry should maintain productive dialogues with leaders like Altman, the mission of the coming years is to build a legal fortress that can withstand the arrival of a general intelligence that doesn’t require—and perhaps cannot comprehend—human trust. The race for AGI is no longer just a technical competition; it is a race to define the rules of human-machine coexistence.
Why it matters
- Barry Diller argues that while Sam Altman is a credible leader, personal character is a flawed foundation for managing the existential risks of AGI.
- The move toward licensing deals represents a strategic pivot for media companies to secure revenue before AI models potentially render traditional copyright obsolete.
- A shift is occurring from internal corporate governance toward a demand for external, legally enforceable regulatory frameworks that do not rely on executive altruism.