The True Role of Openness in AI’s Future
In the world of technology, our understanding of history can sometimes be skewed. For instance, we often view Linux’s victory in the Unix wars and the rise of Apache and Kubernetes as proof that “openness” is an unstoppable force. These examples seem to show that open source naturally prevails, but the reality is more nuanced. Openness doesn’t always triumph because it’s inherently better; instead, it often succeeds when a technology becomes essential infrastructure that everyone needs but no one wants to compete on. This dynamic is crucial for understanding AI’s evolving landscape.
Openness as Infrastructure, Not Morality
Consider the server operating system market. Linux gained dominance not because it offered a superior proprietary kernel, but because the operating system itself became a commodity. The real competitive edge shifted upward to applications and services. Major companies like Google, Facebook, and Amazon invested heavily in Linux, sharing maintenance costs for the “boring” parts of infrastructure so they could focus on innovation where data and scale matter most—search, social graphs, and cloud services. This illustrates that open source’s strength lies in providing a common foundation rather than moral superiority.
This pattern suggests that open source thrives when it supports infrastructure that is necessary but not differentiated by proprietary features. The value is in shared stability and cost reduction, not in exclusive control. As a result, open source can become the backbone of an entire industry, allowing companies to compete on higher levels rather than on foundational technology.
The Open-Closed AI Dilemma
Turning to AI, advocates highlight models like Meta’s Llama and the efficiency gains from open-weight releases like DeepSeek, arguing that the era of closed AI giants like OpenAI and Google is ending. However, financial data paints a different picture. Recent research by Frank Nagle at Harvard and the Linux Foundation reveals a significant market inefficiency: open models often match 90% or more of the performance of their closed counterparts while costing a fraction to operate.
Despite this, companies continue to pay billions for closed models—an estimated $24.8 billion annually—due to factors like brand trust and information asymmetry. The prevailing assumption is that once decision-makers recognize they’re overpaying, they’ll switch to open source, which will topple the proprietary giants. But history suggests otherwise.
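A back-of-the-envelope total-cost comparison helps show why the “overpayment” can be less irrational than it looks. The sketch below uses entirely hypothetical numbers (per-token prices, ops overhead, migration cost are illustrative assumptions, not figures from the research): an open model that is far cheaper per token can still lose on total cost once self-hosting operations and one-time switching costs are counted.

```python
# Hypothetical total-cost sketch: why a cheaper-per-token open model can
# still lose on total cost of ownership. All numbers are illustrative.

def total_cost(tokens_per_month, price_per_m_tokens,
               fixed_monthly=0.0, one_time=0.0, months=12):
    """Cost over a horizon: usage + fixed ops + one-time switching cost."""
    usage = tokens_per_month * months * price_per_m_tokens / 1_000_000
    return usage + fixed_monthly * months + one_time

tokens = 200_000_000  # hypothetical monthly token volume

closed = total_cost(tokens, price_per_m_tokens=10.0)   # managed API, no ops burden
open_ = total_cost(tokens, price_per_m_tokens=1.0,     # 10x cheaper per token...
                   fixed_monthly=15_000,               # ...but self-hosting ops staff
                   one_time=100_000)                   # plus migration/evaluation cost

print(f"closed: ${closed:,.0f}  open: ${open_:,.0f}")
# At this modest volume the closed API wins despite its per-token premium;
# only at much higher volume do the open model's usage savings dominate.
```

Under these assumptions the convenience premium is rational at low volume, which is consistent with the argument that brand trust and operational simplicity, not just ignorance, keep buyers on closed models.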
The key takeaway is that AI’s economics are fundamentally different from traditional software. The physics of AI—its data requirements, compute costs, and performance metrics—create a “convenience premium” that favors closed, optimized solutions for the foreseeable future. The idea that open source alone will disrupt this market oversimplifies the complex interplay of technology, trust, and economics.
What do you think?
I’d like to hear your opinion. Leave a comment.