Familiarity is the enemy
My thoughts on why enterprise knowledge systems have failed for sixty years, and what might finally replace them.

A couple of weeks ago I demoed one part of what I have been building to a senior exec at a global enterprise - someone who had been asked to lead and guide AI adoption in their part of this billion-dollar company. Our conversation was off the record, but what they told me - and why they couldn't buy my product - is the basis of this essay.

First, they told me that what I had shown them was the first AI system for complex enterprise work they had seen that looked ready to deploy. Yes, they had familiar reservations: their data had to stay under their control (no problem, my architecture is designed around exactly that).

Next, they told me what the large consulting firms had been pitching them: quotes in the hundreds of thousands, spread across a roughly threefold range. The high end was a gold-standard 99.5% accuracy promise; the low end was priced as a deliberate foot in the door. The common thread was that these firms were selling their own learning curve. I had demoed a product that worked, while these behemoths - my competitors - were asking to be paid to build a product that worked.

Then I was told that they could not buy from me. Why? Risk.

They put it succinctly: buying from a small, innovative company is brave, while buying from a big, well-recognised name is an insurance policy - and the risk-averse buyer must have the insurance. That insurance - more than price and more than product - is what enterprise software has always traded on.

My conversation was not a one-off, of course. It is the shape of a sixty-year failure the industry has learned to call "prudent". I'm writing this today as a reminder of that failure, and as a public declaration to keep building anyway.

It's 2011.
Hewlett-Packard acquires Autonomy for US$11.1 billion, then a year later writes off $8.8 billion of Autonomy's value (eighty percent!), blames fraud, and sues Autonomy's founder Mike Lynch.

Fast forward to June 2024: after a thirteen-year legal battle, a US jury acquits Lynch on all counts. Lynch's lawyers established that HP executives spent roughly six hours on conference calls with Autonomy before the $11.1 billion decision.
Two months after his acquittal, Lynch died when his yacht - the Bayesian - sank off the coast of Sicily.

HP, one of the largest enterprise IT customers on earth, paid $11.1 billion for a knowledge-management product after six hours of phone calls with its founders, yet a year later it could not tell what it had bought.

This is the category I want to talk about - enterprise knowledge management - the software that promises to capture what an organisation knows and make it usable. It has existed for forty years or more, yet it has never delivered the intelligence it pretended to offer. I estimate it has cost something north of a quarter of a trillion US dollars in write-offs, opportunity cost, and honestly-counted productivity losses. And its 2026 incarnation - "just add AI to your wiki" - is the worst iteration thus far.

The reason is not that the technology is bad (although there are certainly examples of that too). The reason is that the buyers select on the wrong axis. They select on familiarity. They have always selected on familiarity.

Familiarity is the enemy.

The enemy, named

Twelve years ago I wrote a post arguing that the realised value of an information asset is a function of the technology used to transform it - that the gap between an asset's potential value and what the business actually extracts from it should be the whole economics of the enterprise information management industry. The technology choice isn't decoration on the outcome; it's causal.

Of course, nobody read it - though it wouldn't really have mattered if the whole world had - and the industry kept buying the same things. The potential-versus-realised gap widened, and at some point over the last three years (coincident with ChatGPT) enterprise knowledge management started collapsing into a final embarrassment so complete it can no longer be hidden.

This is my pre-mortem.
It is also, at the end, my proposal.

In 2011, Rich Hickey - the creator of the Clojure programming language, and arguably one of the most important computer scientists of the last twenty years - gave a talk called Simple Made Easy. He drew a distinction that most of this industry still ignores. Simple is objective: two things are simple if they are not intertwined, if they do not interlock, if removing one does not collapse the other. Easy is relative: easy to whom? Easy is near-at-hand.
Easy is familiar. Easy is what your team already knows, what your CIO has heard of, what the analyst quadrant showed you last year and will show you next year. Enterprise software has spent decades confusing the two. Hickey's witty rejoinder: "Incidental is Latin for your fault."

The entire apparatus of enterprise technology selection - the analyst reports, the RFP scoring rubrics, the CTO dinners, the Gartner quadrants, the AI world tours, the reference-customer asks, the preferred-supplier panels - is a machine for rewarding familiarity. It is not a machine for rewarding correctness. The two are not the same.

Every one of the failures catalogued below, and the forty-year graveyard they sit on top of, has the same structural cause: the buyer bought what was familiar to them, not what was right. The vendor who looked safe beat the vendor who was innovative. The language the hiring committee recognised beat the language that would have made the system maintainable. The architecture that appeared on the last three analyst reports beat the architecture that would have actually solved the problem, at a fraction of the cost.

Familiarity is the selection criterion that matters, and has been since before I was born. It has cost - on my back-of-the-envelope estimation - hundreds of billions of dollars. This is my essay about why.

Five ways familiarity kills enterprise intelligence

1. The familiar vendor

Microsoft proudly announced in 2020 that SharePoint had over two hundred million monthly active users. They have every right to be proud of that. SharePoint is deployed in effectively every Fortune 1000 company. It is also, by the testimony of its own users, one of the worst products ever.

Forrester's 2012 SharePoint survey measured IT satisfaction at 73% and business-manager satisfaction at 62%. The eleven-point gap is the whole story. Enterprise IT bought SharePoint because it was bundled with Office, but the business tolerates it because the business has no choice.
A SharePoint consultant - writing in 2014 - called it "where documents come to die." He meant it affectionately.

This is how a product with 200 million users can be universally described as a place documents go to die. In any case, the actual product does not determine the sale - the familiarity of the vendor does.
The product is an artefact of the buying signal, not the other way around.

2. The familiar language and the familiar architecture

Look at any large enterprise software vendor's technology recruitment page. Count the mentions of Java, .NET, Azure, Oracle, SAP, ServiceNow. Then look for anything that is not one of those. The distribution is not an accident. It is a policy.

The language stacks that appear on those job ads are the language stacks that recruitment templates can process, that can be defended in hiring-committee meetings, that the Big Four consulting firms are organised around billing for. Java monoliths have made tens of billions of dollars of enterprise revenue not because Java is the right tool (the JVM is impressive as a platform) but because "Java" is a word that an internal promotions committee, an external auditor, and a departmental procurement officer can all pretend to understand. "Clojure" or "Datomic"? Hah. Instant disqualification. Totally unfamiliar.

There is a commonly stated reason for all of this: "we can't hire Clojure developers." That was the first thing I was told when I took an engineering leadership role at Qantas, leading a team of Clojure software engineers. I found the opposite to be true. Clojure was not a hiring barrier - it was a hiring filter. Engineers who answered a Clojure job ad had self-selected for thinking in data rather than ceremony. It was a small pool, but one with exceptional talent. While I was leading that team, I hired a Python engineer who had never written a line of Lisp, but he thought in maps and reductions, and three years later he is still writing Clojure (now in his own startup).

Familiar-language hiring is not a hedge against key-person risk.
It is a larger pool bought at the cost of the single best signal you had.

Fred Brooks drew a line in 1986: in No Silver Bullet he separated the difficulty of software into essential complexity (fundamental to the problem) and accidental complexity (imposed by our tools). Hickey calls it incidental. Same thing. Enterprise software has spent forty years buying accidental complexity wholesale as part of its easy language and architecture decisions.

AI brings a new irony to this particular familiarity.
When the software is being written by agents as much as by humans, the familiar-language argument is the weakest it has ever been. An LLM does not care whether your codebase is Java or Clojure. It cares about the token efficiency of the code, the structural regularity of the data, and the stability of the language's semantics across releases. On every one of those axes, the languages the industry selected for human convenience are worse choices for the machine than the languages it rejected as unfamiliar. That is worth its own essay - I have written about it in detail. For this one, the point is simpler: the familiarity that underwrote the Java decades is evaporating under the industry's feet, but the industry is still buying as if it weren't.

3. The familiar buyer motion

Enterprise software is not sold on outcomes. To be fair, most things aren't.

The economics of software, though - zero marginal cost of reproduction - actually lend themselves to selling on outcomes. Imagine if software vendors sold their products with no upfront cost but an ongoing entitlement to just 10% of the cost savings their software generated. If the software fails to generate savings, the vendor loses no additional capital; if it succeeds, the upside is uncapped.

So why isn't that how software is sold? There is a confluence of strong economic reasons.

One is that enterprise software procurement is a market in which the buyer cannot verify quality before purchase, cannot switch after purchase, and has no mechanism to measure outcomes during the contract. Another is significant information asymmetry: the vendor knows something the buyer doesn't.

The vendor knows that enterprise software implementation is notoriously difficult, requires massive behavioural change from the buyer's employees, and has a staggeringly high failure rate.
The software, in many cases, is "a lemon" - not because the code is broken, but because the promised organisational transformation is a mirage.

If vendors sold on outcomes, they would be shifting the implementation risk onto their own balance sheets. Instead, they sell a licence to use the tool. They get paid their 90% gross margins upfront, effectively washing their hands of whether the tool actually achieves the business outcome. They extract the value of the promise while shifting the risk of the execution entirely onto the buyer.

George Akerlof described this dynamic in 1970 and won the Nobel Prize for it: a market for lemons.
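The outcome-based pricing thought experiment above can be made concrete with rough expected-value arithmetic. A minimal sketch in Python - every figure here is a hypothetical assumption of mine, not a number from this essay - showing why a vendor facing a high implementation failure rate prefers the upfront licence:

```python
# All figures below are hypothetical assumptions, for illustration only.
license_price = 500_000        # upfront licence fee, paid regardless of outcome
savings_share = 0.10           # vendor's 10% cut of realised savings
annual_savings = 2_000_000     # cost savings per year IF the rollout succeeds
success_rate = 0.3             # enterprise implementations fail often
years = 5                      # contract length

# Licence model: the vendor is paid in full whether or not the project works.
license_revenue = license_price

# Outcome model: the vendor is paid only when savings actually materialise,
# so the implementation risk now sits on the vendor's balance sheet.
expected_outcome_revenue = success_rate * (savings_share * annual_savings * years)

print(f"licence model: {license_revenue:,.0f}")
print(f"outcome model (expected): {expected_outcome_revenue:,.0f}")
```

Under these assumed numbers the licence model yields a certain $500,000 while the outcome model yields an expected $300,000, which is the asymmetry described above: the licence extracts the value of the promise upfront, while outcome pricing would put the execution risk on the vendor.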