What Would Ada Lovelace Build in 2026?
The most precise thing anyone has said about AI was written in 1843.
That year, Ada Lovelace published what most people remember as the first computer algorithm: a program for computing Bernoulli numbers. It appeared in Note G, the last of seven notes she appended to her translation of Luigi Menabrea’s article on Charles Babbage’s Analytical Engine, an appendix that ran longer than the article itself. But the algorithm is the least interesting thing she wrote that year.
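For the curious, the quantity Note G tabulated is easy to state. A minimal modern sketch of the same computation, using the standard recurrence for Bernoulli numbers rather than Ada’s actual table of engine operations, might look like this:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Bernoulli numbers B_0..B_n via the standard recurrence
    B_m = -(1/(m+1)) * sum_{k<m} C(m+1, k) * B_k   (B_1 = -1/2 convention).
    This computes what Note G's program tabulated; it is not Ada's
    exact sequence of engine operations."""
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
    return B

print(bernoulli(6))
# [1, -1/2, 1/6, 0, -1/30, 0, 1/42] as exact fractions
```

Ten lines today; in 1843 it took a fold-out diagram of operation cards and variable columns for a machine that was never built.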
Right next to it, in the same set of notes, she included a line that reads like it was written last week: “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.” One hundred and eighty years before the current debate about whether AI is creative or conscious or coming for your job, Ada Lovelace had already mapped the boundary. The machine executes. It does not originate. And the gap between those two things is where almost everything interesting about 2026 lives.
The platform insight nobody took
Ada’s real contribution was something quieter than an algorithm, and more consequential. She looked at Babbage’s machine, a device built to crunch mathematical tables, and saw that it could do something its inventor hadn’t fully grasped. She recognized that a machine designed to manipulate numbers could manipulate any system of symbols: music, logic, language, algebraic patterns. She saw, in a brass-and-gear calculator, a general-purpose computation platform.
She wrote that the Analytical Engine “weaves algebraic patterns just as the Jacquard loom weaves flowers and leaves.” That analogy is doing more work than it appears to. She was describing abstraction layers, the idea that the same underlying engine could produce entirely different outputs depending on what instructions you fed it. She saw the platform underneath the product.
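To make that concrete, here is a toy sketch of the idea. The interpreter, instruction set, and store below are invented for illustration, not the Engine’s actual operation cards. One engine, one program, two entirely different fabrics depending on what symbols you load:

```python
def engine(program, store):
    """A toy symbol-manipulating engine: it executes instructions
    against a store without caring what the symbols mean."""
    for op, a, b, dest in program:
        if op == "combine":
            store[dest] = store[a] + store[b]  # + means whatever the symbols make it mean
        elif op == "repeat":
            store[dest] = store[a] * store[b]  # likewise for *
    return store

program = [("combine", "x", "y", "z"), ("repeat", "z", "n", "out")]

# Loaded with numbers, the engine does arithmetic...
print(engine(program, {"x": 2, "y": 3, "n": 4})["out"])          # 20

# ...loaded with strings, the same program weaves a different pattern.
print(engine(program, {"x": "la", "y": "-di-", "n": 3})["out"])  # la-di-la-di-la-di-
```

The point of the sketch is Ada’s point: nothing in the engine knows whether it is weaving numbers or syllables. The meaning lives in the instructions and the symbols we choose to feed it.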
And then the field ignored her for over a century. Babbage’s machine was never completed. Ada’s Notes were treated as a historical curiosity. It took until Alan Turing’s 1950 paper, “Computing Machinery and Intelligence,” for anyone to seriously engage with her insight about origination, and even then he labeled it “Lady Lovelace’s Objection,” an argument to refute rather than a boundary to understand. The 180-year gap between Ada’s observation and the field catching up to it is itself a useful data point. The people who see structural possibilities earliest are often the ones whose perspective gets filed under “interesting but impractical” until the technology catches up to the abstraction.
No pretensions whatever
Ada’s insistence that the Analytical Engine could not originate anything has a specific relevance right now that she could not have anticipated but would, I think, have recognized immediately.
In 2026, AI generates text that reads like expertise. It produces code that compiles on the first try, marketing strategies that sound plausible, analysis that carries the surface texture of careful thought. The output looks like competence. And in many cases, the output is useful. The question Ada would ask, the one she was already asking in 1843, is whether execution that resembles origination is the same thing as origination.
The answer matters because the gap between the two is exactly where competence illusions live. When a machine can produce something that looks like the work of a senior strategist, the market’s ability to distinguish genuine expertise from fluent execution starts to erode. Ada understood this boundary before it was a problem. She saw that the machine’s power lay in following analysis, in “making available what we are already acquainted with,” as she put it in the same note. The origination, the part where someone decides what questions to ask, what patterns to look for, and what the output actually means, stays with the human. Or it’s supposed to.
The uncomfortable version of this, the one I keep coming back to, is that most knowledge work has always been closer to the machine’s side of that line than we wanted to admit. A significant portion of what looked like senior expertise was execution dressed up as judgment. Ada’s framework doesn’t just describe AI’s limitations. It reveals how much professional work was already operating in the territory of “no pretensions whatever to originate anything,” long before the machines arrived to make that visible.
Poetical science and the skill stack that compounds
Ada called her intellectual method “poetical science.” The phrase came from a letter to her mother, and it described her insistence that mathematical rigor and creative imagination were the same practice viewed from different angles. Her mother, terrified that Ada would inherit Lord Byron’s poetic temperament, had steered her toward mathematics as an antidote. Ada responded by fusing the two into something neither parent would have recognized as their own.
That compound skill, the ability to hold technical depth and imaginative range in the same frame, is precisely the kind of capability that becomes more valuable as AI gets better at executing within single domains. The roles being hollowed out in 2026 are the ones that fit inside a single-domain job title: the analyst who only analyzes, the writer who only writes, the marketer who only markets. These are the roles where AI’s execution capability creates the most direct substitution pressure, because the work can be decomposed into instructions a machine can follow.
The roles that are compounding in value look more like Ada’s poetical science. They sit at intersections. They require the practitioner to see the platform underneath the product, to understand the abstraction layer, to recognize that the same underlying capability can produce entirely different outcomes depending on what questions you bring to it. This is the skill that resists automation, because it lives in the space between domains where the machine’s lack of origination becomes most apparent.
What Ada would actually build
If Ada Lovelace were operating in 2026, she would not be training a foundation model. I am fairly confident of this. She would look at the current AI infrastructure the way she looked at the Analytical Engine: as a platform whose builders understand its mechanics but have not yet grasped its full range of symbolic possibility.
She would be building on top of it. She would see the abstraction layer, the place where raw model capability becomes composable infrastructure, and she would start weaving patterns on it that the model builders hadn’t imagined. She would be an operator, a platform thinker, someone who understands that the real advantage comes from seeing what a technology can become rather than optimizing what it already does.
This is the part of the Ada Lovelace story that International Women’s Day celebrations tend to miss. The standard framing, “first female programmer,” reduces a structural insight to a biographical fact. It takes someone who saw a general-purpose computation platform inside a brass calculator and turns her into a diversity milestone. The intellectual contribution deserves better, and so does the template it offers.
Because the interesting question for 2026 is whether we can recognize the Ada-style thinkers in our own moment. The people who look at AI infrastructure and see something the builders haven’t seen yet. The people whose insights get filed under “interesting but impractical” because the market hasn’t caught up to the abstraction. The people whose compound skill sets, poetical science by another name, resist easy categorization and therefore resist easy recognition.
Ada’s thinking was dismissed for 180 years, partly because it was too abstract for a mechanical age, partly because she was a woman working in a space that didn’t have a name yet. I keep wondering how many platform insights we are ignoring right now, for reasons that will look just as indefensible when someone finally looks back.