Moon Foundry

The Soft Skill That Survives

The careers that compound from here belong to people who think in systems, not tasks.

Alex Albano | 6 min read

I watched someone try to set up an AI agent workflow last week, a multi-step client onboarding process that needed to pull data from a form, check it against a CRM, generate a welcome sequence, and flag exceptions for human review. The tools were all capable, the integrations were documented, and the whole thing fell apart in under ten minutes because the person sitting at the keyboard couldn’t articulate what the process actually was.

They knew how to do client onboarding, had done it hundreds of times across different organisations and CRM configurations, and could walk a new hire through it in an afternoon. But when asked to describe the same process with enough precision for a machine to follow, to make every branching decision and implicit judgment call explicit in a way that left no room for interpretation, they froze. The steps were tangled up with intuition, with judgment calls they’d never needed to name, with dependencies they’d never had to map because their own experience had always filled in the gaps silently. They had deep expertise, and that expertise had never required them to externalise its own logic.
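The gap becomes concrete the moment you try to write the process down. As a minimal sketch, with every field name, rule, and threshold invented for illustration, "articulating onboarding for a machine" might look like this:

```python
# Hypothetical sketch: an implicit onboarding process made explicit.
# All field names, rules, and routes here are invented for illustration.

def onboard_client(form, crm):
    """Route a new client intake, flagging anything a human must review."""
    # Step 1: the silent judgment call ("does this look right?")
    # has to become a concrete, checkable rule.
    if not form.get("email") or "@" not in form["email"]:
        return {"status": "flag", "reason": "invalid email"}

    # Step 2: check against the CRM; a duplicate is a named exception,
    # not something experience quietly resolves.
    existing = crm.get(form["email"])
    if existing is not None:
        return {"status": "flag", "reason": "possible duplicate", "match": existing}

    # Step 3: only a fully specified case proceeds without a human.
    return {"status": "proceed", "sequence": "standard-welcome"}


# Usage: a plain dict stands in for the CRM lookup.
crm = {"existing@example.com": {"id": 42}}
print(onboard_client({"email": "new@example.com"}, crm))
```

The point of the sketch is not the code but the translation: every branch that intuition used to fill in silently now has to be named, ordered, and given an explicit outcome.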

That gap, between knowing how to do something and being able to express how it works as a structured, executable sequence, is quietly becoming the most important gap in professional life.

Everyone becomes a manager. Nobody becomes a leader.

There are already signs of what the AI-augmented workplace looks like at the individual level, and the pattern is more specific than most people expect. Each person will manage a portfolio of processes, where “manage” means something precise: processes that have been identified, articulated (whether aloud or in writing), expressed in clear enough terms to be understood by a non-human executor, codified into repeatable sequences, tested against edge cases, iterated on when they break, and ultimately programmed into prompts or agent configurations that run with minimal oversight. This is management in a mechanical sense, stewardship of defined systems, and it is useful, necessary, increasingly valuable work.

Leadership requires something else entirely. The real version of it operates through human texture: inspiring a team through a rough quarter, motivating someone who has lost confidence in their own work, driving a group toward a shared vision that none of them could articulate on their own, persuading a skeptical stakeholder by reading their hesitation and knowing exactly when to push and when to wait. These are skills built on empathy, on behavioural intuition, on the accumulated understanding of what makes people move, and none of it transfers to machines. The entire repertoire of human influence, from the admirable (vision-casting, mentorship, encouragement) to the pragmatic (nudging, deploying social proof, applying pressure at the right moment), operates on a substrate that AI simply does not have, because behavioural economics works on humans, and agents respond to specifications.

What remains, once you strip away everything that depends on human psychology, is process, and the people who can design it well are the ones whose productivity will compound from here.

Work as contract design

When you actually sit down to abstract a process for machine execution, the work resembles contract design more than project management. Where project management gives you Gantt charts and status updates, process abstraction asks you to write something closer to a legal agreement between you and a machine about what should happen under every conceivable circumstance, a long sequence of conditional logic specifying what to route where, what to flag and pause, what to escalate and what to log, with each condition carrying its own environment requirements, resource constraints, dependencies, and fallback paths.

This is what a prompt really is, once you look past the conversational veneer: a contract written in natural language, specifying the terms under which a machine should act, where the quality of every outcome depends entirely on the quality of that specification. A vague specification produces vague results in the same way that an ambiguous contract clause produces disputes, and the people who understand this intuitively, who have always thought about work in terms of preconditions and dependencies and failure modes rather than task lists and deadlines, are the ones producing extraordinary results with AI tools right now. They treat every interaction with an AI system as a design problem that requires a clean understanding of what needs to happen, how, in what order, with which resources, under what conditions, and what should happen when something breaks.
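One way to picture the contract is as an ordered list of clauses, each pairing a condition with an action and ending in an explicit default. A sketch under invented assumptions (the conditions, thresholds, and routes are all hypothetical):

```python
# Hypothetical process contract: each clause names a condition and an action,
# evaluated in order, the way a specification leaves no room for interpretation.

CONTRACT = [
    # (condition on the work item, action to take)
    (lambda item: item.get("amount", 0) > 10_000, "escalate"),        # high value: a human decides
    (lambda item: item.get("region") not in {"EU", "US"}, "flag"),    # unknown region: pause
    (lambda item: True, "log_and_continue"),                          # catch-all default clause
]

def execute(item):
    """Evaluate clauses in order; the first matching condition wins."""
    for condition, action in CONTRACT:
        if condition(item):
            return action
    # Unreachable because the contract ends with a catch-all,
    # but a careful spec still defines the fallback explicitly.
    return "escalate"

print(execute({"amount": 25_000, "region": "EU"}))  # → escalate
print(execute({"amount": 500, "region": "APAC"}))   # → flag
print(execute({"amount": 500, "region": "US"}))     # → log_and_continue
```

An ambiguous clause here behaves exactly like an ambiguous contract clause: the item falls through to the wrong action, and the dispute surfaces downstream.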

This capacity can be learned, but it requires a fundamentally different orientation to work than most professionals have developed over their careers, because it asks you to think about process as an object that can be examined, decomposed, and reconstructed rather than as something you simply do.

The split that is already opening

The divide forming now is between system thinkers and tactic-driven professionals, and it is widening faster than most people recognise. A tactic-driven professional treats work as a sequence of discrete tasks, each one self-contained, with no shared state between them, no dependency chain, no common point of failure, and productivity means moving through that sequence quickly. A system thinker sees the same work and notices the interdependencies, notices that this task feeds that one, that a single constraint applies across three steps simultaneously, that one wrong assumption cascades into four broken downstream processes, and that the feedback loop between step two and step five means you cannot optimise them independently.

AI-assisted productivity rewards system thinkers disproportionately, because every prompt, every agent configuration, every automated workflow is an act of system specification, and the machine cares about the clarity of that specification far more than the speed at which you produced it. The uncomfortable part is that many organisations have spent decades rewarding the opposite approach, celebrating fast execution and high throughput and visible busyness, promoting the person who clears their inbox fastest and ships the most tickets and runs the most meetings. These were legible signals of competence in a world where humans were the execution layer, and in a world where machines handle execution, the legible signal shifts to something quieter and harder to see: the quality of the system that was designed before any execution began.

There is a strange inversion taking shape here. The people who always seemed “strategic but not hands-on,” the ones who drew diagrams on whiteboards and obsessed over process flows and kept asking “but what happens when this assumption breaks,” may turn out to have been practising the exact skill that now matters most. And the people who prided themselves on raw execution speed, who could power through any task list, who measured their worth in output volume, may find that the speed was always dependent on a structural clarity they never had to provide themselves, because the systems they worked inside provided it for them.

The question worth sitting with is whether process abstraction can be taught at scale, or whether it is one of those capacities that sorts people more permanently than we’d like to admit. The answer probably determines more about the next decade of professional life than any particular technology release.

