Who Trains the Trainer?
If juniors learn by watching seniors, and seniors now delegate to agents, the expertise pipeline is quietly breaking.
What feels like a lifetime ago, I was working in education management, running publicly funded programmes at scale in the UK, many of them financed through European Social Fund grants. The work was designing and delivering apprenticeship pathways, the kind of structured learning where someone enters a profession knowing very little and leaves, over time, with the capacity to do the work independently and eventually to teach others how to do it. The mechanism was always the same, regardless of the industry or the qualification framework: a novice watches an experienced practitioner make decisions under real conditions, absorbs the reasoning behind those decisions through proximity and repetition, gradually takes on more complex tasks with decreasing supervision, and eventually reaches a point where the knowledge has transferred sufficiently for the cycle to begin again.
Ten years in deep tech and AI have changed my professional world entirely, and the shift that keeps drawing me back to those apprenticeship years is this: the mechanism I just described assumes that the senior practitioner is visibly doing the work. It assumes the decision-making is observable. And in AI-native teams, that assumption is breaking down in ways that almost nobody is tracking with the seriousness the problem deserves.
The invisible decision
A senior content strategist at a company I work with described her workflow to me recently, and the description was revealing in ways she did not intend. She uses an AI agent to generate first drafts from a brief and a set of brand guidelines, a second agent to evaluate the drafts against a scoring rubric she developed over years of editorial practice, and a third to revise based on the evaluation output. Her role in the process is designing the brief, building the scoring rubric, reviewing the final output, and making the judgment calls about what to publish and what to send back for revision. The whole cycle takes less time than the first draft alone would have taken two years ago.
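Stripped to its skeleton, that pipeline looks something like the sketch below. It is a minimal illustration, not her actual setup: call_agent stands in for whatever model API her team uses, and the PASS convention is invented for the example. What the sketch makes visible is where the judgment sits: in the brief, the rubric, and the final call, none of which the loop itself explains.

```python
def call_agent(role: str, prompt: str) -> str:
    """Placeholder for a model call; wire this to a real API client."""
    raise NotImplementedError

def produce_draft(brief: str, guidelines: str, rubric: str,
                  max_revisions: int = 3) -> str:
    # First agent: generate a draft from the brief and brand guidelines.
    draft = call_agent("drafter", f"{guidelines}\n\nBrief:\n{brief}")
    for _ in range(max_revisions):
        # Second agent: score the draft against the senior's rubric. The
        # rubric text encodes years of editorial judgment; the loop records
        # the scores, but not the decisions that shaped the rubric itself.
        evaluation = call_agent("evaluator",
                                f"Rubric:\n{rubric}\n\nDraft:\n{draft}")
        if evaluation.startswith("PASS"):  # invented convention for the sketch
            break
        # Third agent: revise according to the evaluation output.
        draft = call_agent("reviser",
                           f"Feedback:\n{evaluation}\n\nDraft:\n{draft}")
    return draft  # the senior still makes the final publish-or-revise call
```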
The quality of her output has not declined. It has arguably improved, because the agent handles the mechanical aspects of drafting while she concentrates on the strategic and editorial decisions that she is uniquely qualified to make. This is exactly the workflow that AI productivity advocates describe as augmentation, and on its own terms, it works.
The problem becomes visible when you ask what her junior team members are learning. In the previous workflow, a junior strategist would write the first draft, receive editorial feedback from the senior, revise, receive more feedback, and through that iterative cycle, develop the judgment that the senior was transmitting through the correction process. The junior learned not just what good content looks like, but how the senior thought about content: what she prioritised, what she rejected, what patterns she recognised, what her instincts were when the brief was ambiguous or the audience was unfamiliar.
In the AI-native workflow, the agent writes the draft. The senior evaluates and revises. The junior sees the finished output and perhaps the evaluation rubric, while the dozens of micro-decisions the senior made in designing the brief, tuning the rubric, and reviewing the agent’s work happen in a space the junior has no access to. Those decisions are the expertise. They are invisible to anyone who is not making them.
The apprenticeship pipeline
The pattern repeats across every knowledge-work function I have observed closely enough to see the operational detail.
In software development, senior engineers who once wrote code that junior engineers could read, study, and learn from are increasingly working through AI coding assistants where the senior’s expertise is expressed in the prompts, the architectural decisions about what to ask the agent to build, and the evaluation of what comes back. The junior developer sees the committed code, which looks competent, and the chain of reasoning that produced it has already disappeared into the prompt history. The senior’s skill has moved from the artefact to the orchestration, and the orchestration is not visible in the same way.
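To make that asymmetry concrete, here is an invented miniature of what each party gets. The function and its context are entirely hypothetical; the point is that the committed artefact and the reasoning that produced it live in different places.

```python
# What the junior sees in the repository: clean, competent-looking code.
# (An invented example; the function and its context are hypothetical.)
def dedupe_events(events: list[dict], key: str = "event_id") -> list[dict]:
    """Drop duplicate events, keeping the first occurrence of each id."""
    seen: dict = {}
    for event in events:
        seen.setdefault(event[key], event)  # first occurrence wins
    return list(seen.values())  # dicts preserve insertion order in Python 3.7+

# What the junior never sees: the prompt that shaped it, e.g.
# "The queue retries webhooks, so we get duplicates; dedupe on event_id,
#  keep the first occurrence, and preserve arrival order."
# The senior's knowledge of the retry behaviour and the ordering constraint
# lives in that prompt, and the prompt is not in the repository.
```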
In consulting, senior partners who built their practices on the ability to synthesise complex information into strategic recommendations are now using AI agents to handle the research, the data analysis, and the initial synthesis, reserving their own contribution for the reframing, the judgment about what matters, and the client-facing interpretation. The associates who would have done that research and analysis, and who would have developed strategic judgment as a byproduct of doing it, are instead reviewing AI output without access to the reasoning that determined what questions to ask in the first place.
In marketing, this dynamic is the most immediately consequential, because marketing is one of the fields where AI-native workflows have been adopted earliest and most completely. A senior demand gen lead who once built campaign strategies by hand, testing messaging against her accumulated understanding of the ideal customer profile (ICP), reading competitive positioning, sensing when a market narrative was shifting, now builds those strategies through a sequence of agent-assisted steps: competitive research synthesis, audience signal detection, messaging framework generation, performance prediction. The output is often better than what she produced manually, because the agents handle breadth while she concentrates on depth. The junior marketer on her team sees the finished campaign brief and the performance data. The years of intuition about which audiences respond to which framing, the editorial instinct for when copy is technically correct and emotionally wrong, the ability to read a campaign performance report and know, before the data fully confirms it, that the problem is positioning rather than targeting: all of that stays with the senior, expressed through her orchestration choices, invisible to the person who is supposed to be learning it.
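In code, that orchestration might look like the hedged sketch below. Every name and prompt fragment in it is hypothetical; run_agent stands in for whatever model API the team uses. The thing to notice is that the senior’s expertise lives entirely in the task descriptions and their sequence, none of which surfaces in the campaign brief the junior reads.

```python
def run_agent(task: str, context: str) -> str:
    """Placeholder for a model call; replace with a real API client."""
    raise NotImplementedError

def build_campaign_brief(icp_notes: str, competitor_pages: list[str]) -> str:
    # Step 1: breadth work a junior marketer once did by hand.
    research = run_agent("competitive research synthesis",
                         "\n".join(competitor_pages))
    # Step 2: deciding which signals matter is itself a senior judgment,
    # compressed into this one task description.
    signals = run_agent("audience signal detection",
                        f"{icp_notes}\n\n{research}")
    # Step 3: the framing instruction carries years of messaging intuition.
    framework = run_agent("messaging framework generation",
                          f"Lead with outcomes, not features.\n\n{signals}")
    # Step 4: the prediction is the only artefact the junior reliably sees.
    return run_agent("performance prediction", framework)
```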
The dependency trap
The structural consequence of this pattern, if it continues without deliberate intervention, is a generation of knowledge workers who are fluent operators of AI tools and progressively less capable of doing the work those tools are doing.
This is not a prediction about declining intelligence or declining effort. It is a structural observation about what happens when you remove the primary mechanism through which a specific kind of expertise has been transmitted, without replacing it with an equivalent mechanism. The apprenticeship model worked because it was embedded in the work itself. The junior learned by doing the same tasks the senior had once done, under conditions where the senior’s corrections and guidance transmitted the judgment that made the work good. When the tasks are delegated to agents, the embedding breaks.
The result is practitioners who can configure an AI workflow efficiently, evaluate output against a rubric, and manage the production pipeline with high throughput, but who cannot, when the tool fails or the context shifts or the rubric does not apply, fall back on the underlying expertise that the tool was drawing on. They are operationally competent and structurally dependent. And the dependency deepens over time, because the experience that would build the underlying expertise is the experience that has been delegated to the agent.
The comparison to earlier technological transitions is instructive, but only up to a point. The introduction of calculators into mathematics education raised similar concerns about whether students would lose the ability to perform mental arithmetic, and the answer turned out to be: they did, and it mostly did not matter, because the calculator was reliable enough and ubiquitous enough that the underlying skill was no longer economically necessary. The question for AI-native knowledge work is whether the same logic applies, whether the underlying expertise the tools are replacing will prove economically unnecessary or whether it will turn out to be the foundation on which the tools’ usefulness depends.
I think the answer varies by domain, and marketing is a domain where the judgment at stake is particularly difficult to encode. A junior marketer who never develops the instinct for brand voice, the ability to sense when messaging has drifted off-strategy, or the accumulated knowledge of how audiences in a specific market actually respond to a specific kind of claim, will not notice the gap when the AI produces serviceable output for routine campaigns. The gap will become apparent when the market shifts, when a new competitor changes the category narrative, when the brand needs repositioning, when the agent produces copy that is technically on-brief and strategically wrong. These are the moments where the depth of the practitioner’s expertise determines whether the response is adequate or exceptional, and they are precisely the moments that a rubric cannot anticipate.
What the apprenticeship years taught me
The thing I keep returning to from my years in education management is how deliberately those apprenticeship programmes were designed. The learning was structured so that observation, practice, feedback, and gradually increasing autonomy happened in a specific sequence, with specific support at each stage. It was not accidental. It was engineered, because the people who designed those frameworks understood that expertise does not transfer by proximity alone. It transfers through structured engagement with the work, under conditions where the learner can see the expert’s reasoning and test their own developing judgment against it.
The AI-native workplace has not designed the equivalent structure. It has optimised for output, for efficiency, for the productivity gains that come from delegating routine work to agents, and those gains are real. What it has not done is ask what happens to the pipeline that produces the next generation of senior practitioners, the people whose judgment will eventually have to guide the agents. If the current generation of seniors retires or moves on without transmitting their expertise, and the current generation of juniors has been trained to operate tools rather than develop the underlying craft, the quality of the orchestration itself will degrade, because the people doing the orchestrating will have less to draw on.
That degradation will not be sudden or dramatic. It will be gradual, visible primarily as a slow decline in the sophistication of the questions being asked, the briefs being written, the rubrics being designed. The tools will continue to produce fluent output. The output will become progressively less informed by genuine expertise. And the gap will be difficult to diagnose, because the surface quality will remain high long after the depth has gone.
I spent a decade designing systems to transmit professional expertise at scale. The irony that the most consequential technology shift of the next decade may be quietly dismantling the mechanism through which expertise gets transmitted is not lost on me. The question is whether the organisations adopting these tools will notice the problem before the pipeline has thinned beyond easy repair, or whether they will optimise for the measurable efficiency and discover the unmeasured loss only when they need the expertise that nobody remembered to teach.