Before the Prompt
The Psychological Substrate of AI Delegation
Ethan Mollick’s recent article “Management as AI Superpower” names the human challenge inside the technical one: the bottleneck in AI adoption lives in the psychology of the person, not in the application of the technology. His argument, drawn from an experiment in the Wharton executive MBA program, demonstrates that the skill determining who thrives with AI agents lives in the capacity to scope a problem, define a deliverable, and recognize when the output lands right or wrong. Management skill.
“The skills that are so often dismissed as ‘soft,’” he writes, “turned out to be the hard ones” (Mollick, 2025).
Heifetz, Grashow, and Linsky (2009) would name this an adaptive challenge dressed in technical clothing. Organizations treat AI adoption as a tool problem, a matter of training and prompt engineering. The deeper demand asks people to renegotiate their relationship to authority, uncertainty, and the possibility of failure. That renegotiation happens before the brief gets written.
“Adaptive challenges can only be addressed through changes in people’s priorities, beliefs, habits, and loyalties.” (Heifetz et al., 2009)
This article is for organizational leaders, L&D professionals, and coaches who cannot shake the feeling that AI adoption keeps stalling no matter how much effort goes into training and into winning workforce buy-in to go “all in on AI”. It traces the psychological substrate beneath delegation through three sections: how anxiety shapes the brief before skill enters the picture, how projection flows toward the agent the way it once flowed toward a manager, and what containment has to do with letting an AI run autonomously. By the end, you will have a framework for designing development that builds the internal capacity for AI delegation, rooted in the psychological conditions that make it possible, and four reflection questions to locate these dynamics in your own system.
What Delegation Actually Requires
Watch someone hand a task to an AI agent. Anxiety organizes the interaction before skill enters the picture. The brief gets over-specified until the task shrinks to something that feels safe. The agent starts working and the user interrupts before the first output appears. Or the opposite: the task goes in vague, with no structure, because naming what good looks like requires the kind of authority the person has not yet claimed in themselves.
These patterns shift only when the underlying psychological function shifts. Prompt engineering workshops leave them untouched. The phantasy of the “perfect prompt”, one click to a finished product, chokes courage and stifles creativity. The AI was intended as a tool for delegation. In practice, it becomes a screen for projection: the brief asks the agent to mirror what is really in play or, more seriously, what is at stake.
Delegation operates as a capacity built from the inside out. Bion (1962) described the function of the container: the leader’s ability to receive what gets projected into them, hold it without being overwhelmed, and return it as meaning. That function breaks down when the holder carries unprocessed anxiety of their own, lacks clarity about their authority, or cannot tolerate the ambiguity of waiting to see what comes back.
What sits behind the brief determines its quality, every time.
Anthropic recently published a case study of how their internal teams use Claude Code. One pattern appears across every team: the people who generate the most value let the tool run. The data science team treats it like a slot machine: commit your state, let Claude work for thirty minutes, accept or restart. The product development team enables auto-accept mode and reviews the eighty-percent solution before stepping in. The RL team uses a checkpoint-heavy workflow specifically to make experimentation safe.
In every case, the capability runs on tolerance for autonomous action in a domain where the human still holds accountability. That tolerance represents a psychological achievement, built through experience of handing off and surviving the result.
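For readers who want that loop made concrete, here is a minimal sketch in Python of the “commit your state, let the agent work, accept or restart” structure, built from standard git commands. The agent invocation itself is deliberately omitted, and the function names are illustrative assumptions rather than Anthropic’s actual tooling; what matters is the safety structure that makes letting go survivable.

import subprocess

def checkpoint(message: str = "checkpoint before agent run") -> None:
    """Commit the current working state so the agent's changes are reversible."""
    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(["git", "commit", "--allow-empty", "-m", message], check=True)

def restart() -> None:
    """Reject the run: discard the agent's uncommitted changes, return to the checkpoint."""
    subprocess.run(["git", "reset", "--hard", "HEAD"], check=True)
    subprocess.run(["git", "clean", "-fd"], check=True)

def accept(message: str = "accept agent output") -> None:
    """Accept the run: commit what the agent produced."""
    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(["git", "commit", "-m", message], check=True)

# Usage: checkpoint(), hand the task off and let the agent run without
# interruption, review the diff, then call accept() or restart().

The design point is psychological as much as technical: because restart() makes any outcome reversible, the human can tolerate thirty minutes of not knowing.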
The Projection onto the Agent
Projective identification names what happens when inner experience gets relocated into an object in the environment. In organizational life, this mechanism most commonly flows toward those in authority: the leader who receives the group’s hope, doubt, and fantasy as if these were facts about her. With AI agents, the mechanism redirects rather than disappears.
The manager who cannot tolerate imperfection asks the AI to produce the exact output he would have produced himself, and grows disappointed when it falls short. The person with fragile authority over-delegates, offloading every decision, every sentence, as if the agent’s competence fills a gap she senses in herself. The anxious supervisor who monitors every step of an agent’s autonomous run manages a feeling, not a quality standard.
What the agent receives from us carries our relationship to authority, to uncertainty, and to the possibility of failure. The quality of the brief reflects that relationship more faithfully than it reflects prompting skill.
Training programs produce modest results precisely because they teach the form of a good brief without touching the psychology that shapes what actually gets written.
What Organizations Are Missing
Josh Bersin’s (2025) research on the “Supermanager” maps the managerial behaviors that distinguish effective AI-era leaders. Supermanagers create conditions for AI experimentation, celebrate both successes and failures, and give teams autonomy to rethink how work gets done. The report documents a shift from managing people and enabling technology to enabling people and managing technology.
“AI makes this work easier by tracking what people do, enabling leaders to think more strategically and optimize the ‘system’ rather than monitor each individual.” (Bersin, 2025)
The condition in which teams risk being wrong in front of one another, the cultural attribute that appears throughout the “Supermanager” literature, describes an outcome of containment. A team experiments freely when someone in authority holds the anxiety that experimentation produces. Without that holding function, calls for open experimentation remain aspirational, a stated value the system cannot yet deliver.
Bersin’s research also surfaces a pattern in manager engagement worth naming: the very population asked to lead AI adoption carries the heaviest burden of organizational anxiety. Spans of control widen, hierarchies flatten, expectations rise, and the structural supports that once contained managerial anxiety erode. The Supermanager model asks managers to hold more while receiving less. Attending to the container matters as much as attending to the content of the role.
What Development Can Actually Address
The systems psychodynamic tradition offers a framework that organizational development has not yet fully applied to AI adoption.
Role clarity before the brief
The question runs deeper than “what do I want the AI to produce?” It asks what authority you take up, where your judgment carries weight, and where execution belongs elsewhere. Heifetz and Linsky (2002) describe the failure to distinguish what belongs to you from what belongs to the system as one of the central dangers of leadership. Under pressure, managers who lack that distinction collapse the boundary: they do everything themselves or offload accountability entirely. The brief surfaces a confusion that was already there.
Containing the anxiety of novelty
When the situation is new, defenses organize. People become more rigid, more controlling, or more avoidant. Working with AI agents delivers reliably novel conditions: outputs arrive unpredictably, failure modes remain unknown, and the social norms around what counts as legitimate use have not yet settled. A development process that names this anxiety rather than bypassing it gives people somewhere to put it. That work of containment creates a structure large enough to hold the anxiety while the task proceeds.
Building the internal space for evaluation
Evaluating AI output requires holding two things simultaneously: what you know and what you do not know. You trust your judgment enough to identify what lands well and flag what misses, while acknowledging you cannot always verify the result from first principles. This demands epistemic confidence, the kind that develops through repeated experience of being right and wrong, and through relationships that have remained honest about both.
The Practice Principle
Mollick names the capability. Bersin maps the behaviors. The question both leave open: what develops the capacity underneath?
Repeated experience of handing off, tolerating not knowing, evaluating what comes back, and staying accountable for the result, all while being held in safety by authority figures and colleagues alike. It is imperative that this experience happens in conditions with enough containment for the learning to register: genuinely held, with real stakes present.
The organizations that develop this capacity fastest create those conditions deliberately: naming the anxiety rather than suppressing it, building role clarity before AI literacy, and developing their people’s internal capacity alongside their technical fluency.
Mollick and Bersin describe what effective AI leadership and systems look like. Both accounts rest on the same foundation: the anxiety, the authority, and the capacity to hold what comes back. That foundation is where the real work begins.
Reflection Questions
1. When you delegate a task to an AI agent, what do you notice in yourself between pressing enter and seeing the output? Where does that feeling come from?
2. When your team resists adopting a new AI tool, what might that resistance carry about the system’s relationship to authority and uncertainty?
3. Where in your organization does the call for open experimentation function as an aspiration rather than a practice, and who holds the anxiety that would make it real?
4. What would change in your next AI delegation if you briefed from clarity about role (yours, your team’s, the tool’s) rather than from anxiety about the outcome?
If what you have read here struck a chord, there are a few ways we could take this conversation further:
Talks and Offsites – If you are shaping a leadership retreat or planning a gathering where you want people to think differently about themselves and the systems they inhabit, I can help spark that dialogue.
Executive Coaching – For leaders navigating complexity, change, or the unspoken dynamics in their roles, I offer one-to-one coaching as a thinking partner in the work.
You can reach me directly if one of these feels like the right next step.
References
Anthropic. (2025). How Anthropic teams use Claude Code. https://claude.com/blog/how-anthropic-teams-use-claude-code
Bersin, J., & Nangia, N. (2025). People management in the age of AI: The rise of the Supermanager. The Josh Bersin Company.
Bion, W. R. (1962). Learning from experience. Heinemann.
Heifetz, R. A., Grashow, A., & Linsky, M. (2009). The practice of adaptive leadership: Tools and tactics for changing your organization and the world. Harvard Business Press.
Heifetz, R. A., & Linsky, M. (2002). Leadership on the line: Staying alive through the dangers of leading. Harvard Business School Press.
Mollick, E. (2025, January). Management as AI superpower. One Useful Thing.

