Are you a cyborg, a centaur, or a self-automator? Why businesses need the right kind of ‘humans in the loop’ in AI
As generative AI rapidly spreads through organizations, executives face a deceptively simple question: How should humans work with AI? The common answer—“keep humans in the loop”—sounds reassuring.
But new research reveals that this answer is dangerously incomplete. What appears to be the same “human-in-the-loop” approach actually manifests in three radically different ways, with profoundly different implications for performance and skill development.
To understand how companies can truly extract value from human-AI collaboration, we conducted a field experiment with 244 consultants using GPT-4 for a complex business problem-solving task. With support from scholars at Harvard Business School, the MIT Sloan School of Management, the Wharton School, and Warwick Business School, we analyzed nearly 5,000 human-AI interactions to answer a critical question: When humans collaborate with GenAI, what are they actually doing—and what should they be doing?
Three hidden patterns of human-AI collaboration
Our experiment’s most striking finding is that professionals working with GenAI naturally sorted themselves into three distinct collaboration styles—each with dramatically different outcomes:
Cyborgs (60% of participants) engaged in what we call “Fused Knowledge Co-Creation”—a continuous, iterative dialogue with AI throughout the entire workflow. They used AI for every subtask in their workflow and in different ways: assigning personas to the AI, breaking complex tasks into modules, pushing back on AI outputs, exposing contradictions, and validating results in a dynamic back-and-forth. For Cyborgs, the boundary between human and AI thinking became deliberately blurred.
Centaurs (14% of participants) practiced “Directed Knowledge Co-Creation”—using AI selectively for specific subtasks while maintaining firm control over the overall problem-solving process. They leveraged AI to enhance their capabilities: mapping problem domains, gathering methodological information, and refining their own human-generated content. But they kept themselves firmly in the driver’s seat, using AI as a targeted tool rather than a collaborative partner.
Self-Automators (27% of participants) engaged in “Abdicated Knowledge Co-Creation”—delegating entire workflows to AI with minimal iteration or critical engagement. They provided data and instructions to AI to conduct the subtasks, then accepted its outputs without modification or with only small edits. Their work was fast and polished but lacked depth—resembling outputs completed for them rather than with them.
What’s remarkable is that every participant had access to the same tools and the same task, and no one received different instructions about how to work with AI. Yet their instinctive choices about when to engage AI and how much authority to give it produced fundamentally different collaboration dynamics.
A framework for understanding collaboration
To make sense of these patterns, we developed a framework built around two fundamental questions that structure any collaborative problem-solving dynamic between human and machine: Who selects what needs to be done? and Who identifies how it gets done?
Cyborgs let humans drive the “what” but allow AI significant control over “how.” Centaurs retain human control and leadership over both dimensions, using AI only for targeted assistance. Self-Automators cede control of both to AI. Notably, the fourth theoretical possibility—where AI drives task selection but humans drive execution—remained empty in our study; when professionals surrender control over what to work on, they also tend to abdicate control over how to do it.
The hidden cost: What happens to expertise?
Perhaps our most consequential finding concerns what happens to professional expertise under each collaboration mode. The implications diverge dramatically:
Cyborgs developed new AI-related expertise—what we call “newskilling.” Through continuous experimentation with prompting strategies, they learned how to effectively communicate with AI, when to push back, and how to extract maximum value from the collaboration. They also maintained their domain expertise by staying actively engaged throughout the process.
Centaurs deepened their domain expertise—traditional “upskilling.” By using AI to accelerate learning about unfamiliar industries, gather methodological guidance, and refine their own thinking, they built stronger foundational capabilities. However, they did not develop significant AI-related expertise because their interactions with AI were limited and targeted.
Self-Automators developed neither—experiencing what we call “no skilling.” By delegating the entire cognitive process to AI, they missed opportunities to build either domain knowledge or AI fluency. Their productivity gains came at the cost of professional development.
This finding should give executives pause. When employees default to Self-Automator behavior—which over a quarter of our highly trained consultants did—organizations may be inadvertently hollowing out the very expertise that creates competitive advantage.
Performance implications: Who gets it right?
Our experiment evaluated outputs on two dimensions: accuracy (did they recommend the correct brand?) and persuasiveness (how compelling was the CEO memo?). The results challenge simplistic assumptions about AI collaboration:
Centaurs achieved the highest accuracy—outperforming both Cyborgs and Self-Automators on getting the right answer. By maintaining control over the analytical process and using their own judgment to evaluate AI inputs, they avoided being led astray by AI’s confident but sometimes incorrect recommendations.
Both Cyborgs and Centaurs excelled in persuasiveness—producing more compelling outputs than Self-Automators. The depth of engagement, whether through iterative refinement (Cyborgs) or human-driven analysis (Centaurs), translated into higher-quality deliverables.
Notably, Cyborgs sometimes fell victim to AI’s persuasiveness. Even when they employed best practices like validation—asking AI to check its own work—they were sometimes convinced by AI’s confident justification of incorrect answers. This highlights a critical risk: sophisticated engagement with AI doesn’t guarantee immunity from its errors.
What should companies do right now?
These findings have immediate implications for how organizations deploy GenAI:
First, abandon the myth of a single “human-in-the-loop” approach. Executives must recognize that their employees are already adopting dramatically different collaboration styles—and that these differences matter. Simply mandating “human oversight” without specifying what that means will produce wildly inconsistent results.
Second, match collaboration styles to strategic objectives. For tasks requiring maximum accuracy on high-stakes decisions, encourage Centaur behavior—selective AI use with strong human judgment. For tasks requiring rapid iteration and creative exploration, Cyborg behavior may be more appropriate. Reserve Self-Automator approaches for truly routine tasks where skill development is not a concern, not for core or high-risk work.
Third, monitor for automation complacency. The 27% Self-Automator rate in our study—among highly skilled, motivated professionals who knew their performance was being evaluated—suggests that the temptation to over-delegate is powerful. Organizations must develop mechanisms to detect when employees are sliding toward full automation on tasks that require human engagement.
Fourth, rethink how you measure AI adoption success. Using only final outcomes—like edit rates or acceptance ratios—as proxies for engagement is insufficient. A Self-Automator who accepts AI output and a Cyborg who iterates extensively then accepts a refined version may look identical in the data. Companies need to track the quality of interaction throughout the workflow, not just the result.
Fifth, invest in developing AI fluency alongside domain expertise. Our findings suggest that the most sustainable approach combines both. Cyborg behavior builds advanced AI skills while maintaining domain knowledge; Centaur behavior builds domain skills while providing baseline AI exposure. Companies need training programs that develop both capabilities deliberately, rather than hoping employees will figure it out on their own.
The stakes: Expertise in the age of AI
The emergence of GenAI presents organizations with a paradox. The technology promises to elevate human judgment, creativity, and speed, but it also carries a quieter risk: that in handing more thinking to machines, professionals may slowly give up the very capabilities that make them valuable. The same tools that sharpen expertise in some hands can, in others, replace it entirely, leaving organizations with impressive outputs in the short term but a thinning core of human judgment. This is not merely another efficiency tool; it is a revolution.

The good news is that productive collaboration modes exist. Cyborgs and Centaurs demonstrate that humans can work effectively with AI while building, rather than depleting, their expertise. The challenge for executives is to create organizational conditions that encourage these productive patterns while discouraging the seductive but self-defeating path of full automation.
As AI capabilities continue to expand and improve, the organizations that thrive will be those that master not just what AI can do, but how humans should work with it. Understanding that “human-in-the-loop” is not a single approach but three fundamentally different collaboration modes, each with very different consequences, is the first step toward building that mastery.
François Candelon is a partner at private equity firm Seven2 and executive fellow at D^3 Institute at Harvard. Read other Fortune columns by François Candelon.
Katherine Kellogg is the David J. McGrath Jr. Professor of Management and Innovation at the MIT Sloan School of Management.
Hila Lifshitz is professor of management at Warwick Business School, faculty associate at the Harvard Laboratory for Innovation Science, and the co-director of the AI Innovation Network.
Steven Randazzo is a PhD student at Warwick Business School, visiting researcher at the Harvard Laboratory for Innovation Science, and co-director of the AI Innovation Network.