Intelligence²: The Business Case for Making Humans Smarter with AI (Not Replacing Them)
The companies winning with AI are not the ones that replaced the most humans. They are the ones that made their humans measurably sharper. This is the intelligence squared thesis: Smart Humans multiplied by Smart AI, not Smart Humans added to Smart AI. The distinction matters more than it sounds.
Addition produces incremental gains. Multiplication produces compounding ones. When a skilled human and a well-designed AI system work together, each exposes the other’s blind spots. The human catches what the AI gets confidently wrong. The AI surfaces what the human would never have the bandwidth to see. The output is not the sum of two workers. It is a qualitatively different kind of thinking.
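The addition-versus-multiplication claim can be made concrete with a toy arithmetic sketch. This is an illustrative model, not measured data: the contribution values and the 5% monthly lift are assumed placeholders, chosen only to show how a feedback loop changes the shape of the curve.

```python
# Toy model (illustrative assumptions, not measured data).
# Additive: human and AI each contribute a fixed amount per month.
# Compounding: the feedback loop sharpens both sides, so each
# month's output grows by a small lift over the previous month's.

def additive_output(months, human=1.0, ai=1.0):
    """Total output when contributions simply add, month after month."""
    return sum(human + ai for _ in range(months))

def compounding_output(months, base=2.0, lift=0.05):
    """Total output when each month's output grows by `lift`
    (5% is an arbitrary placeholder rate)."""
    total, output = 0.0, base
    for _ in range(months):
        total += output
        output *= 1 + lift
    return total

print(additive_output(12))               # → 24.0 (linear, no feedback loop)
print(round(compounding_output(12), 1))  # → 31.8 (gap widens every month)
```

Over a single quarter the two curves are nearly indistinguishable; over a year the compounding team is roughly a third ahead, and the gap keeps widening. That is the whole argument for multiplication over addition in one number.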
The Intelligence Squared Thesis: What the Evidence Actually Shows
Three observations from practitioners anchor this model.
Jensen Huang, CEO of Nvidia, has noted publicly that radiologists did not disappear after AI vision tools arrived in medical imaging. The number of radiologists grew. The AI handled pattern detection at scale; the radiologist handled judgment, context, and the cases where the model’s confidence was misplaced. The specialty became more valuable, not obsolete.
Andrej Karpathy, former AI lead at Tesla and OpenAI, describes current large language models as “jagged” – genuinely superhuman on some tasks, surprisingly weak on adjacent ones. The jaggedness is not a bug to be fixed later. It is structural. Human calibration is not optional overhead. It is load-bearing.
Omar Yaghi, awarded the Nobel Prize in Chemistry in 2025 for his work on reticular chemistry, has spoken about embodied knowledge – the tacit understanding that accumulates in a researcher’s hands and intuitions over years of lab work. That knowledge cannot be scraped, tokenized, or scaled. AI can accelerate the search space; it cannot replicate what Yaghi knows from having made ten thousand mistakes at the bench.
These are not anecdotes in favor of optimism. They are structural arguments about where value actually lives in a world with powerful AI.
Why Replacement Logic Is Optimizing for the Wrong Variable
The replacement framing asks: how many humans can this AI eliminate? The augmentation framing asks: how much better does this AI make the humans we keep?
The first question is a cost question. The second is a capability question. Companies obsessed with the first tend to hollow out the institutional knowledge that gives AI systems anything useful to work with. You cannot fine-tune a model on judgment that no longer exists in the building.
McKinsey’s 2025 research on AI and workplace productivity found that workers who use AI as a thought partner – not just a task automator – report substantially higher output quality alongside efficiency gains. The productivity lift is real. But it is largest when humans retain enough expertise to push back on AI outputs.
A meaningful data point: research published by the National Bureau of Economic Research found that AI assistance improved the performance of low-skill customer service agents significantly, but had a near-zero effect on top performers – and in some tasks, top performers did better without it. The implication for business model design is direct. AI is not a uniform multiplier. Its effect depends entirely on who is holding it.
How This Plays Out in Practice
Data Innovation, a Barcelona-based AI and data company that builds and operates intelligent systems where humans and AI agents work together, has documented this pattern with client teams. Over 90-day windows, teams using its augmentation model across email operations and content production consistently outperform automation-only configurations on the metrics that matter: deliverability rates, engagement retention, and content conversion.
In email operations, the pattern is consistent. An AI system handling segmentation, send-time optimization, and deliverability monitoring creates the conditions for a human strategist to make better decisions about offer structure and list health – decisions the AI has no basis to make alone. The result is not AI doing the email. It is an email operation that gets better every month because the human and the system are learning from the same feedback loop. You can see how this works in practice in the Sendability system documentation.
In content production, the same structure holds. AI handles research synthesis, draft generation, and format optimization. Human editors supply editorial judgment, brand voice calibration, and the instinct for what a specific audience will actually find credible. Neither alone produces what both together produce. For a view of how this extends to search visibility, the LLMO optimization framework documents the content architecture that emerges from this model.
The honest failure case: this model breaks down when the human in the loop is not skilled enough to audit the AI's outputs. We have seen teams assign junior staff to "oversee" AI systems they do not have the expertise to correct. The AI's errors compound rather than being caught. Intelligence squared requires genuine expertise on the human side. Without it, you get confidence squared instead – which is considerably more dangerous.
The Intelligence Squared Diagnostic: A Framework for Business Leaders
Before redesigning any workflow around AI, apply this four-question diagnostic:
1. Where does the AI in this workflow get things confidently wrong? If you cannot answer this, you do not have a human in the loop. You have a human watching the loop.
2. Is the human's expertise growing or shrinking as the AI handles more tasks? If shrinking, you are consuming institutional capital, not building it.
3. Does the system produce feedback that improves both the AI and the human? One-way learning is not compounding. It is depreciation.
4. What tacit knowledge exists in this team that cannot be encoded in the AI? Protect it. That is your moat.
Companies that score well on all four are building compounding advantage. Companies that cannot answer question one are exposed.
The Research Agenda
The intelligence squared thesis for AI-human business design raises questions we are actively investigating for an upcoming paper on co-evolutionary business models, developed in collaboration with IESE Business School. Specifically: how do organizations structure incentive systems so that human expertise is rewarded for improving AI performance, not just replaced by it? What does the organizational chart of a genuinely co-evolutionary company look like at 50 people versus 500? And how do you measure the compounding effect of human-AI collaboration in financial terms that a CFO will act on?
These are not settled questions. The CRM revenue benchmarks we track give us one measurement lens. The engagement lift data from AI-assisted marketing gives us another. But the full accounting of what a human-AI compound produces over three to five years is work still in progress.
If your team is already structured around the augmentation model and you are looking for a measurement framework to document the compounding effect, we have built early versions of that accounting across email, content, and data strategy engagements. The methodology is available to organizations that are ready to instrument it seriously.
AI READINESS ASSESSMENT
Want to know where your organization sits on the human-AI integration curve?
Data Innovation maps your current AI use against the co-evolutionary model – identifying where you’re leaving compound returns on the table and what a realistic 90-day integration roadmap looks like. Trusted by Nestle, Reworld Media, and Feebbo Digital.