AI Doesn't Reduce Work—It Intensifies It
A Harvard Business Review article (February 2026) presenting in-progress research showing that AI tools don't reduce workload but consistently intensify it through task expansion, blurred work boundaries, and increased multitasking, and proposing an "AI practice" as the solution.
Links
- Article: https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it
- Published: February 9, 2026
- Authors: Aruna Ranganathan (UC Berkeley Haas), Xingqi Maggie Ye (PhD student, Berkeley Haas)
- Study: 8-month ethnography at U.S. tech company (~200 employees)
Overview
In-progress research from an 8-month study at a U.S. technology company reveals that AI tools don't reduce work—they consistently intensify it. Workers voluntarily took on more tasks, extended work into more hours, and worked at a faster pace without being asked, driven by AI making "doing more" feel possible, accessible, and rewarding.
Core finding: "You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less. But then really, you don't work less. You just work the same amount or even more."
The Promise vs. Reality
The Promise:
- AI reduces burden of routine work (drafting documents, summarizing, debugging)
- Allows workers more time for high-value tasks
- Productivity gains
The Reality:
- Workers worked at a faster pace
- Took on a broader scope of tasks
- Extended work into more hours of the day
- Often without being asked to do so
- Company did NOT mandate AI use (offered enterprise subscriptions, workers used voluntarily)
Why: "AI made 'doing more' feel possible, accessible, and in many cases intrinsically rewarding."
Three Forms of Work Intensification
1. Task Expansion
What happened: Workers stepped into responsibilities that previously belonged to others
- Product managers and designers began writing code
- Researchers took on engineering tasks
- Individuals attempted work they would have outsourced, deferred, or avoided entirely
Why it felt empowering:
- AI filled gaps in knowledge
- Provided "empowering cognitive boost"
- Reduced dependence on others
- Offered immediate feedback and correction
- Workers described it as "just trying things" with AI
The accumulation: "These experiments accumulated into a meaningful widening of job scope. In fact, workers increasingly absorbed work that might previously have justified additional help or headcount."
Knock-on effects:
- Engineers spent more time reviewing, correcting, and guiding AI-generated work from colleagues
- Extended beyond formal code review
- Coaching colleagues who were "vibe-coding"
- Finishing partially complete pull requests
- Surfaced informally (Slack threads, quick desk consultations)
2. Blurred Boundaries Between Work and Non-Work
What happened: Work slipped into moments that had previously been breaks
- Prompted AI during lunch, in meetings, while waiting for files to load
- Sent "quick last prompt" before leaving desk so AI could work while away
- Spilled into evenings or early mornings without deliberate intention
Why it didn't feel like work:
- AI reduced the friction of facing a blank page or an unknown starting point
- Conversational style of prompting felt closer to chatting than formal task
- Actions "rarely felt like doing more work"
The accumulation:
- A workday with fewer natural pauses
- More continuous involvement with work
- Prompting during breaks became habitual
- Downtime no longer provided the same sense of recovery
- Work felt "less bounded and more ambient—something that could always be advanced a little further"
Result: "The boundary between work and non-work did not disappear, but it became easier to cross."
3. More Multitasking
What happened: Workers managed several active threads at once
- Manually writing code while AI generated alternative version
- Running multiple agents in parallel
- Reviving long-deferred tasks because AI could "handle them" in background
Why they did it: "They felt they had a 'partner' that could help them move through their workload."
The reality:
- Continual switching of attention
- Frequent checking of AI outputs
- Growing number of open tasks
- Created cognitive load
- Sense of always juggling
The expectation shift: "This rhythm raised expectations for speed—not necessarily through explicit demands, but through what became visible and normalized in everyday work."
Worker observation: "Many workers noted that they were doing more at once—and feeling more pressure—than before they used AI, even though the time savings from automation had ostensibly been meant to reduce such pressure."
The Self-Reinforcing Cycle
- AI accelerated certain tasks
- Which raised expectations for speed
- Higher speed made workers more reliant on AI
- Increased reliance widened scope of what workers attempted
- Wider scope expanded quantity and density of work
Paradox: "Several participants noted that although they felt more productive, they did not feel less busy, and in some cases felt busier than before."
The Risks
Short-term appearance: Higher productivity
Hidden reality:
- Silent workload creep
- Growing cognitive strain
- Employees juggling multiple AI-enabled workflows
- Extra effort is voluntary and "framed as enjoyable experimentation"
- Easy for leaders to overlook how much additional load workers carry
Long-term consequences:
- Impaired judgment
- Increased likelihood of errors
- Harder to distinguish genuine productivity gains from unsustainable intensity
- Fatigue
- Burnout
- Growing sense that work is harder to step away from
- Rising organizational expectations for speed and responsiveness
The trap: "What looks like higher productivity in the short run can mask silent workload creep and growing cognitive strain."
The Solution: Developing an "AI Practice"
Definition: A set of intentional norms and routines that structure:
- How AI is used
- When it is appropriate to stop
- How work should and should not expand in response to newfound capability
Why it's needed: "Without such practices, the natural tendency of AI-assisted work is not contraction but intensification, with implications for burnout, decision quality, and long-term sustainability."
Three Key Elements
1. Intentional Pauses
Purpose: Brief, structured moments that regulate tempo
- Protected intervals to assess alignment
- Reconsider assumptions
- Absorb information before moving forward
What they do:
- Don't slow work overall
- Prevent quiet accumulation of overload when acceleration goes unchecked
- Support better decisions, healthier boundaries, more sustainable productivity
Example: A "decision pause" requires:
- One counterargument before major decision finalized
- One explicit link to organizational goals
- This widens the attention field enough to protect against drift
2. Sequencing
Purpose: Deliberately shape WHEN work moves forward, not just how fast
What it includes:
- Batching non-urgent notifications
- Holding updates until natural breakpoints
- Protecting focus windows from interruptions
How it works:
- Rather than reacting to every AI output as it appears, work advances in coherent phases
- When coordination is paced this way:
  - Workers experience less fragmentation
  - Fewer costly context switches
  - Teams maintain overall throughput
Benefit: "By regulating the order and timing of work—rather than demanding continuous responsiveness—sequencing can help organizations preserve attention, reduce cognitive overload, and support more thoughtful decision-making."
3. Human Grounding
Purpose: Counter solo, self-contained AI work with human connection
What it includes:
- Short opportunities to connect with others
- Brief check-ins
- Shared reflection moments
- Structured dialogue
What it does:
- Interrupts continuous solo engagement with AI tools
- Helps restore perspective
- Supports creativity (AI provides single synthesized perspective; creativity needs multiple human viewpoints)
- Re-anchors work in social context
- Counters depleting, individualizing effects of fast AI-mediated work
Insight: "By institutionalizing time and space for listening and dialogue, organizations re-anchor work in social context."
Key Insights
On productivity gains:
"Organizations might see this voluntary expansion of work as a clear win. After all, if workers are doing this of their own initiative, why would that be bad? Isn't this the productivity explosion we've been promised?"
Reality check:
"What looks like higher productivity in the short run can mask silent workload creep and growing cognitive strain… Because the extra effort is voluntary and often framed as enjoyable experimentation, it is easy for leaders to overlook how much additional load workers are carrying."
Why self-regulation fails:
"Asking employees to self-regulate isn't a winning strategy."
The core tension:
"The promise of generative AI lies not only in what it can do for work, but in how thoughtfully it is integrated into the daily rhythm. Our findings suggest that without intention, AI makes it easier to do more—but harder to stop."
The choice:
"The question facing organizations is not whether AI will change work, but whether they will actively shape that change—or let it quietly shape them."
Study Methodology
- Duration: April to December (8 months)
- Company: U.S.-based technology company (~200 employees)
- Methods:
- In-person observation 2 days/week
- Tracking internal communication channels
- 40+ in-depth interviews
- Departments: Engineering, product, design, research, operations
- AI use: Voluntary (not mandated), enterprise subscriptions offered
Author Context
Aruna Ranganathan:
- Associate Professor of Management and Organizations, UC Berkeley Haas
- PhD from MIT Sloan
- Research focus: Future of work, identification with work, inequality in workplace
- Uses "full-cycle research methods"
Xingqi Maggie Ye:
- PhD student, Management of Organizations, Berkeley Haas
- Combines ethnography and field experiments
- Research: How generative AI reshapes work practices, professional identities, organizational structures
- MHA from Cornell, BS from Imperial College London
Significance
Why it matters:
- Empirical evidence - Not speculation, but 8-month ethnographic study
- Counterintuitive finding - AI intensifies rather than reduces work
- Voluntary behavior - Workers doing this on own initiative, not being forced
- Hidden costs - Short-term productivity masks long-term burnout
- Practical framework - "AI practice" with concrete elements (pauses, sequencing, grounding)
- Organizational blindspot - Leaders may miss workload creep until too late
What it challenges:
- Simple productivity narrative ("AI makes you more efficient")
- Assumption that time saved = reduced work
- Belief that voluntary adoption = sustainable adoption
- Idea that workers will self-regulate effectively
What it reveals:
- Task expansion (workers absorbing headcount-justifying work)
- Boundary erosion (work becoming "ambient")
- Expectation inflation (speed becomes normalized)
- Cognitive load accumulation (always juggling)
Related
- agent-psychosis - Complementary critique on AI coding addiction and quality degradation
- pew-ai-risks-regulation-2025 - Public concern about job displacement (56% vs. 25% experts)
- karpathy-december-coding-agents-breakthrough - The capabilities enabling this intensification
- Organizational behavior and AI
- Work-life balance in AI era
- Sustainable productivity
Timing
February 2026 publication aligns with:
- Rapid AI adoption in workplaces
- Growing awareness of unintended consequences
- karpathy-december-coding-agents-breakthrough (December breakthrough enabling more intensive use)
- agent-psychosis concerns (January 2026) about unsustainable patterns
Study period: April-December (previous year), capturing early phase of widespread generative AI adoption
Key Quote
"You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less. But then really, you don't work less. You just work the same amount or even more." — Engineer participant
By
Aruna Ranganathan and Xingqi Maggie Ye