Here’s a prediction that sounds dystopian until you look at the data: within 18 months, a significant share of the people currently managing teams, approving budgets, and running performance reviews will have been replaced — not by other people, but by autonomous AI systems that never sleep, never take leave, and never ask for a raise.

This isn’t science fiction. It’s the logical conclusion of trends already visible in corporate earnings calls, government workforce data, and the regulatory wars now erupting across three continents. The question is no longer whether AI will restructure the chain of command. It’s who gets to decide the rules of the transition — and who gets left behind.

The Numbers That Should Terrify Every Middle Manager

Gartner’s latest workforce forecast is blunt: by the end of 2026, one in five organisations will use AI to flatten their management structures, eliminating more than half of existing middle management roles in the process. That’s not a fringe prediction from a Silicon Valley startup; it’s the headline forecast of one of the world’s most conservative enterprise research firms.

The reason is straightforward. AI agents — systems that don’t just answer questions but autonomously plan, execute, and adapt across business workflows — can now handle the core functions that justify a middle manager’s salary: scheduling, performance monitoring, reporting, forecasting, and resource allocation. IDC projects that AI copilots will be embedded in nearly 80% of enterprise workplace applications by the end of this year. Google Cloud’s 2026 AI Agent Trends Report describes the shift as moving from “one-off prompts” to “digital assembly lines” that run entire workflows from start to finish.
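
What does an “agent” actually look like under the hood? The essential pattern is a plan-act-observe loop wrapped around a model. Here is a minimal sketch in Python; every name in it is hypothetical. `call_llm` stands in for whichever model API a vendor ships, and the two tools are toy versions of the BI and calendar integrations a real deployment would wire in.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (OpenAI, Gemini, a local model).
    # Returning DONE keeps the sketch runnable without network access.
    return "DONE: (stub answer; replace call_llm with a real model call)"

TOOLS = {
    "pull_report": lambda arg: f"report data for {arg}",   # stand-in for a BI query
    "schedule":    lambda arg: f"meeting booked: {arg}",    # stand-in for a calendar API
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    """Plan-act-observe loop: ask the model what to do, do it, feed back the result."""
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        # Plan: the model sees the goal plus everything that has happened so far.
        decision = call_llm(
            "Reply either 'DONE: <answer>' or 'TOOL: <name> | <argument>'.\n\n"
            + "\n".join(history)
        )
        if decision.startswith("DONE:"):
            return decision[5:].strip()                      # Act: finish.
        name, _, arg = decision[5:].partition("|")
        result = TOOLS[name.strip()](arg.strip())            # Act: run the chosen tool.
        history.append(f"ACTION: {decision}\nRESULT: {result}")  # Observe, then loop.
    return "Step budget exhausted before the goal was met."

print(run_agent("Summarise last quarter's headcount report"))
```

The “digital assembly line” framing is this loop chained: one agent’s finished output becomes the next agent’s goal, with no human coordinator in between.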

And the layoffs have already begun. A survey of 1,000 US business leaders found that 39% conducted AI-related layoffs in 2025, with 58% expecting further cuts in 2026. High-salary employees and those without AI skills face the steepest risk. Entry-level workers, once the pipeline for future leadership, are being hit hardest of all: one in three companies expects to eliminate entry-level roles by the end of this year.

“We’re not seeing mass layoffs — yet. What we’re seeing is job redesign, hiring avoidance, and role consolidation. The near-term story is AI changing jobs faster than it’s cutting them.”

— Gartner workforce analysis, March 2026

The Political Battleground: Who Decides?

If you want to understand why AI workforce policy is the sleeper political issue of 2026, look at the regulatory map. Three competing visions are colliding in real time, and none of them is compatible with the others.

Washington: “One Rulebook” and the Innovation-First Doctrine

In December 2025, President Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence,” centralising AI oversight at the federal level and explicitly signalling hostility toward state-level regulation. “There must be only One Rulebook if we are going to continue to lead in AI,” Trump posted on Truth Social. The order directs agencies to challenge state AI laws deemed “onerous” and conditions federal grants on states aligning with the national framework.

The practical effect is a deregulatory posture that treats AI-driven workforce displacement as an acceptable cost of maintaining US technological dominance. The administration’s position: if companies want to replace managers with AI agents, that’s the market working as intended. Worker protections are largely left to employers’ goodwill.

Meanwhile, the federal government is practising what it preaches. After cutting more than 264,000 federal positions since taking office — through firings, layoffs, retirements, and early separation incentives — agencies like the General Services Administration are now turning to AI tools to make their remaining staff more productive. GSA alone has lost nearly 40% of its workforce since fiscal 2024.

The States: A Patchwork of Protections

While Washington deregulates, individual states are racing in the opposite direction. Colorado’s AI Act, taking effect June 30, 2026, imposes risk management requirements, anti-discrimination obligations, and impact assessments on any company using AI for “consequential decisions” in employment, finance, healthcare, or education. Texas’s Responsible AI Governance Act took effect on January 1, 2026, with civil penalties and attorney general enforcement. Illinois now requires employers to notify workers whenever AI is used to make employment decisions. California’s rules on automated decision-making technology take effect in January 2027.

The result is exactly the regulatory chaos Trump’s executive order was designed to prevent — but may have accelerated. Companies operating across state lines face a patchwork of conflicting obligations. Some are complying with the strictest standard. Others are betting that federal pre-emption will eventually wipe the slate clean. Neither group knows who’s right.

Brussels: The World’s Strictest AI Rulebook

Then there’s Europe. The EU AI Act — the world’s first comprehensive AI law — hits its biggest milestone on August 2, 2026, when the full suite of high-risk system obligations becomes legally binding. AI used for recruitment, performance evaluation, promotion decisions, and workforce management is explicitly classified as high-risk, triggering requirements for human oversight, worker notification, discrimination monitoring, and detailed logging.

The penalties are severe: fines of up to €35 million or 7% of global annual turnover for prohibited AI system violations — exceeding even GDPR maximums. And critically, the law has extraterritorial reach. Any company whose AI products or services touch EU residents is subject to compliance, regardless of where the company is headquartered. That means American tech giants building the very AI agents designed to replace middle managers are directly in Brussels’ crosshairs.
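
None of this legislates a particular architecture, but the obligations translate naturally into code. Here is a hedged sketch of what a human-oversight gate around an AI employment decision might look like, with hypothetical function and field names throughout; the Act mandates the properties (oversight, worker notification, logging), not this implementation.

```python
# Sketch of a human-oversight gate around an AI employment decision.
# Hypothetical names throughout; the EU AI Act specifies the obligations,
# not an API.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_decisions.log", level=logging.INFO)

def decide_with_oversight(model_recommendation: dict, reviewer: str,
                          approved: bool, worker_notified: bool) -> dict:
    """Let a model's recommendation take effect only with a recorded human sign-off."""
    if not worker_notified:
        # Worker-notification duty: no silent algorithmic decisions.
        raise ValueError("Worker must be notified before the decision applies.")
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_output": model_recommendation,   # what the system proposed
        "reviewer": reviewer,                   # the accountable human
        "approved": approved,                   # oversight outcome
    }
    logging.info(json.dumps(record))            # the detailed-logging duty
    status = "applied" if approved else "overridden"
    return {"status": status, **record}

# Example: a human overrides the model's rejection of a candidate.
decide_with_oversight({"candidate": "A-113", "recommendation": "reject"},
                      reviewer="hr_lead@example.com",
                      approved=False, worker_notified=True)
```

A compliance auditor would care less about the function than about the log it writes: the record-keeping duties are what make after-the-fact discrimination monitoring possible at all.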

The Real-World Fallout Is Already Here

The corporate world isn’t waiting for regulators to sort this out. The shift is happening now, company by company, department by department.

Amazon used AI agent coordination to modernise thousands of legacy Java applications, completing upgrades in a fraction of the expected time — work that would have required armies of project managers to oversee. Telus, a Canadian telecoms giant, has more than 57,000 team members regularly using AI, saving an average of 40 minutes per interaction. Suzano, the world’s largest pulp manufacturer, developed an AI agent that translates natural language into SQL queries, achieving a 95% reduction in the time required for data analysis across 50,000 employees.
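
Suzano’s internal system isn’t public beyond the case study, but the pattern it describes, natural language in and SQL out, is easy to sketch. Everything below is invented for illustration: the schema, the canned `ask_model` stand-in, and the sample rows.

```python
# Sketch of a natural-language-to-SQL agent in the Suzano mould.
# ask_model() is a hypothetical stand-in for whichever LLM sits behind it;
# the schema, rows, and query are invented for illustration.
import sqlite3

SCHEMA = """
CREATE TABLE shipments (id INTEGER, mill TEXT, tonnes REAL, ship_date TEXT);
"""

def ask_model(question: str, schema: str) -> str:
    # Placeholder: a real system would prompt an LLM with the schema and
    # question and return its SQL. A canned answer keeps this runnable.
    return "SELECT mill, SUM(tonnes) FROM shipments GROUP BY mill;"

def answer(question: str) -> list:
    sql = ask_model(question, SCHEMA)
    # Guardrail: only read-only queries ever reach the database.
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ValueError(f"Refusing non-SELECT query: {sql}")
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    conn.execute("INSERT INTO shipments VALUES (1, 'MillA', 120.5, '2026-01-10')")
    conn.execute("INSERT INTO shipments VALUES (2, 'MillB', 98.0, '2026-01-11')")
    return conn.execute(sql).fetchall()

print(answer("Total tonnes shipped per mill?"))
```

The 95% time saving comes from collapsing a request-to-analyst-to-query-to-report chain into a single round trip; the guardrail line hints at why such systems still ship with human owners.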

In each case, the layer being compressed is the same: the people who used to sit between strategy and execution. The coordinators. The supervisors. The report-generators. The meeting-schedulers. The people whose primary function was to translate leadership’s intent into action. AI agents now do that translation faster, cheaper, and around the clock.

And there’s an uncomfortable irony for the workers who remain. A recent Udacity survey found that only 9% of executives, managers, and front-line employees actually want to replace their entire workforce with AI. Seven in ten respondents still prefer working with humans. But preference and profit rarely move in the same direction. Companies that tried full replacement and failed are quietly rehiring — but into redesigned roles, not the old ones.

The Deeper Political Question Nobody Is Asking

Strip away the corporate jargon and the regulatory acronyms, and the political stakes become stark. Middle management is not just an employment category. It is the economic foundation of the suburban professional class — the people who vote in swing districts, pay mortgages in the exurbs, coach weekend football, and send their kids to state universities. They are the demographic backbone of both major parties’ electoral coalitions.

When Gartner says 20% of organisations will eliminate half their middle management roles, that is not an abstract forecast. It is a prediction about who will lose health insurance, who will default on car loans, and which school districts will see their tax base erode. The political consequences of hollowing out that class — without a coherent transition plan — are almost too large to model.

And yet the policy response, on both sides of the aisle, remains staggeringly thin. The Trump administration’s position amounts to “don’t slow it down.” State-level efforts are earnest but fragmented. The EU’s approach is comprehensive but designed for European labour markets with far stronger safety nets. Nobody in Washington is seriously asking: what happens to the millions of Americans whose jobs are eliminated not by trade, not by offshoring, not by recession — but by a software upgrade?

What Comes Next

The optimists point out, correctly, that AI skills command wage premiums of up to 56% — and that PwC’s data shows job numbers rising even in highly automatable roles. The transition is not zero-sum. New categories of work are emerging: AI governance specialists, prompt engineers, agent supervisors, compliance auditors for algorithmic systems. Gartner notes that 80% of the engineering workforce will need to upskill through 2027 just to keep pace.

But that upskilling doesn’t happen by magic. It requires investment, institutional support, and — critically — time. And time is the one resource the current pace of deployment is not providing. Companies are shipping AI agents into production environments today. The Colorado AI Act doesn’t take effect until June. The EU’s high-risk rules land in August. Federal workforce retraining programmes are being defunded, not expanded.

The political leader who grasps this dynamic — who can articulate a credible plan for the millions of professionals facing AI-driven displacement, without demonising the technology or pretending the transition is painless — will own the most powerful issue of the 2028 cycle. So far, no one has claimed it.

Key Dates to Watch

  • June 30, 2026: Colorado AI Act takes effect — first comprehensive US state AI governance law with enforcement teeth.

  • August 2, 2026: EU AI Act high-risk obligations become legally binding. AI used in hiring, performance evaluation, and workforce management must comply with oversight, notification, and logging requirements.

  • January 1, 2027: California CCPA automated decision-making rules take effect, covering AI used for “significant decisions” about consumers.

  • Through 2027: 80% of the engineering workforce globally will need to upskill to keep pace with generative AI (Gartner).

Your boss might not be an AI today. But the person deciding whether to replace your boss with one? They’re reading the same data you just did.
