Artificial Intelligence Can’t Take Your Job (But Its Promoters Can)

AI agents on sale, below-cost pricing, and invisible supervision: anatomy of an unsustainable market that shifts the costs onto workers


Right now, a surprisingly large share of U.S. stock-market capitalization revolves around seven companies that offer artificial intelligence services. Their products and promises have dominated front pages for months, drawing investment and reshaping expectations about productivity and work. Among the most seductive messages of this permanent marketing campaign is the idea that AI can replace entire swaths of tasks, leaving humans with a role of control and supervision: an extreme variant of the approach known as “human in the loop” (HITL), in which a person intervenes at decisive steps of an automated process.

In reality, the prospect of AI capable of substituting human labor hides deeper tensions. It’s not just a matter of technical limits: most current models require colossal infrastructure, rising energy consumption, and an invisible army of low-paid workers who monitor, correct, and feed the algorithms — a workforce that the rhetoric of full automation almost always tends to conceal.

What drives this vision is less the real needs of the market than a narrative of perpetual growth constructed by a few giants already saturated in their core sectors, who now present AI as a new frontier for expansion. The promise is the same as ever: reduce headcount, shift bargaining power away from workers, and channel investor enthusiasm. But unlike previous waves of automation, here a concrete economic problem emerges that the public narrative tends to remove. It’s a knot that defines the entire phenomenon and that, later on, will show its full scope.

In this article — which I won’t pretend is short — I’ll try to shed light on the problematic aspects of this knot: where the promise of “labor-saving” AI comes from, how the financial bubble that fuels it is sustained, why the economic equilibrium behind it is far more fragile than it seems, and the inevitable practical consequences for people working in an economy that risks paying a very high bill.

The Myth of Infinite Growth

The first thing to note is that the companies building AI aren’t riding a frictionless innovation wave. The perpetual-growth narrative collides with an undeniable structural fact: the core markets these companies operate in are already saturated. Search, online advertising, operating systems, cloud, hardware, and social platforms have for years displayed “mature market” dynamics: entrenched shares, high acquisition costs, margins defended more by lock-in and rents than by genuine expansion. Unlike the pioneering phases of the web or e-commerce — where each new customer improved unit economics — here the growth promise is projected onto an adjacent space (generative AI) that is structurally capex-intensive, energy-hungry, and — above all — subject to diminishing returns: more models, more data, and more compute don’t guarantee better margins, while operating costs (training, inference, data centers) and legal risks (copyright, privacy, liability) rise.

The result is a paradox: to sustain ever more ambitious revenue targets in exhausted markets, AI becomes the new growth narrative, but that narrative depends on the yet-to-be-proven idea that large-scale human tasks can be replaced by systems that, to function decently, require more supervision, more curated data, and more infrastructure. What’s advertised as “growth for everyone” is, in reality, an attempt to squeeze an already saturated market, shifting resources from one segment to another with rising costs and systemic risks that marketing tends to hide.

Dogshit Unit Economics

The companies dominating today’s AI debate aren’t going through a simple shakedown phase: they burn cash structurally, regardless of launch timing or initial expansion. This isn’t the classic startup scenario of early-years losses — high expenses that then amortize as the business scales — but an inherently loss-making economic model that struggles to generate positive margins even once the product is widespread. Every new customer doesn’t improve the numbers — it worsens them — because variable costs grow faster than revenues. In generative AI, every additional interaction requires compute, electricity, cooling, and bandwidth: resources whose costs are rising and hard to compress. Making matters worse, each new generation of models demands a fresh leap in requirements: larger datasets to acquire and clean, costlier training infrastructure, higher-end GPUs, and increased electricity and water consumption. Instead of shrinking, the cost to serve the user tends to grow as the product evolves.

The contrast with “legacy” digital platforms is stark: services like search engines, social networks, or marketplaces benefited from classic economies of scale, where the marginal cost per additional user fell; running a hundred million users didn’t cost a hundred times more than running one million. In generative AI, the opposite holds: more users mean more compute kept online, more models to update, and more infrastructure to power, with a direct impact on costs that grows at least in step with usage.
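
A toy model with entirely invented numbers can make the inversion concrete. In the classic regime the margin per user improves with scale; in the generative-AI regime, if the subscription price sits below the variable cost to serve (which is what below-cost pricing implies), every new user deepens the loss:

```python
# Toy unit-economics model: classic platform vs. generative AI service.
# All figures are invented for illustration; none are real company data.

REVENUE_PER_USER = 20.0  # hypothetical flat monthly subscription, $

def classic_cost_per_user(users: int) -> float:
    fixed = 5_000_000      # shared infrastructure, $/month
    marginal = 0.02        # near-zero cost per extra user, $
    return fixed / users + marginal

def genai_cost_per_user(users: int) -> float:
    fixed = 5_000_000      # data centers and staff, $/month
    queries = 300          # queries per user per month
    cost_per_query = 0.08  # compute, power, cooling per query, $
    return fixed / users + queries * cost_per_query

for users in (1_000_000, 10_000_000, 100_000_000):
    for name, cost in (("classic", classic_cost_per_user),
                       ("genai", genai_cost_per_user)):
        margin = REVENUE_PER_USER - cost(users)
        print(f"{name:>7} @ {users:>11,} users: "
              f"margin ${margin:6.2f}/user, total ${margin * users / 1e6:8.1f}M/month")
```

In the classic column the per-user margin climbs toward the subscription price as fixed costs amortize; in the generative-AI column, scale only multiplies the loss.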

You don’t need an economics degree to see this is a paradoxical, long-term dead end, memorably captured by Edward Zitron’s phrase “dogshit unit economics.” It’s a scenario that completely inverts the economic paradigm underpinning tech markets: instead of scaling with falling costs, AI scales with expanding costs, which makes it fragile in the face of liquidity drops or investment pullbacks.

In short, this is a model with no real chance of achieving satisfactory profitability. And what happens when it becomes clear that an investment can’t reach the break-even point and offers no credible prospect of generating profits any time soon?

Tangled Books, Inflated Numbers

First of all, there’s an attempt to buy time with accounting engineering, altering the perception of revenues and margins in a desperate bid to cover a gigantic cash hole. A recurring pattern is the cross-booking of value between partners: for example, a major cloud provider grants compute capacity as part of a “strategic investment”; the AI company uses it for training and inference and books it as an operating expense; the provider, in turn, can record that same capacity as revenue for its cloud business. The very same figure thus appears under three different headings (investment, expense, revenue), inflating the aggregate picture without a corresponding flow of cash.

In essence, that’s what happened between Microsoft and OpenAI: Microsoft granted OpenAI data-center usage credits as part of a strategic “investment,” valued on the books at around $10 billion. For OpenAI, that amount shows up as an operating cost to train and serve models, while for Microsoft it was recognized as cloud revenue from a customer. The same economic quantity thus appears, in different contexts, as investment, cost, and revenue, feeding a growth narrative without a corresponding cash inflow.
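
Schematically, the round trip looks like this; a purely illustrative Python sketch in which the only figure taken from the story above is the roughly $10 billion in credits, and the account names are simplified placeholders:

```python
# One economic quantity, three accounting appearances (schematic illustration).

compute_credits = 10_000_000_000  # cloud capacity granted in lieu of cash, $

books = {
    "provider balance sheet":      ("strategic investment", +compute_credits),
    "AI company income statement": ("operating expense",    -compute_credits),
    "provider income statement":   ("cloud revenue",        +compute_credits),
}

for ledger, (line_item, amount) in books.items():
    print(f"{ledger:<28} | {line_item:<20} | ${abs(amount) / 1e9:.0f}B")

print("actual cash that changed hands for the credits: $0")
```

Three healthy-looking line items, one transfer of capacity, and no new cash anywhere in the system.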

A similar dynamic involves Nvidia: many data-center operators, to expand quickly, have pledged batches of GPUs as collateral to obtain credit, reporting the value of those cards on their balance sheets both as assets and as loan guarantees. This inflates apparent net worth without increasing actual liquidity or the ability to generate profits.

This accounting dance makes it possible to present “revenues” and “assets” higher than what is actually collected, and to mask chronic operating losses: as long as capital keeps flowing and credit is easy, the illusion holds; when liquidity tightens, the house of cards is exposed.

This kind of accounting “scaffolding” often comes with other maneuvers:

  • “Annualized” revenues extrapolated from the best month (or a big client’s run-rate), projecting non-repeatable peaks over a full year (a sketch of this follows the list).
  • Extensive use of “adjusted” metrics (adjusted EBITDA, normalized revenues, selective exclusions of cost items) that soften the impact of operating losses.
  • Supplier credits (capacity credits, vendor credits) traded as consideration, which improve the accounting numbers more than liquidity.
  • Reclassifications and related-party transactions that obscure who is actually paying what (and when).
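
On the first of these maneuvers, a minimal Python sketch (with invented monthly figures) shows how projecting the best month over a full year nearly doubles the headline number:

```python
# "Annualized" revenue vs. what was actually billed (invented figures, $M).
monthly_revenue = [3, 4, 4, 5, 6, 9, 12, 8, 7, 6, 5, 5]  # one year, by month

actual = sum(monthly_revenue)                # what the year really produced
annualized_peak = max(monthly_revenue) * 12  # best month projected forward

print(f"actual annual revenue: ${actual}M")           # $74M
print(f"'annualized' pitch:    ${annualized_peak}M")  # $144M, nearly double
```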

The net effect is a growth story propped up by not-fully-monetary revenues, which masks chronic losses and postpones a reckoning with the fundamentals (what does it really cost to serve a user?). As long as capital flows and credit is abundant, the game works. Unfortunately, numbers that don’t reconcile with cash always come due: a puff of wind — rising rates, investors hitting the brakes, customers cutting spend, expiring supplier credits — is enough to bring this house down.

Infrastructure and Debt: GPUs, Data Centers, Mortgages

One aspect often overlooked in the AI debate is the enormous physical apparatus needed to keep it running: industrial-scale data centers, tens of thousands of high-end GPUs, liquid-cooling systems, constant energy and water consumption, plus technical staff for maintenance. This infrastructure carries very high fixed costs, on top of the variable costs of every training session or user request.

To finance expansion, many players — not just AI producers but also the companies providing them with compute capacity — have resorted to loans secured by the very same tech assets: in practice, they mortgage the GPUs they’ve bought, using them as collateral to obtain financing. The problem is that, unlike real estate or industrial machinery, GPUs depreciate very quickly: the innovation cycle renders them obsolete in a few years (sometimes months), and their resale value collapses rapidly.

On top of this accelerated obsolescence comes heavy operational wear and tear: training cycles for large models can last weeks and involve tens of thousands of chips; it’s not uncommon that, by the end, a significant share of GPUs is faulty or degraded, raising replacement and maintenance costs.

Many compute-capacity providers, such as large intermediaries of AI-specialized cloud infrastructure, operate with very high debt levels and short-term supply contracts signed with big tech. If those contracts weren’t renewed — or were cut amid slowing demand or investment pullbacks — these companies could face cascading insolvencies, with potential domino effects across the entire ecosystem: from hardware suppliers to data-center operators to energy producers.

The result is that the apparent solidity of this new “AI economy” rests on volatile assets and heavy debt, in a sector where productive capacity ages faster than loan repayment schedules: a combination that makes not only individual firms fragile, but the entire technological supply chain on which AI rests.
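
To see why that mismatch matters, here is a deliberately crude sketch (Python, invented figures, straight-line loan amortization, and an assumed resale half-life for the hardware) of a loan secured by a GPU fleet:

```python
# GPU collateral value vs. outstanding loan balance (illustrative numbers only).

loan = 500.0           # $M borrowed against a GPU fleet
term_years = 5         # repayment schedule
fleet_value = 500.0    # initial market value of the fleet, $M
value_half_life = 1.5  # years for resale value to halve (fast obsolescence)

for year in range(term_years + 1):
    outstanding = loan * (1 - year / term_years)  # linear amortization
    collateral = fleet_value * 0.5 ** (year / value_half_life)
    flag = "  <-- collateral below debt" if collateral < outstanding else ""
    print(f"year {year}: debt ${outstanding:6.1f}M | GPUs worth ${collateral:6.1f}M{flag}")
```

In this toy schedule the collateral slips below the outstanding debt within the first year and stays there for most of the loan’s life: exactly the kind of gap that turns a demand slowdown into a solvency problem.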

The Impact on the Real Economy

At this point, after lining up costs, fragilities, and accounting acrobatics, a crucial question remains: are all these gambles to prop up a barely sustainable technology justified by truly disruptive effectiveness? The best way to answer is to look at data from the real economy. And here too, unfortunately, the picture is far from rosy: despite media noise and lofty expectations, the empirical evidence to date shows that AI adoption hasn’t produced macro-level transformations in the main economic indicators.

A study by the University of Chicago (“Early Economic Impacts of Generative AI”, SSRN, 2025) finds that the introduction of generative AI systems has not significantly affected wages, hours worked, or average income per employee in the sectors analyzed. Employment effects are marginal and concentrated in highly specialized niches, with no impact on aggregate productivity.

A 2025 MIT report (“The GenAI Divide: State of AI in Business 2025”, MIT Project NANDA) found that around 95% of companies piloting generative AI did not report tangible economic results and, in many cases, even recorded net losses relative to the investments made.

This gap between expectations and results fuels a genuine collective illusion, inflated by marketing campaigns and the emphasis of media and investors. The paradox that follows is this: while the dominant rhetoric announces that AI will drastically reduce the cost of labor, the evidence on the ground says the benefits remain limited, while costs (compute, plus the human labor of supervision) stay high.

AI Agents "on sale": The Mirage of Efficiency

At this point, someone might object that the real economy isn’t just made of companies that invest in AI, but also — and above all — organizations that use AI pragmatically, every day. The argument is simple: maybe the big players burn money, but most organizations are already benefiting from the “simple” use of agents that handle part of the work. In other words, we’re talking about the “economic advantages” resulting from the standard use of tools like ChatGPT, Claude, or Copilot via more or less elaborate prompts: email drafts, automated minutes, template-based documents, vibe coding, and so on. In many workplaces, this flow is already routine: the employee instructs the agent to do the job, reviews the output, and then passes it off as their own, with the impression of saving a lot of time while reducing risks linked to typos, human error, and the like.

So far, everything seems to work. The problem is that the extraordinary convenience making this trade-off attractive is sustained by a kind of de facto permanent promotional period: free credits, API discounts, bundling with other services, accounting that shifts costs from usage to subscriptions, and overall prices that are objectively too low to cover the real costs borne by technology providers (compute, energy, infrastructure, supervision). The entire sector is currently immersed in a kind of technological dumping: broadly the same strategy whereby, in the 2000s, Amazon Web Services priced storage below cost to massively expand its user base at competitors’ expense, and Uber offered discounted rides to displace traditional taxis. The issue is that, in AI’s case, we’re not talking about a single segment, but a strategy that involves millions of people across nearly all major productive sectors — from customer care to software development, from publishing to marketing, from operational finance to education, through to administrative healthcare and the public sector. When direct and indirect subsidies end or list prices return to reflect costs, this apparent “efficiency” can turn into sudden price hikes, service contractions, onerous lock-in, and widespread employment shocks, compromising the day-to-day operations of a vast number of organizations and workers.
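
To put rough numbers on that risk, here is a back-of-the-envelope sketch in Python; both prices are hypothetical, chosen only to illustrate the gap between a promotional tariff and a cost-covering one:

```python
# A team's monthly bill when promotional pricing ends (hypothetical prices).

tokens_per_month = 500_000_000  # a mid-size team's monthly usage
promo_price = 0.50              # $ per million tokens, subsidized
cost_covering_price = 3.00      # $ per million tokens, assumed real cost

promo_bill = tokens_per_month / 1e6 * promo_price
reset_bill = tokens_per_month / 1e6 * cost_covering_price

print(f"bill during the promotion:    ${promo_bill:,.0f}/month")       # $250
print(f"bill at cost-covering prices: ${reset_bill:,.0f}/month")       # $1,500
print(f"overnight increase:           x{reset_bill / promo_bill:.0f}") # x6
```

Multiply that sixfold jump across every team and workflow that has quietly standardized on these tools, and the scale of the potential shock becomes clear.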

The Final Bill for Workers

If the AI bubble were to deflate abruptly, workers would be the first to pay the price. The paradox is that many risk being hit twice: first replaced or downgraded by tools that promise to substitute them but, in reality, cannot perform their jobs with the same reliability; then pushed out of the market when those systems become too expensive to maintain and are shut down.

This prospect is especially alarming because the rhetoric of automation encourages preemptive layoffs, driven more by expected savings than by the actual capacity of models to replace humans. The result is that entire corporate functions are reduced or dismantled while operations still require supervision and human intervention, which inevitably gets marginalized or shifted into precarious, outsourced, or poorly paid roles.

Put another way: at present, AI can’t take your job, but its promoters can persuade your boss to fire you (or pay you much less). And when the capital that now keeps these services alive runs out, leading to the shutdown or downsizing of many platforms offering AI services, the damage will already be done. Staff already let go, presumably recycled into HITL tasks or low-paid freelance roles, will not automatically return to their posts. We’ll be left with a labor market that’s poorer and more fragmented, with interrupted skills and a pool of workers who, after “training” AI, find themselves without a stable role.

Conclusions

Ultimately, the clearest risks of widespread AI use aren’t the robot-apocalypse scenarios that now dominate most conferences and coverage of the topic, but a possible (and, alas, increasingly likely) employment crash: the combination of inflated expectations, premature layoffs, and the sudden shutdown of unsustainable services can translate into widespread shocks across critical sectors — from logistics to customer service, from accounting to administrative healthcare — with consequences far beyond big tech’s balance sheets.

Worse still, even in the most “optimistic” scenario — one where the bubble doesn’t burst soon — workers are likely to gain nothing. The extra value generated by AI adoption, as so often happens, tends to flow upward, concentrating among those who control chips, cloud, and platforms, while those downstream see no real benefit: when productivity rises, any gains are immediately capitalized into margins and license prices, not into higher wages or shorter hours at the same pay. In fact, it often becomes an increase in the amount of work demanded at the same salary (“now that AI helps you”) and/or headcount cuts, outsourcing, and more fragile contracts.

In other words, the promise of “working less while earning more” is not materializing: efficiency is booked at the top, while at the bottom, protections and pay are squeezed. Unless we build explicit mechanisms to share the productivity dividend — bargaining, cost transparency, clauses on hours and pay, audit rights over impacts — Artificial Intelligence will not spontaneously redistribute its benefits. And so, even without a spectacular bang, we could end up with more precarity and less bargaining power: an economy that produces more, but not for the people who make it run, with all that entails.

If we want AI to deliver broad-based welfare, the “productivity dividend” must be contractually guaranteed, not left to market inertia. This means explicitly tying system adoption to verifiable commitments on wages, hours, and job quality; demanding transparency on total costs (compute, energy, supervision) and on the real performance of projects; introducing pass-through clauses that turn efficiency gains into better paychecks or more time off; setting no-layoff periods and reskilling funded by those implementing automation (and reaping the largest benefits); ensuring interoperability and portability to avoid lock-in that pushes price hikes downstream; and, no less important, adopting a system that can objectively measure how this technology improves workers’ conditions: working fewer hours (for the same pay) or earning more (for the same work).

Absent these rules, enthusiasm for AI will keep moving profits upward and risks downward: more margin for those who control chips, cloud, and platforms — and for those who buy their services — and more precarity for those who keep processes running. And here’s the final point: if, as everything suggests, this market isn’t sustainable, the day the music stops the bill won’t be paid by the prophets of growth but — as usual — by those who work.
