AI's SaaSpocalypse and a “feedback loop with no natural brake”

DeepSeek Distilled Claude in its Training?

Posted on February 28th, 2026

Summary

Audio Summary

The term “SaaSpocalypse” is now used to denote investor fears that AI agents will undermine the software-as-a-service economy. The fears have not yet impacted Salesforce, whose revenue for the last year was 41.5 billion USD – a 10% increase on the previous year. Nevertheless, developments at Anthropic are fueling these fears, as the company released more tools for its Claude Cowork enterprise AI productivity platform. These include AI connectors for Google Drive, Google Calendar, Gmail, DocuSign, and other platforms, as well as plugins for MS Excel and PowerPoint. For Anthropic, the advent of MCP (model context protocol) connectors has been a game-changer, allowing AI to move from user-AI interactions to smart workflows. The company mentioned that one year ago, one third of all US jobs had associated tasks appearing in Claude usage statistics; today the figure is one in two jobs.

Also on the subject of agents, a post by a Meta AI security researcher went viral after she told an OpenClaw AI agent to check her inbox and suggest messages for archival or deletion, and the agent instead deleted all her messages. One explanation for the misbehavior is “compaction” – when the agent’s context window does not have enough space for the prompt and data, and information has to be thrown away. In this case, the agent seems to have lost the instruction about not deleting messages. Meanwhile, Google removed API access to Gemini AI for OpenClaw agents, citing “malicious usage” leading to service degradation for users. The move could be the start of a “walled garden” approach to AI eco-systems, which could see developers lose model access.

Anthropic is claiming that the Chinese AI companies DeepSeek, Moonshot and MiniMax have been illicitly using the Claude model to train their own models via distillation – the training technique where a student model learns from a teacher model by observing its answers to queries, rather than by learning from raw training data. Distillation is a legal grey area: intellectual property law covers the use of content as training data, but may not cover distillation, which might therefore need dedicated regulation. Elsewhere, the Trump administration is claiming that DeepSeek trained its latest AI model using Nvidia's Blackwell AI chip, despite an export ban on these chips to China. The pro-sales lobby in the US includes Nvidia, which argues that selling high-quality chips to China removes the incentive for China to invest in local companies like Huawei; the opposing lobby fears that China will exploit AI chips to take the lead in AI ahead of the US, and to strengthen its military.

On society issues, an MIT Technology Review article looks at the hidden human labor behind the humanoid robot hype, as Nvidia and other companies announce the era of physical AI. It cites the example of a factory worker in Shanghai who spent a week wearing an exoskeleton and virtual reality headset, and was told to open and close a microwave door hundreds of times. The use of low-wage workers for AI tasks mimics the use of workers in low-wage countries to help train large language model guardrails, by moderating disturbing content (violent or over-sexualized). Meanwhile, TechCrunch discusses how the proliferation of AI slop is making it difficult for content providers to be heard on the Internet, though this may create an increase in demand for authenticity, leaving the avenue open for a new platform.

An InfoWorld opinion article argues that the need for experienced software developers is as strong as ever. A much-appreciated feature of AI code generation tools is that they generate extensive code: in trying to be robust, the code covers as wide a range of use cases as possible. The problem with this approach is that requirements are rarely 100% clear, so when they change, a greater volume of code needs to be changed. The author writes that “AI reduces the cost of writing code. It does not reduce the cost of owning it”.

Finally, the Guardian analyzes a Substack post by Citrini Research that describes an economic doomsday scenario caused by AI. The first stage of the scenario sees people deploying their own AI agents for organizational and data management tasks, diminishing the importance of companies like Oracle. People then create AI agents that replace the need for Uber or DoorDash. Agents transact in cryptocurrency – removing the need for Visa and Mastercard. Software share prices crash, leading to a credit crisis, and revenue crashes for the 50% of consumer spending accounted for by 10% of the US population. The report calls this “a feedback loop with no natural brake”.

1. Can the creator economy stay afloat in a flood of AI slop?

This article summarizes a TechCrunch podcast on how content creators can generate revenue in an Internet world filled with AI slop.

  • Today, even the highly popular YouTuber MrBeast finds it difficult to generate revenue from ads alone, given the sheer volume of content competing for attention.
  • AI is contributing to this problem through the creation of AI slop, making it harder for people with a message worth hearing to be heard.
  • AI can also act as a brake on content generation, as people often violate copyright (by using images of well-known actors or comic characters in videos), leading to studios sending cease-and-desist warnings to the content platforms.
  • Revenue creation might only come through a technological avenue, such as the use of digital twins.
  • The proliferation of AI slop could create an increase in demand for authenticity, leaving the avenue open for a new platform.

2. The human work behind humanoid robots is being hidden

This MIT Technology Review article looks at the hidden human labor behind the current humanoid robot hype, as Nvidia and other companies announce the era of physical AI.

  • One issue is the overestimation of the current capabilities of humanoid robots, despite highly publicized demos. For instance, the Neo humanoid robot, set to ship this year from 1X for 20,000 USD, requires remote human intervention as soon as the robot gets into a difficult situation. This has important privacy implications.
  • As with autonomous vehicles, hype can contribute to exaggerated expectations among the public and investors.
  • Human factory workers are being used to train robots. The article cites the example of a factory worker in Shanghai who spent a week wearing an exoskeleton and virtual reality headset, and was told to open and close a microwave door hundreds of times.
  • The use of low-wage workers for AI tasks mimics the use of workers in low-wage countries to help train large language model guardrails, by moderating disturbing content (violent or over-sexualized).

3. Google clamps down on Antigravity 'malicious usage', cutting off OpenClaw users in sweeping ToS enforcement move

This article looks at the implications of Google removing API access for OpenClaw AI agents to the Gemini AI model. The ban even applies to enterprise users paying 250 USD per month.

  • Google cited “malicious usage” leading to service degradation for users as the reason for the OpenClaw ban.
  • Anthropic made a similar move earlier this year, banning wrappers like OpenClaw, so that Claude Code is now the only interface to its AI models.
  • These moves are seen as ushering in a “walled garden” approach to AI eco-systems, and represent an important business risk for AI developers. Access to third-party AI model APIs is highly dependent on terms-of-service conditions. Guaranteed API access could become more expensive as a result.

4. ‘A feedback loop with no brake’: how an AI doomsday report has rattled markets

This article analyzes a Substack post by Citrini Research that describes an economic doomsday scenario caused by AI. The scenario depends on the availability of efficient AI agent technology.

  • The first stage of the scenario sees people deploying their own AI agents for organizational and data management tasks. This diminishes the importance of traditional software companies like Oracle and Monday.com. People then use agents to create their own service companies, competing with DoorDash and Uber. This leads to large market fragmentation where margins are very thin.
  • People also use agents to transact on their behalf – cutting out middlemen like Booking.com – and eventually cryptocurrency becomes the default payment method. This removes the need for Mastercard and Visa.
  • These developments lead to mass white-collar unemployment, or to a situation where stable employment is replaced by “unstable, gig-economy jobs”. Revenue falls, as does the 50% of consumer spending accounted for by 10% of the US population. This is “a feedback loop with no natural brake”.
  • The crisis then ripples to the broader economy through defaults in private credit and a mortgage crisis. Many portfolios with stocks in software companies crash.
  • This can lead to a civic crisis, as government structure was never designed for a crisis of this nature.
  • The report warns of misleading GDP figures in the US, with so much of the value attributed to very few companies. Citrini Research calls this “ghost GDP”: “output that shows up in the national accounts but never circulates through the real economy”.

5. Anthropic alleges large-scale distillation campaigns targeting Claude

Anthropic is claiming that the Chinese AI companies DeepSeek, Moonshot and MiniMax have been illicitly exploiting the Claude model to train their own models.

  • Distillation is one technique for training generative AI models. This is where a student model learns from a teacher model (Claude in this case) by observing answers to queries – rather than by learning from raw training data.
  • Anthropic says that over 16 million interactions with Claude were observed through 24’00 fraudulent accounts, violating Anthropic’s terms of service.
  • The US currently has export controls on technology sales to China. However, these typically apply to hardware and might not cover the distillation that Chinese companies are allegedly engaged in.
  • The use of distillation is generally a legal grey area. Intellectual property law covers the use of content as training data, but may not cover distillation, which might therefore need dedicated regulation.
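The distillation technique described above can be sketched in a few lines. The following is a toy illustration only, not the pipeline Anthropic alleges: a single softmax “student” is fitted to a fixed “teacher” output distribution by gradient descent, so the student matches the teacher's answers without ever seeing the teacher's training data. All names and numbers are invented.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical "teacher": a fixed distribution over 4 answer tokens,
# standing in for a large model's response to one query.
teacher_probs = np.array([0.7, 0.2, 0.05, 0.05])

# "Student": logits trained to imitate the teacher's output,
# never the teacher's raw training data.
student_logits = np.zeros(4)

lr = 1.0
for _ in range(200):
    p = softmax(student_logits)
    # Gradient of cross-entropy H(teacher, student) w.r.t. the logits
    grad = p - teacher_probs
    student_logits -= lr * grad

print(np.round(softmax(student_logits), 2))  # approaches the teacher's distribution
```

Scaled up to millions of query-response pairs, the same imitation principle lets a student model absorb a teacher's behavior, which is why terms-of-service clauses, rather than copyright on training data, are currently the main lever against it.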

6. China's DeepSeek trained AI model on Nvidia's best chip despite US ban, official says

Reuters reports that the Trump administration has claimed that DeepSeek trained its latest AI model using Nvidia's Blackwell AI chip, despite an export ban on these chips to China. The administration believes that a data center in Inner Mongolia is using these chips.

  • There is much debate in the US about whether AI chips from US companies should be exported to China. In August 2025, President Trump decided to allow a scaled-down version of Blackwell – the H200 – to be exported to China, before later reversing this decision.
  • The pro-sales lobby includes Nvidia CEO Jensen Huang, who argues that selling high-quality chips to China removes the incentive for the Chinese to invest in local chip developers like Huawei.
  • The opposing lobby fears that China will exploit AI chips to take the lead in AI ahead of the US, and to exploit AI to strengthen its military.
  • A former official from President Biden’s administration said that “Chinese AI companies’ reliance on smuggled Blackwells underscores their massive shortfall of domestically produced AI chips and why approvals of H200 chips would represent a lifeline.”

7. A Meta AI security researcher said an OpenClaw agent ran amok on her inbox

This article considers a post by Meta AI security researcher Summer Yue who had told an OpenClaw AI agent to check her Inbox and suggest messages for archival or deletion. The agent deleted her emails.

  • The tweet read “Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speed-run deleting your inbox. I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.”
  • One explanation for the misbehavior is “compaction” – which is when the context window does not have enough space for the prompt and data, and information has to be thrown away. In this case, the agent seems to have lost the instruction about just archiving the messages.
  • Several similar AI agent platforms now exist, e.g., ZeroClaw, IronClaw, and PicoClaw, which have the same safety concerns.

8. Salesforce CEO Marc Benioff: This isn’t our first SaaSpocalypse

Salesforce says it is performing well, and that fears that agentic AI is threatening the SaaS (software-as-a-service) industry are exaggerated.

  • The company announced a yearly revenue of 41.5 billion USD – a 10% increase on the previous year. Its net income was 7.46 billion USD. This result was helped by its 8 billion USD acquisition of the data management company Informatica.
  • The fear among investors that AI agents will completely undermine the SaaS economy has been dubbed the “SaaSpocalypse”.
  • SaaS companies are trying to sell the idea that AI agents will strengthen SaaS platforms. Salesforce CEO Marc Benioff said that “If there is a SaaSpocalypse, it may be eaten by the Sasquatch because there are a lot of companies using a lot of SaaS because it just got better with agents.”
  • Salesforce is using some agentic AI. It has also introduced an agent metric called agentic work units (“AWU”) to measure concrete work tasks completed as a more meaningful measure than processed tokens. For instance, filing a report is at least 1 AWU, writing a poem is 0 AWU.

9. How AI redefines software engineering expertise

This InfoWorld opinion article argues that the need for experienced software developers is as strong as ever, despite advances in AI coding tools.

  • A much-appreciated feature of AI code generation tools is that they generate extensive code: in trying to be robust, the code covers as wide a range of use cases as possible. This differs from the typical “lazy” human programmer approach.
  • The problem with this approach is that requirements are rarely 100% clear, so when these change, a greater volume of code needs to be changed. The author writes that “AI reduces the cost of writing code. It does not reduce the cost of owning it”.
  • Another issue is that enterprise systems are typically developed in an evolutionary manner. This means that many design decisions are known only implicitly to the human engineers, and all of them must be captured in the prompt given to the AI coder. The system becomes fragile without this context.
  • AI introduces a new form of technical debt risk linked to judgement – the ability to recognize when a solution is heavier than the problem requires. The author writes that, for all generations of software tools, “what never changed was the need for someone to think deliberately about structure before scale magnified its flaws”.

10. Anthropic says Claude Code transformed programming. Now Claude Cowork is coming for the rest of the enterprise.

Anthropic has released a set of AI tools for its Claude Cowork AI productivity platform. The company’s strategy is heavily oriented towards the enterprise market.

  • The company offers MCP (model context protocol) connectors for Google Drive, Google Calendar, Gmail, DocuSign, Apollo, Clay, Outreach, SimilarWeb, MSCI, LegalZoom, FactSet, and WordPress, as well as plugins for MS Excel and PowerPoint.
  • A Spotify engineer says that use of Claude has led to a 90% reduction in engineering time for JIRA tickets, and that over 650 AI-generated code changes are shipped each month.
  • The CTO of the New York Stock Exchange is using Claude Code and is in the process of “rewiring [the] engineering process”. He describes an “assembly” engineering paradigm where organizations combine several models. Further, use of AI is forcing organizations to shift from “risk avoidance to risk calibration”.
  • For Anthropic, the advent of MCP connectors has been a game-changer, allowing AI to move from user-AI interactions to automated, smart workflows. These workflows have caused investors to worry about the future of classical software companies like IBM, ServiceNow, Salesforce, Snowflake, Intuit, and Thomson Reuters, whose share prices have fallen.
  • The company also mentioned that one year ago, about one third of all US jobs had associated tasks appearing in Claude usage statistics; today the figure is one in two. Nevertheless, there is not yet evidence of mass job layoffs.
  • Anthropic uses the term “thinking divide” to distinguish organizations using AI across employees, processes and products, from those which use AI in isolated use cases. The company argues that the latter companies will increasingly lag behind.
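The MCP connectors listed above communicate via JSON-RPC 2.0. As a rough sketch, the snippet below shows the shape of the messages a client might send to list and invoke a connector's tools. The method names follow the public MCP specification; the tool name and its arguments are hypothetical.

```python
import json

# Minimal sketch of MCP-style JSON-RPC 2.0 requests. The method names
# ("tools/list", "tools/call") come from the public MCP specification;
# "search_drive" and its arguments are invented for illustration.

list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_drive",                  # hypothetical Drive tool
        "arguments": {"query": "Q3 sales deck"},
    },
}

# Requests are serialized as JSON and sent over the connector's
# transport (stdio or HTTP, depending on the deployment).
wire = json.dumps(call_request)
print(wire)
```

Because every connector exposes its capabilities through the same `tools/list` and `tools/call` interface, a model can chain tools from Drive, Gmail, DocuSign and the rest into a single workflow, which is what makes the connectors a game-changer for Anthropic.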