Summary
An opinion article in VentureBeat argues that the impact of the AI bubble bursting will cascade through companies. The first companies to fail will be those whose business offering is essentially a wrapper around third-party AI services. Of the companies building large language models, only two or three will survive: those that develop the most effective systems engineering techniques to make AI inference more efficient. The author estimates that over 1,300 AI startups in the US have valuations exceeding 100 million USD, and believes few are worth these valuations. Meanwhile, science-fiction writer Cory Doctorow is extremely critical of the current AI trajectory. For him, Big Tech companies have become too big to grow. To maintain investor interest, they create the illusion of growth by insisting on the need to invest in AI. His main concern about the AI bubble is that when it bursts, Big Tech itself will not lose out; rather, the biggest losers will be the pension funds that have invested in these projects.
On the regulatory front, AI is now a lever in the sour political climate in the US and will play a part in the mid-term US elections in November 2026. In December 2025, President Trump signed an executive order that aims to deter states from enacting their own AI laws by restricting federal funding to states that do. The order has been contested by several states. Also, in advance of next November’s elections, political action committees (PACs) have been created to raise campaign funds for candidates for or against AI regulation. In Europe, some experts argue that the EU AI Act is not ready for multi-agent AI incidents. The AI Act focuses on serious incidents that can occur with high-risk AI systems such as critical infrastructure. This might not cover risks that arise from a cascading series of incidents involving independently developed AI agents.
In the wake of the killing of a US citizen by agents of the US Immigration and Customs Enforcement (ICE), Big Tech leaders are under pressure from their employees to distance themselves from the federal agency. Some have issued measured criticism of events in Minnesota, but none have criticized Trump. Meanwhile, two directors of the Germany-based HateAid association have been barred from entering the US. HateAid was founded in 2018 to help victims of online violence take legal action against offenders. US Secretary of State Marco Rubio wrote in a tweet: “For far too long, ideologues in Europe have led organized efforts to coerce American platforms to punish American viewpoints they oppose. The Trump Administration will no longer tolerate these egregious acts of extraterritorial censorship.”
On the cybersecurity front, a report from the firm Pillar Security explains that criminal groups are exploiting public access points to AI services. Their aim is to break into the platform hosting the AI. Once inside, the criminals steal chatbot access credentials or sensitive data from prompts. They may also resell API access at discounted rates or attack MCP servers to propagate deeper into the host company’s IT infrastructure. Meanwhile, a report from Deloitte warns that organizations are deploying AI agent systems faster than their AI safety guardrails can keep up. Currently, 23% of organizations have deployed AI agents, and the figure is expected to rise to 74% over the next two years; yet only 21% of organizations have strict safety protocols in place. One risk is that agents take decisions in opaque ways, and this opacity could make insurance companies reluctant to insure organizations that deploy such systems.
Meta is abandoning its metaverse project – the platform of 3D virtual spaces that people connect to using virtual and augmented reality devices and then navigate using avatars. Meta set the metaverse as a key project when the Facebook company was renamed as Meta in 2021. The company and even analysts from McKinsey estimated at the time that the metaverse could become a multi-trillion-dollar platform by 2030. Estimates were based on Gen Z’s apparent preference for online games such as Fortnite over traditional social media apps.
Finally, journalists from Computerworld report that the meaning of “AI skills” in job postings has shifted from “prompt engineering” to AI governance skills – the ability to identify and mitigate AI-related risks – as well as the analytical ability to understand business workflows and to see where and how AI can improve them.
Table of Contents
1. Well, there goes the metaverse!
2. What it’s like to be banned from the US for fighting online hate
3. AI companies will fail. We can salvage something from the wreckage
4. Stop calling it 'The AI bubble': It's actually multiple bubbles, each with a different expiration date
5. America’s coming war over AI regulation
6. Crooks are hijacking and reselling AI infrastructure: Report
7. What AI skills job seekers need to develop in 2026
8. Deloitte sounds alarm as AI agent deployment outruns safety frameworks
9. The EU AI Act Newsletter #94: Grok Nudification Scandal
10. Anthropic, Apple, OpenAI CEOs condemn ICE violence, praise Trump
1. Well, there goes the metaverse!
Meta seems to be abandoning its metaverse project – the platform of 3D virtual spaces that people connect to using virtual and augmented reality devices, navigating the spaces using avatars and even engaging in digital commerce.
- Meta set the metaverse as a key project when the Facebook company was renamed as Meta in 2021. The company and even analysts from McKinsey estimated at the time that the metaverse could become a multi-trillion-dollar platform by 2030. Estimates were based on Gen Z’s apparent preference for online games such as Fortnite over traditional social media apps.
- The metaverse project and company rebranding of Facebook came at a time when the company was facing public relations challenges. The Cambridge Analytica data privacy scandal was fresh in people’s memory, and a whistleblower reported how Facebook had evidence of the negative impact of social media on children and teens but decided not to do anything.
- Many companies developing applications for Meta’s metaverse are directly impacted by Meta’s decision. These include the makers of games like “Resident Evil 4 VR”, “Asgard’s Wrath”, and “Marvel’s Deadpool VR”.
- Meta has had problems with its metaverse platform. Not only did the platform not attract enough users, but there were complaints of people being sexually harassed on the platform. Meta reacted by implementing a “personal boundary” feature, but its effectiveness is contested.
2. What it’s like to be banned from the US for fighting online hate
This MIT Technology Review article relates how two directors of the Germany-based HateAid association have been barred from entering the US.
- HateAid was founded in 2018 to help victims of online violence take legal action against offenders. The organization has worked with around 7,500 victims, leading to 700 criminal cases and 300 civil cases. HateAid also advises German police, prosecutors and politicians.
- The real reason for the ban is probably related to the European Union’s Digital Services Act, which forces digital platforms to enforce greater controls over published content. HateAid is named as a contact point in Germany for citizens who wish to challenge online content. The act is highly contested by US Big Tech and the US administration.
- The US Secretary of State Marco Rubio wrote in a tweet: “For far too long, ideologues in Europe have led organized efforts to coerce American platforms to punish American viewpoints they oppose. The Trump Administration will no longer tolerate these egregious acts of extraterritorial censorship.”
- The European Commission recently fined X 140 million USD for violations of the Digital Services Act.
- In similar moves, the US administration has attacked two judges of the International Criminal Court for the “illegitimate targeting of Israel”. The judges have even lost access to US Tech platforms, including Microsoft, Gmail and Amazon.
3. AI companies will fail. We can salvage something from the wreckage
In this Guardian article, science-fiction writer Cory Doctorow, also author of “Enshittification: Why Everything Suddenly Got Worse and What To Do About It”, gives his take on the current AI trajectory.
- He returns to the term “centaur” from automation theory – this is a person assisted by a machine. A programmer using autocomplete in a text editor is a centaur. One danger with AI is the appearance of “reverse centaurs”: machines assisted by humans. An example is a person helping a self-driving delivery truck: the human is there only to carry packages from the van to the doorstep – the only part of the delivery process that the machine cannot handle.
- Another issue highlighted is Big Tech’s narrative, which the author finds quite dictatorial, that only one technological path is possible. For instance, Meta CEO Mark Zuckerberg “wants you to think that it is technologically impossible to have a conversation with a friend without [Meta] listening in”, and Apple CEO Tim Cook “wants you to think that it is impossible for you to have a reliable computing experience unless he gets a veto over which software you install and without him taking 30 cents out of every dollar you spend”. As a science-fiction writer, Doctorow is wholly against this “only one way” discourse.
- Another issue discussed is the market dominance of Big Tech, and the fact that this dominance prevents these companies from growing any further. Investors are attracted to growing companies because their investment is likely to yield a return; a company that cannot grow further does not yield such returns. The incentive for companies that can no longer grow is therefore to convince the market that they still can. This can explain why Big Tech is betting so much on the evolution of AI.
- Another issue is why AI companies want to automate existing jobs. When an AI replaces a human job, the saved salary is split between the employer and the AI company; only the employee loses out.
- Doctorow’s main concern about the AI bubble is that when it bursts, Big Tech itself will not lose out. The biggest losers will be the pension funds that have invested in these projects.
4. Stop calling it 'The AI bubble': It's actually multiple bubbles, each with a different expiration date
An opinion article by the Chief AI Officer at WEKA argues that the AI bubble should be analyzed as three layers of companies, each with a different level of resilience.
- The Layer 3 companies will be the first to fall. These are the “wrapper” companies that provide an AI interface, prompt engineering, and whose service then calls a third-party AI model. An example of a successful company in this layer is Jasper.ai.
- Layer 3 companies face risks from large companies, e.g., if Microsoft releases a similar feature in its Office 365 service. The author expects many of these companies to disappear in 2026.
- Layer 2 companies are those building large language models. Their current concern is return on investment. For instance, OpenAI has signed AI investment deals worth 1 trillion USD, but has an annual revenue of “only” 13 billion USD. In this area, competitive advantage will increasingly be achieved by systems engineering for improved inference times. The author expects two or three players to emerge over the next two years.
- Layer 1 companies are the infrastructure companies like Nvidia. Their advantage is simply that even if Layer 2 and Layer 3 companies crash, the Layer 1 infrastructure will remain and can still be exploited.
- The impact of the AI bubble bursting will cascade through the three layers of companies. Layer 3 companies will fail first. The author estimates that over 1,300 AI startups in the US have valuations exceeding 100 million USD, and 498 are valued at more than 1 billion USD. He believes few of these companies are worth these valuations.
- The fall of these companies will precipitate the emergence of two or three dominant LLM companies. Infrastructure spending will level out but remain high.
5. America’s coming war over AI regulation
This article examines how AI is feeding into the current sour political climate in the US and how it will impact the mid-term US elections in November 2026.
- In December 2025, the US Congress failed to pass a law that would ban states from enacting their own laws on AI. President Trump then signed an executive order that aims to deter states from enacting such laws by restricting federal funding to states that do.
- California (SB-53) and New York (RAISE Act) are the most prominent states to have enacted laws, but the National Conference of State Legislatures says that nearly 40 states have passed AI-related laws.
- Trump has promised to create a federal AI law, one that would be minimally burdensome to Big Tech, but many politicians feel that Trump’s tactics have made it impossible to gather bipartisan support.
- Child safety is one area where agreement on legislation may be found. Trump’s executive order does not apply to child-safety regulation. OpenAI is supporting an initiative for a Parents & Kids Safe AI Act that would oblige AI providers to include age verification and parental controls.
- In advance of next November’s elections, political action committees (PACs) have been created to raise campaign funds for candidates for or against AI regulation. Leading the Future is a PAC backed by OpenAI president Greg Brockman that supports candidates who favor uninhibited AI development. Public First is a PAC advocating AI regulation, drawing its support from growing public angst about where AI is heading and from unhappiness about the financial and environmental costs of data centers.
6. Crooks are hijacking and reselling AI infrastructure: Report
A report from the firm Pillar Security explains that criminal groups are exploiting public access points to LLMs (e.g., chatbot services) and to MCP (Model Context Protocol) servers, which connect AI models to external tools and data in agent applications.
- The company says its honeypots detected 35,000 attacks on AI infrastructure in the last two weeks alone. One of the criminal groups has named its operation Bizarre Bazaar.
- The criminals’ aim is to break into the platform hosting the AI. Once inside, they can steal chatbot access credentials or sensitive data from prompts. They may also resell API access at discounted rates or attack MCP servers to propagate further into the host company’s IT infrastructure.
- An example of an at-risk deployment is an LLM server such as Ollama exposed with unauthenticated API access on its standard port. Since AI is part of many prototypes under development, many of these unprotected access points are running in development or test environments.
- Security companies encourage organizations to implement authentication on AI endpoints, to apply rate limiting, and to conduct regular audits of the service; a minimal sketch of how an exposed endpoint can be probed follows this list.
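To make the exposure concrete, here is a minimal sketch (not from the report) of how a defender could check their own hosts for an unauthenticated Ollama endpoint. It assumes Ollama’s default port 11434 and its /api/tags route, and should only be run against systems one is authorized to test.

```python
# Minimal, hypothetical probe for an unauthenticated Ollama API endpoint.
# Assumes Ollama's default port (11434) and its /api/tags route, which lists
# installed models without credentials when the API is exposed.
import sys
import requests

def ollama_is_exposed(host: str, port: int = 11434, timeout: float = 3.0) -> bool:
    """Return True if the host answers Ollama's /api/tags route without authentication."""
    try:
        resp = requests.get(f"http://{host}:{port}/api/tags", timeout=timeout)
    except requests.RequestException:
        return False  # closed port, firewall, or not an HTTP service
    return resp.status_code == 200 and "models" in resp.text

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "127.0.0.1"
    if ollama_is_exposed(target):
        print(f"{target}: Ollama API reachable without authentication; "
              "put an authenticating proxy, rate limiting and audit logging in front of it")
    else:
        print(f"{target}: no unauthenticated Ollama API detected")
```

In line with the mitigations above, a production deployment would typically place such an endpoint behind a reverse proxy that enforces credentials and rate limits rather than exposing the model server directly.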
7. What AI skills job seekers need to develop in 2026
In this Computerworld article, journalists report on the current meaning of “AI skills” when the term appears in job postings.
- ManpowerGroup says that the frequency of the term “AI skills” in job postings increased by 5% in 2025.
- Initially, the term was often equated with the ability to “prompt engineer” generative AI requests. Prompt engineering has since given way to “context engineering”: the ability to craft prompts that yield consistent and predictable answers from AI despite a rapidly changing AI ecosystem (see the sketch after this list).
- For Gartner, AI governance – the ability to identify and mitigate AI-related risks – will be an increasingly sought skill in 2026.
- A final skill is the analytical ability to understand business workflows and to see where and how AI can improve them.
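As a simple illustration of the shift from prompt engineering to context engineering, here is a minimal sketch; the task, field names and wording are hypothetical, not from the article. The point is that the request carries an explicit role, constraints and an output schema, so answers stay consistent and predictable as the underlying models change.

```python
# Hypothetical sketch of "context engineering": instead of a bare one-line
# question, the prompt pins down a role, constraints and an output schema.
import json

def build_triage_prompt(ticket_text: str) -> str:
    role = "You are a support-triage assistant for an IT helpdesk."
    constraints = [
        "Answer only from the ticket text; do not invent details.",
        "If the ticket is ambiguous, set priority to 'needs_human_review'.",
    ]
    output_schema = {
        "priority": "low | medium | high | needs_human_review",
        "summary": "one sentence",
    }
    return (
        f"{role}\n"
        "Constraints:\n- " + "\n- ".join(constraints) + "\n"
        f"Respond only with JSON matching this schema: {json.dumps(output_schema)}\n\n"
        f"Ticket:\n{ticket_text}"
    )

if __name__ == "__main__":
    # The resulting string would be sent to whichever chat model the organization uses.
    print(build_triage_prompt("VPN drops every 10 minutes since this morning."))
```

The value lies less in the specific wording than in the habit of making the model’s role, limits and expected output explicit and machine-checkable.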
8. Deloitte sounds alarm as AI agent deployment outruns safety frameworks
A report from Deloitte warns that organizations are deploying AI agent systems faster than their AI safety guardrails can keep up.
- Currently, 23% of organizations have deployed AI agents and the figure is expected to rise to 74% over the next two years. Nevertheless, only 21% of organizations have strict safety protocols in place.
- The risk is that agents take decisions in opaque ways. Also, agents are tested in development environments which do not have the same challenges as production environments where data can be inconsistent and systems are more fragmented.
- The article points out that the opacity of AI agent systems could make insurance companies reluctant to insure organizations with these systems.
- Another concern is that some standards, like those of the Agentic AI Foundation (AAIF), are geared towards defining how agent systems should work functionally. They are not focused enough on safety and governance.
9. The EU AI Act Newsletter #94: Grok Nudification Scandal
On the European regulatory front, some experts argue that the EU AI Act is not ready for multi-agent AI incidents.
- The AI Act focuses on serious incidents that can occur with high-risk AI systems such as critical infrastructure. This might not cover risks that arise from a cascading series of incidents from independently developed AI agents.
- Several AI incidents have already been reported that involve multi-system interactions rather than isolated failures.
- Meanwhile, 57 European Parliament Members have called for a ban on AI applications that allow for the generation of non-consensual sexualized images or videos.
- This call comes after controversy on this topic involving the X platform. The company said that it would prevent users from creating such images – though this promise may not yet have been fulfilled.
10. Anthropic, Apple, OpenAI CEOs condemn ICE violence, praise Trump
In the wake of the killing of a US citizen by agents of the US Immigration and Customs Enforcement (ICE), Big Tech leaders are under pressure from their employees to distance themselves from the federal agency.
- An open letter from Big Tech employees called on their CEOs to end all contracts with ICE and to publicly criticize the organization.
- The response has been mixed from CEOs. Anthropic CEO Dario Amodei has expressed concern over “some of the things we’ve seen in the last few days” and insisted that “we need to defend our own democratic values at home”. He nevertheless praised a decision by Trump to allow Minnesota officials to conduct their own investigation into the killing. OpenAI CEO Sam Altman expressed similar sentiments.
- One of the signatories of the ICEout.tech letter has criticized Altman’s attitude, saying he wants to “have it both ways” by praising Trump’s leadership, “as if the president bears no responsibility for ICE’s actions”.
- Both Anthropic and OpenAI have said that they have no ongoing contracts with ICE.