New York Signs AI Safety Bill

AI Slop is Shaping the Web

Posted on December 29th, 2025

Summary

Audio Summmary

As 2025 draws to a close, MIT Technology Review lists some terms that highlighted the year in AI. These include vibe coding, agentic AI, the bubble, and super-intelligence. Terms that could see increased importance next year are physical and real-world models, which are models designed to reason about the impact of actions in the real world, e.g., predict whether a loose shoelace could trip someone up. Another feature of 2025 is the increased amount of AI Slop on the Internet – low-quality, sometimes surreal content, designed to augment the number of clicks. For the Guardian newspaper, slop is the “endpoint of an algorithmically determined Internet”, where user clicks are prioritized for commercial purposes. Elsewhere, the US-based Data Center Watch, tracking anti-data center activism, says that 142 different activist groups have blocked or delayed data center development worth 64 billion USD.

The New York State Governor, Kathy Hochul, has signed into law the state-wide AI Safety Bill which calls for AI developers to publish information about safety protocols and also to report safety incidents within 72 hours. Violations of the law by AI firms can lead to fines of up to 1 million USD for a first offense, and 3 million USD for subsequent offenses. Meanwhile in Europe, the current political climate is encouraging countries to rethink their relation with major US technical companies, and engage an increased effort towards digital sovereignty. The climate is relatively sour, with the US complaining of legislation in the EU that penalizes US companies and the issue of tariffs. Meanwhile, Big Tech firms in the US are concerned about the impact of recent restrictions on access to visas. US officials have begun reviews of applicant social media accounts to prevent “abuse of the H-1B program”. One immigration expert expects the restrictions to lead to several month delays in visa applications, and even to job losses.

An InfoWorld article looks at agent-washing” – the sale of solutions that are supposedly agentic but which in reality are just automation platforms. A typical example of a misleading agentic platform is one where prompts are sent to an AI model, and the outputs are passed to a fixed workflow. The danger for companies is that they believe in an AI investment, only to get technical debt. An MIT Technology Review article examines whether humans may be able to build trust in the robots that Big Tech is promising in the next few years. One challenge is that AI is currently not advanced enough to help a robot autonomously do household tasks, so 80% of robot tasks need to be controlled by remote human operators. Another issue for trust is a robot’s appearance, which must evoke trustworthiness in humans, without appearing too human.

OpenAI announced that it deployed a security update to its Atlas browser to defend against prompt injection attacks. This is an attack that “targets AI agents by embedding malicious instructions into content to override or redirect the agent’s behavior”. The company admits that there is no universal defense mechanism against prompt-injection attacks, and advise users of agent browsers to use agents with limited log-in abilities. Finally, another Guardian article reports on an English-speaking cyber-criminal organization called the Com, believed to be composed of a few thousand native English-speaking members, generally aged from 16 to 25. The UK’s National Crime Agency has described the members as “usually young men who are motivated by status, power, control, misogyny, sexual gratification, or an obsession with extreme or violent material”.

1. New York Governor Kathy Hochul signs RAISE Act to regulate AI safety

The New York State Governor, Kathy Hochul, has signed into law the state-wide AI Safety Bill called the RAISE Act.

  • The law calls for AI developers to publish information about safety protocols and also to report safety incidents within 72 hours. An office within the Department of Financial Services will monitor AI developments.
  • Violations of the law by AI firms can lead to fines of up to 1 million USD for a first offense, and 3 million USD for subsequent offenses.
  • The governor said that the “law builds on California’s recently adopted framework, creating a unified benchmark among the country’s leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public.”.
  • Donald Trump recently signed an executive order calling on federal agencies to challenge US state regulations on AI. Nonetheless, an Anthropic representative said: “The fact that two of the largest states in the country have now enacted AI transparency legislation signals the critical importance of safety and should inspire Congress to build on them.”.

2. The Com: the growing cybercrime network behind recent Pornhub hack

This Guardian article reports on an English-speaking cyber-criminal organization called the Com. It is believed to be composed of a few thousand native English-speaking members, generally aged from 16 to 25.

  • The UK’s National Crime Agency has described the members as “usually young men who are motivated by status, power, control, misogyny, sexual gratification, or an obsession with extreme or violent material”.
  • The Com is composed of several subgroups, the first of which is Hacker Com – to which well-known groups like ShinyHunters, Scattered Spider and Lapsus$ belong. These groups are known for ransomware attacks on companies after exfiltrating data as well as for crypto scams via hacked social media acconts.
  • Another subgroup is In Real Life Com (IRL) whose activities include “swatting” – calling police on false pretenses, e.g., fake bomb alert. Another activity is violence-as-a-service where contracts to carry out violence are posted on their forums.
  • Another subgroup is Extortion Com which targets vulnerable children and teenagers. The victims are coerced into sharing or live-streaming acts of self-harm, explicit sexual activities, or even attempted suicide. The footage is shared within the group for later extortion.
  • For one expert, “the Com ranges from 11-year-olds trying to hack Minecraft to people in their mid-20s targeting vulnerable kids online”. He says that the organization acts as a pipeline where older members groom younger recruits into performing increasingly damaging cybercriminal acts.

3. The year data centers went from backend to center stage

This article reports on how US citizens are increasingly aware of, and opposed to, the construction of data centers across the country.

  • Data from the US Census Bureau shows that data center spending has increased by 331% since 2021 – which amounts to billions of USD.
  • Data Center Watch is an organization tracking anti-data center activism. It claims that there are currently 142 different activist groups across 24 states protesting against data center developments. The organization also says that data center development worth 64 billion USD has been blocked or delayed by this activism.
  • The main concerns of the activists are environmental concerns, harmful ways for which AI might be used, the impact of data centers on increases in electricity bills. Many analysts believe electricity price increases due to data centers will be an important topic in next year’s mid-term elections.
  • Politico reports that a new trade group called the National Artificial Intelligence Association (NAIA), has been “distributing talking points to members of Congress and organizing local data center field trips to better pitch voters on their value.”.

4. AI Wrapped: The 14 AI terms you couldn’t avoid in 2025

This MIT Technology Review article looks at key AI terms that were popular in 2025.

  • Super-intelligence is increasingly used in place of Artificial General Intelligence (AGI). AGI is used to refer to a machine whose intelligence is indistinguishable from a human’s. Super-intelligence is demonstratively superior to human intelligence. The term is generally considered marketing, and has been used by Big Tech companies to attract talent.
  • In the area of AI model development, reasoning represents the resolution of a problem in multiple, and sometimes verifiable, steps. Distillation is a manner of training an AI model by having it learn from a “teacher” model. Distillation was important in 2025 since it is believed that DeepSeek created their R1 model using distillation.
  • One of the newer terms is physical models. These are models that have more grounding in the real world through augmented reality interfaces – and are generally associated with robots. A related term is world models – these are models with more real world knowledge. They are designed to reason about the impact of actions in the real world, e.g., predict whether a loose shoelace could lead someone to trip up.
  • Hyper-scalers are the increasingly large data centers with GPUs to support AI training and inference.
  • Agentic was a key term in AI this year. It refers to an AI than can act autonomously on behalf of a human, like a personal assistant does when booking a trip. Another is vibe coding, referring to the design and coding of an application exclusively via prompting an AI.
  • As AI is increasingly used, some of the problems linked to AI have become more pronounced. One is sycophancy, where a model excessively agrees with or seeks to please the human. A release of GPT-4o underwent a rollback after it was considered to be too sycophantic. Another problem is AI Slop – poor quality, mass-produced, content, often aiming simply to maximize Web traffic.
  • One term which we may be hearing a lot more of in 2026 is the AI bubble, which reflects an enormous amount of spending by investors for a return that, so far, has fallen short of expectations.

5. When is an AI agent not really an agent?

This InfoWorld article looks at the dangers of “agent-washing” – the sale of solutions that are supposedly agentic but which in reality are just automation platforms.

  • A parallel of made with “cloud-washing”, from the early years of the century, where organizations spent a lot of money on architectures that were supposedly cloud-native, but found themselves with rigid architectures that did not support agility.
  • The author defines a truly agentic platform as one where the system can pursue a goal with a degree of autonomy. The agent should be capable of multi-step behavior, adapting its actions to outcomes and feedback. An agent should be able to act by invoking tools or APIs.
  • A typical example of a misleading agentic platform is one where prompts are sent to an AI model, and the outputs are passed to a fixed workflow. The danger for companies here is that they might believe in a huge investment in AI, whereas only technical debt is the result. Also, compliance misunderstands the role that the platform is playing, and analyzes risks inappropriately.
  • The article calls on organizations to challenge vendors on the precise meaning of “autonomy” and “reasoning” in their platform during pitches.

6. Will we ever trust robots?

This article examines whether humans may be able to build trust in the robots that Big Tech is promising in the next few years.

  • Many AI firms are investing in robots for the home, as they are a visually effective use case for AI. One firm, Figure, has raised 675 million USD in the last 2 years. Another, Prosper, hopes to be able to sell a humanoid robot called Alfie in the next few years for between 10’000 USD to 15’000 USD a unit.
  • A key challenge is that AI is currently not advanced enough to help a robot autonomously navigate homes and do household tasks. Currently, 80% of robot tasks would be controlled by remote human operators.
  • This means that robots need to be equipped with privacy enhancing technology when deployed in homes (e.g., to hide faces and valuable objects). It also means that, just as AI firms outsource training data curation to firms that exploit low-wage workers abroad, with the Philippines cited as a country with many of these workers, control of robots might also exploit low-wage workers.
  • Another issue for trust is the robot’s appearance. A robot must evoke trustworthiness in human operators, without appearing too human. The robot need not have legs, though this choice is often made for humanoid robots since they must navigate physical spaces designed for humans. Trustworthiness might also be built in having a robot reveal its flaws, e.g., its desire to be able to move faster.

7. Continuously hardening ChatGPT Atlas against prompt injection attacks

OpenAI announced last month that it deployed a security update to its Atlas browser to defend against prompt injection attacks.

  • OpenAI defines a prompt injection attack as an attack that “targets AI agents by embedding malicious instructions into content the agent processes. Those instructions are crafted to override or redirect the agent’s behavior – hijacking it into following an attacker’s intent, rather than the user’s..
  • The company says that a prompt-injection attack was discovered by internal red-teaming. In particular, OpenAI has developed a process that employs an adversary LLM to craft prompt-injection attacks. The company exploits its white-box access to the internals of the GPT models, so defenses can be applied directly to the models.
  • An advantage of using an LLM is that it caters for prompt-injection attacks that are long duration, meaning that a long series of steps may be taken by an attacker. The attacker LLM uses reinforcement learning to find new prompt injection attacks.
  • OpenAI admits that there is no universal defense mechanism against prompt-injection attacks. They advise users of agent browsers to use agents with limited log-in abilities (meaning that an agent should not have access to user credentials), to manually review all confirmation requests for actions engaged by agents, and to be as explicit as possible in the instructions given to agents since wide latitude in instructions facilitates prompt-injection.

8. H-1B visa applicants face more disruptions amid social media checks

Big Tech firms in the US are concerned about the impact of recent restrictions on access to visas, notably the H-1B and H-4 visas. According to the article, Amazon, Meta, Microsoft, Tata Consultancy Services, and Google are the top five employers of people with H-1B visas.

  • US officials have recently began reviews of social media accounts of applicants in what it says aims to prevent “abuse of the H-1B program. One immigration expert expects the restrictions to lead to several month delays in visa applications, and even to job losses.
  • This move combines with a proposition by the US Department of Labour for a wage protection law, which experts say is designed to dissuade companies from employing H-1B visa applicants.
  • He said that the “policies appear to be for appeasing domestic political constituencies”. However, given the importance of talent to the Tech companies, some workarounds could eventually emerge.
  • Donald Trump has raised the H-1B visa fee to 100’000 USD for new applications but this move is being challenged by the attorney generals in 20 states.

9. From shrimp Jesus to erotic tractors: how viral AI slop took over the internet

The year 2025 saw a flood of AI Slop on the Internet – low-quality, sometimes surreal content, designed to augment the number of clicks.

  • At one point, a cultural trend was the creation of images in the style of Studio Ghibli. This practice could also be considered copyright-violating. Merriam-Webster defined “slop” as its word of the year.
  • The article also describes slop as the “endpoint of an algorithmically determined Internet. In the hands of a very small number of powerful companies, the proliferation of slop serves the interests of these firms by optimizing engagement.
  • Another motivation is for content providers on platforms like YouTube to differentiate themselves in a very crowded market. One content provider says that only 5% of providers monetize a video, and 1% make a living. He says that “artists are [using AI] to generate extreme images of everything, hoping that someone would buy them”.

10. Global uncertainty is reshaping cloud strategies in Europe

The current political climate is encouraging European countries to rethink their relation with major US technical companies, and engage an increased effort towards digital sovereignity.

  • Digital sovereignty is defined as a “strategy aimed at retaining control over data, applications, and infrastructure in accordance with local regulatory laws and requirements. It relates to where data is stored and processed, who can access it, and under which legal jurisdiction.”.
  • The current climate is relatively sour, with the US complaining of legislation in the EU that penalizes US companies. This is coupled with the issue of tariffs, and the fear in the EU that the US will exploit Europe’s dependance on US cloud providers as political leverage.
  • One case mentioned is that of the International Criminal Court’s chief prosecutor, Karim Khan, who reportedly lost access to Microsoft Services after President Donald Trump placed sanctions on the organization.
  • Europe is very reliant on Amazon Web Services, Google, and Microsoft, with these companies currently holding 70% of the cloud market, compared to just 15% for European cloud providers.
  • A key worry for the Europeans is the 2018 CLOUD Act and the Foreign Intelligence Surveillance Act (FISA) that could compel a US company to transfer data to the US government on request. Microsoft and other providers are storing data for European users in Europe, but it is not clear whether this would still prevent Microsoft having to comply with a data transfer order from US authorities.
  • Another initiative being reinforced by the EU authorities is the move to open-source software. Several regional administrations are moving to open-source, replacing Microsoft Office with LibreOffice for example.