Summary
The US Administration is planning an AI legislative framework that will centralize AI policy with the federal government, arguing that a “patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race”. The framework prioritizes innovation, leaves child safety issues to parents, and imposes relatively soft requirements on platform accountability. On the issue of copyright, the framework could allow AI systems to be trained on copyrighted content without author permission, citing “fair use”, something that AI companies have been arguing for a long time. Meanwhile, an MIT Technology Review article ponders what OpenAI technology might be used for following the signing of a deal with the US Department of Defense two weeks ago. One use of AI might be for deciding on targets: the military already uses an AI system to determine targets, but OpenAI’s generative AI could add a conversational and reasoning layer on top of it. Elsewhere, a US congressional body is concerned that the US may be losing the AI race in the long term to China because of China’s use of open-source models. An estimated 80% of US start-ups are using Chinese models, and the country is also well positioned for physical AI thanks to its mass data collection efforts for the development of humanoid robots.
On agents, Visa is investigating how AI agents may make payments on behalf of users. One challenge is to link a transaction to an accountable human, since for compliance banks are obliged to implement checks related to fraud, audit trails, and customer consent. An MIT Technology Review article looks at the current craze around the open-source agent platform OpenClaw in China, and how service companies are sprouting up to help people install the platform. Meanwhile, Google has developed an approach to training multi-agent systems that uses a pool of existing agents, allowing student agents to infer rules rather than having administrators explicitly configure them. The approach uses the same reinforcement learning techniques used by foundational AI models. An InfoWorld article looks at the role of AI in software development, arguing that AI cannot replace software developers. The key problem with AI-generated code is that it over-abstracts, meaning that code can be over-complicated for the problem being solved. This creates technical debt, which makes code harder for humans to review.
A TechCrunch article reports on the disconnect between the current optimism shown by Nvidia and the nervousness shown by Wall Street. Nvidia CEO Jensen Huang says he expects the company to see 1 trillion USD in sales of Blackwell and Vera Rubin chips before the end of 2027. But for Futurum CEO Daniel Newman, the issue for the markets is that AI is moving so fast that we do not know how things are going to settle. He writes: “The markets hate uncertainty. The speed of innovation has actually created a great new uncertainty that I think most people never expected.” Elsewhere, the CEO of BlackRock, Larry Fink, writes that the AI boom is leading to an increasing gulf in revenue between companies. He worries that the benefits of the technology remain with a small number of companies.
Finally, a video released by the Norwegian Consumer Council to raise awareness for a campaign against “enshittification” has gained much publicity. The term “enshittification” refers to the deliberate and gradual degradation of digital services mostly through advertisements and other revenue generation methods like premium features, software update scams, and AI chatbots.
Table of Contents
1. Hustlers are cashing in on China’s OpenClaw AI craze
2. ‘Another internet is possible’: Norway rails against ‘enshittification’
3. Google finds that AI agents learn to cooperate when trained against unpredictable opponents
4. The AI coding hangover
5. Why Wall Street wasn't won over by Nvidia's big conference
6. Where OpenAI’s technology could show up in Iran
7. Visa prepares payment systems for AI agent-initiated transactions
8. China's open-source dominance threatens US AI lead, US advisory body warns
9. Trump’s AI framework targets state laws, shifts child safety burden to parents
10. AI boom risks widening wealth divide, says BlackRock’s Larry Fink
1. Hustlers are cashing in on China’s OpenClaw AI craze
This article looks at the current craze around the open-source agent platform OpenClaw in China, and how service companies are sprouting up to help people install the platform.
- One engineer described in the article opened a service company and, in just one month, received 7,000 installation orders and hired 100 employees. The company charges 34 USD for each OpenClaw installation.
- Despite the ease of use of OpenClaw, its installation still requires IT expertise. Apart from service companies offering installation support, the major AI model providers are also promoting their models by claiming that they are OpenClaw compatible.
- Another company described installs OpenClaw on secondhand MacBooks. Given the security risks associated with running autonomous agents on their main computer, many users prefer to run OpenClaw on a separate device like a Mac Mini.
- At the same time, the Chinese cybersecurity regulator CNCERT has warned users of the security risks in using OpenClaw, highlighting that it increases exposure to data breaches.
2. ‘Another internet is possible’: Norway rails against ‘enshittification’
A video released by the Norwegian Consumer Council to raise awareness for a campaign against “enshittification” has gained much publicity.
- The term “enshittification” refers to the deliberate and gradual degradation of digital services mostly through advertisements and other revenue generation methods like premium features, software update scams, and AI chatbots.
- The character in the video is an “enshittificator”. He has the line “What I do is I take things that are perfectly fine and I make them worse”.
- The video was publicly funded by the Norwegian government along with over 70 groups in Europe and the US, including trade unions and human rights organisations.
- The Norwegian council said they “wanted to show that you wouldn’t accept this in the analogue world”.
- Another commentator writes: “Services don’t need to be enshittified if we have real competition, if you can choose as a consumer which services you use, and if the market will better regulate all these practices.”
3. Google finds that AI agents learn to cooperate when trained against unpredictable opponents
This article looks at the challenge of training systems that use multiple agents.
- The current approach is to define a set of hardcoded coordination rules. The problem with this approach is its limited scalability, and large systems often do not have centralized management. Platforms currently used include LangGraph, CrewAI, and AutoGen.
- Another issue is potential conflict between different agents’ goals. For instance, two agents programmed to optimize prices might compete against each other by continuing to lower prices in order to maximize their own specific rewards. This phenomenon is known as mutual defection, and is a variant of the prisoner’s dilemma problem from game theory.
- A new approach to training multi-agent systems by Google trains agents against a pool of existing agents, allowing the student agents to infer rules – rather than having to be explicitly taught them. The approach uses the same reinforcement learning techniques already used by today’s foundational AI models.
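The mutual defection dynamic described above can be illustrated with a minimal sketch (hypothetical, not from the article): two pricing agents that each undercut the other end up racing to the floor price, even though both would earn more by holding prices high.

```python
# Hypothetical sketch of mutual defection between two pricing agents.
# Each agent's locally optimal move is to undercut its rival by 1,
# which drags both prices down to the floor.

def best_response(my_price: int, rival_price: int) -> int:
    # Undercutting the rival wins the sale, so the greedy move is to
    # price just below the rival, never going below a floor of 1.
    return max(1, min(my_price, rival_price - 1))

def simulate(price_a: int = 10, price_b: int = 10, rounds: int = 20):
    for _ in range(rounds):
        price_a = best_response(price_a, price_b)
        price_b = best_response(price_b, price_a)
    return price_a, price_b

print(simulate())  # → (1, 1): both agents defect down to the floor price
```

Both agents would be better off keeping prices at 10, but neither can trust the other not to undercut — the classic prisoner’s dilemma structure the article refers to.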
4. The AI coding hangover
This InfoWorld article is another in a series that looks at the role of AI in software development, arguing that AI cannot replace software developers.
- It is understandable why organizations may be tempted to replace developers for cost reasons. Given AI’s ability to write functions, the temptation for organizations is to push for AI to write larger services.
- The key problem with AI-generated code is that it over-abstracts, meaning that code can be over-complicated for the problem being solved. This creates technical debt, which makes code harder for humans to review.
- Another challenge for AI chatbots is to simultaneously reason about performance, security and deployment issues.
- Compliance checking is also more challenging since AI code may include insecure library code or errors. An increased volume of AI code slows down compliance teams.
- The author comments: “AI excels at automating tasks. It is not good at owning outcomes”.
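The over-abstraction problem can be illustrated with a hypothetical example (not taken from the article): an AI assistant might wrap a trivial calculation in a class hierarchy, where a single plain function would be easier to review and maintain.

```python
# Hypothetical illustration of over-abstraction in AI-generated code.

# Over-abstracted version: a strategy class hierarchy for one operation.
class DiscountStrategy:
    def apply(self, price: float) -> float:
        raise NotImplementedError

class PercentageDiscount(DiscountStrategy):
    def __init__(self, percent: float):
        self.percent = percent

    def apply(self, price: float) -> float:
        return price * (1 - self.percent / 100)

# What the problem actually needed: one plain function.
def discounted(price: float, percent: float) -> float:
    return price * (1 - percent / 100)

# Both produce the same result; only the second is cheap to review.
assert PercentageDiscount(10).apply(200) == discounted(200, 10) == 180.0
```

The extra indirection in the first version is the technical debt the article warns about: it adds review surface without adding capability.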
5. Why Wall Street wasn't won over by Nvidia's big conference
This article reports on the disconnect between the current optimism shown by Nvidia and nervousness shown by Wall Street.
- Nvidia CEO Jensen Huang says he expects the company to see 1 trillion USD in sales of Blackwell and Vera Rubin chips before the end of 2027. Nvidia’s revenue increased by 73% year-over-year last quarter.
- Huang also said that the AI agent ecosystem is worth 35 trillion USD and the market for physical AI and robotics is worth 50 trillion USD.
- For Futurum CEO Daniel Newman, the issue for the markets is that AI is moving so fast that we do not know how things are going to settle: “The markets hate uncertainty. The speed of innovation has actually created a great new uncertainty that I think most people never expected.”
- Another issue is the omnipresence of Nvidia. For one senior equity strategist, the “economy is sort of orbiting around Nvidia” which is building the infrastructure of nearly all other companies on the stock exchange.
6. Where OpenAI’s technology could show up in Iran
This MIT Technology Review article ponders what OpenAI technology might be used for following the signing of a deal with the US Department of Defense two weeks ago, giving the military access to AI in classified environments.
- OpenAI CEO Sam Altman says that he got the military to agree not to use AI for autonomous weapons or for mass surveillance, but the article claims that the guidelines around these promises are vague.
- The motivation for OpenAI in signing the deal is also questioned. One reason could be the need for OpenAI to generate revenue. Another could be ideological, notably that OpenAI believes the military needs advanced AI access to compete with China.
- Military adoption of OpenAI technology will take time because of the effort needed to integrate it with existing military IT systems.
- One possible use of AI will be for deciding on targets. The military already uses an AI system called Maven to determine targets, but OpenAI’s generative AI could add a conversational and reasoning layer on top of this.
- Another application is time-sensitive analysis of incoming drones for defense measures to take.
7. Visa prepares payment systems for AI agent-initiated transactions
Visa is investigating how AI agents may make payments on behalf of users.
- Its “Agentic Ready” program for AI-initiated payments may soon be tested in Europe. Commerzbank and DZ Bank are among the banks collaborating with Visa.
- Visa envisages permitting an agent to make a payment that forms part of a regular series of payments, or effecting a payment when specified conditions are met, e.g., stock in a warehouse is low and new supplies need to be ordered.
- Current payment systems are linked to human identity. A human initiates the transaction and the system verifies his or her ownership of the account (card). In agentic AI, the key challenge is to link the transaction to an accountable human.
- Another challenge for Visa is compliance. Banks are obliged to implement checks related to fraud, audit trails, and customer consent.
- Regulators are also looking at the implications of AI payment systems especially since recent reports suggest that AI has led to an increase in financial fraud.
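The conditional-payment and accountability ideas above can be sketched in a few lines (a hypothetical illustration, not Visa’s actual API): an agent’s payment is only released when a human-defined mandate links it to an accountable user, a consented spending cap, and a triggering condition.

```python
# Hypothetical sketch: gating an agent-initiated payment behind a
# human-granted mandate (accountable user, spending cap, trigger condition).
from dataclasses import dataclass

@dataclass
class Mandate:
    user_id: str          # the accountable human behind the agent
    max_amount: float     # spending cap the user has consented to
    condition: callable   # trigger, e.g. "warehouse stock is low"

def agent_payment_allowed(mandate: Mandate, amount: float, state: dict) -> bool:
    # Release the payment only if the user's trigger condition holds
    # and the amount stays within the consented limit.
    return mandate.condition(state) and amount <= mandate.max_amount

# Usage: reorder supplies automatically when stock drops below 20 units.
restock = Mandate("user-123", max_amount=500.0,
                  condition=lambda s: s["stock"] < 20)
print(agent_payment_allowed(restock, 350.0, {"stock": 12}))   # True
print(agent_payment_allowed(restock, 350.0, {"stock": 80}))   # False
```

In a real deployment the mandate record would also feed the fraud, audit-trail, and consent checks that banks are obliged to perform.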
8. China's open-source dominance threatens US AI lead, US advisory body warns
A US congressional advisory body is concerned that the US may be losing the AI race in the long term to China because of China’s use of open-source models.
- Chinese models from Alibaba, Moonshot and MiniMax are among the most popular on the HuggingFace and OpenRouter platforms. Also, 80% of US start-ups are believed to be using Chinese models.
- The report states that China’s “open ecosystem enables China to innovate close to the frontier despite significant compute constraints”.
- The report also admits “Chinese labs have narrowed performance gaps with top Western large language models”.
- China is also well positioned for physical AI thanks to its mass data collection efforts for development of humanoid robots and autonomous driving.
- The advisory commission is following how China is using AI in biotech, quantum computing and advanced materials.
9. Trump’s AI framework targets state laws, shifts child safety burden to parents
The US Administration is planning an AI legislative framework that will centralize AI policy with the federal government, arguing that a “patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race”.
- The framework follows an executive order last December that asked the US Commerce Department to compile a list of “onerous” state AI laws. States on the list would be threatened with the loss of federal funds. The list has not yet been published.
- New York (RAISE Act) and California (SB-53) are among the states to have enacted laws regulating AI.
- The new legal framework prioritizes innovation, leaving child safety issues to parents; one official wrote that “parents are best equipped to manage their children’s digital environment and upbringing”. Platforms will have relatively soft requirements around accountability.
- On the issue of copyright, the framework seeks a middle ground in which AI systems may be trained on copyrighted content without author permission, citing “fair use”. This is something that AI companies have been arguing for a long time.
10. AI boom risks widening wealth divide, says BlackRock’s Larry Fink
In a letter to investors, the CEO of BlackRock, Larry Fink, writes that the AI boom is leading to an increasing gulf in revenue between companies.
- He highlights that the benefits of the technology remain with a small number of companies: “When market capitalization rises but ownership remains narrow, prosperity can feel increasingly distant to those on the outside.”
- He also references the potential AI bubble. Last October, the Bank of England warned there were growing risks of a “sudden correction” in global markets linked to the large valuations of AI companies. Nvidia for instance is now valued at 4.3 trillion USD.
- There is also criticism of the “circular deals” between Nvidia and other AI companies. There have been notable cases where Nvidia invested in companies, with those companies then purchasing Nvidia chips.
- Fink writes that “AI will create significant economic value. Ensuring that participation in that growth expands alongside it is both the challenge and the opportunity.”