Summary
The fallout for AI from Donald Trump’s election continues. Several industry leaders have called on the administration to reverse the funding cuts impacting the Directorate for Technology, Innovation and Partnerships, as well as the National Institute of Standards and Technology and the National Science Foundation. The cutbacks have an ideological motivation, as the new administration sees many funded projects as wasteful and “woke”. Meanwhile, the Department of Homeland Security has admitted that it wants to monitor the social media activities of immigrants for evidence to deny visas, such as pro-Palestinian statements or attendance at protests. The department is suspected of using AI-based video analysis to identify individuals through attributes like gender, body size, clothing style, or accessories. Because the tool does not perform facial recognition, it sidesteps the legal restrictions that apply to that technology. The American Civil Liberties Union called the tool a “potentially authoritarian technology”. Elsewhere, Reuters reports that the Trump administration plans to revoke Joe Biden’s AI Diffusion Rule, which restricts the export of semiconductor technologies. The rule was motivated by fears that high-performance computing helps develop weapons of mass destruction.
In AI development, an MIT Technology Review article looks at expectations around general-purpose AI humanoid robots. Bank of America predicts that there will be one billion humanoids at work by 2050. However, technical roadblocks remain, such as power supply and physical dexterity, and new safety regulations will have to be enacted for robots to work alongside humans. An AWS survey of Canadian organizations shows that nearly 40% of organizations are moving generative AI from testing to production environments. The survey also shows that the role of Chief AI Officer is becoming entrenched, with the role being responsible for up-skilling and implementing an organizational AI strategy. Meanwhile, AI leaders are writing to senators to insist on the need for a modernized energy infrastructure in the US to support AI development. Microsoft’s Brad Smith also called for the government to release its huge datasets to Big Tech for model training, saying that the federal government “remains one of the largest untapped sources of high-quality and high-volume data”.
Research published in Science Advances shows that, as in human societies, social conventions emerge in communities of autonomous AI agents. The researchers examined whether AI safety conventions or biases that are not exhibited when an individual agent is deployed might develop in the agent through its interaction with other agents. The researchers also demonstrated that a “committed minority” of malevolent or benevolent agents can flip a convention.
Big Tech companies in the news were Microsoft and Meta. Microsoft laid off 6,000 employees, which corresponds to 3% of its workforce. The layoffs were needed to free up capital for the huge cost of AI development at the company. Meta admitted that Facebook has “lost the mindshare and momentum” to TikTok, and that its only differentiation strategy is to become the “default discovery surface” for information.
In cybersecurity, researchers from ETH Zurich have discovered a security vulnerability that affects all Intel processors released over the last six years. The vulnerability is linked to the speculative execution subsystem and allows data leakage at a rate of 5,000 bytes per second.
Table of Contents
1. IBM CEO urges the Trump administration to increase — not cut — federal AI R&D funding
2. Why the humanoid workforce is running late
3. AWS Study: Generative AI Adoption Index
4. New court filing shows that Meta execs agreed that Facebook was losing to TikTok
5. Nvidia welcomes Trump’s proposal to rescind global chip restrictions
6. AI leaders to urge senators to speed power supply permitting, boost government data access
7. Emergent social conventions and collective bias in LLM populations
8. How a new type of AI is helping police skirt facial recognition bans
9. Microsoft Lays Off About 3% Of Workers As Company Adjusts For AI Business
10. New Vulnerability Affects All Intel Processors From The Last 6 Years
1. IBM CEO urges the Trump administration to increase — not cut — federal AI R&D funding
The CEO of IBM, Arvind Krishna, and several industry leaders are calling on the US government to increase funding for AI and other critical technologies. The Trump-led government has recently made significant cuts to federal research programs and grants. The article reports that many jobs are under threat at the Directorate for Technology, Innovation and Partnerships (TIP), the National Institute of Standards and Technology (NIST) and the National Science Foundation (NSF). The government is expected to cut the NSF’s budget by 50% in the upcoming fiscal year, which amounts to billions of dollars and will have a huge impact on AI research projects. The cutbacks have an ideological motivation, as officials in the government see many NSF projects as wasteful and “woke”. The article points out that the U.S. Joint Economic Committee has argued that federally funded research yields annual returns of 25% to 40%, far higher than the 15% to 27% returns of top-quartile VC funds. The IBM CEO argues that current federal funding is “close to historic lows in terms of percentage of GDP”.
2. Why the humanoid workforce is running late
This article looks at the large variance in expectations around general-purpose AI humanoid robots, also known as physical AI. Nvidia CEO Jensen Huang claims that humanoids will be the largest robotics market ever seen, and that they will essentially be capable of most physical work. Bank of America predicts that there will be one billion humanoids at work by 2050. The startup Figure AI is currently raising a 1.5 billion USD funding round. The company is working with car manufacturer BMW and has deployed robots to move car parts in only 12 months.
On the other hand, many observers underline key limitations. First, new safety regulations will have to be defined and enacted for robots to work alongside humans. Second, programming robots – or developing AI models – so that robots behave correctly remains a challenge. One example cited by the article is a robot capable of watering plants: when asked to water a human, the robot went ahead and tried to pour water over the person. Another challenge is that physically strong robots require large batteries, which imposes manufacturing constraints. A final challenge is the dexterity that robots would require for different tasks. Fundamentally, robots can only be designed for specific tasks; those that perform well in one area will not perform well elsewhere. This will make robot adoption slow and industry-specific.
3. AWS Study: Generative AI Adoption Index
This report presents the results of an AWS survey on the status of generative AI in Canadian organizations in 2025. 421 IT decision makers were questioned in January and February of this year, and several lessons emerge.
- Generative AI has surpassed cybersecurity in this year’s IT budgets. Generative AI is a top budget priority for 42% of organizations, with cybersecurity top for 34%.
- The role of Chief AI Officer (CAIO) is becoming entrenched in organizations. 52% of organizations surveyed have already appointed a CAIO, with 23% more planning to do so by 2026. A key role of the CAIO is to define and implement a change management strategy for the organization. 82% of organizations admit to not having a strategy yet, but that number is expected to fall significantly this year.
- The current trend in AI is a move from experimentation and POCs towards deployment in production. 39% of organizations have deployed AI tools in production or integrated AI into IT workflows. 21% of organizations are using in-house solutions that rely on AI models they have trained from scratch; 35% are using out-of-the-box AI applications; 52% are developing custom applications based on out-of-the-box AI models; and 48% are developing custom AI applications over AI models that the organizations themselves have fine-tuned.
- The move of AI to production requires up-skilling and hiring. 50% of organizations have a training plan, and 18% more will have one this year.
4. New court filing shows that Meta execs agreed that Facebook was losing to TikTok
Documents released in one of Meta’s current court cases indicate that Meta acknowledged the dominance of TikTok over Facebook as early as 2022. In the case, the US government is attempting to show that Meta violated competition law by acquiring Instagram and WhatsApp with the intent of creating a social networking monopoly. For Zuckerberg, the success of TikTok is largely due to its content algorithm, through which friends on the platform see the same memes and can therefore discuss them together. He says that this creates a “feeling of shared context”, which is why Facebook has “lost the mindshare and momentum”. Facebook is still the biggest platform in terms of number of users, but TikTok users spend more time on their platform than Facebook users do on theirs. For Meta, the differentiation strategy is to become the “default discovery surface” for information. However, TikTok also has a lead here: the platform overtook YouTube for watch time in 2021, and children under 18 spent 60% more time on TikTok than on YouTube in 2023. Though TikTok originally supported only short videos, the platform now allows videos up to 60 minutes long. Meta also acknowledges that TikTok leads in content creation.
5. Nvidia welcomes Trump’s proposal to rescind global chip restrictions
This article reports on Reuters claims that the Trump administration is going to revoke the AI Diffusion Rule put in place by President Joe Biden. The AI Diffusion Rule restricts the export of semiconductor technologies to certain countries, out of fear that some of them seek to exploit high-performance computing to develop weapons of mass destruction. Nvidia commented that with “the AI Diffusion Rule revoked, America will have a once-in-a-generation opportunity to lead the next industrial revolution and create high-paying U.S. jobs, build new U.S.-supplied infrastructure, and alleviate the trade deficit”. The announcement also saw a rise in Nvidia’s share price. Countries that stand to gain from a repeal of the rule include Saudi Arabia and the United Arab Emirates, both of which recently hosted a visit by Donald Trump and are eager to develop their AI infrastructures.
6. AI leaders to urge senators to speed power supply permitting, boost government data access
In written statements to US lawmakers, several Big Tech leaders have stressed the need to improve the energy infrastructure in the US to support AI development. Microsoft president Brad Smith wrote that “America’s advanced economy relies on 50-year-old infrastructure that cannot meet the increasing electricity demands driven by AI”. AMD CEO Lisa Su wrote of the need for “rapidly building data centers at scale and powering them with reliable, affordable, and clean energy sources”, and of the need to move AI from the cloud onto user devices. CoreWeave CEO Michael Intrator wrote of the huge demand that AI is placing on the energy grid. Data centers’ consumption could rise from 4.4% of US electricity in 2023 to 12% by 2028: “Millions of hours of training, billions of inference queries, trillions of model parameters, and continuous dynamic scaling are all driving an insatiable hunger for compute and energy that borders on exponential”. Microsoft’s Brad Smith also called for the government to release its huge datasets to Big Tech for model training, saying that the federal government “remains one of the largest untapped sources of high-quality and high-volume data”, and that access to that data is critical for US firms to maintain their lead in AI.
7. Emergent social conventions and collective bias in LLM populations
Social conventions determine how humans coordinate among themselves and form groups. This paper defines social conventions as “unwritten, arbitrary patterns of behavior that are collectively shared by a group”. Examples range from handshakes and bows to language and moral judgements. Social conventions are not planned; they emerge over time through arbitrary and random interactions between individuals. Collective interaction can both suppress and amplify individual traits. This research examines whether such conventions can arise in a community of language model agents. The fundamental question is whether conventions or biases that are not exhibited when an individual agent is deployed might develop in the agent through its interaction with other agents. The analysis is based on a game-theoretical approach (following Wittgenstein’s general model of linguistic conventions) in which repeated interaction between two individuals (here, AI agents) leads to pairwise agreement on conventions. In the experiment, agents use a pre-programmed prompt for the initial exchange, and each agent maintains a history of its interactions with other agents.
The results show that group-wide conventions spontaneously emerge among agents. Initial interactions have a low probability of establishing conventions, since an agent is unlikely to meet the same agent twice early on, but convergence nonetheless arises after a relatively short time: as early as the 15th interaction in a population of 200 agents. Once a global convention emerges, the population generally adheres to it. The researchers then examined whether a “committed minority” of (malevolent or benevolent) agents could flip a convention. They discovered that tipping points exist beyond which the minority can impose its convention. The tipping point mainly depends on the distance between the new convention and the old one, measured as the likelihood that an agent would consider the new convention given its original prompt. The language models used in the experiment were Llama-2-70b-Chat, Llama-3-70B-Instruct, Llama-3.1-70B-Instruct, and Claude-3.5-Sonnet.
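The dynamics described above resemble the classic “naming game” from the conventions literature. As a rough illustration only – not the paper’s LLM-based setup – here is a minimal Python sketch in which simple memory-based agents stand in for language models; the agent count, step count, and committed fraction are arbitrary choices:

```python
import random

def naming_game(n_agents=50, n_committed=0, steps=50_000, seed=0):
    """Minimal naming game: agents converge on a shared name (a "convention")
    through random pairwise interactions. Committed agents always say "B"
    and never update their memory."""
    rng = random.Random(seed)
    committed = set(range(n_committed))
    # Flexible agents start with a private name; committed agents hold "B".
    memories = [{"B"} if i in committed else {f"name-{i}"}
                for i in range(n_agents)]
    for _ in range(steps):
        speaker, listener = rng.sample(range(n_agents), 2)
        word = rng.choice(sorted(memories[speaker]))
        if word in memories[listener]:
            # Success: both collapse their memory to the agreed word.
            if speaker not in committed:
                memories[speaker] = {word}
            if listener not in committed:
                memories[listener] = {word}
        elif listener not in committed:
            # Failure: the listener learns the new word.
            memories[listener].add(word)
    return memories

# Without a committed minority, the population settles on one arbitrary name.
final = naming_game()
# With a large committed minority (here 15 of 50, i.e. 30%), the population
# is pushed to the minority's convention "B".
flipped = naming_game(n_committed=15)
```

The sketch reproduces the two qualitative findings: conventions emerge spontaneously from pairwise interactions, and a sufficiently large committed minority flips the population; it does not model the paper’s prompt-based distance measure or its tipping-point estimates.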
8. How a new type of AI is helping police skirt facial recognition bans
This MIT Technology Review article looks at Track, a technology developed by the company Veritone that uses AI-based video analysis to identify individuals through attributes like gender, body size, clothing style, hair color, and accessories. A key feature of the tool is that it does not perform facial recognition, so it can avoid legal restrictions on facial recognition while still identifying people. Veritone expects to increase the range of attributes included in its analysis, and the company already has around 400 clients, including police departments and federal authorities. Further, the technology is expected to be able to process live video feeds within the next year. The company reports that the tool is being used by US attorneys at the Department of Justice, by the Department of Homeland Security (which oversees immigration), and by the Department of Defense. The Department of Homeland Security has admitted that it wants to monitor the social media activities of immigrants for evidence to deny visas, and Immigration and Customs Enforcement has already detained people and denied visas following pro-Palestinian statements or appearances at protests. Facial recognition technology is increasingly subject to regulatory limits following a number of wrongful arrests caused by algorithmic errors. Concerning Track, a spokesman at the American Civil Liberties Union called the tool a “potentially authoritarian technology”.
9. Microsoft Lays Off About 3% Of Workers As Company Adjusts For AI Business
Microsoft announced that it is laying off 6,000 employees, which corresponds to 3% of its workforce. The layoffs are due to AI, but the company argues that they are not a result of AI automating human tasks; rather, they reflect the need to free up capital for the huge cost of AI development at the company. CEO Satya Nadella said that the company expects to spend 80 billion USD on AI-related efforts in 2025. He also said that up to 30% of company code is now written by AI, and that the company’s AI strategy is to develop smaller specialized models for specific tasks and to integrate AI into key business products like Microsoft 365, Azure, and Dynamics 365. The markets responded favorably to the job losses, with the company’s share price closing at its highest value in 2025. Its quarterly revenues of 70 billion USD exceeded Wall Street expectations. At the same time, one analyst told Reuters that he expects Microsoft to let go another 10,000 employees to compensate for AI expenditure.
10. New Vulnerability Affects All Intel Processors From The Last 6 Years
Researchers from ETH Zurich have discovered a security vulnerability that affects all Intel processors released over the last six years. The vulnerability, named Branch Predictor Race Conditions (BPRC), is linked to speculative execution on processors. The idea of speculative execution is to improve overall system performance by anticipating the instructions to execute and executing them while data is being fetched from slower memory systems, thereby maximizing CPU throughput. However, a problem arises when the processor switches between user processes, because the associated permission changes lag behind instruction prediction by a few nanoseconds. Attackers can inject code that triggers speculative execution and gives them access to memory regions belonging to other processes. The researchers show that data leakage can happen at a rate of 5,000 bytes per second. The vulnerability is especially a problem for multi-tenant cloud environments, where different users’ processes share the same hardware. Unfortunately, there is no effective fix for the vulnerability since, according to the researchers, the problem stems from “fundamental architectural flaws”. Only architectural changes, such as in-order execution or use of the Intel SGX secure architecture, will address the problem.
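The race at the heart of the vulnerability – speculation briefly consulting stale permissions after a privilege switch – can be illustrated with a deliberately simplified toy model. The class, the privilege flags, the cycle-counted lag window, and the byte-level “leak” below are all illustrative assumptions, not the actual BPRC mechanism:

```python
SECRET = b"top-secret"  # stands in for privileged kernel memory

class ToyCPU:
    """Toy model of the race: after a privilege switch, the permission level
    consulted by speculative loads lags behind by a few "cycles", so
    predicted instructions briefly run with stale permissions."""

    def __init__(self, lag_cycles=3):
        self.privilege = "kernel"              # architectural privilege
        self.speculative_privilege = "kernel"  # stale copy used by speculation
        self.lag = lag_cycles
        self.pending = 0

    def switch_to(self, level):
        self.privilege = level
        self.pending = self.lag  # the speculative copy updates late

    def tick(self):
        if self.pending > 0:
            self.pending -= 1
            if self.pending == 0:
                self.speculative_privilege = self.privilege

    def speculative_load(self, index):
        # A speculative load is checked against the *stale* privilege level.
        if self.speculative_privilege == "kernel":
            return SECRET[index]  # readable during the lag window
        return None

cpu = ToyCPU()
cpu.switch_to("user")  # drop to user mode; permissions lag by 3 cycles
leaked = bytearray()
for i in range(len(SECRET)):
    byte = cpu.speculative_load(i)
    if byte is None:
        break          # permissions finally caught up
    leaked.append(byte)
    cpu.tick()
# With a 3-cycle lag, exactly 3 bytes are read before the update lands.
```

In the real attack the stale state lives in the branch predictor and the data is exfiltrated through microarchitectural side channels; the toy model captures only the timing gap between a privilege switch and the permission state that speculative work consults.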