Growing Concern About Government Unpreparedness for AI Upheaval

Claude Mythos Already in the Hands of an Unauthorized Actor?

Posted on April 25th, 2026

Summary

Audio Summary

Several recent articles express concern about the societal impact of AI. With the International Monetary Fund warning of an impending recession due to the Middle East conflict, companies may have an even greater incentive to replace workers with AI. The Guardian argues that governments need to focus on the three Rs: re-skilling, re-industrialization and redistribution. A key worry is that any jobs created by the efficiency improvements brought by AI will be more menial than the jobs lost. MIT Technology Review compares AI with the COVID-19 pandemic. The pandemic transformed society by reorganizing work, and governments mobilized research into vaccines. What is missing for AI is a similar mobilization plan to define what millions of out-of-work truck drivers, software engineers, paralegals, journalists and others can do. Another article summarizes some of the popular resistance activities in the US against the surge in AI. For instance, major resistance is growing in communities where data centers are being built: people fear higher utility bills, destruction of rural land, and pollution. Protests stalled the construction of 98 billion USD worth of data centers in the second quarter of 2025 alone.

Bloomberg reported that an unauthorized group has gained access to Claude Mythos – the model that Anthropic refused to release to the public because of its strong capacity to detect zero-day vulnerabilities in code. This is a dangerous capability in the hands of a bad actor. Although the US administration declared Claude AI models modellus non grata after Anthropic refused to sign a deal with the US Department of Defense, CEO Dario Amodei was in the White House to discuss how Claude Mythos could help defend critical infrastructure, such as the electrical grid and financial services, against cyberattacks.

The 2026 AI Index from Stanford University’s Institute for Human-Centered Artificial Intelligence reports that the top AI models are not plateauing in terms of performance, but rather performance is continuing to increase even compared to last year. Energy consumption by models is increasing. Worldwide, data centers now consume 29.6 gigawatts of power, which is enough to power New York State at peak demand. The water used by OpenAI for the GPT-4o model alone exceeds the drinking water needed for 12 million people. Also, the AI supply chain is under pressure. The US has 5427 data centers, which is more than 10 times that of any other country. At the same time, the leading AI chips are fabricated by TSMC in Taiwan.

Regarding Big Tech, a Guardian article looks at the track record of Apple CEO Tim Cook on privacy matters. Cook is stepping down after 15 years in charge of the firm. Cook has always highlighted Apple’s commitment to user privacy, calling privacy a “fundamental right”. At the same time, in 2018, Apple transferred the iCloud accounts of Chinese users to a state-backed data center. The Guizhou-Cloud Big Data (GCBD) center allows Chinese government officials to search through user data. Amnesty International criticized this move, saying that it has helped China crack down on dissidents. Meanwhile, Google said that it blocked 8.3 billion policy-violating advertisements globally in 2025. It attributes the increase in the number of ads blocked to AI, as well as an 80% reduction in the number of incorrect suspensions.

On the topic of AI adoption, an InfoWorld article warns that leading hyper-scalers such as AWS, Microsoft Azure, and Google Cloud may be losing ground to newer, cheaper competitors because they still assume that corporate customers prioritize vendor reputation. Established cloud providers charge up to six times more than newer cloud services for the same compute capacity. Another article looks at the challenge of AI governance for unstructured data – sound, video, email, and so on. Language models help governance because they can classify and categorize unstructured documents, detect duplicates, and tag documents for data retention policies. One of the biggest open risks for governance is “inference exposure”, where an AI correctly answers a question by accessing a sensitive document that the user should not be able to see.

1. Want to understand the current state of AI? Check out these charts.

This article looks at some of the lessons from the 2026 AI Index from Stanford University’s Institute for Human-Centered Artificial Intelligence. The Stanford report evaluates the state of AI worldwide.

  • One key lesson is that the top AI models are not plateauing in terms of performance, but rather performance is continuing to increase even compared to last year.
  • Energy consumption by models is increasing. Worldwide, data centers now consume 29.6 gigawatts of power, which is enough to power New York State at peak demand. The water used by OpenAI for the GPT-4o model alone exceeds the drinking water needed for 12 million people.
  • The AI supply chain is under pressure. The US has 5427 data centers, which is more than 10 times that of any other country. At the same time, the leading AI chips are fabricated by TSMC in Taiwan.
  • The gap between the US and China is narrowing significantly. Anthropic’s models currently lead in performance, closely followed by xAI, Google, and OpenAI. Chinese models from DeepSeek and Alibaba are only slightly behind. China leads in patents and robotics.
  • AI models still exhibit “jagged intelligence” – performing very well in some domains and poorly in others. For instance, robots currently only succeed in executing around 12% of household tasks.
  • Over half of all people around the world use AI, and 88% of organizations have adopted it. AI is said to bolster productivity by 14% in customer service and by 26% in software development. A Stanford study suggests that demand for junior programmers (under 25 years of age) has fallen by 20% since 2022 – though factors other than AI may be contributing to this.

2. Google is now targeting bad ads over bad actors

Google has said that it blocked 8.3 billion policy-violating advertisements globally in 2025. It attributes the increase in the number of ads blocked to AI, as well as an 80% reduction in the number of incorrect suspensions.

  • Over 1.7 billion ads were removed in the US, and 3.3 million advertiser accounts were removed. The main reasons cited were ad network abuse, misrepresentation, and sexual content.
  • 484 million ads were blocked in India in 2025, with trademarks, financial services, and copyright issues constituting the top violations.
  • The approach reflects a push by Google to integrate Gemini AI into its key products – including testing for policy violations and cybersecurity threats.

3. The hyper-scalers are pricing themselves out of AI workloads

This InfoWorld article warns that leading hyper-scalers such as AWS, Microsoft Azure, and Google Cloud may be losing ground to newer, cheaper competitors because they still assume that corporate customers prioritize vendor reputation.

  • Established cloud providers are charging up to six times more than newer cloud services for the same compute capacity. As an example, Nvidia H100-class compute costs about 2.01 USD per hour on Spheron versus approximately 6.88 USD per hour on AWS.
  • The article argues that traditional cloud providers assume that companies will continue to pay for services using the traditional cloud model. AI workloads, however, require real-time monitoring of utilization, throughput, latency, and token usage, which forces organizations to be more rational with their computing budgets.
  • Another aspect is that there are now far more credible alternatives to the established cloud providers. The new workloads and applications that AI brings also make this an easier moment for clients to move away from existing providers.
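Taking the hourly rates quoted above at face value, the arithmetic of that H100 example can be checked directly. Note the quoted pair works out to roughly a 3.4× premium, below the “up to six times” headline figure, which presumably covers other configurations. The 720-hour month is a simplifying assumption for an always-on workload, not a figure from the article:

```python
# Back-of-the-envelope comparison of the hourly H100-class GPU rates
# quoted in the article (USD per GPU-hour).
SPHERON_RATE = 2.01        # USD/hour (quoted in the article)
AWS_RATE = 6.88            # USD/hour (quoted in the article)
HOURS_PER_MONTH = 24 * 30  # assumed always-on workload, 720 hours

premium = AWS_RATE / SPHERON_RATE
monthly_spheron = SPHERON_RATE * HOURS_PER_MONTH
monthly_aws = AWS_RATE * HOURS_PER_MONTH

print(f"Price premium: {premium:.2f}x")                     # ~3.42x
print(f"Monthly cost on Spheron: ${monthly_spheron:,.2f}")  # $1,447.20
print(f"Monthly cost on AWS:     ${monthly_aws:,.2f}")      # $4,953.60
```

For a single always-on GPU, that gap is roughly 3,500 USD per month, which scales linearly with fleet size and makes the article’s point about budget rationality concrete.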

4. AI is destroying jobs – and the energy crisis could make that much worse | Larry Elliott

This Guardian article looks at whether the societal evolution driven by AI is similar to previous technological evolutions. The identifying feature of a revolution is when technology forces capitalism to reinvent itself.

  • The article recognizes that the potential upheaval is greater than that of the 18th century’s industrial revolution, because the technology is replacing cognitive work rather than manual labor.
  • Previous revolutions, including the IT revolution of the late 20th century, developed relatively slowly, giving governments time to see the workforce retrained for new jobs. The AI transformation is happening at a much faster pace.
  • The timing of the AI revolution in the West is not ideal on the economic front. Had the revolution happened in a period of full employment, like the 1950s and 1960s, workers displaced by AI could have found alternative employment.
  • The energy crisis that could follow from the conflict in the Middle East is threatening global recession, which could increase incentive for companies to replace people with digital workers. The International Monetary Fund (IMF) has recently cut growth predictions and is warning of a recession.
  • For previous technological revolutions, the technology made it possible for people to work more efficiently, which in turn led to more job opportunities. The author points out that there is no guarantee that history will repeat itself with AI. Further, the jobs created could be more menial than those jobs replaced.
  • Investors have been worried recently by over-valuations of AI companies. The author argues that they should worry more about a potentially large unemployment rise due to AI. Governments need to focus on the three Rs: re-skilling, re-industrialization and redistribution.

5. The era of AI malaise

This MIT Technology Review article looks at the angst and uncertainty that AI is creating.

6. Unauthorized group has gained access to Anthropic’s exclusive cyber tool Mythos, report claims

Bloomberg has reported that an unauthorized group has gained access to the Claude Mythos model – Anthropic’s model that the company refused to release to the public because of its strong capacity to detect zero-day vulnerabilities in code. This is a dangerous capability in the hands of a bad actor.

  • According to Bloomberg, the group “made an educated guess about the model’s online location based on knowledge about the format Anthropic has used for other models”.
  • Anthropic says that, so far, it has seen no evidence of unauthorized activity in its systems.
  • A member of an Anthropic contractor company reportedly told Bloomberg that the group that got access to the model is only “interested in playing around with new models, not wreaking havoc with them”.

7. Apple’s Tim Cook leaves behind complicated legacy on privacy

This Guardian article looks at the track record of Apple CEO Tim Cook on privacy matters. Cook is stepping down after 15 years in charge of the firm.

  • Cook has always highlighted Apple’s commitment to user privacy, calling privacy a “fundamental right”. He criticized both Google and Meta for the large amounts of personal data those firms collect. In 2016, after the mass shooting in San Bernardino, California, Apple refused to give the FBI access to the suspected shooter’s iPhone.
  • Apple sued the Israeli spyware company NSO Group in 2021. NSO created the Pegasus spyware, which also infected iPhones.
  • At the same time, in 2018, Apple transferred the iCloud accounts of Chinese users to a state-backed data center. The Guizhou-Cloud Big Data (GCBD) center allows Chinese government officials to search through user data. Amnesty International criticized this move, saying that it has helped China crack down on dissidents.
  • A similar arrangement was made in Russia with a Russian data center.
  • Apple introduced the iCloud Private Relay feature in 2021, allowing iPhone users to hide their identities from the sites they visit, and even from Apple. The feature was not rolled out in China.

8. Addressing the challenges of unstructured data governance for AI

AI relies heavily on unstructured data – sound, video, email, and so on – which data governance processes have traditionally handled poorly. This InfoWorld article looks at the challenges.

  • NoSQL databases have helped to organize and implement access controls on unstructured data, but it is only the arrival of vector databases and large language models that has allowed governance processes to verify content for meaningful access control.
  • Such controls are needed since, as one expert writes, they “stop those documents from entering unsafe [AI] workflows because exposure often happens before anyone knows the risk exists”.
  • Another writes: “one of the biggest security challenges with unstructured data is the lack of visibility and lineage as information moves across systems, clouds, and teams.”
  • Language models help governance because they can be used to classify and categorize unstructured documents, as well as detect duplicates and tag documents for data retention policies.
  • For one expert, the “primary security risk in AI document management is inference exposure, where an AI might correctly answer a question by accessing a sensitive document that the user technically shouldn’t see.” This risk is one area where data governance processes need to evolve.
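One common mitigation for inference exposure is to filter retrieved documents against the asking user’s permissions before anything reaches the model’s context. The sketch below is a toy illustration of that idea, not a real product API: the document names, group labels, and keyword “retrieval” are all hypothetical stand-ins (a production system would use a vector database and real ACLs):

```python
# Toy sketch of guarding against "inference exposure": candidate
# documents are checked against the user's group memberships BEFORE
# they can feed an AI answer. All names and data are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set = field(default_factory=set)  # ACL on the source


def permitted(doc: Document, user_groups: set) -> bool:
    """A document may inform an answer only if the user holds one of its groups."""
    return bool(doc.allowed_groups & user_groups)


def retrieve_for_user(query: str, corpus: list, user_groups: set) -> list:
    """Naive keyword retrieval, with the ACL filter applied before any
    document could reach a language model's context."""
    hits = [d for d in corpus if query.lower() in d.text.lower()]
    return [d for d in hits if permitted(d, user_groups)]


corpus = [
    Document("salary-2026", "Executive salary bands for 2026 ...", {"hr"}),
    Document("handbook", "The employee handbook covers salary review timing ...",
             {"all-staff", "hr"}),
]

# An all-staff user asking about salaries sees only the handbook, not the
# restricted HR document an unfiltered AI could otherwise leak from.
visible = retrieve_for_user("salary", corpus, {"all-staff"})
print([d.doc_id for d in visible])  # ['handbook']
```

The key design point is where the check runs: filtering after generation is too late, because by then the sensitive content has already shaped the answer. The same pre-filtering applies when retrieval is a vector-database query rather than keyword matching.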

9. Anthropic walks into the White House and Mythos is the reason Washington let it in

Weeks after the US administration declared Claude AI models modellus non grata when Anthropic refused to sign a deal with the US Department of Defense, CEO Dario Amodei was in the White House to discuss Claude Mythos.

  • Claude Mythos is the model that Anthropic developed which turned out to be extremely effective at finding zero-day vulnerabilities in code. Anthropic refused to release the model to the public as a result.
  • The main concern now for the US government is preventing cyberattacks on critical infrastructure like the electrical grid, as well as on financial services.
  • Despite declaring Anthropic as a supply chain risk, one government report recommends using Claude Mythos to improve cyberdefenses.
  • One government official said: “It would be grossly irresponsible for the US government to deprive itself of the technological leaps that the new model presents. It would be a gift to China.”
  • Trump said he was unaware of Amodei’s visit to the White House.

10. Resistance

This article summarizes some of the popular resistance activities in the US against the surge in AI.

  • One is the Pro-Human AI declaration signed in March 2026 by a coalition of unions, church leaders, Democrats, MAGA Republicans, and others, arguing that AI must serve humanity and not replace it.
  • Another element is the wave of protests that followed OpenAI’s deal with the US Department of Defense, which saw many people uninstall OpenAI’s apps from their devices.
  • Surveys in the US reveal increasing worries around AI. A poll from last year shows that most people are worried that AI is having a negative impact on people’s creativity and their ability to form meaningful relationships. A more recent poll suggests that three-quarters of Americans believe AI could pose a threat to humanity.
  • Major resistance is growing in communities where data centers are being built. People fear increases in utility bills, destruction of rural lands and pollution. Protests stalled the construction of 98 billion USD worth of data centers in the second quarter of 2025 alone.