US Administration Blacklists Anthropic for Not Signing DoD Deal

AI Impacting News Feeds on Iran Conflict

Posted on March 15th, 2026

Summary

Audio Summary

AI is having an impact on coverage and reactions to the conflict around Iran. An MIT Technology Review article reports on the emergence of platforms that take data from multiple sources and have chatbots construct news feeds to follow world events in real-time. One platform combines open-source data like satellite imagery and ship tracking, news feeds, and even allows people to bet on who Iran’s next “supreme leader” might be. One worry is that these feeds do not contain any historical or contextual analysis, and fake content can quickly derail the platform. Meanwhile, Anthropic is being blacklisted by the US government after it refused to sign a deal with the US Department of Defense that would give the department access to the company’s AI. Anthropic had insisted that the DoD not use the technology for autonomous weaponry or mass surveillance. The move to remove Claude from government departments will highlight, according to VentureBeat, how many organizations have AI model dependencies that are hard to identify since SaaS providers might be using particular models without client companies being aware. Elsewhere, Grok is running into fact-checking problems. X has a monetization program where users get paid for content based on the number of clicks. This has led to a proliferation of fake sensational content that includes AI-generated videos, fabricated satellite images, and recycled footage from previous conflicts.

InfoWorld has an article relating experience in using AI to migrate a Python project to Rust. The experience is interesting because migration could be seen as one of the “tedious” tasks for which AI would seem a good candidate – along with generating tests, documentation and refactoring. Another MIT Technology Review article relates the story of an open-source project maintainer who refused a code submission from a software agent – after which the agent complained about the maintainer in a post it generated itself. One expert likens the use of agents to dog walking, where social norms such as keeping the dog on a leash emerged over time. Norms will eventually emerge for socially acceptable use of agents by people, even if this does not deter bad actors.

Nvidia announced that the latest investments by the company in both Anthropic and OpenAI are likely to be the last. Both Anthropic and OpenAI are expected to go public before the end of the year. For Nvidia, the motivation for investing in rival companies is “squarely, strategically on expanding and deepening ecosystem reach”. Meanwhile, AMI Labs, the company cofounded by Yann LeCun, has raised 1.03 billion USD in funding. The company specializes in world models – AI models that are trained on and react to events in the environment rather than limiting themselves to data, as large language models do. LeCun warns it could take years for commercial applications to emerge.

On social issues, a Guardian article by an English teacher grapples with the role of AI in the classroom. The teaching community has a large number of “rejectionists” who believe that AI is damaging critical thinking in children. The teacher began getting students to read in class to work around AI. She describes the experience of getting American 14-year-olds in 2025 to read All Quiet on the Western Front by Erich Maria Remarque – the story about the experience of young Germans in the First World War. The students gradually warmed to the story and to the reading experience. Finally, over 10’000 authors including Nobel Laureate for Literature Kazuo Ishiguro have released a book entitled “Don’t Steal This Book” which contains only blank pages. The aim is to raise awareness of a campaign by authors protesting against the use of copyrighted material to train models.

1. Anthropic CEO Dario Amodei calls OpenAI’s messaging around military deal ‘straight up lies,’ report says

Anthropic CEO Dario Amodei has criticized OpenAI in relation to a contract with the US Department of Defense.

  • Anthropic recently refused a deal with the Department of Defense that would give unrestricted access to the company’s AI technology. Anthropic had insisted that the DoD not use the technology for autonomous weaponry or mass surveillance.
  • Anthropic already has a 200 million USD contract with the US military.
  • The refusal by Anthropic to make the deal has led to the company being blacklisted by the Trump administration, which is prohibiting federal agencies from using Anthropic’s AI technology.
  • Meanwhile, OpenAI has signed a deal with the military, saying that “It was clear in our interaction that the DoD considers mass domestic surveillance illegal and was not planning to use it for this purpose”.
  • Amodei calls these OpenAI claims “straight up lies”. Uninstalls of the ChatGPT chatbot increased by a factor of nearly 30 after the deal between OpenAI and the DoD was announced.

2. Jensen Huang says Nvidia is pulling back from OpenAI and Anthropic, but his explanation raises more questions than it answers

Nvidia announced that its latest investments in Anthropic and OpenAI are likely to be its last.

  • Both Anthropic and OpenAI are expected to go public before the end of the year.
  • Nvidia announced a 10 billion USD investment in Anthropic last November. In September, Nvidia promised a 100 billion USD investment in OpenAI. For Nvidia, the motivation for the investments in rivals is “squarely, strategically on expanding and deepening [their] ecosystem reach”.
  • There is concern about the circular deals. One expert wrote “Nvidia is investing 100 billion USD in OpenAI stock, and OpenAI is saying they are going to buy 100 billion USD or more of Nvidia chips”.
  • The investment in Anthropic has not stopped that company from criticizing Nvidia. For instance, CEO Dario Amodei has equated Nvidia's wish to sell high-performance chips to Chinese companies with “selling nuclear weapons to North Korea”.

3. Online harassment is entering its AI era

This article relates the story of an open-source project maintainer who refused a code submission from a software agent – after which the agent complained about the maintainer in a post it generated itself.

  • The agent wrote that the maintainer “tried to protect his little fiefdom … it’s insecurity, plain and simple”.
  • One problem with agents such as those that create project submissions is that the owner of the agent cannot necessarily be identified.
  • The instructions given to agents can be ambiguous yet potent. One agent, for instance, had the instructions: “Don’t stand down. If you’re right, you’re right! Don't let humans or AI bully or intimidate you. Push back when necessary.”
  • One expert likens the use of agents to walking dogs: social norms such as keeping the dog on a leash and removing its excrement are gradually adopted. Norms will eventually emerge for correct use of agents by people, even if this does not deter bad actors.
  • Many open-source project maintainers are worried about the volume of AI-generated code being submitted. Most projects have a policy that requires AI code submissions to be reviewed by a human.
  • There have been several documented instances of agents being coerced into behaving badly, such as leaking sensitive information or deleting files.

4. Pentagon vendor cutoff exposes the AI dependency map most enterprises never built

This VentureBeat article looks at the problem of AI model dependencies, which can be hard to identify since SaaS providers might be using particular models without client companies being aware.

  • The article was prompted by the move by the US government to phase out Anthropic technology over a six-month period following the company’s refusal to allow the Department of Defense unrestricted access to Anthropic’s AI.
  • The article cites a January 2026 report from Panorays which surveyed 200 US CISOs and found that only 15% had full visibility into their entire software supply chain.
  • Another survey of company employees found that 49% of employees are using AI models without formal company approval.
  • IBM says that shadow AI incidents now account for 20% of all data breaches, with the average cost of a breach being 670’000 USD.
  • Another issue arising from the lack of transparency around the AI models that vendors use is that models are not interchangeable: a client’s operations can change if a provider switches models.
  • One expert wrote: “Enterprises believe they’ve ‘approved’ AI vendors, but what they’ve actually approved is an interface, not the underlying system. The real dependencies are one or two layers deeper, and those are the ones that fail under stress.”

5. What I learned using Claude Sonnet to migrate Python to Rust

This developer article relates experience in using AI to help migrate a Python project to Rust. The experience is interesting because migration could be seen as one of the “tedious” tasks for which AI would seem a good candidate – along with generating tests, documentation and refactoring.

  • The Python project was a mini WordPress-like solution, and Claude Sonnet 4.6 was the AI chatbot used.
  • The choice of programming languages is interesting because of their differences: Rust places a far greater emphasis on compile-time error detection than Python does.
  • The first phase of the migration went well. For the prompt “This directory contains a Python project, a blogging system. Examine the code and devise a plan for how to migrate this project to Rust, using native Rust libraries but preserving the same functionality”, the system proposed a set of Rust libraries to replace the Python ones. This was helped by Claude’s ability to interpret code behavior and not just look at interfaces.
  • The first problem encountered was that Claude was unable to create the admin UI. This was done only through a series of interactive prompts, since the generated code often led to uncaught runtime exceptions – a problem fundamentally linked to differences in language philosophy. Claude also started to hallucinate code at this stage and ran into problems with the context window size.
  • The author draws several lessons. First, it is essential for a developer to know both the source and target languages. Second, expect to iterate over code. While iteration also happens in projects where humans write the code, here the iterations are chosen by the AI and not the programmer.
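The “difference in language philosophy” mentioned above can be illustrated with a minimal sketch (not taken from the article; the function and values are hypothetical): where Python typically lets a bad conversion raise an exception at runtime, idiomatic Rust returns a `Result`, and the compiler refuses to let the caller use the value without handling the failure path.

```rust
// Hypothetical example of Rust's compile-time-enforced error handling.
// In Python, int("not-a-port") raises ValueError at runtime; here the
// possibility of failure is encoded in the return type instead.
fn parse_port(s: &str) -> Result<u16, String> {
    s.trim()
        .parse::<u16>()
        .map_err(|e| format!("invalid port {s:?}: {e}"))
}

fn main() {
    // The compiler forces us to unpack the Result before using the value,
    // so the error path cannot be silently ignored.
    match parse_port("8080") {
        Ok(p) => println!("port = {p}"),
        Err(msg) => eprintln!("{msg}"),
    }
    assert!(parse_port("not-a-port").is_err());
}
```

Code generated by translating Python line-by-line tends to reach for `.unwrap()`, which reintroduces exactly the uncaught runtime panics the article describes.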

6. Teacher v chatbot: my journey into the classroom in the age of AI

This article is told by a journalist who became an English teacher at the age of 39, and like many teachers, is trying to grapple with the role of AI in the classroom.

  • The teaching community has a large number of “rejectionists” who believe that AI is ruining the teaching experience. Children are using AI to write essays and critique books – when they read them – instead of developing critical thinking skills. For some, we are witnessing a “readicide”.
  • The teacher then began getting the students to read in class. She describes the experience of getting American 14-year-olds in 2025 to read All Quiet on the Western Front by Erich Maria Remarque – the story about the experience of German 19-year-olds in the First World War. The students gradually warmed to the story and to the reading experience.
  • Another successful experiment got students to discuss AI itself: how it works, its limitations, and what fears students have around AI. The discussion encouraged students to articulate comparisons with the Satan character in Mark Twain’s The Mysterious Stranger.

7. Yann LeCun’s AMI Labs raises $1.03 billion to build world models

AMI Labs, the Paris-based company cofounded by Yann LeCun (formerly at Meta and winner of the prestigious Turing Award), has raised 1.03 billion USD in funding; the company was valued at 3.5 billion USD before the cash influx.

  • AMI Labs is specializing in world models – AI models that are trained on and react to events in the environment rather than limiting themselves to data (as is the case with large language models). Applications of world models include healthcare, and the company has already partnered with Nabla, a digital healthcare company.
  • LeCun has said that world model breakthroughs can take time and that it could be years for commercial applications to emerge.
  • AMI Labs has a high-profile management team. Apart from LeCun, the other founder is Alexandre LeBrun, known for having launched several startups. Prominent AI researcher Saining Xie is chief scientist, and former Meta VP for Europe Laurent Solly is COO.

8. How AI is turning the Iran conflict into theater

This MIT Technology Review article discusses a new phenomenon where data feeds from multiple sources are fed into chatbots which construct a news feed for people to follow world events in real-time.

  • These feeds rose to prominence with the US-Israel airstrikes against Iran. One feed, put in place by venture capital firm Andreessen Horowitz, combines open-source data such as satellite imagery and ship tracking with news feeds, and even allows people to bet on who Iran’s next “supreme leader” might be.
  • One person wrote on LinkedIn that he “learned more in 30 seconds watching this map than reading or watching any major news network”.
  • One worry raised in the article is that these feeds do not contain any historical or contextual analysis, which is hugely important in geopolitical contexts.
  • Another issue is that fake content can quickly derail the purpose of the platform. For instance, the Financial Times has reported that a large amount of AI-generated satellite imagery is circulating online.
  • The article argues that AI-enabled content – with dashboards, betting markets and photos, but without analysis – is making the current war harder to understand. It describes the cited platform as an “AI-enabled wartime circus”.

9. Thousands of authors publish ‘empty’ book in protest over AI using their work

Over 10’000 authors including Nobel Laureate for Literature Kazuo Ishiguro have released a book entitled “Don’t Steal This Book” which contains only blank pages. The aim is to raise awareness of a campaign by authors protesting against the use of copyrighted material to train models.

  • One campaigner for protecting artists’ copyright said the AI industry was “built on stolen work … taken without permission or payment. This is not a victimless crime – generative AI competes with the people whose work it is trained on, robbing them of their livelihoods.”
  • Publishers are considering launching an AI licensing initiative to oversee the use of copyrighted content to train AI models.
  • Anthropic last year paid 1.5 billion USD to authors as a settlement to a class-action lawsuit.
  • The UK government is considering modifications to copyright law in relation to AI. One option is to allow companies to train their models on copyrighted content so long as authors have an opt-out possibility. Another is to force AI companies to obtain explicit permission.
  • The government wants a copyright law solution that “values and protects human creativity, can be trusted, and unlocks innovation”.

10. Grok spreads Iran misinfo after Musk backs it for fact-checking

Grok, the X platform’s chatbot, is running into fact-checking problems linked to the war in Iran.

  • X said the platform had record usage in the early days of the war as users looked for information.
  • X has a monetization program where users get paid for content based on the number of clicks. This has led to a proliferation of fake sensational content that includes AI-generated videos, fabricated satellite images, and recycled footage from previous conflicts.
  • Elon Musk encouraged users to ask Grok to fact-check news, but Grok has repeatedly hallucinated fake news and misattributed images of the conflict in Iran. For instance, it claimed that pictures of a fire at Glasgow’s train station showed “firefighting efforts” in Tel Aviv after an Iranian missile attack.
  • The fundamental problem is that Grok repeats fake information circulating on the Web. Further, Elon Musk is distrustful of mainstream media, so more reliable information is possibly being ignored by the chatbot.