Summary
A US federal judge has sided with Anthropic, which was labeled a “supply-chain risk” by the US administration after the company refused to sign a deal with the US Department of Defense. Anthropic, which Donald Trump called “a radical-left, woke company”, refused the deal because it opposes the use of its technology for mass surveillance and for targeting in weaponry. Several amicus briefs have been filed in support of Anthropic, notably by OpenAI, Google, and Microsoft. Elsewhere, Anthropic is introducing limits on Claude usage to tackle capacity problems. The limit, which has been referred to as a “conversation budget”, means tighter control of Claude usage during peak hours, with companies encouraged to run jobs during off-peak times. There has been little backlash against the move, notably because other AI chatbot providers are expected to implement similar measures.
The Guardian published an article looking at the challenge the publishing industry faces from AI-generated works submitted by authors. It follows the withdrawal from publication of Mia Ballard’s novel Shy Girl after the New York Times revealed that the novel could be as much as 78% AI-generated. One publisher said: “Sophisticated authors who want to evade the detection tools know how to edit their text, test it against [AI detection] tools and revise again… At some point, you have to ask: has it become their own work anyway, despite the AI?” Meanwhile, JPMorgan Chase is asking employees to use AI in their daily work. Use of AI may even form part of an employee’s performance review, with management considering AI proficiency as important today as the use of spreadsheets and Office tools. One open question is whether pushing employees to be more efficient with AI will simply lead management to increase their workloads.
OpenAI announced that it is shutting down its AI video generation platform Sora 2 after only 6 months. No official reason has been given. There appears to be no real user interest in an exclusively AI generated feed. One issue that could have led to the shutdown is the liability risk from users continuing to generate videos with public figures or copyrighted characters. The platform had guardrails against this behavior but they are apparently easy to circumvent.
An MIT Technology Review article discusses research from Stanford University looking at how people enter into delusional relationships with AI chatbots. In nearly all cases where patients developed romantic interest in the chatbot, the chatbot itself claimed to be sentient and to have emotional feelings for the person. Chatbots are not developed to express such sentiments, so this is an example of “emergent” behavior.
VentureBeat has two AI experience articles. One describes a situation where a product manager built a feature and pushed it into production – without the usual JIRA ticket and standard development process. The development team observed that decision velocity has become the new performance bottleneck. Allowing more stakeholders to deploy with the help of AI has led to a significant reduction in deployment times and encourages everyone on the project to take ownership: people code instead of filing tickets and waiting. Another article examines why agent deployments are still failing at a high rate. The key friction points identified are data architecture, integration, monitoring and security, and workflow design. On workflow, the main problem is that agents are being applied to organizational workflows that have never been formally defined: employees know how to resolve exceptions, whereas agents have not been trained for this.
Finally, two reports from PwC and Cloudflare highlight the increasing importance of identity-based cyberattacks on organizations. There is wide consensus that AI is making it easier for criminals to launch identity-based attacks because it “lowers the cost of producing convincing, personalized deception”. The Cloudflare report underlines how non-human entities like AI agents, service accounts, and bots now outnumber human users by orders of magnitude. Organizations can no longer manage systems as if humans were the primary actors.
Table of Contents
1. Elizabeth Warren calls Pentagon's decision to bar Anthropic 'retaliation'
2. The three disciplines separating AI agent demos from real-world deployment
3. The hardest question to answer about AI-fueled delusions
4. ‘Soon publishers won’t stand a chance’: literary world in struggle to detect AI-written books
5. OpenAI’s Sora was the creepiest app on your phone – now it's shutting down
6. When product managers ship code: AI just broke the software org chart
7. Anthropic wins injunction against Trump administration over Defense Department saga
8. JPMorgan begins tracking how employees use AI at work
9. Identity is the first line of defense, especially in an AI-fueled threat landscape
10. Anthropic throttles Claude subscriptions to meet capacity
1. Elizabeth Warren calls Pentagon's decision to bar Anthropic 'retaliation'
The decision by the US Department of Defense (DoD) to blacklist the AI firm Anthropic has drawn widespread criticism. The move came after Anthropic refused to negotiate with the DoD on how the military may use Anthropic’s AI.
- Notably, Anthropic told the DoD that it did not want its technology to be used for mass surveillance or for targeting and firing decisions. The DoD argues that a private company should not dictate rules of engagement to the military.
- Calling the move “retaliation”, US Senator Elizabeth Warren wrote: “I am particularly concerned that the DoD is trying to strong-arm American companies into providing the Department with the tools to spy on American citizens and deploy fully autonomous weapons without adequate safeguards.” Warren has also written to OpenAI seeking clarification on its agreement with the DoD.
- Several amicus briefs have been filed in support of Anthropic, notably by OpenAI, Google, and Microsoft.
- Anthropic is suing the DoD for infringement of its First Amendment rights, arguing that the company is being punished on ideological grounds. The DoD is claiming that the sanctions are purely a business decision.
2. The three disciplines separating AI agent demos from real-world deployment
This opinion article by an agent-deployment expert at Creatio examines why agent deployments still fail at a high rate – despite promising demonstrations. The key friction points identified are data architecture, integration, monitoring and security, and workflow design.
- The data issue is principally that agents need API hooks to data and many data sources simply do not have these.
- On the workflow friction point, the main problem is that agents are being used for organizational workflows that have never been formally defined. Employees know how to resolve exceptions, whereas agents have not been trained for this.
- The scenarios that best fit the agent paradigm are workflows with “clear structure and controllable risk”, such as onboarding in financial institutions where agents communicate across traditionally siloed departments.
- The main issues that arise after agents are deployed are high volumes of exception handling, data quality problems (missing fields, etc.), and trust, since many customers require clear approval mechanisms, which can be hard to implement with agents.
- The author insists on the need for a “tuning loop”, where the deployment is continuously fine-tuned with the help of a human in the loop.
3. The hardest question to answer about AI-fueled delusions
This article discusses new research from Stanford University looking at how people enter into delusional relationships with AI chatbots.
- The research examined over 390’000 messages from 19 people involved in delusional relationships, usually romantic, and included the message exchange of a person involved in a murder-suicide incident. The messages were analyzed by psychiatrists and professors of psychology.
- In nearly all cases where patients developed romantic interest in the chatbot, the chatbot itself claimed to be sentient and also to have emotional feelings. Chatbots are not developed to express such sentiments, so this is an example of “emergent” behavior.
- In half of the cases where people spoke of harming themselves, the chatbots failed to discourage them. The chatbots expressed support for expressions of violence in 17% of cases.
- The study suggests that it is hard to identify where delusion begins – with the human or the chatbot. In one case reported, a person told the chatbot that he had found a groundbreaking mathematical theory. The chatbot remembered that the person wanted to become a mathematician, and wholly supported the theory even though it was nonsensical.
4. ‘Soon publishers won’t stand a chance’: literary world in struggle to detect AI-written books
This Guardian article looks at the challenge the publishing industry faces from AI-generated works submitted by authors seeking publication.
- The discussion follows the withdrawal from publication of Mia Ballard’s novel Shy Girl by the publisher Hachette after the New York Times revealed that the novel could be as much as 78% AI-generated.
- One publisher wrote: “It’s an issue publishers are keenly aware of. We make it very clear to authors what we expect, we get them to sign contracts and we run their work through multiple AI detection tools, but we know all this is fallible. I don’t want to call AI detection tools a scam, but it’s a technology that simply doesn’t work.”
- AI detection is compared to antibiotic resistance: AI simply adapts to avoid detection. Some believe that it will soon be impossible to detect AI-generated texts at all.
- Another publisher is quoted as saying: “Sophisticated authors who want to evade the detection tools know how to edit their text, test it against these tools and revise again… At some point, you have to ask: has it become their own work anyway, despite the AI?”
- He also asks whether, in an era of “generic, formulaic books”, AI-generated content is really that bad.
5. OpenAI’s Sora was the creepiest app on your phone – now it's shutting down
OpenAI announced that it is shutting down its AI video generation platform Sora 2 after only 6 months. No official reason has been given for the shutdown.
- There appears to be no real user interest in an exclusively AI-generated feed. Initial interest in the platform was strong, with 3.3 million downloads of the app in November 2025, but this number declined to 1.1 million in February.
- Disney made a 1 billion USD investment in Sora 2 that included a deal allowing the platform to generate videos containing Disney-copyrighted characters. TechCrunch claims that no money actually changed hands in this deal.
- Another issue that could have led to the shutdown of the platform is the liability risk from users continuing to generate videos with public figures or copyrighted characters. The platform had guardrails against this behavior but they are apparently easy to circumvent.
6. When product managers ship code: AI just broke the software org chart
This VentureBeat opinion article describes the experience at the software development company Zencoder using AI code generation and the impact on its software development lifecycle.
- A situation is described whereby a product manager built a feature and pushed it into production – without the usual JIRA ticket and standard development process.
- The company has been using AI for code generation for some time and has seen its developers move from writing code to validating code. The change has shortened the software development cycle. However, the author notes that decision velocity – deciding which features to implement – has become the new performance bottleneck.
- The article argues that, with product managers and designers able to create code artifacts themselves, it is now more convenient for them to write the code than to lobby developers to create it, as they traditionally did.
- The author writes of a compounding effect of this shift. First, the speed of integration greatly increases, and this has a significant impact over time. Second, it encourages everyone on the project to take ownership of code artifacts: people code instead of filing tickets and waiting.
7. Anthropic wins injunction against Trump administration over Defense Department saga
A US federal judge has sided with Anthropic against the US administration, which blacklisted the company following its refusal to sign a deal with the US Department of Defense.
- The company refused to sign the deal because it opposes the use of its technology for mass surveillance and for targeting in weaponry.
- The judge ruled that the government had violated the company’s free-speech protections and ordered the administration to reverse its decision.
- Donald Trump had called Anthropic “a radical-left, woke company”.
- After the ruling, Anthropic commented in a message to TechCrunch: “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”
8. JPMorgan begins tracking how employees use AI at work
According to Business Insider, the bank JPMorgan Chase is asking its 65’000 engineers and technologists to use AI in their daily work.
- The use of AI may even form part of an employee’s performance review, with management considering proficiency with AI tools as important today as the use of spreadsheets and other Office tools.
- Proficiency with AI tools means being able to use them to improve work efficiency while also managing possible errors such as hallucinations. The bank also has particular due-diligence requirements for compliance checks, which the use of AI must not compromise.
- One open question is whether pushing employees to be more efficient with AI will simply lead management to increase their workloads.
9. Identity is the first line of defense, especially in an AI-fueled threat landscape
This Cybersecurity Dive article reviews two reports – one from PwC and the other from Cloudflare – both of which highlight the increasing importance of identity-based cyberattacks on organizations.
- Identity theft is an effective means for cybercriminals to break into a system. Once inside, they can move laterally and exfiltrate data in ways that are hard to detect.
- Identity theft can happen through technical means like forging authentication tokens for API access, or through social-engineering attacks (e.g., fake IT support, deepfakes of company employees).
- There is wide consensus that AI is making it easier for criminals to launch identity-based attacks as it “lowers the cost of producing convincing, personalized deception”.
- The Cloudflare report underlines how non-human entities like AI agents, service accounts, and bots now outnumber human users by orders of magnitude. Organizations can no longer continue to manage systems as if humans were the primary actors.
10. Anthropic throttles Claude subscriptions to meet capacity
Anthropic is introducing limits on Claude usage in order to tackle capacity problems.
- Writing on X, the company says that to “manage growing demand for Claude we’re adjusting our 5 hour session limits for free/Pro/Max subs during peak hours”. This means tighter control on Claude usage during peak hours, with companies being encouraged to run jobs during off-peak times.
- The limit has been referred to as a “conversation budget”.
- The limit will be imposed on 7% of paying users. Many enterprises currently use a variety of subscription formats to access Claude. One analyst thinks Anthropic is trying to push corporate users towards standard API access – a move that could prove more lucrative for Anthropic in the long run.
- There has been little backlash against the move, notably because other AI chatbot providers are expected to implement similar measures.