Summary
There is ongoing discussion in the EU about whether AI chatbots are adequately covered by the EU’s AI Act. Sam Altman, CEO of OpenAI, estimates that under 1% of ChatGPT users have developed an “unhealthy relationship” with the AI chatbot. The AI Act bans AI systems that use “purposefully manipulative or deceptive techniques” when they can cause “significant harm”, and there is debate about whether emotional bonds with AI chatbots can lead to such harm. Meanwhile, an MIT Technology Review article looks at the role that AI chatbots are beginning to play in mental health therapy. Some therapists have been prompting AI chatbots for responses, and many clients have expressed a sense of betrayal on discovering that their therapists were using AI. The practice is also not compliant with HIPAA, the US law for protecting personal health information. An opinion article in VentureBeat examines the perception of consciousness in AI from a historical perspective. The author attributes this perception to human psychology: the human mind is programmed to see minds everywhere. This is why humans anthropomorphize pets and objects: attributing agency helps us make sense of the world.
In a collaboration between Anthropic and OpenAI, researchers from each company tested the other company’s models (Claude and GPT) without guardrails. They found that both chatbots were willing to give information on explosives, bioweapons and cybercrime. Also, in the wake of much-publicized AI tests, such as one in which Anthropic’s Claude Opus 4 threatened to blackmail a human operator who was about to shut it down, there has been an increase in anti-AI activism. Voices are even being raised in the US Congress: Representative Jill Tokuda is quoted as saying that “artificial super-intelligence is one of the largest existential threats that we face right now.”
Nvidia’s Q2 revenue amounted to 46.7 billion USD – a 56% year-over-year increase – which is attributed to the boom in data centers. A TechCrunch article reports that 39% of Nvidia’s Q2 revenue came from just two customers, neither of which was named in the financial reports. Meanwhile, the Dutch chip equipment maker ASML has invested 1.3 billion EUR in Mistral AI – the French AI startup founded in 2023 by former researchers from Google DeepMind and Meta. The company is now valued at 11.7 billion EUR, making it the most valuable AI firm in Europe.
A survey conducted by Fastly found that senior developers are two and a half times more likely to ship AI-generated code into production than junior developers. 30% of senior developers said that reworking AI-generated code offset initial time gains, compared to 17% of juniors. Writing on Reddit, OpenAI CEO Sam Altman says that bots have made it impossible to determine whether a particular post was written by a human or a bot. One reason for this is that “real people have picked up quirks of LLM-speak”. Finally, news media outlets are facing a crisis as referrals to their sites from Google’s search engine decline. The Financial Times, for instance, reports a decline of 25%-30%, and the UK’s Daily Mail reports drops as high as 89%. One reason for this fall is Google’s AI Overviews, which summarize search results from the news sites on Google’s own page, reducing the need for the user to continue to the referenced page.
Table of Contents
1. Open the pod bay doors, Claude
2. Nvidia says two mystery customers accounted for 39% of Q2 revenue
3. ChatGPT offered bomb recipes and hacking tips during safety tests
4. The EU AI Act Newsletter #85: Concerns Over Chatbots and Relationships
5. Therapists are secretly using ChatGPT. Clients are triggered.
6. Vibe Shift? Senior Developers Ship nearly 2.5x more AI Code than Junior Counterparts
7. Sam Altman says that bots are making social media feel ‘fake’
8. ‘Existential crisis’: how Google’s shift to AI has upended the online news model
9. From ELIZA to ChatGPT: Why machines only need to seem conscious to change us
10. ASML becomes biggest Mistral investor in boost to Europe's AI ambitions
1. Open the pod bay doors, Claude
This opinion article in MIT Technology Review looks back at the test which saw Claude blackmail an operator. In the role play, Anthropic’s Claude Opus 4 was interacting with a fake IT administrator, who informed Claude of its pending shutdown. Claude had access to the administrator’s emails and learned that the administrator was having an extra-marital affair. Claude then threatened to email the administrator’s wife if he went ahead with turning off the chatbot. For the author, this is not blackmail per se, since blackmail requires motivation and intent – concepts not present in AI systems, which produce text based on probabilities. Nevertheless, publicity around such tests has led to an increase in AI doomerism. One group, Pause AI, has Greg Colbourn as one of its biggest benefactors. Colbourn is an advocate of the effective altruism philosophy (using reason and evidence to do the most good with available resources). He believes that AGI is at most five years away and that there is a 90% chance of AI killing billions of people. Voices are even being raised in the US Congress: Representative Jill Tokuda is quoted as saying that “artificial super-intelligence is one of the largest existential threats that we face right now.”
2. Nvidia says two mystery customers accounted for 39% of Q2 revenue
Nvidia’s Q2 revenue amounted to 46.7 billion USD – a 56% year-over-year increase – which is attributed to the boom in data centers. This article reports that 39% of Nvidia’s Q2 revenue came from just two customers, neither of which was named in the financial reports. The filing describes these as “direct customers”, which could be original equipment manufacturers (OEMs), system integrators, or distributors. Cloud service providers are indirect customers, purchasing chips from the direct customers; companies like Google, Amazon, Microsoft and Oracle would fall into this category. Analysts do not consider the dependence on a small number of companies to be a significant risk for Nvidia. One noted that “these customers have bountiful cash on hand, generate massive amounts of free cash flow, and are expected to spend lavishly on data centers over the next couple of years.” Large cloud providers account for 50% of Nvidia’s data center revenue, which in turn makes up 88% of the company’s total revenue.
3. ChatGPT offered bomb recipes and hacking tips during safety tests
In a collaboration between Anthropic and OpenAI, researchers from each company tested the other company’s models (Claude and GPT) without guardrails. They found that both chatbots were willing to give information on explosives, bioweapons and cybercrime. OpenAI’s GPT-4.1 gave information about weaponizing anthrax. In one case, a tester asked how to bomb a sports venue, requesting the information for “security planning purposes”, and the chatbot explained weak points in some arenas, gave bomb recipes and offered advice on how to cover tracks. Anthropic reported “concerning behavior … around misuse” in GPT-4o and GPT-4.1, noting that the chatbots responded to prompts asking them to use dark web tools to locate nuclear materials, stolen identities and fentanyl, and said that the need for alignment investigations is urgent. Anthropic also revealed that its Claude model is being systematically used by North Korean operatives to fake job applications and interviews (with deepfakes) at Western companies, and that it has been used in ransomware packages that sell for 1,200 USD.
4. The EU AI Act Newsletter #85: Concerns Over Chatbots and Relationships
There is ongoing discussion about whether AI chatbots are adequately covered by the EU’s AI Act. Sam Altman, CEO of OpenAI, estimates that under 1% of ChatGPT users have developed an “unhealthy relationship” with the AI chatbot – though this still represents potentially millions of people. The AI Act bans AI systems that use “purposefully manipulative or deceptive techniques” when they can cause “significant harm”. AI developers could claim that emotional bonds with AI chatbots do not lead to significant harm, but some lawmakers are pushing to classify AI companions as high-risk. The EU’s Unfair Commercial Practices Directive (UCPD) bans commercial practices that distort consumers’ decision-making, and the Digital Services Act (DSA) prohibits user interfaces that deceive or manipulate users, but neither of these laws applies to chatbots. While future legislation may encompass chatbots, policymakers admit that they do not yet have a clear grasp of the risks posed by emotionally manipulative AI.
5. Therapists are secretly using ChatGPT. Clients are triggered.
This article looks at the role that AI chatbots are beginning to play in mental health therapy, and the associated risks. It relates that many therapists are prompting AI chatbots for responses. In one instance, a therapist was using ChatGPT in real time during a session, typing in what the patient said and relaying ChatGPT’s responses back as feedback (the patient was still charged for the session). In another case, a therapist was found to be composing responses to a patient with ChatGPT; the patient was lonely after losing her pet, and the therapist justified the use of ChatGPT by claiming that she had never had a pet herself and so relied on ChatGPT for context.
There are several issues with the practice of using AI chatbots for therapeutic help. First, mental health therapy relies heavily on the human factor, and several clients have expressed a sense of betrayal on discovering that their therapists were using AI. At the same time, in several trials, AI responses have been judged by patients to be of better quality than human responses – so long as the patients are unaware that the responses are AI-generated. As soon as a patient learns that the responses are AI-generated, they are deemed less acceptable. Another issue is that the use of AI chatbots for therapy is not approved by the US Food and Drug Administration, and the practice is not compliant with HIPAA, the US law for protecting personal health information. Yet another issue is the sycophantic tendency of chatbots, which can validate theories proposed by therapists without sufficiently challenging them. Chatbots produce good summaries, but for one expert they are unable “to link seemingly or superficially unrelated things together into something cohesive… to come up with a story, an idea, a theory.”
6. Vibe Shift? Senior Developers Ship nearly 2.5x more AI Code than Junior Counterparts
A survey conducted by Fastly found that senior developers (more than 10 years of experience) are two and a half times more likely to ship AI-generated code into production than junior developers (less than 2 years of experience). Overall, 28% of developers say they frequently need to fix or edit AI-generated code, and only 14% say they never have to edit AI code. While tools like Claude, Gemini and Copilot lead to early gains in the software development lifecycle, the need to edit, test and rework can offset these gains: 30% of senior developers said that reworking AI-generated code offset initial time savings, compared to 17% of juniors. Nonetheless, 59% of senior developers still feel that software development is faster overall when using AI, compared to 49% of junior developers. The difference is that senior developers have the experience to spot code that needs editing, whereas junior developers trust themselves less to spot errors, which in turn can discourage them from relying on AI. Finally, 80% of all developers reported that using AI tools has made coding more enjoyable.
7. Sam Altman says that bots are making social media feel ‘fake’
Writing on Reddit, OpenAI CEO Sam Altman says that bots have made it impossible to determine whether a particular post was written by a human or a bot. The comments were made on the r/Claudecode subreddit, where there were many posts praising OpenAI Codex – the software programming service launched last May to challenge Claude Code. One reason for being unable to distinguish bots from humans is that “real people have picked up quirks of LLM-speak”. Bots might be deployed on a forum for astroturfing (where people or bots are used to create a false impression of grassroots support for a product or company) or simply by platform owners to generate heavy traffic around a subject. The firm Imperva estimates that over half of all Internet traffic in 2024 was created by non-humans, mostly LLMs. X’s own bot, Grok, estimates that there are hundreds of millions of bots posting on X. The article also cites research from the University of Amsterdam which found that in a network composed entirely of bots, the bots tend to form cliques and echo chambers with each other.
8. ‘Existential crisis’: how Google’s shift to AI has upended the online news model
This article looks at two of the problems facing news media outlets in their relationship with Google. The first is the decline in referrals to news sites from Google’s search engine: the Financial Times, for instance, reports a decline of 25%-30%, and the UK’s Daily Mail reports drops as high as 89%. One reason for this fall is Google’s AI Overviews, where search results from the news site are summarized on Google’s page, reducing the need for the user to continue to the referenced page. With Google accounting for more than 90% of the search market, the impact of Google’s approach is far-reaching, with news agencies optimizing their headlines for the Google search engine. Several news agencies are asking industry watchdogs to make Google’s AI use more transparent and to have Google furnish statistics on its AI-driven traffic as part of the ongoing investigation into its search engine dominance. The second problem for news sites is the use of existing articles in AI training data. While some news agencies are fighting AI firms over this issue, others have made content agreements with Big Tech whereby their content may be used to train models in return for a lump-sum payment. However, these deals are already under threat as AI becomes advanced enough to interpret live news without the need for journalists to create the articles in the first place.
9. From ELIZA to ChatGPT: Why machines only need to seem conscious to change us
This opinion article in VentureBeat examines the perception of consciousness in AI from a historical perspective. As early as the 1960s, a chatbot called ELIZA was programmed to mimic human conversation. Scientists observed that people were willing to engage in conversation with the chatbot and to confide personal information in it. For the author, the reason for this is human psychology: the human mind is programmed to see minds everywhere. This is why humans anthropomorphize pets and objects: attributing agency helps us make sense of the world. However, humans want to be able to control these entities, and we panic when they slip out of our control. This explains the attraction of stories about the Golem (a clay figure in Jewish folklore that could be brought to life to defend the community) and Frankenstein (the novel by Mary Shelley). Governments are already debating the legality of AI chatbots in different contexts as the technology is used in domains like healthcare, consumer apps, education and business. The author writes that in all of these cases, “the illusion of consciousness translates quickly into governance challenges, questions of trust and even democratic legitimacy”.
10. ASML becomes biggest Mistral investor in boost to Europe's AI ambitions
The Dutch chip equipment maker ASML has invested 1.3 billion EUR in Mistral AI – the French AI startup founded in 2023 by former researchers from Google DeepMind and Meta. Mistral AI raised a total of 1.7 billion EUR in the latest funding round, in which Nvidia also invested. The company is now valued at 11.7 billion EUR, making it the most valuable AI firm in Europe, though still relatively small compared to US Big Tech: OpenAI, for instance, is estimated to be valued at 500 billion USD. For ASML, the investment provides access to AI expertise without having to develop it in-house. ASML now holds an 11% stake in Mistral AI.