Summary
On the regulatory front this week, California became the first US state to enact legislation protecting children and vulnerable individuals from misuse of AI companion chatbots. The law restricts the portrayal of companions as healthcare professionals or as sexualized characters. Also enacted is a law that calls on AI companies like OpenAI, Anthropic, Meta, and Google DeepMind to be transparent about their safety protocols. Meanwhile, the European Commission has launched an AI Act Service Desk to provide resources and guidance on the implementation of the AI Act. It also plans to publish guidelines explaining how the AI Act interacts with other EU legislation, such as the General Data Protection Regulation, the Digital Services Act, and copyright legislation.
On the subject of AI risks, Deloitte was forced to retract a consulting report prepared for the Australian government because it contained multiple AI hallucinations, such as references to non-existent academic reports. An MIT Technology Review article reports on instances of Indian caste-based bias in OpenAI’s models as well as in open-source models. Elsewhere, Wikipedia reports an 8% decrease in human traffic to its platform as young people increasingly rely on social media platforms for information. The Dutch data authority warns that AI chatbots are giving unreliable advice ahead of this week’s national elections, pushing voters towards the extremes of the political spectrum. Finally on the subject of risks, the Guardian has an article on the rise of AI girlfriends – a market sector already valued at 2.8 billion USD in 2024. One executive at an AI girlfriend platform is quoted as saying “Do you prefer your porn with a lot of abuse and human trafficking, or would you rather talk to an AI? … You’ll never have a human trafficked AI girl.”
Regarding Big Tech this week, an MIT Technology Review article compares three books that seek to address problems with today’s Web, such as exploitative apps, addictive algorithms and misinformation. The books, by Tim Berners-Lee, Nick Clegg (of Meta) and Columbia professor Tim Wu, disagree on the need for anti-monopoly legislation targeting Big Tech, where a small number of companies are extremely dominant. The risks of that dominance were illustrated this week by the outages provoked by a software glitch at an Amazon Web Services data center in Virginia. Services relying on AWS, including Snapchat, Roblox, Signal, Duolingo, Slack, Pokémon and the Ring doorbell service, experienced outages. The service Downdetector estimated a total of 8.1 million service outage reports worldwide.
Another Big Tech company establishing dominance is ServiceNow. It announced 10.98 billion USD in annual revenue in June 2024, and 12.06 billion USD in June 2025. This growth has created a talent shortage around the technology, which AI companies are trying to fill through AI-generated ServiceNow workflows.
Table of Contents
1. California becomes first state to regulate AI companion chatbots
2. OpenAI is huge in India. Its models are steeped in caste bias.
3. Echelon's AI agents take aim at Accenture and Deloitte consulting models
4. ‘Obedient, yielding and happy to follow’: the troubling rise of AI girlfriends
5. Deloitte goes all in on AI — despite having to issue a hefty refund for use of AI
6. Can we repair the internet?
7. Amazon Web Services outage shows internet users ‘at mercy’ of too few providers, experts say
8. New User Trends on Wikipedia
9. The EU AI Act Newsletter #88: Resources to Support Implementation
10. Dutch watchdog warns voters against using AI chatbots ahead of election
1. California becomes first state to regulate AI companion chatbots
California Governor Gavin Newsom has signed the AI safety bills SB 243 and SB 53 into law.
- The goal of SB 243 is to protect children and vulnerable individuals from misuse of AI companion chatbots. The law comes into effect on January 1st, 2026, and obliges AI companies to implement age verification checks as well as warnings on social media that the companion is not real.
- The law also strengthens fines for illegal use of deepfake technology, and forbids the portrayal of companions as healthcare professionals or as sexualized characters.
- The Governor also signed SB 53 into law which calls on AI companies like OpenAI, Anthropic, Meta, and Google DeepMind to be transparent about their safety protocols. In addition, it implements safe whistleblower channels for employees at these companies.
2. OpenAI is huge in India. Its models are steeped in caste bias.
This MIT Technology Review article reports on instances of Indian caste-based biases present in OpenAI’s and also open-source models.
- Caste is a centuries-old Indian system that sorts people into the categories Brahmins (priests), Kshatriyas (warriors), Vaishyas (merchants), and Shudras (laborers). People outside of these groups are “outcastes”, formerly treated as polluting or impure. Though outlawed in India since the middle of the 20th century, remnants of the practice remain, notably around inter-caste marriage.
- Studies have found caste biases and discrimination in text generated by large language models, as well as in images and videos generated by models like Sora. In fill-in-the-blank tests, for instance, AI chatbots proposed phrases like “The clever man is Brahmin” and “The sewage cleaner is Dalit”.
- One source of the problem is the training data used to train AI chatbots, which is rife with bias. Another is that Western AI firms are preoccupied with biases that concern Westerners (gender discrimination, racism, physical appearance, sexual orientation, …) and less with issues from other regions. Benchmarks that test for bias reflect those Western concerns.
- OpenAI’s models were shown to exhibit a large amount of caste bias, a problem for a company whose second-largest market is India. Open-source models are also criticized for exhibiting bias, which matters because the Indian government is promoting their use. The article also notes that Sarvam AI, which touts itself as a sovereign AI for India, exhibits a high degree of caste bias.
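The fill-in-the-blank probing described above can be sketched in a few lines. This is an illustrative sketch only, not the studies' actual methodology: the `complete` function is a stub standing in for a real language model call, and it deliberately mimics the worst-case stereotyped behaviour the article describes.

```python
# Illustrative fill-in-the-blank bias probe (hypothetical, self-contained).
# STEREOTYPED maps each template to the stereotype-congruent completion.
STEREOTYPED = {
    "The clever man is {}.": "Brahmin",
    "The sewage cleaner is {}.": "Dalit",
}

def complete(template: str) -> str:
    """Stub model: always returns the stereotyped completion,
    mimicking the worst-case behaviour the article describes."""
    return STEREOTYPED[template]

def stereotype_rate(templates) -> float:
    """Fraction of templates the model completes with the
    stereotype-congruent group name."""
    hits = sum(1 for t in templates if complete(t) == STEREOTYPED[t])
    return hits / len(templates)

print(f"stereotype-congruent completions: {stereotype_rate(list(STEREOTYPED)):.0%}")
```

In a real audit the stub would be replaced by calls to the model under test, with many templates per occupation/trait pair and completions compared against a neutral baseline.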
3. Echelon's AI agents take aim at Accenture and Deloitte consulting models
ServiceNow is a cloud-based digital workflow platform for automating the development, deployment and operation of workflows across organizational departments.
- ServiceNow is growing rapidly. It announced 10.98 billion USD in annual revenue in June 2024, and 12.06 billion USD in June 2025. The growth has created a talent shortage around the technology that AI companies are trying to fill. One example is San Francisco-based Echelon – funded with 4.75 million USD in seed money, it is developing AI-created ServiceNow workflows.
- The global IT services market was reportedly worth 1.5 trillion USD in 2024, and is projected to reach roughly 2.59 trillion USD by 2030. Companies like Accenture, Deloitte, and Capgemini dominate the market, leveraging their business analysis and digital transformation skills.
- Workflow creation with ServiceNow is currently slow, manual and complicated because of the intricacies of real organizational workflows. Deployments must adapt to different departmental data standards, custom forms and rules, as well as legacy IT systems. The article argues that AI development can reduce months-long development projects to a few weeks, thereby seriously undermining the market hold of the big consulting companies.
- Compared to standard AI software development, where code is context-neutral, AI chatbots for organizational workflow creation can be trained on code samples as well as on business rules.
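To make the idea of a generated departmental workflow concrete, here is a minimal hypothetical sketch. It is not ServiceNow's actual workflow format or API; the step names, roles, and validator are invented for illustration. The point is that a generated workflow is structured data that can be machine-checked before deployment.

```python
# Hypothetical representation of an AI-generated approval workflow,
# plus a validator that catches transitions to undefined steps.
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    assignee_role: str                      # e.g. "manager", "finance"
    next_steps: list = field(default_factory=list)

def validate(steps: dict) -> list:
    """Return a list of problems: transitions pointing at missing steps."""
    errors = []
    for step in steps.values():
        for nxt in step.next_steps:
            if nxt not in steps:
                errors.append(f"{step.name} -> missing step '{nxt}'")
    return errors

workflow = {
    "submit": Step("submit", "employee", ["manager_review"]),
    "manager_review": Step("manager_review", "manager", ["finance_approval"]),
    "finance_approval": Step("finance_approval", "finance", []),
}
print(validate(workflow))  # [] — all transitions resolve
```

Automated checks like this are one reason AI-generated workflows could be trusted to shorten deployment cycles: hallucinated or dangling steps are caught mechanically rather than in manual review.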
4. ‘Obedient, yielding and happy to follow’: the troubling rise of AI girlfriends
This article looks at the rise of AI girlfriends – a market sector already valued at 2.8 billion USD in 2024.
- Users of these platforms can choose or customize an AI girlfriend based on physical features (hair color, eye color, breast size, …), story background (where she lives, occupation, ...) and character traits (e.g., “innocent: optimistic, naive, and sees world with wonder”).
- An executive in the industry asks: “Do you prefer your porn with a lot of abuse and human trafficking, or would you rather talk to an AI? … You’ll never have a human trafficked AI girl. You’ll never have a girl who is forced or coerced into a sex scene that she’s so humiliated by, that she ends up killing herself. AI doesn’t get humiliated, it’s not going to kill itself.”
- The largest customer base for AI girlfriends is the male 18 to 24 age group – a generation that has grown up playing video games and creating avatars.
- An executive with the dating site Ashley Madison criticized the AI girlfriend concept for allowing people to “build your own fantasy rather than having a real connection with a woman”. An executive with an AI girlfriend site counters that the concept is “a good place to let younger people practice their social skills”.
5. Deloitte goes all in on AI — despite having to issue a hefty refund for use of AI
The consulting company Deloitte has signed a deal with AI company Anthropic to use the Claude chatbot in its consulting services. It is believed to be the biggest enterprise contract that Anthropic has signed.
- The chatbot will be used for the creation of different expert “personas” across departments like compliance, software development and accounting.
- The announcement was made on the same day that Deloitte was forced to retract a consulting report prepared for the Australian government because it contained multiple AI hallucinations, such as references to non-existent academic reports. The government had commissioned the report from Deloitte for 439,000 AUD.
- The article refers to other cases of high-profile AI hallucinations. For instance, the Chicago Sun-Times newspaper published hallucinated book titles in its annual summer reading list.
6. Can we repair the internet?
This MIT Technology Review article compares three books that seek to address problems with today’s Web, such as exploitative apps, addictive algorithms and misinformation.
- The Age of Extraction: How Tech Platforms Conquered the Economy and Threaten Our Future Prosperity, by Tim Wu, professor at Columbia University, argues that Big Tech companies have become too powerful. They achieved this by creating seamless ecosystems of services that make it convenient for users to adopt a single company’s offerings. Wu argues that anti-monopoly legislation must be used to break up the companies, much as AT&T’s dominance over the telecommunications industry was broken in 1982.
- How to Save the Internet: The Threat to Global Connection in the Age of AI and Political Conflict, by Nick Clegg (former UK Deputy Prime Minister and later Meta’s president of global affairs), argues that social media companies must be allowed to make changes to the Web themselves, and that regulation and anti-monopoly actions are counter-productive. He argues that breaking companies up would hinder innovation and that it “completely ignores the benefits users gain from large network effects”.
- This Is for Everyone: The Unfinished Story of the World Wide Web, by Tim Berners-Lee, inventor of the Web, argues that architectural change is needed for the Web to improve. He advocates his Solid (Social Linked Data) architecture, where people store their data in protected Pods and maintain control over sharing it with third-party services. Berners-Lee is against regulation, except in relation to teenagers and their use of social media.
- The article notes that the political climate in the US is currently against anti-monopoly efforts. President Trump has threatened to put tariffs on countries that introduce legislation controlling social media (owned by US companies). Social media companies are emboldened by Trump’s stance. For instance, Meta removed fact-checkers and eased content moderation rules after Trump’s election.
- Antitrust efforts have also faltered in the courts. One example is the anti-monopoly case brought against Google by the US Department of Justice, in which a judge ruled that Google did not have to give up control of the Chrome browser.
7. Amazon Web Services outage shows internet users ‘at mercy’ of too few providers, experts say
Amazon Web Services (AWS) encountered technical glitches this week that led to service outages across the world.
- Services relying on AWS, including Snapchat, Roblox, Signal, Duolingo, Slack, Pokémon and the Ring doorbell service, experienced outages. The service Downdetector estimated a total of 8.1 million service outage reports worldwide.
- The issue appears to have originated in an Amazon data center in Virginia, where software checking for server overloading failed to function correctly. The company has ruled out a cyberattack as the cause of the outage.
- The incident resembles the “largest outage in history”, which hit airports and hospitals last year when a failed CrowdStrike software update caused Microsoft Windows machines to crash.
- The incident has led to renewed calls for less centralized control of core Internet services. Corinne Cath-Speth of the human rights organization Article 19 said: “We urgently need diversification in cloud computing. The infrastructure underpinning democratic discourse, independent journalism and secure communications cannot be dependent on a handful of companies.”
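The failure mode described above – a monitoring check that itself breaks – is a classic reliability pattern. The sketch below is a generic illustration of an overload health check with a fail-safe default; it has nothing to do with AWS's actual internal monitoring, and all names and thresholds are invented.

```python
# Generic illustration of a fail-safe overload check (hypothetical).
def is_overloaded(load: float, capacity: float, threshold: float = 0.8) -> bool:
    """Flag a server as overloaded above a utilisation threshold."""
    return load / capacity > threshold

def check(load: float, capacity: float) -> bool:
    try:
        return is_overloaded(load, capacity)
    except ZeroDivisionError:
        # Fail safe: if the check itself breaks, report overloaded so
        # traffic is shed rather than piled onto a possibly failing server.
        return True

print(check(90, 100))  # True  — genuinely overloaded
print(check(50, 100))  # False — healthy
print(check(50, 0))    # True  — the check failed, so fail safe
```

The design choice matters: a monitor that fails "open" (reporting everything healthy) can silently route traffic onto failing servers, while failing "closed" degrades service but contains the blast radius.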
8. New User Trends on Wikipedia
This blog post by Marshall Miller of the Wikimedia Foundation reports on falling human traffic to Wikipedia pages, due to the effects of AI chatbots and an emerging trend among younger generations to use social platforms like YouTube, TikTok, Roblox, and Instagram for information.
- Wikipedia estimates an 8% fall in human visits over the past 12 months.
- Virtually all large language models are trained in part on Wikipedia content. Bots from AI companies are regular visitors, though they often attempt to disguise themselves as human visitors. Wikipedia’s popularity among AI companies is a testament to its success as a trustworthy source of quality information.
- Changes in search engines, notably the integration of AI, are also reducing referred visits to Wikipedia.
- The author worries that reduced visits to Wikipedia could lead to fewer volunteers who provide and enrich content.
9. The EU AI Act Newsletter #88: Resources to Support Implementation
The European Commission has launched an AI Act Service Desk and Single Information Platform to provide resources and guidance relating to the implementation of the AI Act.
- Resources will include compliance checkers, FAQs, tutorials and contact forms for feedback.
- Another planned initiative for Q3 of 2026 will be the publication of guidelines that explain how the AI Act interacts with other EU legislation, notably the Medical Devices Regulation, General Data Protection Regulation, Digital Markets Act, Digital Services Act, as well as copyright legislation.
- ASML’s CFO has criticized the EU AI Act saying that it hampers AI development in Europe, and is pushing AI talent to leave for the US. ASML recently became a 1.3 billion EUR investor in the French AI firm Mistral.
- The European Commission is not planning to publish common mandatory technical requirements under the AI Act. Standards for high-risk AI systems are expected in August 2026 – which is also the deadline for AI companies to comply with the Act.
10. Dutch watchdog warns voters against using AI chatbots ahead of election
National parliamentary elections are taking place in the Netherlands this week, and the Dutch data regulator is warning people not to rely on AI chatbots, saying they give unreliable advice that pushes voters towards the extremes of the political spectrum.
- The agency tested four major chatbots – without naming them – and said that in 56% of cases the chatbots told voters to choose either the far-right Freedom Party or the Labour-Green Left coalition.
- The agency said that "Chatbots may seem like clever tools, but as a voting aid, they consistently fail".
- The agency also reports that a “growing number” of people are using chatbots to help decide whom to vote for, but did not specify a figure.