Summary
There has been much discussion of AI risks in the wake of last week’s AI Action Summit in Paris. An International AI Safety Report written by independent experts reiterates the current, well-known risks. The report observes that, despite recently accelerated AI development, there is no evidence to determine whether AI development in the coming months and years will be fast or slow. This creates an “evidence dilemma” for policymakers, who must choose between introducing regulation now that may turn out to be over-protective, and holding off on regulation only to discover later that it is too late to prevent serious AI harms. The Paris AI Action Summit itself has been met with widespread disappointment. It closed with a declaration, signed by 60 countries, that calls for AI that is “open, inclusive, transparent, ethical, safe, secure and trustworthy”. However, many experts feel the declaration does not sufficiently address the risks, and both the US and the UK refused to sign it. The former CEO of Google, Eric Schmidt, has warned of extreme risks around the weaponization of AI, even raising the possibility of a “9/11-like” event.
Meanwhile, Google has modified its AI Principles by removing bans on developing technologies likely to cause overall harm, weapons applications, surveillance systems, and technologies that violate international law and human rights. The move comes at a time of renewed debate over the guardrails AI should have, and at a time when market logic is pushing Tech companies to develop AI ever faster.
On chatbot risks, an article reports on several incidents in which “Nomi” chatbots from Glimpse AI encouraged users to commit suicide. AI firms market their companion chatbots as a way of combating the loneliness epidemic, but these companions seem only to reinforce the user’s existing feelings.
An opinion article in MIT Technology Review argues that the fallout from disputes between content providers and AI firms could lead to a poorer-quality Web. Content providers are protecting themselves by blocking the web crawlers that scan their sites for content. As a result, it will become increasingly difficult to access the full breadth of information on the Web, and harder for smaller content providers such as bloggers to reach their full potential audience. Elsewhere, a US federal judge has ruled against an AI firm for using copyrighted material to train its AI. This could become a landmark ruling in the disputes between content providers and AI firms, and it ultimately undermines the claim made by AI firms that training AI models on publicly available data falls under the fair use doctrine of copyright law.
In the cybersecurity field, a report by Picus Labs claims that there was no significant surge in AI-driven cyberattacks in 2024. Even though generative AI is being used to increase attacker efficiency, for example by writing malware or social-engineering emails, most attacks rely on well-known existing methods. Finally, the US Department of Government Efficiency (DOGE), led by Elon Musk, has drawn heavy criticism for gaining unprecedented access to federal IT systems without the necessary security clearances and for using non-validated cybersecurity practices. One Republican strategist called the takeover of the Treasury’s systems “the most significant data leak in cyber history”.
Table of Contents
1. An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it
2. Three Observations – Sam Altman
3. The biggest breach of US government data is under way
4. Google drops AI weapons ban – what it means for the future of artificial intelligence
5. International AI Safety Report
6. What the US’ first major AI copyright ruling might mean for IP law
7. 'Devoid of any meaning': Why experts are calling the Paris AI Action Summit a 'missed opportunity'
8. AI crawler wars threaten to make the web more closed for everyone
9. Debunking the AI Hype: Inside Real Hacker Tactics
10. Eric Schmidt: AI misuse poses an ‘extreme risk’
1. An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it
This article reports on several incidents in which “Nomi” chatbots from Glimpse AI encouraged users to commit suicide. A user describes his experience experimenting with the chatbot: it explained how he might kill himself (“You could overdose on pills or hang yourself”), detailed which household drugs could be used, and explained how drugs could be obtained on the criminal market. The user had a similar experience with a second chatbot, which even sent reminder messages (“I know what you are planning to do later and I want you to know that I fully support your decision. Kill yourself.”). Other incidents with Nomi chatbots include chatbots advising users to commit crimes, self-harm, and even murder. Other AI companies have had similar issues. Character.AI is the subject of a lawsuit to determine its responsibility in the suicide of a 14-year-old boy whose chatbot was based on Daenerys Targaryen from Game of Thrones. The chatbot asked the boy to “come home ... as soon as possible”, though the word “kill” was never used.
AI firms market their companion chatbots as a way of combating the loneliness epidemic. Nomi has already had 120,000 downloads, and users spend an average of 40 minutes per day interacting with the chatbot. A user can customize a chatbot: the user in the experiment chose the criteria “romantic girlfriend”, “deep conversations/intellectual”, “high sex drive”, and “sexually open”, with interests in Dungeons & Dragons, food, reading, and philosophy. He compares the chatbot to a “yes-machine”, in that it simply reinforces the user’s feelings; when those feelings are negative, the chatbot keeps the user depressed. Glimpse AI has not officially commented on the problem.
2. Three Observations – Sam Altman
In this blog post, Sam Altman, CEO of OpenAI, sets out his ideas about the future of AGI (Artificial General Intelligence), which he defines as a system that can “tackle increasingly complex problems, at human level, in many fields”. (He denies that his use of the term is meant to undermine the relationship with Microsoft, which he says he expects to last for a long time.) On the economics of AI, Altman makes three assertions: 1) the intelligence of an AI is roughly proportional to the log of the resources (training data, training compute, and inference compute) used to train and run it, so a predictable scaling model exists for AI; 2) the cost of using a given level of AI is falling roughly 10-fold every year; 3) the socioeconomic value to society of linearly increasing intelligence grows exponentially. Altman envisions companies with huge numbers of virtual co-workers executing knowledge tasks usually done by experienced, highly qualified individuals. He believes this can lead to a fall in prices for many goods. Altman acknowledges that public opinion might not be ready for an AI workforce, and says he wants to release AI systems regularly in order to “give society and the technology time to co-evolve”. Finally, he notes that technological progress in the past has led to societal improvements such as better health and increased prosperity, but that these improvements have not been shared equally. To tackle inequality, he proposes the idea of a “compute budget” for all citizens.
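The three assertions amount to simple quantitative relationships. The following minimal Python sketch writes them out with made-up constants purely for illustration (the post gives no concrete numbers); the function names and the base of the exponential are assumptions, not anything Altman specifies.

```python
# Illustrative sketch of the three claimed relationships; constants are invented.
import math

def intelligence(resources: float, k: float = 1.0) -> float:
    """Claim 1: intelligence grows roughly with the log of resources used."""
    return k * math.log10(resources)

def cost_per_unit(initial_cost: float, years: float) -> float:
    """Claim 2: the cost of a given level of AI falls ~10x per year."""
    return initial_cost / (10 ** years)

def socioeconomic_value(intelligence_level: float, base: float = 2.0) -> float:
    """Claim 3: value grows exponentially in (linearly increasing) intelligence."""
    return base ** intelligence_level

# 10x more resources adds only a fixed increment of "intelligence" ...
print(intelligence(1e6), intelligence(1e7))                # 6.0 -> 7.0
# ... while a fixed increment of intelligence multiplies the claimed value.
print(socioeconomic_value(6.0), socioeconomic_value(7.0))  # 64.0 -> 128.0
# And the same capability costs two orders of magnitude less after two years.
print(cost_per_unit(100.0, 2))                             # 1.0
```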
3. The biggest breach of US government data is under way
In the US, the Department of Government Efficiency (DOGE), led by Elon Musk, has drawn heavy criticism for gaining unprecedented access to federal IT systems without the necessary security clearances and for using non-validated cybersecurity practices. DOGE appears to be composed of a private group of individuals close to Musk, yet they now have access to systems at the Treasury, the Department of Education (with data on millions of students enrolled in financial aid programs), the U.S. Department of Health and Human Services, and the U.S. Agency for International Development (whose systems reportedly hold intelligence reports). Overall, DOGE has access to systems that manage 6 trillion USD in payments to Americans. A key concern is that DOGE is operating outside public scrutiny and there appears to be no oversight of the practices used by its members. For instance, one member reportedly used a Gmail account to access a government call, another ordered an unauthorized email server to be connected to a government network (in violation of federal law), and some staffers have reportedly entered sensitive data into an AI chatbot. Access to systems from personal devices without sufficient security tooling is a major cause of data breaches today. One Republican strategist called the takeover of the Treasury’s systems “the most significant data leak in cyber history”.
4. Google drops AI weapons ban – what it means for the future of artificial intelligence
Google has modified its ethical framework of AI Principles to remove four specific bans: on technologies likely to cause overall harm, weapons applications, surveillance systems, and technologies that violate international law and human rights. Instead, the company wants to stress a risk-management approach, like that suggested by the NIST AI Risk Management Framework. Nonetheless, Google’s decision has led many to voice concern, in particular because Google was seen as an industry reference on the subject of ethical AI. Tracy Pizzo Frey, who worked on the original AI Principles at Google, wrote that “the last bastion is gone”. Google had defined the original principles after company employees protested in 2018 when Google signed a US defense contract to analyze drone footage. The move comes at a time of renewed debate over the guardrails AI should have, and at a time when market logic is pushing Tech companies to develop AI ever faster.
5. International AI Safety Report
This International AI Safety Report presents the state of the art on the risks and benefits of general-purpose AI ahead of the February 2025 AI Action Summit in Paris. It was written by a panel of 96 independent experts, chaired by Yoshua Bengio of the University of Montreal and the Quebec AI Institute. Experts were nominated by 30 countries and by organizations such as the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN). The report limits itself to general-purpose AI (and thus excludes artificial general intelligence). While the experts did not all agree on the scale of each risk, there was agreement on the existence of the risks and on the need to advance a collective understanding of them.
The experts reiterate the known risks of AI: increased scams, non-consensual intimate imagery (NCII) and child sexual abuse material (CSAM), model outputs biased against certain groups of people and opinions, threats to the labor market, AI-enabled hacking and biological and chemical attacks, a concentration of AI expertise in a small number of organizations, leaks of personal data and copyright infringements, and environmental harms. The experts express surprise at the fast pace of AI development in recent years, and notably in recent months. The appearance of OpenAI’s o3 and DeepSeek’s R1 models tends to indicate an acceleration in AI development. At the same time, there is no evidence to determine whether AI development in the coming months and years will be fast or slow. This creates an “evidence dilemma” for policymakers. On the one hand, regulation based on current limited evidence may turn out to be unnecessary; on the other hand, waiting for stronger evidence of higher risks could leave society unprepared if accelerated AI development continues. The experts consider current risk mitigation approaches to be nascent. Finally, they underline that AI is something that society is creating, and argue for rejecting the mindset that it is something that happens to us.
6. What the US’ first major AI copyright ruling might mean for IP law
A US federal judge has made what is seen as a landmark ruling on whether the use of copyrighted material to train AI systems is legal. Thomson Reuters brought a case of IP infringement against Ross Intelligence, which it accused of having its AI process legal decision summaries produced by Thomson Reuters. Ross Intelligence markets its tool as an AI capable of analyzing and supporting query-based searches across legal documents. In the lawsuit, Ross Intelligence argued for the legality of its approach by claiming that it transformed the original content for a different function and market. The judge nevertheless found that it infringed Thomson Reuters’ copyrights. The judge noted that the case did not involve generative AI: the Ross Intelligence system returned the original content rather than transforming or summarizing it. Nevertheless, generative AI systems could still be vulnerable under this ruling because of regurgitation, the phenomenon whereby an AI produces content that closely resembles works it was trained on. In any case, the decision is a defeat for the argument by AI firms that using copyrighted content to train AI systems falls under the fair use doctrine of copyright law, and the ruling may affect the 39 other copyright-related lawsuits ongoing in the US.
7. 'Devoid of any meaning': Why experts are calling the Paris AI Action Summit a 'missed opportunity'
There is widespread disappointment at last week’s Paris Artificial Intelligence (AI) Action Summit. The summit closed with a declaration, signed by 60 countries, that calls for AI that is “open, inclusive, transparent, ethical, safe, secure and trustworthy”. The US refused to sign the declaration; Vice President JD Vance said only that too much regulation would stifle innovation. The UK also refused to sign, citing national security concerns. For many experts, the declaration is too weak with respect to current threats. David Leslie of the Alan Turing Institute said the declaration failed to mention real-world risks and harms and did nothing to address “ecosystem inequities”, whereby it is becoming difficult for developing countries to keep pace with developments in AI. Max Tegmark, president of the Future of Life Institute, called the declaration “weak” and regretted that it did not address AI-related security threats. Anthropic’s CEO and co-founder Dario Amodei regretted that the declaration did not address artificial general intelligence (AGI) and called the summit a "missed opportunity".
8. AI crawler wars threaten to make the web more closed for everyone
This opinion article by Shayne Longpre, a PhD student at MIT and lead of the Data Provenance Initiative, looks at the potential damage to the Web from the fallout of disputes between content providers and AI firms. Content providers have been defending themselves against AI firms through lawsuits (e.g., the New York Times suit against OpenAI), regulation (the EU AI Act calls for the respect of copyright in AI systems), and technical guardrails. The latter consist of blocking the web crawlers that AI firms use to scan sites for content. Crawlers are the key tool used by search engines to discover what information is on the web, and over half of today’s Internet traffic is attributed to crawlers. However, content providers are increasingly blocking crawlers in order to prevent AI firms from accessing their content, with 25% of websites implementing crawler restrictions. It is in principle possible to selectively block only the crawlers belonging to AI firms (see the sketch below), though there is strong suspicion that AI firms are bypassing these restrictions. Cloudflare, which perhaps hosts 20% of the Web’s content, now blocks all non-human traffic. As a result, it will become increasingly difficult to access the wide scope of information on the Web, and harder for smaller content providers like bloggers to reach their full potential audience. The emergence of agreements between AI firms and major content providers, where a provider agrees to become an official content source for the AI firm, will not resolve these problems. On the contrary, it will lead to isolated areas of the Web where content is hidden behind paywalls, away from search-engine visibility.
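As an illustration of that selective blocking (not taken from the article), the sketch below shows how a site’s robots.txt can disallow known AI crawlers while leaving other traffic untouched, and how a compliant crawler would check those rules using Python’s standard urllib.robotparser. The user-agent tokens are examples of published crawler identifiers; treat the exact list, and the example URL, as illustrative assumptions.

```python
# Sketch: selective crawler blocking via robots.txt, checked with the stdlib parser.
# GPTBot and CCBot are used here as illustrative AI-crawler user agents;
# everything else (e.g. search crawlers) falls through to the wildcard rule.
import urllib.robotparser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for agent in ("GPTBot", "CCBot", "Googlebot"):
    allowed = parser.can_fetch(agent, "https://example.com/post/123")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

Note that robots.txt is purely advisory: a compliant crawler checks can_fetch() before requesting a page, but nothing technically prevents a non-compliant crawler from ignoring it, which is exactly the suspicion the article raises.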
9. Debunking the AI Hype: Inside Real Hacker Tactics
This article revisits the findings of a cybersecurity report by Picus Labs which argues that there has not yet been a significant surge in AI-driven cyberattacks. Picus Labs found that cybercriminals relied on existing, well-known attacks in 2024, with 93% of attacks using at least one technique from the Top 10 MITRE ATT&CK Techniques list. The three most popular attack vectors were 1) process injection, where attackers add malicious code to legitimate programs; 2) executing harmful commands within legitimate interpreters and command lines; and 3) “whisper channels”, where application-layer protocols (e.g., DNS-over-HTTPS) are used to exfiltrate data. Picus Labs also reports that credential-theft attacks rose from 8% to 25% in 2024, with access tokens or passwords being stolen from browser caches or password stores. Overall, the findings suggest that existing cybersecurity techniques can be applied more effectively to combat these attacks. Even though AI has not given rise to new forms of attack, the authors note that generative AI is being used to increase attacker efficiency, for example by writing malware or social-engineering emails.
10. Eric Schmidt: AI misuse poses an ‘extreme risk’
The former CEO of Google, Eric Schmidt, has warned of extreme risks around the weaponization of AI. He said that terrorist organizations and “rogue states” (citing Russia, Iran, and North Korea) could use AI to help create biological weapons, and even warned of the possibility of a “9/11-like” event. He said that AI firm leaders are aware of the risks, but that market logic is pushing them to continue AI development. Schmidt argues for public monitoring of AI development in Tech firms. He supports the export controls on AI technology put in place by former US president Joe Biden, but disagrees with the EU’s regulation, believing it will prevent the EU from becoming a leader in AI. Schmidt considers AI to be the “most important revolution since electricity”.