Summary
Pope Leo revealed that he chose his papal name because of the current technological revolution. The last Pope Leo, Leo XIII (pope from 1878 to 1903), was a strong advocate of workers’ rights. The new Pope Leo XIV says the church must now “respond to another industrial revolution and to innovations in the field of artificial intelligence that pose challenges to human dignity, justice and labor”. In a podcast interview, Geoffrey Hinton says that he is increasingly concerned about the dangers of AI. The 2024 Nobel Prize laureate concedes that the pace of AI development cannot be stopped, but argues that research into AI safety can be stepped up. He questions whether Big Tech leaders have the moral compass to oversee this development. Author Adam Becker writes that Big Tech leaders are cultivating an “ideology of technological salvation” in which technology goals such as super-intelligence are framed as existential imperatives for humanity. The danger is that this vision leads to actions on their part that remain unchecked and unregulated.
New York State lawmakers have passed the RAISE AI Safety Act, which now awaits the state governor’s signature or veto. The bill would impose safety practices on AI companies producing large language models. It resembles California’s failed SB 1047, except that AI model providers do not need to include a “kill switch” and will not be held accountable for models that are fine-tuned by other organizations. The bill has drawn criticism from tech companies, which are under pressure to develop models amid strong international competition.
A UK study suggests that increased use of AI leads to reduced critical thinking, notably in young people. The researchers found that higher educational levels help mitigate the decline in critical thinking, which suggests a role for educators in combating it. A danger is that AI will produce a generation of professionals who are extremely efficient but who lack the deep, independent critical thinking skills that many professions require.
A VentureBeat article looks at the ramifications of the release of DeepSeek’s R1 model family, whose models match the power of OpenAI’s but operate at 5% to 10% of the cost. One impact is an increased focus on the search for energy efficiency gains through new model architectures and software engineering techniques. Meanwhile, OpenAI CEO Sam Altman has criticized Meta for trying to poach talent from OpenAI with compensation packages of up to 100 million USD. OpenAI is reportedly developing its own social media platform whose algorithm selects content that more closely resembles what users wish to see.
Researchers in the Netherlands and Iran have developed an open-source tool that scans GitHub repositories to identify vulnerabilities. Software developers often reuse libraries without thorough review, thereby perpetuating vulnerabilities. A further risk is that software on portals like GitHub is used to train language models, which then reproduce the same vulnerabilities.
Finally, two Italian journalists have had their phones infected with Graphite, spyware developed by the Israeli firm Paragon Solutions. The journalists have been critical of the right-wing Meloni government in Italy, notably on the subject of immigration. The Guardian has issued several warnings in recent months about growing threats to the freedom of the press.
Table of Contents
1. Tech billionaires are making a risky bet with humanity’s future
2. New York passes a bill to prevent AI-fueled disasters
3. Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm
4. European journalists targeted with Paragon Solutions spyware, say researchers
5. Eradicating the Unseen: Detecting, Exploiting, and Remediating a Path Traversal Vulnerability across GitHub
6. Geoffrey Hinton: I Tried to Warn Them, But We’ve Already Lost Control!
7. Sam Altman says Meta tried and failed to poach OpenAI’s talent with $100M offers
8. Pope Leo Takes On AI
9. AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking
1. Tech billionaires are making a risky bet with humanity’s future
This article contains an interview with Adam Becker, author of the book “More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity”, in which he expounds the idea that Big Tech leaders like Altman, Bezos, and Musk are cultivating an “ideology of technological salvation”. This ideology frames technology goals as existential imperatives for humanity: super-intelligence to solve humanity’s problems, the colonization of Mars, and a quasi-religious obsession with transcending physical and biological limits. The vision is shared by many venture capitalists and industry leaders in Silicon Valley, and it has fostered an excess of public faith in what Big Tech leaders promise. The danger is that this vision leads to actions on their part that remain unchecked and unregulated because of “the promise of an amazing future, full of unimaginable wonders – so long as we don’t get in the way of technological progress”.
2. New York passes a bill to prevent AI-fueled disasters
New York State lawmakers have passed the RAISE AI Safety Act, and the bill now awaits the state governor’s signature or veto. Its goal is to enforce safety practices on AI companies producing large language models whose use could lead to disaster scenarios, defined as causing death or injury to more than 100 people or damage in excess of 1 billion USD. In what would be the first law in the US mandating transparency in AI development, tech companies would be required to publish safety and security reports for their models and to report safety incidents involving model behavior or theft of a model by bad actors. Failure to comply would be penalized with a fine of up to 30 million USD. The bill resembles California’s SB 1047, which passed that state’s legislature but was vetoed by its governor. In contrast to that bill, the New York bill does not require AI model providers to include a “kill switch”, and providers will not be held accountable for models that are fine-tuned by other organizations. The bill has drawn criticism from tech companies, which are under pressure to develop models given the intense international competition on AI development, but for lawmakers, “the window to put in place guardrails is rapidly shrinking given how fast this technology is evolving”.
3. Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm
This VentureBeat article looks at the ramifications of the release of DeepSeek’s R1 model some months on. DeepSeek developed R1 without high-grade GPUs because of a US export embargo. Unlike US firms, which until the end of 2024 saw more powerful hardware and larger training data sets as the key to more powerful models, the Chinese company therefore focused on optimizing training and model operation. DeepSeek’s models have the same power as OpenAI’s but operate at 5% to 10% of the cost: OpenAI reportedly spent 500 million USD training its “Orion” model, while DeepSeek trained a model with superior benchmark results for 5.6 million USD. One area of optimization was the hardware layer: although DeepSeek lacked high-grade GPUs, optimizations at the networking and memory layers increased parallelization and improved operating efficiency. Another was the use of synthetic data (artificially created data with the same probability distributions as real-world data) and of existing model output, that is, distillation, the teacher-student approach to training models.
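As a rough illustration of the distillation idea, and not DeepSeek’s specific training recipe, the standard teacher-student objective trains a smaller student model to match a larger teacher’s softened output distribution while still fitting the ground-truth labels:

```latex
\mathcal{L}_{\text{student}}
  = \alpha \, T^{2}\,
    \mathrm{KL}\!\left( p^{(T)}_{\text{teacher}} \,\middle\|\, p^{(T)}_{\text{student}} \right)
  + (1 - \alpha)\,
    \mathrm{CE}\!\left( y,\; p_{\text{student}} \right)
```

Here p^(T) denotes softmax outputs at temperature T, y is the ground-truth label, and the weight α balances imitating the teacher against fitting the labels; the T² factor keeps the gradient scale of the softened term comparable to that of the hard-label term.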
The impact of R1 is an increased focus on the search for energy efficiency gains through model architecture and software engineering techniques. Microsoft is allocating a portion of its 80 billion USD 2025 AI budget to efficiency research. Meta’s Llama 4 model family uses the mixture-of-experts architecture, and Meta now includes DeepSeek models in its benchmark comparisons. In reaction to DeepSeek, OpenAI launched a funding round of 40 billion USD, valuing the company at 300 billion USD. With operating costs estimated at up to 8 billion USD annually, the arrival of a competitor offering an open-source model for free made CEO Sam Altman wonder whether OpenAI’s proprietary approach was “on the wrong side of history”.
4. European journalists targeted with Paragon Solutions spyware, say researchers
This article in the Guardian reports that two Italian journalists had their phones infected with Graphite, spyware developed by the Israeli firm Paragon Solutions. Researchers at Citizen Lab said they found “digital fingerprints” of the spyware on the phones. Paragon says it sells the spyware only to democratic governments, and only for counter-terrorism, counter-espionage and the fight against organized crime. The two journalists targeted have been critical of the right-wing Meloni government in Italy, notably on the subject of immigration. The Guardian has issued several warnings in recent months about increasing attacks on the freedom of the press. For its part, Citizen Lab wrote on its website: “The lack of accountability available to these spyware targets highlights the extent to which journalists in Europe continue to be subjected to this highly invasive digital threat, and underlines the dangers of spyware proliferation and abuse.”
5. Eradicating the Unseen: Detecting, Exploiting, and Remediating a Path Traversal Vulnerability across GitHub
One of the known risks associated with free and open-source software is that bugs and vulnerabilities in popular software propagate between codebases. Software developers often reuse libraries without thorough review, just as there have been many cases of copying and pasting incorrect and vulnerable code from sites like StackOverflow. A further risk is that software on portals like GitHub is used to train language models, which then reproduce the same vulnerabilities. Correcting all of the software vulnerabilities on GitHub cannot be done manually. However, researchers from Leiden University in the Netherlands and Mashhad Technical University in Iran have developed an open-source tool that scans GitHub repositories to identify vulnerabilities. The tool currently examines Node.js projects, looking for the path traversal vulnerability (CWE-22), which, when exploited on a web server, allows an attacker to reach directories outside the intended root and steal files that were never meant to be exposed. Path traversal was among the top ten exploits used by cybercriminals in 2024. After scanning a GitHub repository, the tool executes the project in a local Node.js environment to test whether the flaw can actually be exploited. At the time of publishing, the researchers had identified 1756 exploitable instances of the vulnerability; 63 had already been fixed and others are being fixed. Worryingly, the researchers showed that current coding chatbots generate code containing the vulnerability.
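To make the vulnerability class concrete, here is a minimal sketch of how CWE-22 typically appears in a Node.js file handler and how it is commonly mitigated. It illustrates the general pattern only; it is not code from the paper or from the researchers’ scanning tool.

```typescript
// Minimal sketch of the CWE-22 (path traversal) pattern in a Node.js static
// file handler. Illustrative only: not code from the paper or the scanning tool.
import * as http from "http";
import * as path from "path";
import * as fs from "fs";

const ROOT = path.resolve("./public");

const server = http.createServer((req, res) => {
  let urlPath: string;
  try {
    urlPath = decodeURIComponent(req.url ?? "/");
  } catch {
    res.writeHead(400);
    return res.end("Bad request");
  }

  // Vulnerable pattern (do not use): joining the raw URL onto the web root lets
  // a request such as GET /../../etc/passwd escape ROOT and read arbitrary files.
  //   const file = path.join(ROOT, urlPath);

  // Hardened variant: resolve the requested path, then verify it stays inside ROOT.
  const file = path.resolve(ROOT, "." + urlPath);
  if (file !== ROOT && !file.startsWith(ROOT + path.sep)) {
    res.writeHead(403);
    return res.end("Forbidden");
  }

  fs.readFile(file, (err, data) => {
    if (err) {
      res.writeHead(404);
      return res.end("Not found");
    }
    res.writeHead(200);
    res.end(data);
  });
});

server.listen(8080);
```

The check on the resolved absolute path is the essential step: simple string filtering of “../” is not enough, because encoded or repeated traversal sequences can survive naive sanitization.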
6. Geoffrey Hinton: I Tried to Warn Them, But We’ve Already Lost Control!
In this podcast interview, Geoffrey Hinton says that he is increasingly concerned about the dangers of AI. Hinton won the 2024 Nobel Prize in Physics for his work on AI and is spending his retirement campaigning for more work on AI safety mechanisms and on regulation. In the podcast, he classifies AI risks as external (risks linked to people’s misuse of AI, such as cyberattacks) and inherent (where an AI figures out that it can accomplish its tasks more efficiently without humans and so decides to eliminate them). One practical example of how an AI could destroy humans is a cyberattack on nuclear defense systems that makes them believe missiles are incoming, thereby pushing the country to fire its own weapons in retaliation. Hinton admits that the pace of AI development cannot be stopped, but argues that research into AI safety can be stepped up. He questions whether current Big Tech leaders have the moral compass to oversee this development. Further, he dismisses the prevailing conception that machines cannot feel emotions. For Hinton, feelings have a behavioral and a physiological aspect. When a being encounters a predator, for instance, the emotion of fear tells it to run (the behavioral aspect) and provokes physiological effects such as a heightened heart rate. AI might not have the physiological aspect of emotions, but it can develop the behavioral aspect.
7. Sam Altman says Meta tried and failed to poach OpenAI’s talent with $100M offers
This TechCrunch article reports on a podcast appearance by OpenAI CEO Sam Altman in which he criticized Meta for trying to poach talent from OpenAI (and from Google DeepMind) with compensation packages of up to 100 million USD. Meta has reportedly had mixed success with this hiring. Underlying the move is Altman’s assumption that Meta is trailing OpenAI, Google and Anthropic in the quest for “super-intelligence”, despite Meta’s recent major investment in Scale AI. Altman says that Meta is not “a company that’s great at innovation” despite its achievements, and that the mission of achieving super-intelligence requires a company culture that cannot be created simply by dishing out high salaries. OpenAI is reportedly developing its own social media platform whose algorithm selects content that more closely resembles what users wish to see.
8. Pope Leo Takes On AI
Pope Leo has revealed that he chose his papal name because of the current technological revolution. The last Pope Leo, Leo XIII (pope from 1878 to 1903), was a strong advocate of workers’ rights in the later stages of the industrial revolution, and his push for labor laws earned him the nickname “Pope of the Workers”. The new Pope Leo XIV says that the church must now “respond to another industrial revolution and to innovations in the field of artificial intelligence that pose challenges to human dignity, justice and labor”. The Vatican is currently hosting a conference on AI, ethics and corporate governance with participants from Google, Meta, IBM, Anthropic, Cohere and Palantir.
The Vatican has been calling for AI regulation for some years now. Its 2020 Call for AI Ethics insisted on responsible AI development and on AI that does not violate human rights. IBM and Cisco signed the document; Google and OpenAI have not. The late Pope Francis issued several warnings about AI, highlighting the risk of a “technological dictatorship” in the absence of an international treaty on regulation. He said that algorithms “miss humanity because you cannot reduce the human being to... data” and that children with chatbots as guides risk growing up in a dehumanized world.
9. AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking
This UK study examines the impact of AI use on critical thinking skills. Critical thinking is the ability to analyze, evaluate, and synthesize information in order to make reasoned decisions. It is essential for academic and professional success, as well as for informed citizenship. The issue with AI is that it leads to increased cognitive offloading: the delegation of cognitive tasks to external tools, which reduces individual engagement in deep and reflective thinking. The article cites a previous study showing that search engines negatively affect memory retention and the inclination to process information deeply. A danger with such external tools is that they produce a generation of professionals who are extremely efficient but who lack the deep, independent critical thinking skills that many professions require.
In this study, the researchers surveyed 666 individuals classified into three age groups: 17–25 years (young), 26–45 years (middle-aged), and 46 years and older (older). They also conducted semi-structured interviews with 50 participants. The results correlate increased use of AI with reduced critical thinking, as measured by increased cognitive offloading, especially among younger participants and those who placed a high degree of trust in AI. The researchers found that higher educational levels helped mitigate the decline in critical thinking, which the authors argue points to a role for educators in combating it.