Quantum Physics Approach Shrinks and De-censors DeepSeek R1

Google, Anthropic and OpenAI All Release Models in November

Posted on November 29th, 2025

Summary

US President Trump announced the Genesis project – a “closed-loop AI experimentation platform” that connects 17 national laboratories, federal supercomputers, and government scientific databanks into a single “cooperative system for research”. Some journalists have pointed to the absence of cost estimates for the initiative, and asked whether AI companies will be able to use Genesis to underwrite their AI infrastructure costs.

OpenAI expects to post operating losses through 2028, but hopes to become profitable from 2030. The company expects a 9 billion USD loss this year, rising to 74 billion USD in 2028, as CEO Sam Altman insists on the need to invest in infrastructure. The company is searching for a plausible business model to support its revenue objective of 200 billion USD by 2030. Meanwhile, Cloudflare experienced a technical disruption last week that led to availability issues on 2.1 million platforms worldwide. Cisco says that the number of Internet service outages has remained consistent over the last few years, but warned of the increasing dependence on a small number of infrastructure companies. Elsewhere, Nvidia has reported revenues of 57 billion USD in Q3 of 2025, a 62% improvement on the same quarter of 2024. Chip sales for its data center business brought in 51.2 billion USD in revenue in Q3, compared to 5.8 billion USD for gaming.

Scientists at the Spanish firm Multiverse Computing have managed to create a version of the DeepSeek R1 model that is 55% smaller and that eliminates the model’s official Chinese censorship. The approach uses mathematical models from quantum physics in which high-dimensional grids represent and process large data sets. The technique for correcting the censorship could also be applicable to fixing model biases and hallucinations in general.

Anthropic has released Opus 4.5, a key competitor of OpenAI’s GPT 5.1 and Google’s Gemini 3, both of which were released in November. Google launched its Antigravity agentic coding platform, responding to growing demand for agents that can autonomously write, refactor and debug code. Antigravity seems designed to compete with Codex from OpenAI, Claude Code from Anthropic, and Cursor.

A Guardian article interviews people rating content for AI chatbots. A common sentiment is that they cannot possibly flag all factual errors or hidden racial slurs. Experts point out that people with cursory knowledge of scientific or politically sensitive subjects are rating content. One expert compared the need for AI ethics to the textile industry where people took time to realize that cheap clothes were created in sweatshops or with child labor. He says the public is not yet aware of the challenges of AI raters, the energy footprint and copyright infringement issues around AI.

Meanwhile, Google is helping India fight digital fraud by introducing on-device scam detection software on Pixel 9 phones. Digital fraud has increased in India with the increase in phone-based payments. Both Google and Apple are under pressure to control the apps distributed on their app stores. Finally, an MIT Technology Review article discusses the implications of over-reliance on AI companions. The intimate conversational content is a unique source of data for AI companies in desperate need of revenue to pay for their AI, and companion conversations provide training material and targeted advertising possibilities.

1. Why it feels like your favorite websites keep going down

The US Internet infrastructure company Cloudflare experienced a technical disruption last week that led to availability issues on 2.1 million online platforms across the world.

  • The platforms impacted included ChatGPT, Spotify and even Donald Trump’s Truth Social platform.
  • The outage was caused by an error in a configuration file that caused a log file to grow beyond available storage, provoking cascading server errors.
  • Cloudflare handles 81 million HTTP (Web server) requests each second.
  • Cisco says that the number of Internet service outages has remained consistent over the last few years. The key difference is the increased dependence of companies on a small number of infrastructure companies. Cisco expects more outages of the type experienced by Cloudflare last week, AWS last month, and the CrowdStrike incident last year.
  • Cisco has observed 12 major outages in 2025, compared to 23 in 2024, 13 in 2023 and 10 in 2022.
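The failure mode behind the Cloudflare outage, a generated file growing unchecked until dependent systems fall over, is one that a simple guard can surface early. The sketch below is a generic illustration and not Cloudflare's actual code; the size cap and function names are hypothetical:

```python
import os

MAX_FEATURE_FILE_BYTES = 5 * 1024 * 1024  # hypothetical hard cap (5 MiB)

def load_feature_file(path: str) -> bytes:
    """Refuse to load a generated file that has grown past a hard cap.

    A size guard like this turns a runaway-file bug into a single,
    visible error instead of letting it cascade into downstream crashes.
    """
    size = os.path.getsize(path)
    if size > MAX_FEATURE_FILE_BYTES:
        raise ValueError(
            f"{path} is {size} bytes, over the {MAX_FEATURE_FILE_BYTES}-byte cap; "
            "refusing to load (likely a generation bug upstream)"
        )
    with open(path, "rb") as f:
        return f.read()
```

The point of the sketch is the design choice: validating the size (and ideally the contents) of machine-generated configuration before consuming it limits the blast radius of an upstream bug.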

2. Quantum physicists have shrunk and “de-censored” DeepSeek R1

Scientists at the Spanish firm Multiverse Computing, which specializes in AI techniques inspired by quantum computing, have managed to create a version of the DeepSeek R1 model that is 55% smaller and that eliminates the model’s official Chinese censorship.

  • Multiverse uses mathematical models from quantum physics in which high-dimensional grids represent and process large data sets. This allows model sizes to be reduced, cutting the energy and financial costs of running models.
  • Regarding the reversal of Chinese censorship, the scientists were able to map out the model’s correlations and identify those linked to the censorship information. The new model, called DeepSeek R1 Slim, gives the same level of factual information as Western models regarding the events in Tiananmen Square in 1989.
  • The technique for correcting the censorship could also be applicable to fixing model biases and hallucinations in general.
  • Some experts say that efforts to reverse censorship in Chinese models are difficult because the information used to train the models in China is itself censored by the authorities. Content must align with the country’s laws and “socialist values”.
  • Multiverse’s approach to removing censorship and bias seems more effective than the existing distillation approach, in which a larger teacher model is used to train a smaller student model. Distilled models tend to perform worse on reasoning tasks.
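As a rough intuition for how factorizing weights shrinks a model, the toy sketch below compresses a single weight matrix with a truncated SVD. This is a simplified matrix analogue of the tensor-network idea described above, not Multiverse's actual technique:

```python
import numpy as np

def low_rank_compress(W: np.ndarray, rank: int):
    """Truncated SVD: keep only the top-`rank` singular components of W.

    The full matrix is replaced by two thin factors whose product
    approximates it, which is the simplest form of factorized storage.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # shape (m, rank), columns scaled by singular values
    B = Vt[:rank, :]             # shape (rank, n)
    return A, B

rng = np.random.default_rng(0)
# A 512x512 "layer" that is exactly rank 32 by construction.
W = rng.standard_normal((512, 32)) @ rng.standard_normal((32, 512))
A, B = low_rank_compress(W, rank=32)

original = W.size                # 262,144 parameters
compressed = A.size + B.size     # 32,768 parameters (87.5% smaller)
rel_error = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
```

Here the matrix is exactly low-rank, so the reduction is essentially lossless; real model layers are only approximately low-rank, so the chosen rank trades size against accuracy, and tensor networks generalize this factorization to higher-dimensional grids.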

3. Nvidia’s record $57B revenue and upbeat forecast quiets AI bubble talk

Nvidia has reported revenues of 57 billion USD in Q3 of 2025, which is 62% higher than Q3 of 2024. GAAP net income for the quarter was 32 billion USD.

  • Chip sales for Nvidia’s data center business brought in 51.2 billion USD in revenue in Q3; gaming revenue was 5.8 billion USD.
  • Its GPU business for data centers led to the sale of 5 million GPUs. The Blackwell Ultra, released in March, is the most popular GPU among customers.
  • The good results compensate for the loss of the Chinese market (where government sponsored data centers are no longer buying foreign GPUs).
  • Nvidia CEO Jensen Huang says that the results disprove the presence of an AI bubble: “From our vantage point, we see something very different”.

4. Google Antigravity introduces agent-first architecture for asynchronous, verifiable coding workflows

Google has launched the Antigravity agentic coding platform which runs over its Gemini 3 AI model.

  • Given the increased amount of code being produced with AI tools, demand has grown for agents that can autonomously write, refactor and debug code.
  • Agents can install dependency packages and run commands in the terminal. They have an integrated Web browser for testing Web applications, as well as an integrated editor that lets an agent navigate across code files.
  • Antigravity supports agents built on Gemini 3, Claude Sonnet 4.5, and OpenAI’s open-weight gpt-oss models.
  • The Antigravity platform integrates with development environments running on Linux, Windows and macOS.
  • Google has already released the coding assistant Jules which integrates into many development environments, as well as Gemini Code Assist and Gemini CLI. Antigravity seems to be designed to compete with Codex from OpenAI, Claude Code from Anthropic, and Cursor.

5. Anthropic releases Opus 4.5 with new Chrome and Excel integrations

Anthropic has released Opus 4.5, after launching Sonnet 4.5 in September and Haiku 4.5 in October.

  • Haiku is the lightweight version of the Claude models, Sonnet the mid-tier, and Opus the high-end model for complex reasoning.
  • Opus 4.5 is reportedly the first model to score over 80% on the SWE-Bench coding benchmark. It also scores well on benchmarks for tool use (tau2-bench and MCP Atlas) and general problem solving (ARC-AGI 2, GPQA Diamond).
  • Anthropic is using its Claude 4.5 models in its Claude for Chrome and Claude for Excel products.
  • The models also address the problem of limited context windows (where models are forced to truncate prompt context) by selectively compressing context. Anthropic hopes to exploit this feature in Haiku-powered agent platforms.
  • Opus 4.5’s main competitors are now OpenAI’s GPT 5.1 and Google’s Gemini 3, both of which were released in November.
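Selective context compression can be illustrated with a toy policy that compresses the oldest conversation turns first, keeping recent turns verbatim. This is a generic sketch, assuming a crude word-count token estimate and a truncation stand-in for a real model-generated summary; Anthropic has not published its actual mechanism:

```python
MAX_TOKENS = 200  # hypothetical context budget

def estimate_tokens(text: str) -> int:
    # Crude proxy: count whitespace-separated words.
    return len(text.split())

def compress_turn(turn: str, budget: int) -> str:
    """Stand-in for a model-generated summary: keep the first `budget` words."""
    words = turn.split()
    if len(words) <= budget:
        return turn
    return " ".join(words[:budget]) + " …"

def compress_context(turns: list[str], max_tokens: int = MAX_TOKENS) -> list[str]:
    """Selectively compress the oldest turns until the whole context fits.

    Recent turns are kept verbatim; older turns are shrunk first, which is
    the core idea of compressing context rather than simply truncating it.
    """
    turns = list(turns)
    i = 0
    while sum(estimate_tokens(t) for t in turns) > max_tokens and i < len(turns) - 1:
        turns[i] = compress_turn(turns[i], budget=10)
        i += 1
    return turns
```

In a real system the summarization step would itself be a model call, and the token estimate would come from the model's tokenizer; the sketch only shows the oldest-first compression policy.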

6. Google steps up AI scam protection in India, but gaps remain

Google is attempting to help India fight against digital fraud by introducing on-device scam detection software on Pixel 9 phones as well as screen-sharing alerts for financial apps.

  • Digital fraud has increased in India with the growth of online payments made from phones. The Reserve Bank of India reports that fraud in digital transactions accounted for half of all reported fraud in 2024; 13,516 reported cases amounted to losses of around 58.61 million USD.
  • Google’s new real-time scam detector relies on Gemini Nano to scan calls on the device. Google says that conversational data is not sent to Google and that audio is not recorded. The system is currently available only to English-speaking Pixel 9 users.
  • According to Statcounter, Android has nearly 86% of the market in India, and Pixel devices have under 1% share.
  • Both Google and Apple are under pressure to control the apps distributed on their app stores, both of which have been used by criminals to distribute fraudulent banking apps. Google said that it blocked 115 million dangerous app installations last year.

7. Meet the AI workers who tell their friends and family to stay away from AI

A Guardian article interviews people working as raters or moderators for text, images and videos created by AI chatbots.

  • People interviewed include workers on Amazon Mechanical Turk, a platform that describes itself as a “marketplace that connects businesses and researchers, called requesters, with workers to complete online tasks, such as labeling images, answering surveys, transcribing text or reviewing AI outputs”. Workers on the platform have rated content for Google Gemini and Grok.
  • A sentiment among many reviewers is that they cannot possibly flag all issues – be these factual errors or hidden racial slurs. Experts point to the problem of people with cursory knowledge of scientific or politically sensitive subjects being asked to rate content.
  • There is also a problem of pressure being placed on raters because of time constraints. One expert writes that “there are probably incentives to ship and scale over slow, careful validation, and that the feedback raters give is getting ignored”.
  • Another issue is the tendency of chatbots to give factually incorrect information in a confident tone rather than admitting that they do not know the answer. Top generative AI models, including ChatGPT, Gemini and Meta’s AI, were audited by the media literacy non-profit organization NewsGuard. The audit found that the chatbots’ non-response rate fell from 31% in August 2024 to zero in August of this year, while their likelihood of giving false information increased from 18% to 35%.
  • One expert compared the need for AI ethics to the textile industry – where people took time to realize that cheap clothes were created in sweatshops or with child labor. The public is not yet fully aware of the challenges of AI raters, the energy footprint and copyright infringement issues around AI.

8. The State of AI: Chatbot companions and the future of our privacy

MIT Technology Review journalist Eileen Guo and FT tech journalist Melissa Heikkilä discuss privacy implications of reliance on chatbots after a recent study showed that one of the top uses of generative AI is for companionship.

  • Chatbots have been named “addictive intelligence” by MIT researchers as people share intimate conversations with them. For MIT researchers, AI developers are making “deliberate design choices ... to maximize user engagement”.
  • The intimate conversational content is a unique source of data for AI companies. The venture capital firm Andreessen Horowitz wrote that “apps such as Character.AI, which both control their models and own the end customer relationship, have a tremendous opportunity to generate market value in the emerging AI value stack”.
  • The conversational data is also of interest to marketers. The security company Surfshark examined companion apps on Apple’s App Store and found that four out of five of them were collecting user or device IDs, which, when cross-referenced with third-party data, could be used for targeted advertising.
  • Companies like OpenAI and Meta are under enormous pressure to generate revenue streams in light of the spending commitments they have made on AI infrastructure. Instagram and LinkedIn are now using personal data to train AI models.

9. What enterprises should know about the White House's new AI 'Manhattan Project', the Genesis Mission

This article reviews the announcement made by US President Trump of the Genesis Mission. The goal is to promote scientific discovery through the development of a “closed-loop AI experimentation platform” which connects 17 national laboratories, federal supercomputers, and government scientific databanks into a single “cooperative system for research”.

  • Research domains given priority include biotechnology, critical materials, nuclear fission and fusion, quantum information science, and semiconductors.
  • Under the control of the US Department of Energy, the project also signals an effort to connect manufacturing, energy infrastructure, and scientific supply chains.
  • Some journalists have pointed to the absence of cost estimates for the initiative. Some wonder if AI companies will be able to use Genesis to underwrite their AI infrastructure costs.
  • There is no mention of open-source models in the initiative. There is also concern that AI companies are getting access to federal datasets.
  • The Genesis mission seems to validate the assumption that cloud computing costs are going to rise in the coming months and years.

10. OpenAI says it plans to report stunning annual losses through 2028—and then turn wildly profitable just two years later

OpenAI expects to have operating losses up until 2028, and hopes to become profitable from 2030.

  • The company expects a 9 billion USD loss this year – spending 22 billion USD against 13 billion USD in sales. Operating losses could rise to 74 billion USD in 2028 according to The Wall Street Journal.
  • CEO Sam Altman insists on the need to invest in infrastructure, with a commitment to spend 1.4 trillion USD over the next 8 years. 100 billion USD is being spent on backup data center capacity.
  • The company is currently surviving on continuous fundraising.
  • OpenAI is desperately searching for a plausible business model to support its claim that it can achieve revenues of 200 billion USD by the end of the decade. Recent efforts include the Sora 2 video creation app and the Atlas Web browser. The company is also researching robotics, and is investigating business models based on e-commerce and advertising.
  • Anthropic, for its part, expects to break even in 2028. It is currently overspending by 33%, with this figure expected to fall to 9% in 2027. Corporate customers account for 80% of Anthropic’s sales.