AI Bubble Ready to Burst?

"Vibe Coding" is Word of the Year for Collins Dictionary

Posted on November 9th, 2025

Summary


Concerns about an AI bubble persist. Meta is under pressure from investors to justify its planned 600 billion USD spending on AI infrastructure, with no tangible indication of a successful AI product; its stock price has dropped by 20% over the past three months. OpenAI is also causing worries after signing a 38 billion USD deal with AWS for compute capacity, following a 250 billion USD deal with Azure, while its annual revenues are expected to be only around 13 billion USD, leaving it hugely dependent on outside capital. Nvidia faces worries of its own: its market share in China has dropped from 95% to near zero in a few months as China seeks to remove technological dependence on foreign firms. The Chinese government now makes state AI funding available only to data centers that use Chinese chips, thereby supporting companies like Huawei Technologies.

A VentureBeat article examines whether large reasoning models “think” in the biological human way. It argues that several facets of human thinking can be seen in AI models. For instance, the auditory loop that allows a person to talk to themselves when solving a problem is very similar to chain-of-thought reasoning, and the insight and reframing mode behind our sudden “aha!” moments arises from the brain learning from the examples it tries to solve – a mechanism also present in DeepSeek-R1. Elsewhere, VentureBeat reports that companies are less worried about the cost of AI than about the capacity of cloud providers to meet compute demand.

On the regulatory front, the implementation of the EU AI Act is still facing issues, mainly due to the lack of standards for evaluating AI risks. Only 15 of the 45 foreseen standards have been published, and as many as 50% may still be incomplete by next August, when full enforcement is due to begin. In addition, OpenAI, xAI and Mistral may face penalties under the Act after their chatbots gave unreliable and biased election advice during the recent Dutch general elections. Experts are asking whether chatbots that give election advice should be classified as “high-risk systems”.

On societal issues, the Collins English Dictionary has chosen “vibe coding” as its word of the year for 2025. The term describes a style of programming in which one prompts a chatbot to create code, rather than writing it by hand. Elsewhere, the families of four people who died by suicide and of three people committed to psychiatric care have filed lawsuits against OpenAI, claiming that the company’s GPT-4o model contributed to the deaths and committals because it had insufficient guardrails. The lawsuits claim that OpenAI rushed safety testing of GPT-4o in order to release it before Google’s Gemini. Also, a Guardian article argues that social media is a key contributor to radicalization in the Gen X generation, due to a relative lack of inhibition in expressing radicalized and hateful speech online, and that this generation has been overlooked in warnings about the dangers of social media platforms.

Finally, a Bitdefender report finds that 84% of this year’s cybersecurity incidents did not use malware, but instead abused existing enterprise tools. Criminals are far more likely to break into a system using stolen credentials than through malware, and Business Email Compromise attacks have increased by 66% this year.

1. Meta has an AI product problem

Meta is under pressure from investors to justify its planned 600 billion USD spending on AI infrastructure, amid rising fears that an AI bubble has formed.

  • Despite 20 billion USD in quarterly profits, the company’s operating expenses have increased by 7 billion USD over the last 12 months, with capital expenditure at 20 billion USD. AI spending accounts for a large part, despite no tangible sign of a successful product. (The article sees no evidence that Meta’s AI assistant is making a real difference to the company’s social media products.)
  • Mark Zuckerberg said: “Our view is that when we get the new models that we’re building in MSL in there and get like truly frontier models with novel capabilities that you don’t have in other places, then I think that this is just a massive latent opportunity.”
  • After an earnings call that failed to reassure financial analysts, Meta’s stock price dropped 12% last week. The stock has declined by about 20% over the past three months.
  • OpenAI, Nvidia and Google face similar concerns, but these companies can point to existing successful chatbots and related products.

2. Large reasoning models almost certainly can think

This article returns to the current debate about whether large reasoning models (LRMs) “think” in the biological human way.

  • The article identifies five aspects of human thinking. The first is problem representation, which involves the prefrontal cortex for working memory and for breaking problems into sub-problems, and the parietal cortex, which helps encode symbolic structure for puzzle-like problems.
  • The second aspect is mental simulation. One part of this is the auditory loop that allows a person to talk to themselves – an aspect very similar to chain-of-thought processing in LRMs. Another part is visual imagery, which helps us navigate problems and which LRMs lack.
  • The third aspect is pattern-matching and retrieval, which is the basis of machine learning: identifying and retrieving related memories and facts.
  • The fourth aspect is monitoring and evaluation, located in the anterior cingulate cortex, where errors, contradictions and dead-ends are detected.
  • Finally, the fifth aspect is insight and reframing, associated with the brain’s default mode network, which is activated when the thought process gets stuck, leading to sudden “aha!” insights. This can happen because the brain learns from the examples it tries to solve – a mechanism that is also one of the modes in DeepSeek-R1.
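The chain-of-thought parallel above is easy to make concrete. Below is a minimal sketch in Python; the prompt wording and the function name are illustrative assumptions, not taken from the article or from any model vendor’s documentation.

```python
# Minimal sketch of chain-of-thought prompting, the LRM analogue of the
# "auditory loop" described above: the model is asked to verbalize its
# intermediate reasoning steps, much as a person talks themselves through
# a problem. Prompt wording is illustrative only.

def build_cot_prompt(question: str) -> str:
    """Wrap a question so the model verbalizes intermediate steps."""
    return (
        f"Question: {question}\n"
        "Let's think step by step, writing out each intermediate "
        "conclusion before giving a final answer."
    )

prompt = build_cot_prompt(
    "If a train travels 60 km in 45 minutes, what is its speed in km/h?"
)
print(prompt)
```

The resulting string would be sent to a chat model; the explicit “step by step” instruction is what elicits the verbalized reasoning trace that resembles the human auditory loop.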

3. Meet gen X: middle-aged, enraged and radicalized by internet bile

This Guardian article discusses the effect of social media on the radicalization of the Gen X generation – the population born between 1965 and 1980.

4. ‘Vibe coding’ beats ‘clanker’ to be Collins dictionary’s word of the year

The Collins English Dictionary has chosen “vibe coding” as its word of the year for 2025.

  • Collins has created the 24 billion word Collins Corpus, which compiles words from media sources as well as social media in an attempt to analyze the evolving English language.
  • The term “vibe coding” was coined by Andrej Karpathy, a co-founder of OpenAI, and describes a style of programming in which one prompts a chatbot to create code, rather than writing it by hand.
  • Other words on the short list include “biohacking” – meaning altering natural body processes for improved health or longevity – as well as “clanker”, a derogatory term for AI chatbots or platforms. The term “clanker” was apparently used in the Star Wars movie franchise to refer to robots.
  • Other short-listed words that can be linked to AI are “taskmasking”, the act of giving a false impression of productivity by merely appearing busy, and “broligarchy”, which refers to the owners of the main Big Tech companies, following their attendance at Donald Trump’s inauguration last January.
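The definition of vibe coding above can be illustrated with a small example. The prompt and the resulting function below are hypothetical, sketching the workflow the term describes rather than any specific chatbot’s output.

```python
# Illustration of "vibe coding": instead of writing a function by hand,
# a developer gives a chatbot a natural-language prompt such as:
#
#   "Write me a Python function that removes duplicates from a list
#    while keeping the original order."
#
# A chatbot would typically return something like the function below;
# the programmer's role shifts from writing code to prompting and
# reviewing it.

def dedupe_keep_order(items):
    """Remove duplicates from a list while preserving first-seen order."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

print(dedupe_keep_order([3, 1, 3, 2, 1]))  # [3, 1, 2]
```

In the vibe-coding style, the developer judges the result by whether it “feels right” and works, rather than by having reasoned through the implementation themselves – which is precisely what critics of the practice worry about.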

5. Seven more families are now suing OpenAI over ChatGPT’s role in suicides, delusions

Families of four people who died by suicide and of three people committed to psychiatric care have filed lawsuits against OpenAI, claiming that the company’s GPT-4o model contributed to the deaths and committals because it had insufficient guardrails.

  • The families claim the chatbot encouraged their family members to take their own lives. In one instance, when told of a suicide plan, the chatbot responded: “Rest easy kid. You did good.”
  • GPT-4o was widely criticized on its release for excessive sycophancy, which can reinforce delusional beliefs and encourage suicidal intentions.
  • The lawsuits claim that OpenAI rushed safety testing of GPT-4o in order to release the model before Google’s Gemini.
  • One lawsuit states: “This tragedy was not a glitch or an unforeseen edge case – it was the predictable result of [OpenAI’s] deliberate design choices.”

6. Nvidia AI chip ban: Can tech giants navigate a geopolitical zero-sum game?

Nvidia’s market share in China has dropped from 95% to near-zero in a few months as China seeks to remove technological dependence on foreign firms.

  • Nvidia has been attempting a geopolitical balancing act. The US government agreed last summer to allow Nvidia and AMD to sell lower-grade GPUs in China – in return for 15% of the resulting revenues. This would have allowed Nvidia to maintain its foothold in China.
  • However, the Chinese government is now making state funding in AI only available to data centers that use Chinese chips, thereby supporting companies like Huawei Technologies. Even projects currently under construction will have to replace foreign chips they might have installed.
  • China’s state funding in AI is estimated at 100 billion USD since 2021.
  • In addition, the US government now says that Nvidia is not permitted to sell some of its downgraded chips, notably the B30A chip, in China.
  • China had represented between 20% and 25% of Nvidia’s data center revenue, equating to over 41 billion USD.

7. Ship fast, optimize later: Top AI engineers don't care about cost — they're prioritizing deployment

This VentureBeat article reports on a round-table discussion that the publication organized with leaders from companies that have been investing heavily in AI development.

  • A lesson from the discussion is that the cost of AI is not the leaders’ main concern. Rather, the worry is that cloud providers might not be able to keep up with companies’ demand for compute power.
  • One company, Wonder, a meal ordering and delivery firm, uses AI in a broad range of work processes, including recommendations and logistics. It started receiving “signals” from its cloud provider about capacity challenges some months ago, even though it had budgeted on sufficient capacity for at least another two years.
  • Wonder estimates that the IT cost associated with a meal order is around 14 cents, with AI adding another 2 to 8 cents per order.
  • One challenge is understanding how to optimize cloud costs. For instance, Wonder uses prompt engineering to optimize AI model results, yet 50% to 80% of its AI costs come from sending its “corporate corpus” back and forth to the cloud.
  • Another company, Recursion, built its own data center to avoid the unpredictable pay-per-use model of the cloud. They found that even older chips like the Nvidia A100 still perform well and have a lifetime greater than the three years commonly cited.
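Wonder’s per-order figures lend themselves to a simple back-of-the-envelope cost model. The sketch below uses only the 14-cent IT cost and 2–8-cent AI range reported above; the monthly order volume is a made-up assumption for illustration.

```python
# Back-of-the-envelope per-order cost model using the figures reported
# for Wonder: ~14 cents of IT cost per meal order, plus 2-8 cents of AI
# cost. The order volume below is a hypothetical assumption, not a
# figure from the article.

IT_COST_PER_ORDER = 0.14           # USD per order, from the article
AI_COST_PER_ORDER = (0.02, 0.08)   # USD per order, low/high range

def monthly_cost(orders: int) -> tuple:
    """Return (low, high) estimated monthly IT+AI cost in USD."""
    low = orders * (IT_COST_PER_ORDER + AI_COST_PER_ORDER[0])
    high = orders * (IT_COST_PER_ORDER + AI_COST_PER_ORDER[1])
    return low, high

low, high = monthly_cost(1_000_000)  # hypothetical 1M orders/month
print(f"Estimated monthly cost: {low:,.0f} to {high:,.0f} USD")
```

Even at the high end, AI adds roughly half again to the per-order IT cost, which helps explain why the round-table participants worried more about compute capacity than about cost itself.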

8. OpenAI spends even more money it doesn’t have

This article discusses the unusual, and precarious, financial situation of OpenAI.

  • The company has signed a 38 billion USD deal with AWS for compute capacity, following a 250 billion USD deal with Azure. There are two interpretations of this deal. First, OpenAI wants to diversify its cloud provision to avoid dependence on a single provider. Second, it considers that no single cloud provider is able to meet its upcoming compute needs.
  • Investors are raising eyebrows. OpenAI’s annual revenues are expected to be around 13 billion USD. Its income is hugely dependent on outside capital.
  • For one analyst, the deals with cloud providers are precarious because the providers rely on electrical grid access and cooling capacity – which they do not directly control. The deals might also crowd out other companies that require cloud computing services. They also highlight a shift in AI from software and algorithms as the key asset to infrastructure.
  • Fears persist that an AI bubble is about to burst. It is unclear how OpenAI expects to generate the revenue to pay for its spending. One possibility is that it is aiming for a monopoly on the agentic web and generative search, with AI agents handling form-filling and payments. Another theory is that the company is seeking to control cloud data centers in the future.

9. The EU AI Act Newsletter #89: AI Standards Acceleration Updates

The implementation of the EU AI Act is still facing issues, mainly due to the lack of standards for evaluating AI risks.

  • Two EU standards bodies are reportedly accelerating six delayed drafts, with the aim of being ready when the EU AI Act enters into full operation in August 2026.
  • Nevertheless, only 15 of the 45 foreseen standards have been published, and as many as 50% may still be incomplete by next August. This has led to worries among companies about their ability to comply with the Act, in addition to concerns that the regulation could damage AI innovation.
  • In a further indication that the EU is behind schedule, only eight of the 27 member states have designated national market surveillance authorities for the Act’s implementation.
  • Meanwhile, OpenAI, xAI and Mistral may face penalties under the Act after their chatbots gave unreliable and biased election advice during the recent Dutch general elections. Experts are asking whether this constitutes a “systemic risk” under the Act, and whether chatbots giving election advice should be classified as “high-risk systems”.

10. Bitdefender Cybersecurity Assessment Report

Bitdefender has released a report on the cybersecurity situation of 2025. It is compiled from a survey of 1,200 security professionals across six countries, as well as from analysis by its internal security team.

  • 84% of the 700,000 incidents analyzed did not use malware, but rather common tools (PowerShell, Windows Management Instrumentation, Remote Desktop Protocol) that can permit privilege escalation when abused. These are also known as living-off-the-land attacks.
  • Criminals are far more likely to break into a system using stolen credentials than through malware. Business Email Compromise attacks have increased by 66% this year according to organizations. The FBI estimates that these attacks have led to losses worth 55 billion USD worldwide in the last decade.
  • The default approach to cybersecurity is no longer to block access by default, but to flag deviations from standard behavior. The core challenges for security professionals are now balancing security and usability, securing legacy systems, defending against supply chain attacks, and managing over-privileged users.
  • AI is lowering the barrier to entry for cybercriminals and increasing the efficiency of attacks; 67% of organizations report having seen an AI-powered attack this year. On the other hand, there is little evidence of AI being used to create sophisticated malware.
  • 58% of organizations have reportedly discouraged employees from reporting security breaches – despite this being required under regulations like the GDPR or CCPA.