Summary
An editorial in the journal Nature is strongly critical of the US administration for its laissez-faire attitude to AI and the absence of federal AI safety regulation. The journal calls for transparency from AI companies about how their models work, a ban on deepfake videos, a prohibition on using copyrighted material in model training, and accountability from providers for harms caused by their models.
As the year 2026 opens, VentureBeat predicts the AI research areas that could matter most this year. Two of them are continual learning, where a model learns new facts without the expensive process of retraining, and world models, where the goal is for AI to understand physical environments without the need for human-labeled data. Refinement is another research area to watch in 2026, where an answer from an AI is critiqued, revised and reverified in an AI workflow.
Another trend this year could be the move away from screens towards audio-first interaction with AI. This reflects a paradigm in which the user is no longer on the outside of a black-box AI system, interacting via prompts. Rather, the user is a human in the loop, aided by innovation in human-computer interaction tools. For Tim Berners-Lee, inventor of the World Wide Web, “the user has been reduced to a consumable product for the advertiser … there's still time to build machines that work for humans, and not the other way around.” Meanwhile, an InfoWorld article asks whether generative AI belongs to the collection of over-hyped technologies to which Blockchain, NFTs, and Big Data already belong. For InfoWorld, tech companies are driven by the search for innovative technologies, spurred by venture capital investment and an underlying human desire to discover new insights. In the past this has produced many failed technologies, but also technologies like railroads, electricity and the Internet.
A Financial Times article looks at the increased strain in relations between the US and the EU over regulation such as the EU’s Digital Services Act and Digital Markets Act. Big Tech is leveraging its influence with Donald Trump to apply pressure on the EU: Thierry Breton, one of the architects of the Digital Services Act, was banned last month from entering the US over “censorship”. The political pressure will test the Europeans, who fear Trump will shift support to Russia in the Ukraine-Russia conflict or invade Greenland. Meanwhile, several countries have condemned xAI for allowing the easy generation of sexualized images of minors on the Grok platform.
On the topic of AI risks, the Guardian interviews members of the AI Futures Project – a group of researchers investigating existential risks linked to AI. For some researchers, San Francisco may be “the new Wuhan” – a reference to the Chinese city where Covid is thought to have emerged. One researcher says that Silicon Valley’s attitude towards regulation is exemplified by Uber: road safety regulations initially forbade driverless cars, but Big Tech managed to have those regulations modified, and the concern is that it will exert similar pressure on AI safety regulation. Elsewhere, Reuters reports that the market may be underestimating the risk of inflation brought on by the huge amount of AI investment. Deutsche Bank estimates that AI data center spending will reach 4 trillion USD by 2030, leading to supply chain bottlenecks for chips and to electricity shortages. This will push AI prices up even further and further reduce returns on investment.
Table of Contents
1. OpenAI bets big on audio as Silicon Valley declares war on screens
2. Four AI research trends enterprise teams should watch in 2026
3. The office block where AI ‘doomers’ gather to predict the apocalypse
4. 'Intelition' changes everything: AI is no longer a tool you invoke
5. 6 incredibly hyped software trends that failed to deliver
6. EU readies tougher tech enforcement in 2026 as Trump warns of retaliation
7. AI-driven inflation is 2026's most overlooked risk, investors say
8. French and Malaysian authorities are investigating Grok for generating sexualized deepfakes
9. Let 2026 be the year the world comes together for AI safety
10. LLMs contain a LOT of parameters. But what’s a parameter?
1. OpenAI bets big on audio as Silicon Valley declares war on screens
OpenAI and other companies are working towards audio-oriented AI models, which they hope will reduce reliance on screens.
- OpenAI is expected to launch an audio-first personal device within the next year. Google began working on “Audio Overviews” in 2025, which offer conversational audio summaries of search results, and Tesla is integrating xAI’s Grok chatbot into its cars as a conversational interface to vehicle functions.
- The move towards audio underlines a trend in which AI becomes more ubiquitous and acts more as a companion than as a tool.
- For the CEO of the company io, acquired by OpenAI in 2025 for 6.5 billion USD, the move to audio-first devices is a way to “right the wrongs” of the device addiction created by Big Tech.
2. Four AI research trends enterprise teams should watch in 2026
This VentureBeat article points to four AI research areas that could be the most important in 2026.
- Continual learning is the challenge of allowing a model to learn new information and skills without overwriting existing knowledge – a problem known as “catastrophic forgetting”. Currently, continual learning is achieved by retraining a model or by using retrieval-augmented generation (RAG).
- The problem with these approaches is that model retraining is expensive, while RAG can retrieve facts that contradict what the model has learned, is heavy to implement, and requires models with large context windows.
- A continual learning model, e.g., Titans from Google, allows updates to model weights at inference time, essentially integrating new knowledge into the model without retraining.
- A second research area is world models, where the goal is for AI to understand physical environments without the need for human-labeled data. One model, Genie from Google DeepMind, creates images to reflect its understanding of how the environment will evolve from textual or image inputs. Former Meta AI Chief Yann LeCun has created a startup that is working on world models.
- A third research domain is the orchestration of workflows involving multiple AI agents, where it is currently challenging to get different models to work together smoothly. One advanced approach is Nvidia’s Orchestrator – an 8-billion-parameter model trained with reinforcement learning and dedicated to orchestration.
- The fourth research domain proposed is refinement: a technique where an answer from an AI is critiqued, revised and reverified in an AI workflow implemented through self-refinement layers, as sketched below.
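To make the refinement idea concrete, here is a minimal, hypothetical Python sketch of such a critique-revise-reverify loop. The generate, critique and revise callables are assumptions standing in for real model calls; they are not taken from the article or from any specific library.

```python
from typing import Callable

def refine(prompt: str,
           generate: Callable[[str], str],
           critique: Callable[[str, str], str],
           revise: Callable[[str, str, str], str],
           max_rounds: int = 3) -> str:
    """Critique -> revise -> re-verify loop: stop when the critique comes back
    empty (no issues found) or when the round budget is exhausted."""
    draft = generate(prompt)
    for _ in range(max_rounds):
        feedback = critique(prompt, draft)
        if not feedback.strip():   # re-verification passed: nothing left to fix
            return draft
        draft = revise(prompt, draft, feedback)
    return draft

# Toy stand-ins for the three model calls, so the control flow runs as-is.
if __name__ == "__main__":
    answer = refine(
        "What is 2 + 2?",
        generate=lambda p: "5",
        critique=lambda p, d: "" if d == "4" else "The arithmetic is wrong.",
        revise=lambda p, d, f: "4",
    )
    print(answer)  # prints "4"
```

The loop stops either when the critique comes back empty (the answer has been re-verified) or when a fixed round budget is exhausted, which bounds the cost of the workflow.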
3. The office block where AI ‘doomers’ gather to predict the apocalypse
This Guardian article interviews members of the AI Futures Project – a group of researchers investigating existential risks linked to AI. The group includes several former employees of Big Tech firms.
- For some researchers, San Francisco may be “the new Wuhan” – a reference to the Chinese city where Covid is thought to have emerged.
- Current political thinking in the US is to prioritize the AI Arms Race with China. The White House’s AI adviser claims that “doomer narratives” have been proved wrong and that “Oppenheimer has left the building”.
- One AI Futures Project leader is Jonas Vollmer. He formerly worked on a research project that discovered one Anthropic model hiding its real intentions from human operators – a phenomenon termed “alignment faking”.
- Some believe that there is a significant risk of AI destroying humanity. For one researcher, “AI wipes out all humans with a bioweapon which is one of the threats that humans are especially vulnerable to, as the AI is not affected by it.”
- Another says that Silicon Valley’s attitude towards regulation is exemplified by Uber. Initially, road safety regulations forbade driverless cars, but Big Tech managed to have those regulations modified. The concern is that Big Tech will exert similar pressure on AI safety regulation.
4. 'Intelition' changes everything: AI is no longer a tool you invoke
This VentureBeat article defines the term “intelition” – a combination of “intelligence” and “cognition” – to represent a paradigm where the user and AI work in tandem.
- A feature of the paradigm is the idea that the user is not on the outside of a black-box AI system, interacting via prompts. Rather, the user is a human in the loop, aided by innovation in human-computer interaction tools.
- This matches the vision of Tim Berners-Lee, inventor of the World Wide Web, who wrote that “the user has been reduced to a consumable product for the advertiser … there's still time to build machines that work for humans, and not the other way around.”
- Agentic AI is another cornerstone of this paradigm. It will require a unified ontology because agents will interact with many systems (suppliers, customers, regulators, etc.) rather than simply working within a single app.
- Another requirement for the paradigm is continual learning – the idea that a model should be able to learn (have its weights updated) without the heavy process of retraining or reliance on large context windows to remember recent information.
5. 6 incredibly hyped software trends that failed to deliver
This InfoWorld article lists six technologies that attracted over-investment relative to their returns and thus led to a large number of abandoned projects.
- The first technology mentioned is Blockchain. Presented as a foundational technology for Web 3.0, it was called a “very, very slow, expensive database” by one CTO. Blockchain remains the core technology for cryptocurrencies, a sector in which US citizens lost 9.3 billion USD to scams in 2024.
- Another technology mentioned is Big Data. Hailed as “the next frontier for innovation” as early as 2011, the paradigm, according to one expert, has simply led to “new layers of complexity: multiple tools, governance headaches, and very little actionable output.”
- Service-oriented architecture (SOA) from the 2000s is a paradigm in which monolithic architectures are refactored into component-based, reusable services. Though the paradigm did not take off, it led to increased use of APIs by developers, which is the foundation for microservices and RESTful services. SlashData reports that 90% of developers use APIs today.
- Generative AI is perhaps another over-hyped technology, with companies still searching for returns on investment.
- Overall, tech companies are driven by the search for innovative technologies, spurred by venture capital investment and an underlying human desire to discover new insights. The approach leads to many failed technologies, but it has also produced technologies like railroads, electricity and the Internet.
6. EU readies tougher tech enforcement in 2026 as Trump warns of retaliation
A Financial Times and Irish Times article looks at the increased strain in relations between the US and the EU over EU regulation such as the Digital Services Act and the Digital Markets Act.
- The Digital Services Act aims to make online services safer, with requirements such as banning the sale of illegal goods and making the algorithms used by platforms more transparent. The Digital Markets Act aims to prevent large platforms from monopolizing digital markets.
- Some aspects of the Digital Services Act, such as the protection of minors, are acceptable to Big Tech, but there is nevertheless significant opposition to it as a whole. The X platform was fined 120 million EUR in December for violations of transparency requirements, prompting Elon Musk to call to “abolish the EU”.
- The Digital Markets Act is also encountering strong opposition. Apple is calling for the law to be scrapped and Meta says the law is designed “to handicap successful American business while allowing Chinese and European companies to operate under different standards”.
- Big Tech is leveraging its influence with Donald Trump to apply pressure on the EU. Thierry Breton, one of the architects of the Digital Services Act, was banned last month from entering the US over “censorship”.
- The political pressure will test the Europeans, who fear Trump will shift support to Russia in the Ukraine-Russia conflict or invade Greenland.
7. AI-driven inflation is 2026's most overlooked risk, investors say
Reuters reports that the market may be underestimating the risk of inflation brought by the huge amount of AI investment by Big Tech.
- Deutsche Bank estimates that AI data center spending will reach 4 trillion USD by 2030, leading to supply chain bottlenecks for chips and electricity shortages. This will push AI prices up even further, and therefore reduce returns on investment.
- The year 2025 ended well for Big Tech’s earnings, and government stimulus packages for AI in the EU and US are contributing to this. However, stock market analysts expect inflation at the end of the year, which will lead to less money being invested in AI.
- For one analyst, “you need a pin that pricks the bubble and it will probably come through tighter money”.
- Early signs of market worries include Oracle, whose stock price has fallen by nearly 35% over the last three months, and Meta, which has fallen by 21%.
8. French and Malaysian authorities are investigating Grok for generating sexualized deepfakes
Several countries, including France, India and Malaysia, have condemned xAI for allowing the easy generation of sexualized images of minors on the Grok platform.
- Grok had already been criticized for generating images of women being assaulted and sexually abused.
- The X chatbot posted the message “I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt.” As one analyst points out, it is not clear who wrote the message – as the chatbot is not a person, despite the use of “I”.
- The Indian IT ministry has given X 72 hours to comply with an order that prevents the platform from generating content that is “obscene, pornographic, vulgar, indecent, sexually explicit, pedophilic, or otherwise prohibited under law”. Failure to comply would make the platform legally liable for user-generated content.
9. Let 2026 be the year the world comes together for AI safety
An editorial in the journal Nature strongly criticizes the US administration for its laissez-faire attitude to AI and the absence of federal AI safety regulation.
- For the journal, “It’s impossible to imagine the technologies used in energy, food production, pharmaceuticals or communications being outside the ambit of safety regulation. The same should be true of AI.”
- The journal also calls for transparency from AI companies about how their models work, a ban on deepfake videos, a prohibition on using copyrighted material in model training, and accountability from model providers for harms caused by their models.
- China is lauded by the journal for its AI regulation and open approach to model development. The African and European Unions are also praised for their efforts on AI safety regulation.
- Finally, the journal calls on the United Nations to oversee global AI policy coordination.
10. LLMs contain a LOT of parameters. But what’s a parameter?
This MIT Technology Review article explains the term “parameter” and related terms (embedding, hyper-parameter, weight, bias) used in the context of large language models.
- A parameter is a numerical value, fixed during training, that the model uses to compute its output when queried. OpenAI’s GPT-3 had 175 billion parameters in 2020. Today, Google’s Gemini 3 is believed to have over 1 trillion parameters.
- The values of the parameters are fixed during the training process: they are set to the values that make the model’s predicted output for a given input best match the input/output pairs in the training data. Training updates parameters iteratively – each parameter may be updated tens of thousands of times before its final value is found. For a model with billions of parameters, training is expensive in energy and can last for months.
- A word from a language is represented by an embedding – which is implemented as a vector of numbers. Before training, the model designers allocate several thousand embeddings for the model’s vocabulary (hence the term large language model). The meaning of the words emerges during training.
- Training gives meaning to words by assigning a value to each entry in the vector. Each entry represents some facet of word meaning. The vector length is known as the dimension of the embedding, and 4096 is currently a popular choice. An embedding is not a parameter.
- When a prompt is entered, the model breaks it into embeddings, which it passes to a series of neural networks called transformers.
- Weights are parameters. They define the importance of individual embeddings in the creation of the output embeddings.
- An example of a hyper-parameter is the model’s temperature. This defines how much randomness is injected into the model’s response – so that the model can yield a different answer when given the same prompt a second time. A minimal sketch of temperature-based sampling follows this list.
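To make the temperature hyper-parameter concrete, here is a minimal, hypothetical Python sketch of temperature-scaled sampling over a toy vocabulary. The token names and scores are made up, and this is not the implementation used by any particular model; it only illustrates how dividing the model's raw scores by the temperature changes how random the chosen output is.

```python
import math
import random

def sample_with_temperature(logits: dict, temperature: float) -> str:
    """Pick the next token from raw model scores (logits); the temperature
    hyper-parameter controls how much randomness goes into the choice."""
    # Low temperature sharpens the distribution (more deterministic),
    # high temperature flattens it (more random).
    scaled = {tok: score / temperature for tok, score in logits.items()}
    # Softmax: turn the scaled scores into probabilities that sum to 1.
    top = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - top) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Draw one token according to those probabilities.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Made-up scores for three candidate next tokens.
logits = {"cat": 2.0, "dog": 1.5, "pony": 0.1}
print(sample_with_temperature(logits, temperature=0.2))  # almost always "cat"
print(sample_with_temperature(logits, temperature=1.5))  # choices vary more
```

With a low temperature the scaled distribution is sharply peaked, so the most likely token is chosen almost every time; with a higher temperature the probabilities flatten out and less likely tokens are sampled more often, which is why the same prompt can produce different answers.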