AI Can Still Contribute to Materials Science

Criticism of Trump's Executive Order on AI Regulation by States

Posted on December 20th, 2025

Summary


An MIT Technology Review article discusses Big Tech attitudes to the AI bubble bursting. The main AI company leaders seem to agree that there is a bubble. Consultants at Bain have estimated that AI companies will need to make 2 trillion USD in annual revenue to match the current wave of spending on AI infrastructure, which is more than the combined annual revenue of Amazon, Apple, Alphabet, Microsoft, Meta, and Nvidia. By spending on infrastructure now, Tech companies may be positioning themselves to emerge on the other side of a burst bubble. When the dot-com bubble burst, Amazon survived whereas competitors like Buy.com did not, principally because Amazon had a computing infrastructure that it could continue to exploit.

There has been vociferous reaction to Donald Trump’s executive order that seeks to prevent US states from implementing their own AI regulation. Organizations have questioned the legality of the order, and accused the Trump administration of not providing an alternative and comprehensive federal framework for regulating AI in place of state laws. Meanwhile, the latest AI Safety Index from the Future of Life Institute shows that Anthropic, OpenAI and Google DeepMind are performing better than competitors on safety initiatives. All companies are performing badly on measures for existential safety, as they continue to research artificial general intelligence.

Another MIT Technology Review article looks at the impact that AI is having on the search for new materials, and explains that the benefits of AI might have been exaggerated until now. There is a great need for new materials to address the world’s challenges, including more powerful batteries and compounds that cheaply ingest carbon dioxide from the air. Despite research efforts by Google DeepMind, Meta, and Microsoft on AI for materials, a core problem is that the behavior of many types of materials, notably crystals, cannot be understood purely by calculating atomic structures; the materials have to be synthesized to see how they behave in reality.

A report by McKinsey on agentic AI stresses the importance of designing workflows only after identifying the pain points in existing human workflows that agents can solve. One project leader suggests that onboarding agents is more like hiring a new employee than deploying software. A VentureBeat article claims the key factor limiting agentic coding is not the power of the AI model, but whether agents have access to enough of the corporate codebase to produce coherent software. Meanwhile, 92% of European IT leaders see open-source software as vital for sovereignty. Competitive advantage is moving from owning AI models to “controlling training pipelines and energy supply” as compute scarcity becomes linked to grid access.

OpenAI has released GPT-5.2, which it claims offers significant gains over GPT-5.1. However, one expert says that one cannot judge performance when there is little transparency about the training data, and when the company that creates the model also creates the benchmarks. An InfoWorld article warns that moves by cloud providers to incorporate GPUs and AI infrastructure are creating a hidden risk of AI cloud lock-in for companies. Finally, LinkedIn is being criticized for its content suggestion algorithm. Some question whether the algorithm has an “innately white, male, Western-centric viewpoint”. A movement labeled #WearThePants has emerged in which women change their profile gender from female to male to investigate bias against women. One woman claimed that her “impressions jumped 238% within a day”.

1. OK, what’s going on with LinkedIn’s algo?

This TechCrunch article discusses the impact of recent changes by LinkedIn to its content suggestion algorithm. In particular, some question whether the algorithm is biased with an “innately white, male, Western-centric viewpoint”.

  • A movement labeled #WearThePants emerged recently in which women change their profile gender from female to male to investigate bias against women. One woman claimed that her “impressions jumped 238% within a day”.
  • LinkedIn claims that the “algorithm and AI systems do not use demographic information such as age, race, or gender as a signal to determine the visibility of content, profile, or posts in the Feed.”
  • For one social media expert, the LinkedIn approach is “an intricate symphony of algorithms that pull specific mathematical and social levers, simultaneously and constantly”. Even the “changing of one’s profile photo and name is just one such lever” for getting one’s content promoted more widely.
  • Recent changes seem to have destabilized some users because of the impact on engagement with their content. One user noted that the recommendation system “no longer cares how often you post or at what time of day…. It cares whether your writing shows understanding, clarity, and value.”
  • For LinkedIn, the changes became necessary because of the increase in user activity, with posting up 15% over the last year, and commenting up 24%.

2. Gavin Newsom pushes back on Trump AI executive order preempting state laws

There has been vociferous reaction to Donald Trump’s executive order (of December 11th) that seeks to prevent US states from implementing their own AI regulation.

  • The executive order calls for an “AI litigation task-force” that would review state laws that do not “enhance the United States’ global AI dominance”. The task-force might then take legal action against states, or potentially withhold federal broadband funding.
  • Organizations have questioned the legality of the executive order, claiming that Trump does not have the authority to interfere in state legislation. They have also criticized the lobbying carried out by the Tech industry to get the order.
  • Other critics have accused the Trump administration of not providing an alternative and comprehensive federal framework for regulating AI in place of state laws.
  • The CEO of a philanthropic tech investment company wrote that ignoring AI’s impact on the country “through a blanket moratorium is an abdication of what elected officials owe their constituents”.

3. OpenAI launches GPT-5.2 as it battles Google’s Gemini 3 for AI model supremacy

OpenAI has released GPT-5.2, which it claims offers significant gains over GPT-5.1, released just one month earlier in November.

  • OpenAI wrote: “We designed GPT-5.2 to unlock even more economic value for people; it’s better at creating spreadsheets, building presentations, writing code, perceiving images, understanding long contexts, using tools, and handling complex, multi-step projects”.
  • Access to the GPT-5.2 API is priced at 1.75 USD per one million input tokens and 14 USD per one million output tokens. The corresponding prices for the GPT-5.1 API are 1.25 USD and 10 USD. OpenAI justifies the increase by arguing that “the cost of attaining a given level of quality ended up less expensive due to GPT-5.2’s greater token efficiency”. A rough sketch of the cost arithmetic follows this list.
  • One expert is unimpressed with the benchmark improvement claims of OpenAI, e.g., that GPT-5.2 matched or exceeded human users in 70.9% of tests with the GDPVal benchmark, compared to a score of 38.8% for GPT-5.1. The expert says that one cannot really judge performance for a model where there is little transparency about the data used to train the model, and where the company that creates the model is also the company that created the benchmark.
  • For the agentic AI company Vectara, which has its own Hallucination Evaluation Model, GPT-5.2 ranks 33rd on the leaderboard with a hallucination rate of 8.4%. By comparison, the hallucination rate for a recent Gemini 3 model was 13.6%, and 17.8% for Grok 4.1.
  • The release of GPT-5.2 only one month after GPT-5.1 is also linked to competition from Google’s Gemini, which OpenAI perceives as an increasingly serious rival. OpenAI CEO Sam Altman sent a memo about a “code red” emergency, calling for rapid deployment of GPT-5 due to the danger of falling behind Gemini.
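To make the pricing comparison concrete, here is a rough sketch of the cost arithmetic in Python, using only the per-million-token prices quoted above. The workload figures (5,000 input tokens per request and the varying output-token counts) are hypothetical and serve only to show how token efficiency changes the comparison.

```python
# Rough cost-comparison sketch using the per-million-token prices quoted above.
# The workload numbers below are hypothetical, chosen only for illustration.

PRICES = {
    # model: (USD per 1M input tokens, USD per 1M output tokens)
    "gpt-5.1": (1.25, 10.00),
    "gpt-5.2": (1.75, 14.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single request at the listed prices."""
    in_price, out_price = PRICES[model]
    return (input_tokens / 1_000_000) * in_price + (output_tokens / 1_000_000) * out_price

# Hypothetical workload: 5,000 input tokens per request, 2,000 output tokens
# for GPT-5.1. If GPT-5.2 is more token-efficient, it may need fewer output
# tokens to reach the same quality; vary that assumption below.
baseline = request_cost("gpt-5.1", 5_000, 2_000)
for out_tokens in (2_000, 1_500, 1_000):
    candidate = request_cost("gpt-5.2", 5_000, out_tokens)
    print(f"GPT-5.1: ${baseline:.4f} per request | "
          f"GPT-5.2 with {out_tokens} output tokens: ${candidate:.4f}")
```

On this made-up workload, GPT-5.2 only becomes cheaper per request if its greater token efficiency cuts output tokens by a bit more than a third; the exact break-even point depends on the input-to-output ratio of the workload.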

4. Why most enterprise AI coding pilots underperform (Hint: It's not the model)

This VentureBeat article looks at how AI coding tools have evolved over the last year from powerful autocomplete tools to agentic tools capable of planning and executing changes to a codebase. The underlying shift is from assistance to agency.

  • For the authors, the key factor limiting agentic coding is not the power of the AI model, but the depth of the agents’ context – having access to enough of the corporate codebase and toolbox to produce coherent software.
  • The shift from code snippet generation to agentic coding entails having agents that can design, test and deploy code. This has motivated GitHub to develop agent frameworks like Copilot Agent and Agent HQ.
  • Use of the tools so far has not led to more efficient code production. A survey cited in the article reports how engineers lose a significant amount of time verifying the output created by AI agents. This might be because engineers are employing AI in existing human processes, whereas efficient AI adoption requires new processes for AI and human collaboration.
  • A key factor in future AI coding success will not be agents’ ability to replicate how humans produce code, but their ability to replicate human reasoning in the design and production of code.

5. What even is the AI bubble?

This article by Alex Heath, author of the Sources newsletter on AI, discusses current attitudes in Big Tech to the AI bubble bursting, and the repercussions of a burst.

  • AI leaders seem to agree that there is a bubble. OpenAI CEO Sam Altman is quoted as saying: “Are we in a phase where investors as a whole are overexcited about AI? … My opinion is yes.” Google CEO Sundar Pichai says that if and when the bubble bursts, “no company is going to be immune, including us.”
  • Google DeepMind CEO Demis Hassabis is quoted as saying that “It feels like there’s obviously a bubble in the private market… You look at seed rounds with just nothing being tens of billions of dollars. That seems a little unsustainable.”
  • Anthropic CEO Dario Amodei has highlighted the “circular deals” where Nvidia invests in AI companies, which return the favor by purchasing Nvidia chips. He said: “If you start stacking these where they get to huge amounts of money, and you’re saying, ’By 2027 or 2028 I need to make $200 billion a year,’ then yeah, you can overextend yourself.” OpenAI is the company that would need to make 200 billion USD a year within a few years.
  • Consultants at Bain have estimated that AI companies will need to make 2 trillion USD in annual revenue to match the current wave of spending on AI infrastructure. This figure is more than the combined annual revenue of Amazon, Apple, Alphabet, Microsoft, Meta, and Nvidia.
  • The AI companies clearly believe that they need more infrastructure for AI to continue its development. Some data centers are already rationing their resources in favor of Big Tech, and the Tech companies are positioning themselves to emerge on the other side of a burst bubble. When the dot-com bubble burst, Amazon survived whereas competitors like Buy.com did not, principally because Amazon had a computing infrastructure that it could continue to exploit.

6. AI in 2026: Experimental AI concludes as autonomous systems rise

This article gathers opinions from European industry leaders on the evolution of AI in 2026.

  • For the CTO at Cloudera, energy optimization is key: “The new competitive edge won’t come from the largest models, but from the most intelligent, efficient use of resources.”
  • For J12 Ventures: “Compute scarcity is now a function of grid capacity” meaning that energy will have the greatest impact on AI policy in Europe.
  • For Red Hat, the evolution will be toward agents that go beyond automation to self-configuring and self-healing network infrastructures. For Cloudera, the logistics and manufacturing domains can reap greater benefit from AI than consumer-facing applications.
  • Also for Cloudera, regulatory and cost issues will see the end of digital hoarding: “AI-generated data will become disposable, created and refreshed on demand rather than stored indefinitely”, reflecting the current rise of the synthetic data industry.
  • 92% of European IT and AI leaders see open-source enterprise software as vital for achieving sovereignty. Competitive advantage is moving from owning models to “controlling training pipelines and energy supply”, with open-source advancements allowing more actors to run frontier-scale workloads.

7. One year of agentic AI: Six lessons from the people doing the work

One of the common themes in AI this year was agentic AI, and a September report by McKinsey into its experience overseeing over 50 agentic AI projects is increasingly cited.

  • The report stresses the importance of designing workflows only after identifying the pain points of humans in existing workflow processes that agents can solve. In one project, all user edits in documents were logged and analyzed to come up with better agentic prompts.
  • Agents are not always the appropriate solution for a task. One project leader suggests evaluating agents like one would evaluate humans: “What is the work to be done and what are the relative talents of each potential team member – or agent – to work together to achieve those goals?”.
  • Another said that “Onboarding agents is more like hiring a new employee versus deploying software”. Getting the agent to do its most appropriate task is key to reducing the amount of AI slop created in the workflow.
  • The report also insists on the need to do quality control at each phase of the workflow – and not just on the final output. This helps identify where poor quality data derails the process, and ensures that the most appropriate agent is being deployed. McKinsey claims that getting the right agents in place can eliminate between 30% and 50% of nonessential work.
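As a minimal illustration of the report’s point about checking quality at every phase rather than only on the final output, the sketch below runs a hypothetical multi-stage workflow with a quality gate after each stage. The stage names, agent steps, and checks are invented for illustration and are not taken from the McKinsey report.

```python
# Illustrative sketch of per-phase quality control in an agentic workflow.
# Stage names, agent steps, and checks are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    name: str
    run: Callable[[str], str]      # the agent (or human) step for this phase
    check: Callable[[str], bool]   # quality gate applied to this phase's output

def run_workflow(stages: list[Stage], initial_input: str) -> str:
    artifact = initial_input
    for stage in stages:
        artifact = stage.run(artifact)
        # Validate here, not only at the end: a failure pinpoints the phase
        # where poor-quality data or the wrong agent derailed the process.
        if not stage.check(artifact):
            raise ValueError(f"Quality gate failed after stage '{stage.name}'")
    return artifact

# Hypothetical three-phase document workflow.
stages = [
    Stage("extract", run=lambda x: x.strip(), check=lambda x: len(x) > 0),
    Stage("draft", run=lambda x: f"Draft based on: {x}", check=lambda x: x.startswith("Draft")),
    Stage("review", run=lambda x: x + " (reviewed)", check=lambda x: x.endswith("(reviewed)")),
]
print(run_workflow(stages, "  raw customer notes  "))
```

Failing at the stage level makes it possible to see whether bad input data or an ill-suited agent caused the problem, rather than discovering it only in the final output.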

8. AI Safety Index Winter 2025

The latest version of the AI Safety Index from the Future of Life Institute shows that Anthropic, OpenAI and Google DeepMind are performing better than competitors on safety initiatives. However, all companies are performing badly on measures for existential safety, as companies continue to research artificial general intelligence (AGI, or “super-intelligence”).

Latest AI Safety Index from the Future of Life Institute.
  • The AI Safety Index was developed by the Future of Life Institute with the collaboration of an independent panel of technical and governance experts. It provides an impartial evaluation of how responsibly leading AI companies are approaching challenges from the risks of AI.
  • The evaluation was based on evidence from company-specific data (publicly available research papers, policy documents, news articles, and industry reports), as well as surveys sent to AI companies about safety-related practices, processes and structures.
  • The AI companies evaluated were Anthropic, Alibaba Cloud, DeepSeek, Google DeepMind, Meta, OpenAI, xAI, and Z.ai. Generally, all companies fall short on the safety-related measures required by the EU Code of Practice, linked to the 2024 AI Act.

9. AI materials discovery now needs to move into the real world

This article looks at the impact that AI is having on the search for new materials, and explains that the benefits of AI might have been exaggerated until now.

  • There is a great need for new materials to address the world’s challenges, including more powerful batteries, compounds that cheaply ingest carbon dioxide from the air, better magnets and different types of semiconductors. A major sought-after innovation is a room-temperature superconductor, which would allow electricity to flow without resistance and therefore without generating heat. This would revolutionize power grids and quantum computers.
  • Materials science has lacked any big win recently – the author points to lithium-ion batteries as the only major applied breakthrough in the last 20 years. A business model that accompanies materials discovery is also often missing.
  • Using AI for materials discovery gained renewed attention when Google DeepMind demonstrated that its AlphaFold2 model could predict the three-dimensional structure of proteins.
  • Despite research efforts by Google DeepMind, Meta, and Microsoft on AI for materials, a core problem is that the behavior of many types of materials, notably crystals, cannot be understood purely by calculating atomic structures; the materials have to be synthesized to see how they behave in reality.
  • An example of this challenge was seen in 2023 when Google DeepMind announced that it had discovered “millions of new materials”, including 380,000 crystals that it declared “the most stable, making them promising candidates for experimental synthesis”. It was a breakthrough for AI rather than for materials science, with some materials scientists declaring that Google DeepMind’s discovery offered “scant evidence for compounds that fulfill the trifecta of novelty, credibility, and utility”.
  • Today, AI is used more to automate the experiments taking place in the lab and to help synthesize the results from them.

10. Why your next cloud bill could be a trap

This article takes a critical look at how AI is forcing up the cost of cloud for companies, and how a new risk of AI lock-in is appearing.

  • Generative AI has dramatically increased the spending made by cloud providers as they move from generic infrastructure to AI-native platforms. GPUs and AI accelerators contribute to this increase. There is also a move to support foundation models, agent frameworks, vector databases and AI management services.
  • One impact is that even companies that are not explicitly doing AI are using AI in the background; for example, their database has a vector database component for semantic search of the customer portal. This creates a dependence of the company on an AI infrastructure.
  • Sometimes existing service features are bundled with AI services, or business units become dependent on AI assistants that are created using the company data. The problem arises when the AI technology used by the cloud provider is proprietary, since the cost of changing provider increases significantly. The company is less likely to be able to adopt open-source models, and is more exposed to technical changes at the provider.
  • The article suggests not accepting free AI trials from cloud providers, and designing an AI strategy with AI portability from the start. This entails using open formats for data embeddings and separating application code from proprietary AI services and orchestration tools.
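As a minimal sketch of what such a separation could look like, the illustrative Python below hides embedding generation behind a small interface and exports the vectors in an open, provider-neutral format. The class and function names are hypothetical, and the provider-specific method bodies are placeholders rather than real SDK calls.

```python
# Minimal sketch: keep application code independent of any one cloud provider's
# AI services by hiding embeddings behind a small interface and storing vectors
# in an open format. Names are illustrative; provider bodies are placeholders.

from abc import ABC, abstractmethod
import json


class EmbeddingProvider(ABC):
    """The only surface the application code is allowed to depend on."""

    @abstractmethod
    def embed(self, texts: list[str]) -> list[list[float]]:
        ...


class ManagedCloudEmbeddings(EmbeddingProvider):
    """Wrapper around a proprietary cloud embedding service (placeholder)."""

    def embed(self, texts: list[str]) -> list[list[float]]:
        # Call the provider's SDK/API here and convert its response
        # to plain lists of floats before returning.
        raise NotImplementedError("wire up the provider SDK behind this method")


class LocalOpenSourceEmbeddings(EmbeddingProvider):
    """Wrapper around a self-hosted open-source embedding model (placeholder)."""

    def embed(self, texts: list[str]) -> list[list[float]]:
        raise NotImplementedError("wire up the local model behind this method")


def export_embeddings(path: str, texts: list[str], provider: EmbeddingProvider) -> None:
    """Store embeddings in an open, provider-neutral format (plain JSON here),
    so the vectors can be re-indexed in a different vector store later."""
    vectors = provider.embed(texts)
    with open(path, "w", encoding="utf-8") as f:
        json.dump([{"text": t, "vector": v} for t, v in zip(texts, vectors)], f)
```

Because the application depends only on the EmbeddingProvider interface and the exported vectors live in an open format, swapping the managed cloud service for an open-source model (or another provider) means replacing one wrapper rather than rewriting call sites.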