Summary
In the news this week, an article from MIT Technology Review explains how AI is helping to solve problems that were once thought to be solvable only by quantum computers. Many of the problems of practical interest to chemists and materials scientists can be simulated using density functional theory (DFT), and researchers have exploited this to generate data on chemicals, biomolecules, and materials, which in turn has been used to train AI systems. Elsewhere, Google DeepMind has released its Gemma Scope tool as open-source. The tool applies mechanistic interpretability techniques to highlight the model features that are activated when an AI model makes a decision, an important step towards explainability for AI models.
Regarding challenges for AI, Bloomberg reports that OpenAI, Google and Anthropic are all privately disappointed with the performance of their latest models. A lack of high-quality training data is seen as the main reason for the under-performance. The question AI firms are now asking is whether it makes sense to continue developing new large models or to concentrate on improving existing ones. Elsewhere, a Gartner report says that data center electricity demand will increase by as much as 160% over the next two years, which could leave 40% of AI data centers operationally constrained by power availability by 2027.
An article from InfoWorld highlights the renewed interest in private clouds since the advent of generative AI. IDC forecasts that spending on private clouds will exceed 20 billion USD in 2024 and double by 2027. The Hacker News reviewed how AI can improve Identity and Access Management (IAM). Fundamentally, AI can learn to recognize expected “normal” behavior and therefore flag suspicious behavior, such as unusual access patterns or large data transfers.
The X platform (formerly known as Twitter) has been in the news a lot. Its owner, Elon Musk, has extended his lawsuit against OpenAI. The gist of the lawsuit is that OpenAI is actively trying to eliminate competitors, including Musk’s company xAI, by “extracting promises from investors not to fund them” and by ensuring that rivals cannot obtain compute services from Microsoft on terms as favorable as those given to OpenAI. Meanwhile, the Guardian newspaper has announced that it will no longer post on the platform, citing its “often disturbing content” and Musk’s questionable declarations during the recent US elections.
Finally, a report by the US Cybersecurity and Infrastructure Security Agency (CISA), the FBI, and the NSA indicates that malicious actors are increasingly using zero-day exploits (where the vulnerability is unknown to the software producer). Eleven of the most exploited CVEs (Common Vulnerabilities and Exposures) in 2023 were zero-day vulnerabilities.
Table of Contents
1. OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
2. Google DeepMind has a new way to look inside an AI’s “mind”
3. Musk’s amended lawsuit against OpenAI names Microsoft as defendant
4. Gartner Predicts Power Shortages Will Restrict 40% of AI Data Centers By 2027
5. The rise of specialized private clouds
6. Why AI could eat quantum computing’s lunch
7. Guardian will no longer post on Elon Musk’s X from its official accounts
8. How AI Is Transforming IAM and Identity Security
9. 2023 Top Routinely Exploited Vulnerabilities
1. OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
This article reports that AI companies (OpenAI, Google and Anthropic) are seeing the performance of their large AI models begin to plateau. Insiders at these companies are quoted as saying that performance expectations for their new models are not being met. For instance, OpenAI's new Orion model reportedly falls short on software coding questions, and Google’s latest Gemini model and Anthropic’s Claude 3.5 Opus are also said to have issues. The two key factors contributing to the plateau are the lack of high-quality data and the cost of training. Concerning data, quantity does not compensate for quality and diversity. AI firms hope to address this issue through contracts with content providers (like news sites) as well as through collaboration with human annotators. Concerning costs, the article reports that training a large model this year costs around 100 million USD, a figure expected to reach 100 billion USD in the coming years. The question AI firms are now asking is whether it makes sense to continue developing new large models or to concentrate on improving existing ones.
2. Google DeepMind has a new way to look inside an AI’s “mind”
One of the major challenges for AI is explainability – understanding how an AI comes to a decision. The field of mechanistic interpretability studies the model features that are activated when a request is processed. For instance, when the line “To be or not to be, that is the question” is processed, the model may activate features like “questions that provoke critical thinking and inquiry”, “words related to personal possessions and actions of individuals”, and “references to significant events, locations, or figures in theatrical history”. A team at Google DeepMind working on mechanistic interpretability has released its Gemma Scope tool as open-source. The team has also partnered with Neuronpedia, a platform for mechanistic interpretability, to build a demo.
By identifying the features activated, mechanistic interpretability techniques (e.g., sparse autoencoders) may give greater control over the response generated. One application of this could be to prevent models from generating toxic content and to correct biases. For instance, a team led by Samuel Marks, now at Anthropic, studied the features of a small model that associated professions with gender; eliminating certain features made the model exhibit less gender bias. However, it is not clear whether this approach can work in large models. One example cited is that of bomb-making. Currently, a platform like ChatGPT appends hidden system prompts to user queries instructing the model not to reveal information on how to make a bomb, but this approach is susceptible to jailbreaking. Mechanistic interpretability may seem an appealing alternative, but the problem is that tuning down the features related to bomb-building also “lobotomizes” much of the model’s chemistry knowledge.
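To make the idea more concrete, the sketch below shows a minimal sparse autoencoder of the kind used in this line of work: it reconstructs a model's internal activations through a wide, sparsely activated layer, so that individual hidden units tend to line up with interpretable features. The dimensions, layer choice, and sparsity penalty here are illustrative assumptions, not the actual Gemma Scope configuration.

```python
# Minimal sketch of a sparse autoencoder (SAE) for mechanistic interpretability:
# it reconstructs a model's internal activations through a wide, sparsely
# activated hidden layer, so each hidden unit tends to correspond to one
# human-interpretable "feature". Dimensions and hyperparameters are
# illustrative assumptions, not the actual Gemma Scope configuration.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=2048, d_features=16384):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)   # activations -> feature space
        self.decoder = nn.Linear(d_features, d_model)   # feature space -> reconstruction

    def forward(self, activations):
        features = torch.relu(self.encoder(activations))  # sparse feature activations
        reconstruction = self.decoder(features)
        return features, reconstruction

def sae_loss(activations, features, reconstruction, l1_coeff=1e-3):
    # The reconstruction term keeps the SAE faithful to the model's activations;
    # the L1 term pushes most features to zero, which is what makes the
    # surviving features individually interpretable.
    mse = torch.mean((reconstruction - activations) ** 2)
    sparsity = l1_coeff * torch.mean(torch.abs(features))
    return mse + sparsity

# Toy usage: a batch of activations captured from some layer of a model.
acts = torch.randn(32, 2048)
sae = SparseAutoencoder()
features, recon = sae(acts)
loss = sae_loss(acts, features, recon)
loss.backward()
```

Interventions of the kind described above (reducing gender bias, suppressing bomb-making knowledge) then amount to scaling individual feature activations up or down before decoding.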
3. Musk’s amended lawsuit against OpenAI names Microsoft as defendant
Elon Musk has amended his lawsuit against OpenAI. The gist of the lawsuit is that Musk, an original board member, claims that OpenAI benefited from his early involvement when it was a non-profit organization with the goal of developing AI for the benefit of humanity. The filing complains that OpenAI is actively trying to eliminate competitors, including Musk’s company xAI, by “extracting promises from investors not to fund them” and by ensuring that rivals cannot obtain compute services from Microsoft on terms as favorable as those given to OpenAI. The amended lawsuit also names LinkedIn co-founder Reid Hoffman and Microsoft VP Dee Templeton as defendants. For Hoffman, the complaint argues that as a board member of both OpenAI and Microsoft and a partner at the Greylock investment firm, he had privileged access to information about the companies’ dealings. For Templeton, the complaint alleges that as a non-voting member of OpenAI’s board, she was in a position to facilitate agreements between Microsoft and OpenAI that violate antitrust principles. The complaint also adds Shivon Zilis as a plaintiff. Zilis is a former OpenAI board member, the mother of three of Musk’s children, and currently works at Musk’s Neuralink company. She argues that she raised concerns about OpenAI’s business dealings while at the company and is claiming “injured employee” status under California’s Corporations Code.
4. Gartner Predicts Power Shortages Will Restrict 40% of AI Data Centers By 2027
Gartner argues that AI and generative AI are driving up data centers’ demand for electricity. This demand is expected to increase by as much as 160% over the next two years, which could leave 40% of AI data centers operationally constrained by power availability by 2027. The power needed by data centers running AI services could reach 500 terawatt-hours (TWh) per year in 2027, 2.6 times the 2023 level. The increased demand puts pressure on power generation as well as on transmission infrastructure. One consequence of the expected power shortages will be higher electricity prices, which will in turn raise the prices of AI services and products. Gartner encourages organizations to factor these price hikes into their risk analyses and to consider options such as small language models, edge-computing approaches, fixed long-term contracts with AI providers, and controlling energy consumption. A second consequence of the increased electricity demand from AI data centers is higher CO2 emissions: the additional demand cannot be met by renewable sources (solar or wind) alone, so fossil fuel power plants must be kept in production for longer.
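As a quick sanity check of how these figures relate, the snippet below restates them: a 160% increase corresponds to a factor of 2.6, which implies a 2023 baseline of roughly 190 TWh. Only the 500 TWh figure and the 2.6 factor come from the article; the derived baseline is simple arithmetic.

```python
# Back-of-the-envelope restatement of the Gartner figures quoted above.
projected_2027_twh = 500      # AI data center demand in 2027 (TWh/year)
growth_factor = 2.6           # 2027 demand relative to 2023 (a 160% increase)

implied_2023_twh = projected_2027_twh / growth_factor
print(f"Implied 2023 consumption: {implied_2023_twh:.0f} TWh/year")  # ~192 TWh
print(f"Increase over 2023: {(growth_factor - 1) * 100:.0f}%")        # 160%
```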
5. The rise of specialized private clouds
This opinion article reviews the role of private clouds in the age of AI. A private cloud is a cloud infrastructure dedicated to a single client. Earlier private clouds were often a migration of on-premises infrastructure to the cloud and lacked the scaling-on-demand and self-provisioning of public clouds. However, the last two years have seen increased investment in private clouds. A referenced article cites IDC forecasts that spending on private clouds will exceed 20 billion USD in 2024 and double by 2027. Private clouds today can be specialized by workload, e.g., high-performance computing (HPC), storage, disaster recovery, edge computing, and compliance and security. Clouds can also be tailored to industries; in financial services, for instance, clouds specialize in high-speed transactions and regulatory compliance.
An AI private cloud offers services for deployment over GPU clusters as well as MLOps processes. The key advantages for organizations are data sovereignty, security, reduced latency, and the ability to capitalize on real-time data processing. On the other hand, private clouds introduce the risk of vendor lock-in, the risk of technology stagnation as AI frameworks evolve, and high costs. While public clouds operate on a pay-per-use model, private clouds carry substantial setup and operational costs: the article claims that hardware can range from 2 million USD to 10 million USD, and that licenses can exceed 500,000 USD annually.
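To illustrate the trade-off, a rough break-even comparison between pay-per-use public cloud GPUs and a private AI cloud might look like the sketch below. The hardware and license figures are the ones quoted in the article; the per-GPU-hour rate, cluster size, and utilization are illustrative assumptions.

```python
# Rough break-even sketch: private AI cloud (large fixed costs) versus
# public cloud GPU capacity billed per hour. Hardware and license figures
# come from the article; the public-cloud rate, cluster size, and
# utilization are illustrative assumptions.
HARDWARE_USD = 5_000_000        # one-off, mid-range of the 2-10 million USD quoted
ANNUAL_LICENSES_USD = 500_000   # per year, as quoted in the article
PUBLIC_GPU_HOUR_USD = 3.00      # assumed blended per-GPU-hour rate
GPUS = 256                      # assumed cluster size
UTILIZATION = 0.60              # assumed average utilization

def private_cost(years: float) -> float:
    return HARDWARE_USD + ANNUAL_LICENSES_USD * years

def public_cost(years: float) -> float:
    hours = years * 365 * 24 * GPUS * UTILIZATION
    return hours * PUBLIC_GPU_HOUR_USD

for years in (1, 2, 3, 5):
    print(f"{years} yr  private: {private_cost(years) / 1e6:5.1f} M USD   "
          f"public: {public_cost(years) / 1e6:5.1f} M USD")
```

Under these assumptions the private cloud overtakes pay-per-use during the second year, but the conclusion is highly sensitive to utilization: an idle private cluster never pays for itself.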
6. Why AI could eat quantum computing’s lunch
This article questions the belief that quantum computers will be needed to solve problems in chemistry and physics. A leitmotif of quantum computing is that problems from domains like chemistry and materials science require simulating systems with quantum effects, and that a computer built around quantum properties is best placed to do this. However, today’s quantum computers are relatively small (the largest has around one thousand qubits), and experts believe that tens of thousands or even millions of qubits will be needed before quantum computers definitively outperform classical computers. Further, the problem of getting data into and out of a quantum computer remains a challenge.
A key observation of this article is that not all of the domains where quantum computing is considered (medicine, finance, logistics, chemistry, etc.) are alike. The difference comes down to a physical property called entanglement, which describes how the quantum states of separate particles become correlated. Modeling entanglement is mathematically complicated and therefore challenging on classical computers, but the difficulty depends on the degree of entanglement. Many of the problems of practical interest to chemists and materials scientists involve systems where entanglement is weak. One approach for such weakly correlated systems is density functional theory (DFT), which researchers have exploited to generate data on chemicals, biomolecules, and materials; this data has then been used to train AI systems. For instance, Meta’s recently released materials data set is made up of DFT calculations on 118 million molecules. All of this means that the scale of problems that can be addressed by AI is increasing rapidly, and milestones like precisely simulating how drugs bind to proteins could be reached sooner than expected with the contribution of AI.
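The workflow the article describes (run DFT to label a large set of structures, then train an AI model that predicts the same quantities at a fraction of the cost) can be sketched as an ordinary supervised regression. Everything below, from the descriptor size to the synthetic data and the random-forest model, is an illustrative stand-in; production systems such as those trained on Meta's dataset use graph neural networks over atomic structures.

```python
# Minimal sketch of the "DFT data -> AI surrogate" workflow described above:
# a regressor learns to map a numerical description of a molecule or material
# to the energy that DFT would compute, so later predictions cost milliseconds
# instead of hours. The random data stands in for a real DFT-labelled dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in dataset: 10,000 structures, each reduced to a 64-dim descriptor
# (e.g., composition and geometry features), with a DFT-computed energy label.
X = rng.normal(size=(10_000, 64))
y = X[:, :8].sum(axis=1) + 0.1 * rng.normal(size=10_000)   # fake "DFT energies"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_train, y_train)                 # "training the AI on DFT data"

print("R^2 on held-out structures:", surrogate.score(X_test, y_test))
```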
7. Guardian will no longer post on Elon Musk’s X from its official accounts
The UK’s Guardian newspaper announced that it is stopping its posts to the X platform (formerly known as Twitter). The paper cites the platform’s “often disturbing content”, notably in relation to far-right conspiracy theories and racism, and says the platform’s coverage of the US presidential election cemented its decision. The Guardian wrote: “The US presidential election campaign served only to underline what we have considered for a long time: that X is a toxic media platform and that its owner, Elon Musk, has been able to use its influence to shape political discourse.” In reply, Musk posted on X that the Guardian was a “laboriously vile propaganda machine”. Musk considers himself a “free speech absolutist”, and the platform has already been criticized by anti-hate speech campaign groups. In the US, National Public Radio (NPR) and the broadcaster PBS have already left the X platform. The Guardian has over 80 accounts on X with around 27 million followers and says it will continue to use the platform for news-gathering purposes.
8. How AI Is Transforming IAM and Identity Security
This article looks at some of the potential benefits (though not the drawbacks) of AI in the field of Identity and Access Management (IAM). In today’s systems, the role of an IAM platform is not just to manage human users, but also IoT devices, external APIs, and autonomous systems. An example in software development is authenticating and authorizing access to code repositories for CI/CD pipeline tools and DevOps platforms. The fundamental premise of the article is that AI can learn to recognize expected “normal” behavior and therefore flag suspicious behavior, such as unusual access patterns or large data transfers. AI is also useful for simulating attacks and proposing fixes. In addition, AI can strengthen the principle of least privilege by making it adaptive and just-in-time: access is adaptive when it is based on job function or current threat intelligence, and just-in-time when the decision to grant access is made only at the moment the managed entity requires it, following a real-time risk analysis. Finally, AI can help with administrative tasks such as compliance report generation, and can capture the data most relevant to the regulatory standards the organization follows.
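As a concrete illustration of the “learn normal, flag abnormal” premise, the sketch below fits an unsupervised anomaly detector to simple access-log features and flags a late-night bulk transfer. The features, data, and contamination threshold are illustrative assumptions, not a description of any particular IAM product.

```python
# Minimal sketch of AI-assisted IAM anomaly detection: fit an unsupervised
# model on historical "normal" access events, then flag new events that
# deviate (odd hours, unusual data volumes, many resources touched).
# Features and data are illustrative assumptions, not a vendor's actual model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical events: [hour_of_day, MB_transferred, distinct_resources_touched]
normal_events = np.column_stack([
    rng.normal(14, 2, 5_000),       # mostly office hours
    rng.gamma(2.0, 5.0, 5_000),     # modest transfer sizes
    rng.poisson(3, 5_000),          # a handful of resources per session
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_events)

# New events: a routine session, and a 3 a.m. bulk export touching 40 resources.
new_events = np.array([
    [15, 12.0, 4],
    [3, 900.0, 40],
])
print(detector.predict(new_events))   # 1 = looks normal, -1 = flag for review
```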
9. 2023 Top Routinely Exploited Vulnerabilities
The US Cybersecurity and Infrastructure Security Agency (CISA), Federal Bureau of Investigation (FBI), and National Security Agency (NSA), in collaboration with New Zealand, Australian, Canadian, and UK cybersecurity agencies, have published a report on the most commonly exploited vulnerabilities of 2023. The report shows that malicious actors are increasingly using zero-day exploits (where the vulnerability in the software is unknown to the software producer): eleven of the most exploited CVEs (Common Vulnerabilities and Exposures) in 2023 were zero-day vulnerabilities. Three of the top 15 CVEs exploited were already known in 2022, calling into question the ability of organizations to react to published CVEs.
The report makes recommendations for software producers and end-users. Software producers are encouraged to follow secure-by-design practices (e.g., shipping secure configurations by default, eliminating default passwords rather than leaving security configuration to users, using static and dynamic testing tools, avoiding memory-unsafe languages, and responding to published CVEs). The report recommends NIST’s Secure Software Development Framework (SSDF) to developers.
Advice to end-users includes the timely application of patches; the use of security tools like endpoint detection and response (EDR), web application firewalls, and network protocol analyzers; and asking software providers to publish information on their security processes (such as how they are working to remove vulnerabilities). All VPN connections should have phishing-resistant multi-factor authentication (MFA). Organizations should not deploy Internet-facing services that they are unable to patch – such services should be managed by competent service providers. Organizations should implement Zero-Trust Network Architectures to control access to applications, devices, and databases, so as to limit the lateral movement of attackers. Another recommendation is that organizations compile a Software Bill of Materials (SBOM) covering all software used, in order to facilitate vulnerability monitoring and reduce the time to respond to identified vulnerabilities.
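As a small illustration of the SBOM recommendation, the sketch below cross-references a simplified, CycloneDX-style component list against a hypothetical advisory feed; with such an inventory in place, checking exposure to a newly published CVE becomes a lookup rather than a manual audit. Both the component list and the vulnerable-version entries are illustrative stand-ins, not real advisories.

```python
# Minimal sketch of why an SBOM shortens response time: given a
# machine-readable component list, checking exposure to a new CVE is a
# lookup rather than a manual audit. The SBOM snippet and the "known
# vulnerable" entries below are illustrative stand-ins.
sbom = {
    "components": [
        {"name": "openssl", "version": "3.0.7"},
        {"name": "log4j-core", "version": "2.14.1"},
        {"name": "requests", "version": "2.31.0"},
    ]
}

# Hypothetical advisory feed: package name -> set of affected versions.
known_vulnerable = {
    "log4j-core": {"2.14.1", "2.15.0"},
    "openssl": {"1.1.1q"},
}

affected = [
    c for c in sbom["components"]
    if c["version"] in known_vulnerable.get(c["name"], set())
]
for component in affected:
    print(f"ACTION NEEDED: {component['name']} {component['version']} is affected")
```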