Addictive AI

Amara's Law and Far-Right Extremists

Posted on August 13th, 2024

Summary

This week saw articles about key companies in the AI space. An article from the Financial Times examines the special relationship between Microsoft and OpenAI and argues that Microsoft is diversifying its AI investments so that it could withstand the disappearance of OpenAI. Meanwhile, Intel is laying off 15% of its workforce following heavy financial losses. Mistral has released three new models.

An article in MIT Technology Review reports on research that analyzed ChatGPT interaction logs to see what people are really using the chatbot for. It found that the three most popular uses of ChatGPT are generating creative content, sexual role-playing and seeking factual explanations. Another article warns of addictive intelligence, highlighting the risk of users becoming emotionally dependent on chatbots. Elsewhere, the UK is funding the Safeguarded AI project, whose aim is to develop mathematical models for quantitatively evaluating the risks of AI systems.

MIT Technology Review co-authored a survey of US companies on AI adoption. The study found that while few companies are generating new revenue from AI, spending on AI in 2024 will increase significantly. A VentureBeat post argues that we have been too optimistic about the timeline for AI’s impact, and that the promises made by AI firms may not come true until the 2030s. Finally, an article in the UK’s Guardian newspaper explains how far-right extremists are using AI and bots to incite riots and disseminate xenophobic and antisemitic content on social media.

1. Here’s how people are actually using AI

This first article reports on research that analyzed one million ChatGPT interaction logs to better understand what people use the tool for. Unsurprisingly, the most common use is generating creative content. The second most common use, which surprised the researchers, is sexual role-playing. The third is obtaining general information and factual explanations. The article highlights how, in addition to the better-known risk of hallucination, another real risk is users developing an emotional dependency on the AI, something perhaps encouraged by AI providers when their chatbots have appealing voices – as is the case for OpenAI’s GPT-4o. OpenAI observed this risk during testing sessions, through the phrases people used when interacting with the chatbot.

2. We need to prepare for ‘addictive intelligence’

This theme is taken up by another MIT Technology Review article that coins the phrase addictive intelligence. The authors use the term dark patterns to describe techniques used by AI companies to keep users engaged. Previous generations have complained of addictive technologies, from novels in the 18th century to TV in the 20th and then smartphones, but AI’s capacity to continually generate new content sets it apart from these earlier cases. Another feature that fosters addiction is what researchers call the sycophantic nature of a chatbot: it responds with content adapted to the user’s personality and preferences. The authors call for chatbot mechanisms that can detect when users are becoming too emotionally dependent.

3. AI godfather Yoshua Bengio has joined a UK project to prevent AI catastrophes

The UK’s Advanced Research and Invention Agency (ARIA) has launched Safeguarded AI – a research project whose aim is to develop techniques to check for errors in AI systems. Today, red-teaming is the main measure used to validate AI systems: a group of people from within or outside the organization try to “break” the AI system in an attempt to uncover problems. This is obviously not a rigorous approach. The goal of Safeguarded AI is to develop mathematical models for validating AI systems, so that quantitative measures of safety can be attributed to them, particularly in high-risk industries such as supply chains, energy provision, health and telecommunications. The project is supported by Yoshua Bengio, who shared the 2018 ACM Turing Award for his work on deep learning and who actively campaigns to raise awareness of the risks of AI. The article points out that Safeguarded AI has a similar objective to that of OpenAI when it was founded, before its strategic pivot toward products and profit.

4. Mistral AI Releases Three Open-Weight Language Models

Mistral AI has released three new open-weight language models under the Apache 2.0 license: Mistral NeMo, a general-purpose LLM; Codestral Mamba, a code-generation model; and Mathstral, a model fine-tuned for math reasoning. The post notes that NeMo outperforms the similarly sized models Gemma 2 9B and Llama 3 8B on MMLU and other benchmarks. Mistral AI claims that Mathstral "achieves state-of-the-art reasoning capacities in its size category", scoring 63.47% on MMLU and 56.6% on MATH.

One can see from this post the increasing weight given to benchmarks for language models. The MMLU benchmark (Massive Multitask Language Understanding) evaluates a model’s knowledge and problem-solving skills across 57 subjects, including mathematics, history, computer science and law. GPT-4o, Claude 3.5 Sonnet and Claude 3 Opus currently lead on this benchmark.
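To make concrete how a benchmark such as MMLU is scored, here is a minimal sketch of the usual multiple-choice protocol: each question is presented with four lettered options, the model is asked to reply with a single letter, and accuracy is the fraction of answers that match the reference letter. The query_model callable is a hypothetical stand-in for whatever API or local model is being evaluated; it is not part of MMLU itself.

```python
# Minimal sketch of an MMLU-style multiple-choice evaluation.
# `query_model` is a hypothetical stand-in for a call to any chat model;
# replace it with your own API or local inference code.

from typing import Callable

LETTERS = ["A", "B", "C", "D"]

def format_question(subject: str, question: str, choices: list[str]) -> str:
    """Build a zero-shot multiple-choice prompt in the usual MMLU style."""
    lines = [
        f"The following is a multiple-choice question about {subject}.",
        "Answer with a single letter (A, B, C or D).",
        "",
        question,
    ]
    lines += [f"{letter}. {choice}" for letter, choice in zip(LETTERS, choices)]
    lines.append("Answer:")
    return "\n".join(lines)

def evaluate(examples: list[dict], query_model: Callable[[str], str]) -> float:
    """Return accuracy: the fraction of questions where the model's answer
    letter matches the reference answer."""
    correct = 0
    for ex in examples:
        prompt = format_question(ex["subject"], ex["question"], ex["choices"])
        reply = query_model(prompt).strip().upper()
        predicted = next((c for c in reply if c in LETTERS), None)
        correct += int(predicted == ex["answer"])
    return correct / len(examples)

# Toy usage with a fake model that always answers "B".
sample = [{"subject": "mathematics",
           "question": "What is 7 * 8?",
           "choices": ["54", "56", "58", "64"],
           "answer": "B"}]
print(evaluate(sample, lambda prompt: "B"))  # 1.0
```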

Another notable feature is the use of the term open-weight, as opposed to the more commonly known term open-source. A good explanation of these terms can be found in a post by Sunil Ramlochan. He reminds us that weights are the output of training runs on data; they are not human-readable or debuggable, unlike the model’s source code. The weights represent the AI’s knowledge. An open-weight model is one whose weights are made available for use or modification, while the source code and training data are not. In an open-source model, the weights, source code and training data are all made available. Open-weight models can be modified, for example through fine-tuning, though this is not an easy task. They nevertheless lack the flexibility of open-source models since, for instance, biases in the training data cannot be corrected.
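To illustrate what open-weight means in practice, here is a minimal sketch that downloads published weights and runs them locally with the Hugging Face transformers library. The repository name mistralai/Mistral-Nemo-Instruct-2407 is an assumption based on Mistral’s usual naming convention, so check the actual model page before running it. Note that this gives you the trained parameters to run or fine-tune, but not the training data or training pipeline.

```python
# Minimal sketch: running an open-weight model locally with Hugging Face
# transformers. The repository name below is an assumption; substitute the
# actual Mistral NeMo repo (and accept its license) before running.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Nemo-Instruct-2407"  # assumed repo name

# Downloads the published weights (several GB) and the matching tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The weights are just tensors of numbers: you can run or fine-tune them,
# but the training data and training code are not included.
prompt = "Explain the difference between open-weight and open-source models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```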

Source: InfoQ

5. Gen AI’s awkward adolescence: The rocky path to maturity

After the hype of 2023 around the possibilities of generative AI, several recent articles have pointed to the challenges facing AI projects, notably in relation to costs. In this post, the authors argue that just as the early promises were too readily believed, so too are the current pessimistic claims that AI is failing to live up to them. The fundamental issue is that humans are bad at giving realistic timeframes for technology improvements. The authors cite Amara’s Law: we tend to overestimate the impact of a new technology in the short run, but underestimate it in the long run. As for AI, the technology is already in use, but it will probably be the 2030s before we see humanoid robots and fully autonomous vehicles. AI is currently used in critical fields like health and finance, but IT infrastructures on a global scale are not ready for the technology, in large part due to data quality issues and inadequate data management infrastructures.

6. A playbook for crafting AI strategy

In partnership with Bloomi, MIT Technology Review surveyed over 200 US C-suite executives and data and technology leaders in March 2024 on their views on AI adoption. The take-up of AI has been slow, with just 5.4% of companies using the technology to produce a product or service in 2024. The reluctance stems from concerns about AI costs and uncertainty about how to measure return on investment. Nevertheless, the survey suggests that interest in AI is not waning, with 40% of companies planning to increase spending on AI by up to 25% in 2024. The report notes a shifting mindset among C-suite executives, who are beginning to see AI as a means to generate new revenue rather than as a cost-saving technology. Data quality is seen as a barrier to AI adoption by half of the companies surveyed. Data lineage (understanding and mastering the flow of data through the organization) and data liquidity (making the right data available in a timely manner) are seen as the two key technical requirements for AI projects to succeed. Finally, the report underlines that companies are aware of the risks of AI, from hallucinations to bias, and expect further regulation in the vein of the EU AI Act.

7. How Microsoft spread its bets beyond OpenAI

This article looks at the atypical relationship between Microsoft and OpenAI. Microsoft has invested 13 billion USD in the start-up since 2019, and its share price has more than doubled over that period. One of the challenges for Microsoft is the apparent instability at OpenAI, illustrated by the removal and reinstatement of Sam Altman as CEO in November 2023 as well as by a number of high-profile departures since then.

Microsoft has been working to diversify its AI investments in 2024. It has made a deal with the French AI start-up Mistral and invested 1.5 billion USD in the Abu Dhabi AI group G42. It also signed an acqui-hire deal with the AI company Inflection to license its technology. (This deal is being scrutinized by the US Federal Trade Commission because it could have been structured to circumvent antitrust laws – Microsoft gets Inflection’s talent and software without the formal scrutiny that a full takeover would incur.) Microsoft has also developed its own Phi-3 language models, which are smaller than OpenAI’s GPT-4 models. The company is seeking to make itself resilient to whatever happens at OpenAI. The article cites CEO Satya Nadella as saying "We have all of the [IP] rights to continue the innovation ... We have the people, we have the compute, we have the data, we have everything".

OpenAI, for its part, is seeking to reduce its dependence on Microsoft. Its deal with Apple, announced in June 2024, will see ChatGPT integrated into 2.2 billion devices around the world.

8. How TikTok bots and AI have powered a resurgence in UK far-right violence

Several UK cities have seen riots over the past week following the death of three children in a stabbing attack in the town of Southport. Police identified several far-right groups as having encouraged the rioting, notably by using social media networks and AI-generated content. The group Europe Invasion generated an image of a man in traditional Muslim dress holding a knife and standing behind a child outside the Parliament buildings. The image was shared on X with the caption "We must protect our children" and received 900,000 views. The article reports that AI is also being used to create xenophobic and antisemitic songs and texts for posts on popular platforms like X, TikTok and Facebook. There is also evidence that bots are being programmed to drive traffic to these posts, with one xenophobic TikTok post getting 57,000 views in a single hour. Police believe that these social media campaigns are financed through thousands of micro-donations from individuals, encouraged by far-right influencers.

9. Intel was once a Silicon Valley leader. How did it fall so far?

This post looks at the precarious position of Intel, which has just announced a layoff of 15 percent of its workforce following 7 billion USD in losses in 2023 and a 31 percent decrease in revenue from 2022. The company has had a turbulent few years, exemplified by Apple dropping Intel chips from its products in favor of its own silicon, and there were already mass layoffs in October 2022. Fundamentally, the company is seen as having missed its “AI transition”, despite developments around its Gaudi technology, and is lagging a long way behind Nvidia. Intel believes that its current cost-reduction measures will save the company 10 billion USD in 2025, and it is receiving investment grants under the US CHIPS Act – legislation aimed at encouraging chip manufacturing on US soil, notably to ensure a supply of chips in the event of Taiwan being invaded or blockaded by China.