A Review of AI in 2024

One year of Technology Watch

Posted on January 22nd, 2025

A Year in Review

Audio Summary

AI was one of the most impactful technologies of 2024. Technology companies continued to invest heavily in its development. Regulation was enacted to control AI risks while, at the same time, the technology was increasingly used by bad actors.

The attached document, A Review of AI in 2024, presents some key lessons and offers perspectives for 2025. Among the key lessons:

  • Large language models (LLMs) continued to grow in size. Nonetheless, AI models may be hitting limits due to a shortage of quality training data and the high cost of training. The data challenge is that AI firms cannot (or should not) use copyrighted content to train models and, under the General Data Protection Regulation, may only use personal data from social media platforms with the express permission of users. One potential solution is synthetic data, i.e., data that is artificially generated but has the same statistical properties as real-world data (see the first sketch after this list).
  • OpenAI completed its transformation from a non-profit to a for-profit entity. It expects an annual loss of 5 billion USD and is the target of lawsuits, including one from the New York Times accusing the company of using copyrighted articles to train its GPT models. It nonetheless managed to raise over 6 billion USD in funding.
  • The year saw small language models (SLMs) become popular. An SLM can be a scaled-down version of a large model (fewer weights, or weights stored at lower numerical precision), or a model tailored to a very specific task, e.g., text-to-SQL generation. Future company application suites could be 50% traditional applications and 50% SLMs, and SLMs can run on-premises on a single GPU (see the quantization sketch after this list).
  • Despite lawsuits brought by content providers (newspapers, music companies) against AI firms for using their content to train models, we can expect agreements to emerge between these parties: AI companies need content, and content providers need revenue. OpenAI has already signed such agreements with Reuters, Le Monde and others.
  • The European Union’s Artificial Intelligence Act entered into force on August 1st, 2024 and will become fully applicable two years later, in August 2026. Its goal is to protect the health, safety and fundamental rights of EU citizens from the risks of AI systems.
  • The AI field won Nobel prizes in both chemistry and physics in 2024. AI is now as critical to scientific breakthroughs as IT and even mathematics were in the past.
  • Big Tech dominance in AI remains intact due to the high cost and resource requirements of AI development. Nvidia’s market cap reached 3 trillion USD in June, making it only the third US company, after Microsoft and Apple, to reach that valuation. Nvidia’s revenue from data-center GPUs is expected to be around 100 billion USD in 2025, exceeding its gaming revenue.
  • AI is pushing up data centers’ demand for electricity, and many data centers will not be able to cope in the next few years. Big Tech has signed deals with nuclear and solar energy providers. Nonetheless, power shortages will occur, leading to higher prices for AI services and products.
  • The largest criminal use of generative AI today is by pedophile groups creating images and videos of children being sexually abused. Sextortion is also evolving, with criminals “nudifying” photos of victims taken from social media. Another criminal use of AI is the “heist”, in which deepfake technology is used to impersonate company executives and convince employees to transfer large sums of money. Generative AI has also been used to create content for covert influence operations on social media aimed at swaying elections.
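
To make the idea of synthetic data more concrete, here is a minimal sketch in Python. It fits a simple statistical model (per-feature means and a covariance matrix) to a stand-in dataset and samples new records from it. The dataset, feature values and Gaussian model are all illustrative assumptions; production synthetic-data tools use far more sophisticated generative models.

    # Minimal sketch of synthetic data generation for numeric tabular data.
    # The Gaussian model below is an illustrative assumption; real tools
    # (GANs, diffusion models, etc.) are far more sophisticated.
    import numpy as np

    rng = np.random.default_rng(seed=42)

    # Stand-in for a real dataset: 1,000 records with 3 numeric features.
    real_data = rng.normal(loc=[50.0, 100.0, 0.5],
                           scale=[10.0, 25.0, 0.1],
                           size=(1000, 3))

    # Fit a simple statistical model: per-feature means plus the full
    # covariance matrix of the real data.
    mean = real_data.mean(axis=0)
    cov = np.cov(real_data, rowvar=False)

    # Sample synthetic records from the fitted distribution. No synthetic
    # row corresponds to a real record, but aggregate statistics match.
    synthetic_data = rng.multivariate_normal(mean, cov, size=1000)

    print("real means:     ", np.round(real_data.mean(axis=0), 2))
    print("synthetic means:", np.round(synthetic_data.mean(axis=0), 2))

Because the synthetic rows are drawn from the fitted distribution rather than copied, they preserve aggregate statistics without reproducing any individual's data.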

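In the same spirit, the sketch below shows one way an SLM can be derived from a larger model: naive 8-bit weight quantization, in which each 32-bit weight is stored as a single byte plus one shared scale factor, cutting memory use by a factor of four. The per-tensor symmetric scheme shown is a deliberate simplification; real quantization methods (per-channel scales, calibration data, formats such as GPTQ or GGUF) are considerably more involved.

    # Minimal sketch of 8-bit weight quantization, one way to "water down"
    # a large model into a smaller one. This naive per-tensor symmetric
    # scheme is illustrative only; production methods are more elaborate.
    import numpy as np

    def quantize_int8(weights: np.ndarray):
        """Map float32 weights to int8 values plus one float scale factor."""
        scale = np.abs(weights).max() / 127.0          # largest magnitude maps to 127
        q = np.round(weights / scale).astype(np.int8)  # 1 byte per weight instead of 4
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        """Recover approximate float32 weights at inference time."""
        return q.astype(np.float32) * scale

    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 4)).astype(np.float32)

    q, scale = quantize_int8(w)
    w_approx = dequantize(q, scale)

    # Storage drops 4x (float32 -> int8) at the cost of a small rounding error.
    print("max abs error:", np.abs(w - w_approx).max())

The trade-off is a small rounding error per weight in exchange for a model that fits in a quarter of the memory, which is what makes running on a single on-premises GPU feasible.
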
All of these issues are treated in detail in the report, which can be found here.