Mining Rare Metals

Tough Week for Telegram and X (formerly Twitter)

Posted on August 31st, 2024

Summary

This week saw an in-depth article by MIT Technology Review on the mining of rare metals for technology components. In addition to the problem of limited resources, the article highlights how environmentally damaging mining is: large amounts of rock must be mined to obtain moderate amounts of metal, and dangerous chemicals are often required to separate the metals.

On the uptake of generative AI, the advisory firm KPMG has published its 2024 survey on the use of generative AI in large companies – 78% of respondents expect a return on investment within 3 years. Gartner’s 2024 Hype Cycle for Emerging Technologies report places generative AI on the downward slope of the hype cycle, but this simply means that interest in the technology is becoming more measured, with organizations setting more realistic objectives. An article by science writer Bill Gourgey argues that the limit of AI, compared to humans, is its inability to “feel”, which is fundamental to how humans learn and achieve expertise.

It was a challenging week for social networks. Pavel Durov, the founder of Telegram, was arrested in France as part of an ongoing investigation into criminal activity on the platform. The Brazilian Supreme Court has banned the X (formerly Twitter) platform in the country, stemming from a row over the use of X accounts to disseminate disinformation.

On the issue of training data, the web crawlers of major AI firms are increasingly being blocked by news sites that fear copyright violations. Elsewhere, an MIT-led research project analyzed over 1’800 datasets on popular hosting sites and found that 70% had missing, incorrect or incomplete licensing and data provenance information.

Finally, on the geopolitical front, a hacker group linked to the Chinese government was found to have developed malware that exploits a zero-day flaw in software used by major US Internet service providers.

1. Generative AI is sliding into the ‘trough of disillusionment’

This Computerworld article reviews the place of generative AI in the 2024 Gartner Hype Cycle for Emerging Technologies. Gartner’s Hype Cycle model is a graph that describes how interest in and adoption of technologies evolve over time. The first part of the graph shows a steep rise in interest in a technology, leading to a peak of inflated expectations. Interest then falls, as many companies discover the technology is not mature enough or not suitable for them, into a trough of disillusionment. Interest then begins to rise again in a more measured manner, along what the model calls the slope of enlightenment, until it reaches a plateau of productivity.

For Gartner, generative AI technologies have begun their decline from the peak of inflated expectations towards the trough of disillusionment, despite the success of AI-assisted code generation tools. A key point cited by Gartner is that companies are unsure how to generate or measure a return on investment from the technology, especially given the data management and governance challenges that need to be resolved. Nonetheless, Gartner expects company spending on generative AI to reach 42 billion USD by 2030.

A GitHub blog post reports that GitHub has been named a leader by Gartner in the field of AI code assistants, with 77’000 organizations using Copilot today. GitHub plans to continue development of Copilot, citing the following strategic software engineering concerns: application modernization, streamlining code migration, enhancing performance, reducing technical debt, boosting development speed, and addressing skills gaps.

2. This rare earth metal shows us the future of our planet’s resources

This article takes a deep look at the issues around the supply of the rare metals used in technology products. It mainly cites the example of neodymium – a metal long used to color decorative glass, but more recently also used in the high-powered magnets found in devices ranging from smartphones to wind turbines. The metal’s reserves – the amount economically feasible to extract – are estimated at 12.8 million tons, while wind turbines alone require 121’000 tons each year (a back-of-envelope calculation after the list below puts these figures in perspective). Among the issues raised in the article:

  • Predictions on reserves tend to be pessimistic, as new reserves are regularly discovered and mining techniques improve. The article gives the example of oil: in 1956, a Shell company geologist estimated that peak oil (the point at which production starts to decline because of diminishing supplies) would be reached around the year 2000. However, with advances in drilling and fracking techniques, more oil is being produced today than ever before.
  • The demand for materials is not constant. Demand for some materials may wane as technologies evolve. In the case of oil, for instance, the development of renewable energies may be what eventually brings about the peak oil moment.
  • The extraction of rare metals like neodymium can be environmentally damaging. The metals are not found in concentrated deposits, so large amounts of rock need to be mined to obtain sufficient quantities of metal. Further, harsh chemicals are often used to separate the metal from the ore, and mines can even become radioactive because thorium and uranium are often found alongside the metals.
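As a rough illustration, a minimal calculation using only the article’s two figures (and ignoring both demand growth and every use of neodymium other than wind turbines) suggests the stated reserves would last on the order of a century:

    # Back-of-envelope calculation using only the article's figures.
    # It ignores demand growth and all uses of neodymium other than
    # wind turbines, so it is an illustration, not a forecast.
    reserves_tons = 12.8e6               # estimated economically viable reserves
    wind_demand_tons_per_year = 121_000  # annual demand from wind turbines alone
    print(f"~{reserves_tons / wind_demand_tons_per_year:.0f} years")  # ~106 years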

In the long run, efforts need to be made to prolong the lifetimes of electronic products, and this may require investment in infrastructure for processing end-of-life equipment to recover metals.

3. Telegram founder arrest part of cybercrime inquiry, say prosecutors

Pavel Durov, the founder of Telegram, was arrested in France as part of an ongoing investigation into criminal activity on the platform. Telegram is accused of not doing enough to prevent this activity and of not cooperating closely enough with police forces. The French police are looking at 12 alleged offenses, including organized crime gangs using the platform for fraud and drug offenses. The platform is also accused of complicity in the distribution of child sexual abuse images. Telegram has almost 1 billion users today. The article mentions how Telegram has been used by pro-democracy activists in Belarus, Hong Kong and Iran, but also by extreme right-wing groups to organize last month’s riots in the UK, which followed the knife attack that killed three children in England.

This case makes the EU’s Digital Services Act worth mentioning. The Act, adopted in 2022, is designed to prevent social media platforms and online marketplaces from distributing illegal content. One aim is to ban the online sale of goods that are illegal to sell offline, such as items not meeting EU standards. Platform providers are also responsible for removing content that has been flagged as illegal. Platforms are expected to cooperate with national authorities, and online platforms with more than 45 million monthly active users in the EU (LinkedIn, Facebook, TikTok, YouTube, etc.) have the largest set of obligations. The Act also covers issues such as transparency in relation to content recommendation (i.e., explaining how recommendation algorithms work) and in relation to moderation decisions.

4. KPMG Generative AI Survey 2024

The advisory firm KPMG has just published its 2024 survey on the use of generative AI in large US companies (those with over 1 billion USD in revenue). 225 C-suite and senior business leaders were questioned in June and July 2024. Though it focuses on large companies, the results are interesting because these companies are already using the technology and investing heavily. The main uses of generative AI cited are inventory management (64%), document assessment in healthcare (51%), workflow automation in technology and media (43%), and customer-service chatbots in financial services (30%). 71% of respondents say that GenAI is already impacting their business models through the use of data analysis in decision-making. Over half of respondents are expanding GenAI initiatives to other parts of their organization, and 78% expect a return on investment within 3 years. The driving factors for investment are increased revenue and productivity. Most companies purchase their AI tools from vendors (only 12% develop completely in-house), and this dependence is seen as one of the primary risks of the technology. The other major perceived risks are cybersecurity and data loss, as well as poor data quality. Companies expect more regulation of AI, and over half fear the cost of making their AI infrastructures compliant.

5. Will computers ever feel responsible?

In this article, Bill Gourgey, a science writer based in Washington, DC, takes a skeptical view of the potential of AI. He cites the well-known Dreyfus skill acquisition model, which describes human learning as moving through five stages: novice, advanced beginner, competent, proficient, and expert. The novice relies heavily on rules and guidelines; the advanced beginner has developed some skill but still requires guidelines for more complex tasks; a competent person is self-reliant and can adapt to challenges; a proficient person performs tasks with accuracy; and an expert relies on intuition gained from experience to complete tasks. For the author, experts have embedded the rules of a task in their brains and "nervous systems", and feel "responsible" for the outcomes of their decisions. This is the inherent limit of AI: its inability to care or to share concerns. Citing Stuart Dreyfus, one of the creators of the learning model: "It seems to me that in any area which involves life-and-death possibilities, AI is dangerous, because it doesn’t know what death means".

6. Chinese government hackers targeted US internet providers with zero-day exploit, researchers say

The hacker group Volt Typhoon, linked to the Chinese government, was discovered to have developed malware that exploits a zero-day flaw in the Versa Director software. This software is used by internet service providers (ISPs) and managed service providers (MSPs) throughout the US, which explains why it is an attractive target for hackers. The malware was designed to steal credentials for services, allowing the attackers to propagate to further services over connected networks. The Volt Typhoon group’s main aim is to target critical infrastructure in the US, notably in anticipation of a potential conflict with the US following a Chinese invasion of Taiwan.

7. Major Sites Are Saying No to Apple’s AI Scraping

This Wired article reports how an increasing number of news websites and social media platforms are blocking Apple’s web-crawling bot, Applebot-Extended, which collects data to train Apple’s AI. The sites blocking the bot include Facebook, Instagram, Craigslist, Tumblr, The New York Times, The Financial Times, The Atlantic, Vox Media, the USA Today network, and Condé Nast. Applebot-Extended is an evolution of Applebot – Apple’s web-crawler for Siri and Spotlight. A website can block a bot by adding the bot’s name to the robots.txt file hosted on the site (see the sketch below), though bots can technically ignore this restriction. One criticism of the robots.txt approach is that it is opt-out: a website needs to know that a bot exists in order to block it, whereas copyright protection should apply whether or not technical measures are in place. AI bots from Google, OpenAI and Anthropic are also largely blocked by news websites, which fear that their content will be served up by AI platforms without them receiving (financial) recognition. However, the tendency is towards commercial agreements between news companies and AI firms. For instance, the article cites an agreement between the news company Condé Nast and the AI firm OpenAI, which has led to Condé Nast removing the block it had put in place against OpenAI’s web-crawling bot.
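As an illustration of the mechanism, the robots.txt rule below is what a site publishes to opt out of Applebot-Extended’s scraping, and Python’s standard-library robotparser shows how a compliant crawler interprets it (the example.com URL is just a placeholder):

    from urllib.robotparser import RobotFileParser

    # The opt-out rule a site places in its robots.txt:
    #   User-agent: Applebot-Extended
    #   Disallow: /
    parser = RobotFileParser()
    parser.parse(["User-agent: Applebot-Extended", "Disallow: /"])

    # A compliant crawler checks these rules before fetching any page.
    print(parser.can_fetch("Applebot-Extended", "https://example.com/story"))  # False
    print(parser.can_fetch("SomeOtherBot", "https://example.com/story"))       # True

As the article notes, nothing technically forces a crawler to perform this check – robots.txt is a convention, not an enforcement mechanism.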

8. Study: Transparency is often lacking in datasets used to train large language models

An MIT-led research project has developed a tool – the Data Provenance Explorer – to help AI researchers choose their datasets. The project analyzed over 1’800 datasets on popular hosting sites and found that 70% of them had missing, incorrect or incomplete licensing and data provenance information. A core issue is that datasets are often treated as monolithic blocks, rather than as collections of individual data items coming from different sources. Information about the sources of individual items can be lost, which makes it harder to detect potential leaks of personal data or infringements of copyright, and increases the possibility of biased or erroneous content in the dataset. The research has been published in Nature Machine Intelligence. The Data Provenance Explorer tool generates easy-to-read summaries of a dataset’s characteristics based on content creators, data sources, licenses, and allowable uses.
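The kind of audit described can be pictured with a minimal sketch. The record structure and field names below are hypothetical – they are not the Data Provenance Explorer’s actual schema – but they show the basic idea of flagging datasets whose licensing or provenance metadata is missing:

    # Hypothetical metadata records; the 'license' and 'source' fields are
    # illustrative, not the Data Provenance Explorer's real schema.
    datasets = [
        {"name": "corpus-a", "license": "CC-BY-4.0", "source": "news-archive"},
        {"name": "corpus-b", "license": None, "source": "web-crawl"},
        {"name": "corpus-c"},  # no licensing or provenance information at all
    ]

    # Flag any dataset with missing license or source information.
    flagged = [d["name"] for d in datasets
               if not d.get("license") or not d.get("source")]
    print(flagged)  # ['corpus-b', 'corpus-c']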

Another Nature article looks at the issue of academic papers being used as training data by AI firms. The academic principle of fair attribution requires that researchers be cited for their work, but AI models do not yet have the technical means to respect this requirement, despite advances in retrieval-augmented generation (RAG). The article cites a Washington Post investigation which found that PLOS and Frontiers journals were prominent in the C4 dataset used to train Meta’s Llama models. It also mentions that the World Intellectual Property Organization (WIPO) remains undecided about whether the use of data to train AI models constitutes copyright infringement.

Source: MIT News

9. Musk's X banned in Brazil after disinformation row

The Brazilian Supreme Court has banned the X (formerly Twitter) platform in the country following the company’s failure to comply with a requirement to appoint a new legal representative there. The platform has been under judicial scrutiny after being used to host accounts spreading disinformation – mostly by supporters of Brazil’s former far-right president Jair Bolsonaro. The courts had previously ordered these accounts to be blocked. Elon Musk, for his part, is strongly opposed to any type of censorship on the platform. Apple and Google have been given five days to remove X from their app stores for Brazilian users. The X platform is estimated to have 20 million users in Brazil.