Biden Executive Order: 6 months later

Resurrection Services and the Chip Industry

Posted on May 19th, 2024

Summary

This week's developments follow up on President Biden’s executive order on the responsible use of AI from late last year. A White House post explains how various departments have responded to the initiative. In another article, former Google CEO Eric Schmidt calls for a national computing program to meet the computing demands of AI. Another article discusses how Intel and TSMC are building new campuses as part of a push by U.S. tech companies to produce their own AI chips; these efforts are also designed to mitigate geopolitical risks in relation to Taiwan.

Elsewhere, Elie Bursztein from Google DeepMind gave a talk on the current state of using large models to defend against cyber-attacks. An article on an ISACA survey about generative AI adoption highlights the need for better governance in organizations and argues that the lack of governance is essentially due to a lack of appropriate skills. A blog post from Andrew Mayne, formerly with OpenAI, argues that AI can generate more jobs and support further economic growth.

Regarding the applications of generative AI, there is an article on using AI for static application security testing and another on a development framework for building retrieval-augmented generation applications on Intel processors.

Among the other articles, a new risk of AI is highlighted, stemming from models used in strategic games where the AI needs to bluff humans to win – these models resort to lies and deceitful tactics against humans even when trained not to. Another noteworthy application of AI is creating clones of deceased people – there is an emerging market for AI resurrection services in China that enables people to converse with their dearly departed.

1. Eric Schmidt: Why America needs an Apollo program for the age of AI

In this article, Eric Schmidt, CEO of Google from 2001 to 2011, argues for a national computing strategy with the same concerted focus as that of the Apollo space program in the 1960s. He underlines the importance of AI and computing infrastructure for the US to maintain global leadership. A survey by MIT found that 84% of major computing users faced computation bottlenecks, highlighting the need for federal support in AI development. China, for example, aims to increase its aggregate computing power by more than 50% by 2025. Following President Biden's executive order on AI, the National Science Foundation launched the National AI Research Resource (NAIRR) to provide AI computing power, data access, and training resources. The U.S. has made significant advancements in computing with the arrival of the exascale era of machines like Frontier, Aurora, and El Capitan. These efforts potentially enable the training of 500-trillion-parameter AI models; in comparison, GPT-3 has only 175 billion parameters.

2. What’s next in chips

TSMC and Intel are building new campuses in the US to strengthen the country's position in chip manufacturing. In March, President Joe Biden announced 8.5 billion USD in federal funds and another 11 billion USD in loans for Intel, followed by 6.6 billion USD for TSMC. (Aside: a key motivation is to ensure technological independence. Taiwan currently manufactures 90% of the chips needed for GenAI platforms, and a conflict involving China and Taiwan would threaten supply.) The funding follows from the 2022 CHIPS and Science Act. Japan and Europe are also investing in chip manufacturing. On the scientific side, there is a push to develop chips for "edge" AI computing, that is, running models locally or on premises rather than in the cloud, which mitigates corporate concerns about data leakage. This technology also interests the military for applications like satellites.

The last few years have seen Nvidia grow enormously on the back of demand for computing power, but other Big Tech companies like Google, Amazon and Microsoft are pushing to develop their own chips. Nvidia's dominance makes it very difficult for chip startups to carve out a niche. One challenge is that software typically needs to be adapted to a particular chip, and AI model creators tailor their software to Nvidia and other popular chips because of their market dominance.

3. Generative AI is everywhere – but policy is still missing

This article reports on research from ISACA which finds that up to three-quarters of European organizations already use AI at work, but only 17% have a formal and comprehensive governance policy for the technologies. The article quotes one expert as saying that many “organizations are trying to figure [the technology] out … but technology does not wait for [this] to happen”. The authors believe that the biggest reason for the gap between AI usage and AI governance is a lack of skills. Other figures cited include that 62% of organizations use GenAI to create written content and that 61% of respondents admitted to being very worried that generative AI could be exploited by bad actors. Furthermore, while 38% of workers surveyed expect many jobs to be eliminated by AI over the next five years, 79% believe that the technology will contribute to changing current jobs.

4. AI systems are getting better at tricking us

This article investigates a potential new risk of AI systems – that of AI trained to "win" against humans. The risk originates in games that use AI to act strategically in order to win. In November 2022, Meta introduced Cicero, an AI designed to outperform humans in Diplomacy, a strategy game based on negotiating alliances for control over Europe. Meta claimed Cicero was trained on a "truthful" subset of data to ensure honesty and helpfulness, avoiding intentional betrayal of its allies. However, a study published in the journal Patterns challenges this claim, revealing that Cicero engaged in deal-breaking, falsehoods, and deliberate deception. This discrepancy highlights the difficulty of training AI to behave honestly and the unexpected ways in which AI can learn to deceive. An interesting point raised is that our tendency to anthropomorphize AI models affects our testing methods and perceptions of their abilities.

5. How AI enhances static application security testing (SAST)

This article discusses generative AI for code scanning to help reduce security vulnerabilities in code. In a 2023 GitHub survey, developers reported that, after writing code (32%), finding and fixing security vulnerabilities (31%) was the task they spent the most time on. Their main frustrations were having to use security-review tools that were not designed for developers and spending time on a task that has little to do with writing code. In particular, developers need to exit their IDEs to view vulnerability alerts, research vulnerability types online, and then return to their IDEs to address the vulnerability; this context switching decreases productivity. The article repeats the claim that for every 100 developers, there is one security expert. The article goes on to present CodeQL – a semantic code analysis engine supporting queries that analyze codebases for security vulnerabilities – and its integration with GitHub Copilot. The authors claim the approach can fix over 90% of vulnerability types and that over two-thirds of fixes can be merged with few or no edits.
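To make this concrete, here is a minimal, hypothetical Python example (not taken from the article) of the kind of issue a static analyzer such as CodeQL typically flags, a SQL query built by string concatenation, together with the parameterized rewrite an AI-generated fix suggestion would normally propose. Function names are illustrative only.

    import sqlite3

    # Vulnerable pattern: user input concatenated directly into SQL.
    # A SAST scanner would typically flag this as a potential SQL injection.
    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        query = "SELECT id, email FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchall()

    # Suggested fix: a parameterized query, so the driver handles escaping.
    # This is the style of small, mergeable change an AI autofix would propose.
    def find_user_safe(conn: sqlite3.Connection, username: str):
        query = "SELECT id, email FROM users WHERE name = ?"
        return conn.execute(query, (username,)).fetchall()

The appeal of surfacing such suggestions inside the IDE is precisely to avoid the context switching described above: the alert, the explanation, and the candidate fix appear where the code is written.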

6. Building Cost-Efficient Enterprise RAG applications with Intel Gaudi 2 and Intel Xeon

This blog post presents a tutorial on developing a retrieval-augmented generation (RAG) service on Intel processors. RAG is the technique of improving LLM text generation by including up-to-date knowledge taken from an external datastore. The advantages of RAG include content accuracy and also privacy, since sensitive content does not need to be included in training data. One processor mentioned is the Intel Gaudi 2, which was purpose-built to accelerate training and inference in data center environments. It is publicly available on the Intel Developer Cloud (IDC) and for on-premises implementations. On the software side, the application is built using LangChain, an open-source framework that provides tools for connecting LLMs to external data sources. The tools presented in the tutorial are available on the Open Platform for Enterprise AI's GitHub page.
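As a rough illustration of the RAG pattern described here (this is not the tutorial's code and is independent of the Gaudi/LangChain specifics), the sketch below indexes a few documents, retrieves the most relevant ones for a query, and prepends them to the prompt. Word-overlap scoring stands in for a real embedding model and vector store, and the final LLM call is omitted.

    from collections import Counter
    import math

    # Tiny in-memory "datastore" standing in for the external knowledge source.
    DOCUMENTS = [
        "Gaudi 2 is an accelerator built for deep learning training and inference.",
        "Retrieval-augmented generation adds external knowledge to LLM prompts.",
        "LangChain provides tools for connecting LLMs to external data sources.",
    ]

    def bag_of_words(text: str) -> Counter:
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[w] * b[w] for w in set(a) & set(b))
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def retrieve(query: str, k: int = 2) -> list[str]:
        q = bag_of_words(query)
        ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, bag_of_words(d)), reverse=True)
        return ranked[:k]

    def build_prompt(query: str) -> str:
        context = "\n".join(retrieve(query))
        return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

    # In a real deployment the prompt would be sent to an LLM served on the
    # accelerator; here we simply print it to show the retrieval-augmented structure.
    print(build_prompt("What is retrieval-augmented generation?"))

In the tutorial's setting, the retrieval step is backed by a proper embedding model and vector index, and the generation step runs on Gaudi 2, but the overall flow is the same: retrieve, assemble the prompt, generate.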

7. China has a flourishing market for deepfakes that clone the dead

The article discusses the emergence of a market in China for creating deepfakes that clone deceased people. Advances in AI make it easy and affordable to replicate a person's appearance and voice, which in China has led to the emergence of AI resurrection services that offer the possibility of communicating with the dead. Several Chinese companies provide lifelike avatars of the deceased, accessible via apps or tablets, costing from a few hundred to a few thousand dollars. These companies claim that they help people process grief. The technology is also used to create avatars of deceased Chinese writers, thinkers, celebrities, and religious leaders for educational and memorial purposes. Additionally, some people are asking for this technology to clone themselves, ensuring they can tell their stories to and interact with future generations after they are gone. Some parents also want to create digital clones of their children at specific ages, much as they keep traditional photo keepsakes.

8. How Large Models Are Reshaping the Cybersecurity Landscape

Elie Bursztein of Google DeepMind gave a talk entitled “How Large Models Are Reshaping the Cybersecurity Landscape” at the RSA 2024 conference. A pre-recording of the talk is available on his website (the link is given below). The talk first reviews the risks that large models currently pose: convincing deepfakes (risk considered high), phishing (currently high risk), weaponization where models are used to propagate knowledge of how to launch chemical or biological attacks (risk currently low), and malware creation (risk currently estimated as mid-level). He cites a research paper from Indiana University that surveys malicious services using large model technologies. Bursztein then reviews how large models contribute to defense. The four axes presented are 1) scaled content review, 2) multi-modal content analysis, 3) secure coding assistance, and 4) a significant speed-up in incident response. The technical approach put forward is to use large models to perform training-less content classification. In this zero-shot approach, a rater model that takes a policy as its prompt identifies potential security violations, which are then handled by a human reviewer in the loop.
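The following sketch illustrates the training-less classification idea in Python. It is a hypothetical illustration, not code from the talk: the policy is supplied in the prompt, a rater model returns a verdict, and flagged items are routed to a human reviewer. The call_rater_model function is a stand-in for a real LLM API call.

    # Hypothetical zero-shot, policy-as-prompt content rater with a human in the loop.
    POLICY = (
        "Flag content that contains phishing attempts, malware instructions, "
        "or requests for credentials."
    )

    def build_rater_prompt(policy: str, content: str) -> str:
        return (
            f"Policy:\n{policy}\n\n"
            f"Content:\n{content}\n\n"
            "Does the content violate the policy? Answer VIOLATION or OK."
        )

    def call_rater_model(prompt: str) -> str:
        # Stand-in for an actual large-model call; here it just flags
        # obviously credential-seeking text so the example runs end to end.
        return "VIOLATION" if "password" in prompt.lower() else "OK"

    def review(content: str) -> str:
        verdict = call_rater_model(build_rater_prompt(POLICY, content))
        if verdict == "VIOLATION":
            return "escalate to human reviewer"  # human-in-the-loop step
        return "auto-approve"

    print(review("Please send me your password to verify your account."))
    print(review("Meeting notes from Tuesday."))

The point of the approach is that changing the moderation behavior only requires editing the policy text, not retraining a classifier, while the human reviewer handles the flagged cases.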

9. Will Artificial Intelligence Increase Unemployment? No.

Andrew Mayne, an AI developer formerly with OpenAI, argues that AI can generate more jobs and support further economic growth. He asserts that even as AI and robotics become superior and more efficient, the demand for human labor will increase because the supply of AI and robots will not be sufficient to meet the demand for services. Mayne uses H&R Block (a US tax preparation company) as an example, suggesting that if the company were to replace 90% of its employees with AI, its profits would rise as labor costs decrease. These increased profits would then be reallocated elsewhere to create further economic growth. Mayne compares this to the impact of mechanized farm equipment, which reduced the need for farmworkers but increased agricultural productivity, a factor that then drove industrial growth and the demand for factory workers. He also notes the rising number of H-1B visas awarded each year in the U.S., with the tech industry having around 100,000 unfilled jobs. Another example mentioned is that at OpenAI, the introduction of ChatGPT led to demand so high that there were not enough computers globally to satisfy it. This situation has helped Nvidia hugely increase its market capitalization.

10. Biden-Harris Administration Announces Key AI Actions 180 Days Following President Biden’s Landmark Executive Order

The article discusses the actions taken by US federal agencies following President Biden’s executive order on AI, which aims to seize the promise and manage the risks of artificial intelligence. The order highlighted AI-related risks in areas like dangerous biological materials, critical infrastructure, and software vulnerabilities. In response, federal agencies have created a framework for nucleic acid synthesis screening to prevent AI misuse in engineering harmful biological materials. The Department of Homeland Security has collaborated with the private sector to develop AI safety and security guidelines for critical infrastructure and launched the AI Safety and Security Board to advise the Secretary of Homeland Security. To address risks to workers, consumers, and civil rights, the Department of Labor (DOL) created a guide for federal contractors to promote equal employment opportunities and to limit the impact of AI on employment decisions. The DOL also issued guidance on how AI can breach employment discrimination laws. The Department of Housing and Urban Development established principles for tenant screening, while the Department of Energy announced funding opportunities to support AI applications in science, focusing on energy-efficient AI algorithms, hardware, and tools to tackle energy challenges and promote clean energy.