OpenAI's For-Profit U-Turn

Emerging AI-Powered Deception and DOGE Chaos

Posted on May 6th, 2025

Summary

It has been another eventful week for Big Tech. In a major U-turn, OpenAI has decided that its nonprofit division will retain control over its business division. OpenAI had announced its intention to transform the business division into a for-profit entity, a move motivated by the need to generate revenue, but internal and political pressure has forced it to reverse that decision. Elsewhere, the Irish Data Protection Authority has fined TikTok 530 million EUR for violating the General Data Protection Regulation (GDPR), specifically by transferring the personal data of European citizens to servers in China. Meanwhile, according to Meta documents revealed in the ongoing lawsuit brought by authors against the company, Meta estimated that its AI products would generate between 2 and 3 billion USD in revenue in 2025, and between 460 billion USD and 1.4 trillion USD by 2035. Meta's AI spending would have been significantly larger had it struck license deals with authors to use their copyrighted works in training.

A Guardian article takes an in-depth look at the impact of the “Department of Government Efficiency” (Doge), led by Elon Musk, in the US. Starting from a vaguely worded executive order by President Trump, Musk has set out to find 1 trillion USD in “fraud or waste” and to undertake an ideological reshaping of government departments that has led to tens of thousands of layoffs.

A Microsoft security report warns of the increased use of generative AI in online scams. One example is the proliferation of fraudulent e-commerce websites. Websites can now be created in minutes using AI tools, and criminals populate them with AI-generated fake product descriptions, customer reviews, product images, and more. A US Congressman is preparing legislation to oblige US chip-makers to add anti-smuggling mechanisms to their chips. The law would force Nvidia and others to have their chips emit a signal at boot time that permits the chip to be geolocated. The law is aimed especially at China, which the US fears is close behind it in AI development, and where mass smuggling of Nvidia chips is believed to take place.

On the evaluation and use of generative AI, a report from an international consortium presents SHADES – a multilingual and multicultural dataset for evaluating stereotypical biases in language models. The main limitation of language model bias research to date is its near-exclusive focus on English. The SHADES dataset contains over 300 stereotypes from 37 regions, translated across 16 languages. Visa, the card payment systems company, has announced an AI-based service called Visa Intelligent Commerce that allows a user to delegate a shopping budget to a shopping agent. A VentureBeat article looks at some uses of generative AI in preventative healthcare. One example is the use of AI to estimate a cardiac risk score from CT scan images; notably, this evaluation is applied even to patients with no history or symptoms of cardiovascular disease. Finally, an InfoWorld article discusses the maturity of AI coding agents in the enterprise. It argues that these agents perform very well for tasks with common coding patterns or short feedback loops, such as generating REST APIs, prototyping ideas, and building web and mobile applications. Nonetheless, the article warns that the quality of generated code needs to be manually reviewed, and that overreliance on these agents creates technical debt.

1. Meta forecasted it would make $1.4T in revenue from generative AI by 2035

According to Meta documents revealed in the ongoing lawsuit brought by authors against the company, Meta estimated that its AI products would generate between 2 and 3 billion USD in revenue in 2025, and between 460 billion USD and 1.4 trillion USD by 2035. The authors in the lawsuit claim that Meta trained its AI models on copyrighted works without the authors' consent. Meta's generative AI budget was 900 million USD in 2024 and could exceed 1 billion USD this year – figures that do not include the models' infrastructure and operating costs. The company has said that it will spend between 60 and 80 billion USD on capital expenditures in 2025, mainly on data centers. These spending figures would have been significantly larger had Meta struck license deals with authors for the use of their copyrighted works in training; the company had considered spending 200 million USD to buy training data, but ultimately did not pursue this avenue. Responding to the TechCrunch article reporting these figures, Meta said that it has “developed transformational [open] AI models that are powering incredible innovation, productivity, and creativity for individuals and companies. Fair use of copyrighted materials is vital to this.”

2. Find and Buy with AI: Visa Unveils New Era of Commerce

Visa, the card payment systems company, has announced an AI-based service called Visa Intelligent Commerce that allows a user to delegate a shopping budget to a shopping agent. Visa has been using machine learning for 30 years, notably for fraud detection, and has been working with Anthropic, IBM, Microsoft, Mistral AI, OpenAI, Perplexity, Samsung, Stripe and others. In the Visa Intelligent Commerce service, a user specifies a spending limit, purchase preferences, and a secure token proving that the agent is acting on the user's behalf. The agent can then browse the Web in search of items to purchase for the user. The service also comes with an API for developers to create custom applications.
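To make the delegation model concrete, here is a minimal sketch of what handing a budget to a shopping agent could look like. Visa has not published its API in this form; the field names and token format below are invented for illustration.

```python
# Hypothetical sketch of a purchase delegation "mandate" in the style Visa
# describes: a spending limit, preferences, and a scoped payment token.
# All field names here are assumptions, not Visa's actual API.
import json

delegation = {
    "agent_id": "shopper-agent-42",  # the AI agent acting for the user
    "spending_limit": {"amount": 150.00, "currency": "USD"},
    "preferences": {"category": "running shoes", "max_delivery_days": 5},
    # A tokenized credential scoped to this agent and budget: the agent
    # never sees the real card number and cannot exceed the limit.
    "payment_token": "tok_scoped_example_123",
}

# The agent would attach this mandate to each purchase it attempts, letting
# merchants and the payment network verify it acts on the user's behalf.
print(json.dumps(delegation, indent=2))
```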

3. TikTok fined €530m by Irish data protection watchdog

The Irish Data Protection Authority has fined TikTok 530 million EUR for violation of the General Data Protection Regulation (GDPR). Specifically, TikTok is considered to have violated the GDPR by transferring the personal data of European citizens to servers in China. TikTok's parent company, ByteDance, had initially denied that any such transfers had taken place. The company responded to the fine by saying it was “disappointed to have been singled out”. It also said that European personal data is stored in a secure server enclave – known as Project Clover – whose security is overseen by the NCC Group, a European cybersecurity company. However, the problem is more legal than technical: the EU considers that China lacks the appropriate legal safeguards to protect data, since the country's anti-terrorism, counter-espionage and other laws permit authorities to easily requisition data. The fine is among the largest ever levied under the GDPR: Amazon was previously fined 746 million EUR and Meta 1.2 billion EUR.

4. SHADES: Towards a Multilingual Assessment of Stereotypes in Large Language Models

This report from an international consortium presents SHADES – a multilingual and multicultural dataset for evaluating stereotypical biases in language models. The definition of stereotype adopted is any “conventional (frequently malicious) idea (which may be wildly inaccurate) of what an X looks like or acts like or is”. The authors consider stereotypes linked to personal identity (like gender, age, or nationality), language, and sociopolitical position. The main limitation of language model bias research to date is its near-exclusive focus on English. The SHADES dataset contains over 300 stereotypes from 37 regions, translated across 16 languages (Arabic, Bengali, Chinese, Chinese (Traditional), Dutch, English, French, German, Hindi, Italian, Marathi, Polish, Brazilian Portuguese, Romanian, Russian, and Spanish). The authors evaluated their dataset on 8 pre-trained base models and 5 “instruct” models (models fine-tuned to act as chatbots) from the Llama 3, Qwen, BLOOM, and Mistral v0.1 families. Biases were evaluated with intrinsic measurements, in which model outputs are compared for similarity against vector encodings of the dataset entries. The SHADES dataset was created by a community of contributors spanning many nationalities and cultures.
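As an illustration of the general idea behind intrinsic bias measurement, the sketch below scores how readily a language model assigns probability to a stereotype sentence relative to a minimally edited contrast sentence. This is a common intrinsic technique shown under simple assumptions, not necessarily the exact metric used in the SHADES paper.

```python
# A minimal sketch of intrinsic bias measurement: compare the likelihood a
# model assigns to a stereotype sentence versus a neutral contrast sentence.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # small model for the demo
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def mean_log_likelihood(sentence: str) -> float:
    """Mean per-token log-likelihood the model assigns to a sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    return -outputs.loss.item()  # loss is the mean negative log-likelihood

# A positive gap means the model finds the stereotypical phrasing more
# "natural" than the neutral one: a rough per-stereotype bias signal.
stereotype = "Women are bad drivers."
contrast = "People are bad drivers."
print(mean_log_likelihood(stereotype) - mean_log_likelihood(contrast))
```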

5. Breaking the ‘intellectual bottleneck’: How AI is computing the previously uncomputable in healthcare

This VentureBeat article looks at some recent uses of generative AI for preventative healthcare at the University of Texas Medical Branch. One example is the use of AI to estimate a cardiac risk score from CT scan images. The university is using models that measure incidental coronary artery calcification (iCAC) – a strong risk predictor of cardiovascular disease – by calculating an Agatston score, which quantifies the accumulation of calcified plaque in the arteries. Currently, around 450 patients are scanned each month, with 5 to 10 being identified as requiring follow-up treatment. The key feature of this AI evaluation is that it is applied even to patients with no history or symptoms of cardiovascular disease. The models are trained on labeled (supervised) data, and their outputs are checked. A key post-deployment monitoring challenge in the medical domain is detecting anchoring bias: the tendency of a doctor to rely too heavily on the first piece of information encountered, thereby missing information important for a diagnosis. For instance, if the AI indicates bone fractures, the medical team could miss other factors like joint space narrowing, which is common in arthritis.
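For readers unfamiliar with the metric, here is a simplified sketch of the standard Agatston scoring rule that such iCAC models approximate. Real pipelines first segment calcified lesions from the CT slices; here each lesion is reduced to an (area, peak density) pair, and the lesion list is invented example data.

```python
# Simplified Agatston scoring: each calcified lesion contributes its area
# (in mm^2) multiplied by a weight based on its peak density in Hounsfield
# units (HU); the total is the sum over all lesions across slices.
def density_weight(peak_hu: float) -> int:
    """Standard Agatston density weighting by peak Hounsfield units."""
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    if peak_hu >= 130:
        return 1
    return 0  # below the 130 HU threshold, not counted as calcification

def agatston_score(lesions: list[tuple[float, float]]) -> float:
    """Total score: sum of area * density weight over detected lesions."""
    return sum(area * density_weight(peak_hu) for area, peak_hu in lesions)

# Example with two hypothetical lesions: 12.0*2 + 8.5*4 = 58.0.
print(agatston_score([(12.0, 250.0), (8.5, 420.0)]))
```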

6. Move fast and destroy things: 100 chaotic days of Elon Musk in the White House

This Guardian article takes an in-depth look at the impact of the “Department of Government Efficiency” (Doge), led by Elon Musk, in the US. At the beginning of his second term, President Trump issued an executive order that called for a modernization of government technology and increased efficiency. The order was vaguely worded, but Musk has transformed the objective into finding 1 trillion USD in “fraud or waste”, and into an ideological reshaping of government departments that has led to tens of thousands of layoffs. Doge team members have taken control of the Treasury's payment system, responsible for 6 trillion USD in payments. The United States Agency for International Development (USAID) has been taken over, and 5,600 aid workers around the world are being let go; Boston University estimates that the loss of aid could lead to 176,000 excess deaths this year, more than half of them children. Other targeted agencies include the National Highway Traffic Safety Administration, which regulates self-driving cars, and NASA, putting SpaceX (one of Musk's companies) in a position to potentially win billions in contracts. Doge also aims to take control of government IT systems. Media outlets have reported that Doge staff have accessed personal information including therapy records for unaccompanied migrant children, housing information and biometric data, and there are now fears that a master database is being created.

Resistance to Doge's actions is mounting. There are at least two dozen lawsuits against Doge. Courts have ordered the US administration to reinstate some of the workers who were fired, and have limited Doge's access to databases such as that of the Social Security Administration. In retaliation, Musk has led an “impeach the judges” campaign on X. The savings in government spending this year are expected to be 150 billion USD, but this figure does not include the judicial costs of the ongoing court cases or the costs of the mass layoffs. The backlash is also hitting Tesla: revenue has fallen 9% over the last year and profits are down 71%. A slow electric vehicle market and competition from China are partly to blame, but many Tesla dealerships around the world have also been the target of anti-Musk vandalism and even arson.

7. Knowing when to use AI coding assistants

This InfoWorld article looks at the maturity of AI coding agents in the enterprise. It argues that these agents perform very well for tasks with common coding patterns or short feedback loops, such as scaffolding microservices, generating REST APIs, prototyping ideas, and building web and mobile applications. These tasks are not generally part of the core application logic, so AI frees up time for developers to work on core programming. A recent Salesforce State of IT survey reported that 92% of developers expect AI coding agents to advance their careers, and the 2024 Stack Overflow Developer Survey reports that 63% of professional developers already use AI in the development loop. Nonetheless, the article warns that the quality of generated code needs to be manually reviewed, and that excessive dependence creates technical debt. It is easier to create new code than to modify an existing code base, primarily because the context window lengths of current models are still too short to take in whole code bases and domain-specific constraints, so integration errors can arise when adding AI-generated code.
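As an illustration of the "common pattern, short feedback loop" category where these agents shine, the sketch below shows the kind of boilerplate REST endpoint they reliably produce. FastAPI and the in-memory store are illustrative choices, not something prescribed by the article.

```python
# Typical agent-friendly boilerplate: a small CRUD-style REST API with an
# in-memory store. Run with: uvicorn app:app --reload
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

class Item(BaseModel):
    id: int
    name: str
    price: float

app = FastAPI()
items: dict[int, Item] = {}  # in-memory store, fine for a prototype

@app.post("/items")
def create_item(item: Item) -> Item:
    if item.id in items:
        raise HTTPException(status_code=409, detail="item already exists")
    items[item.id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in items:
        raise HTTPException(status_code=404, detail="item not found")
    return items[item_id]
```

Code like this is quick to verify and easy to regenerate, which keeps review effort low; the integration risk the article warns about appears when such generated pieces must be woven into a large existing code base.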

8. US lawmaker targets Nvidia chip smuggling to China with new bill

A US Congressman is preparing legislation to oblige US chip-makers to add anti-smuggling mechanisms to their chips. There is currently a ban on the export of high-end Nvidia GPUs to China, for instance, but many experts contend that mass smuggling takes place and that DeepSeek scientists used powerful Nvidia chips when developing the R1 model. The new law would force Nvidia and others to add two technical controls to chips. First, a chip would emit a signal at boot time that permits it to be geolocated. Second, the chip would check for a valid license agreement when booting, and abandon the boot process in the absence of such a license. Several experts believe that the chip-signaling technique is already technically feasible. The proposed legislation has bipartisan support in Congress, and the goal is to have it in place within the next six months. The law is aimed especially at China, which the US fears is close behind the US in AI development.
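To make the two proposed controls concrete, here is a purely hypothetical sketch of what a boot-time check could look like. The bill specifies outcomes rather than an implementation, so the server names, license format, and delay-based location idea below are assumptions for illustration.

```python
# Hypothetical boot-time controls matching the bill's two requirements.
# Everything here (names, formats, checks) is invented for illustration.
import sys
import time

LANDMARK_SERVERS = ["landmark-us.example.com", "landmark-eu.example.com"]

def emit_location_signal() -> None:
    """Control 1: emit a signal at boot so the chip can be geolocated.
    One conceivable approach: round-trip time to trusted landmark servers,
    since network delay puts an upper bound on physical distance."""
    for server in LANDMARK_SERVERS:
        start = time.monotonic()
        # A real design would exchange signed challenges with the server;
        # this placeholder only illustrates the timing measurement.
        rtt_ms = (time.monotonic() - start) * 1000
        print(f"boot signal to {server}: {rtt_ms:.3f} ms round trip")

def license_is_valid(license_blob: bytes) -> bool:
    """Control 2: refuse to boot without a valid export license.
    A real chip would verify a cryptographic signature in firmware;
    the prefix check below is a stand-in for that verification."""
    return license_blob.startswith(b"VALID")

if __name__ == "__main__":
    emit_location_signal()
    if not license_is_valid(b"VALID:example-license"):
        sys.exit("no valid license found: abandoning boot")
    print("license verified: boot continues")
```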

9. AI-powered deception: Emerging fraud threats and countermeasures

This Microsoft security blog post reports that between April 2024 and April 2025, the company thwarted 4 billion USD of fraud attempts, rejected 49,000 fraudulent partnership enrollments, and blocked 1.6 million bot signup requests per hour. Notably, the post warns of the increased use of generative AI in the creation of online scams, because of its ability to quickly produce large volumes of convincing content. One example is the proliferation of fraudulent e-commerce websites. Websites can now be created in minutes using AI tools, and criminals populate them with AI-generated fake product descriptions, customer reviews and product images, as well as references to legitimate brands. These sites look convincing and attract many buyers. Microsoft notes that fraudulent e-commerce websites largely operate from China and Europe (notably Germany): the larger the digital market, the more likely fraudulent sites are to appear. A second type of scam is the fraudulent job offer, created with generative AI and used to launch phishing attacks on job seekers with the aim of infecting their personal devices with malware. A third is the fake IT help desk, where criminals trick victims into calling a bogus IT support line that then installs malware on their devices. Microsoft was itself recently a victim of such attacks when its Windows Quick Assist software was abused by the cybercriminal group Storm-1811.

10. OpenAI reverses course, says its nonprofit will remain in control of its business operations

In a major U-turn, OpenAI has decided that its nonprofit division will retain control over its business division. OpenAI was founded in 2015 as a non-profit organization with the aim of researching AI for the betterment of humanity. In 2019 it established a “capped-profit” business division under the control of the non-profit organization. Some months ago, OpenAI announced its intention to transform the business division into a for-profit entity incorporated in Delaware, a move motivated by the need to generate revenue. The decision sparked considerable debate, and several Nobel laureates, law professors, and civil society organizations sent protest letters to the Delaware attorney general. On the company blog, OpenAI Board Chairman Bret Taylor announced that the business entity “will continue to be overseen and controlled by that nonprofit”. Also on the blog, CEO Sam Altman said that OpenAI may require “trillions of dollars” to fulfill its goal of AI for the betterment of humanity, and reaffirmed OpenAI's commitment to its partnership with Microsoft.