AI Leadership Weekly

Issue #31

Welcome to the latest AI Leadership Weekly, a curated digest of AI news and developments for business leaders.

Top Stories

Source: Stargate SG1

SoftBank Group’s $100 billion contribution to OpenAI's Stargate Project has hit roadblocks with investors, many of whom are hesitant over the economic risks tied to President Trump's tariff policies and over-saturation of the AI market. The project was announced in January with promises of immediate deployment (although we'll note that construction had already begun under the previous administration), but SoftBank has yet to finalise financing templates or secure deals with banks and asset managers like JPMorgan and Brookfield, sources reveal.

The slowdown stems from two major factors: rising capital costs due to tariffs on data centre components (which could boost build costs by 5%–15%), and investor scepticism about long-term profitability amid cheaper AI models from rivals like China’s DeepSeek. It's also reported that many major tech companies, including Microsoft, are scaling back their data centre investments, and that Amazon is rethinking its AWS strategy amid slowing growth. As mentioned, there is growing concern about AI over-capacity after many corporations scrambled to lead the pack with their offerings.

Complicating things further, OpenAI’s internal restructuring drama—triggered by Altman’s controversial push to convert the nonprofit into a for-profit entity—has created additional uncertainty. Microsoft, a key OpenAI stakeholder, still hasn’t endorsed the plans we reported on last week, although Altman insists that SoftBank will honour its $30 billion OpenAI investment “regardless” of the turmoil.

Shira Perlmutter, the Register of Copyrights, was abruptly fired by President Trump. According to Rep. Joe Morelle (D-NY), the dismissal came after she refused to rubber-stamp Elon Musk’s push to mine copyrighted works for AI training; Morelle called it a “brazen, unprecedented power grab.” The firing also comes days after Perlmutter, who had held the post since 2020, released a report questioning how AI models use copyrighted data. The third installment of the Copyright Office’s AI study warned that “not everyone agrees that further increases in data and test performance will necessarily lead to continued real-world improvements in utility,” casting doubt on the value of massive data hoarding for AI.

The Copyright Office, a 450-person department under the Library of Congress, plays a critical role in registering and enforcing copyright claims. Perlmutter’s ousting follows the firing of Librarian of Congress Carla Hayden by Trump last week, raising concerns about political interference in cultural and legal institutions. “There’s surely no coincidence [Trump] acted less than a day after she refused to rubber-stamp Musk’s efforts,” Rep. Morelle stated, linking the dismissal to Perlmutter’s resistance to lax IP policies.

The report’s timing couldn’t have been more politically charged. Perlmutter’s office highlighted an open question: How much data does AI really need? Her findings clash with Trump’s AI ambitions, including a $500 billion joint venture with OpenAI, SoftBank, and Oracle to fund AI infrastructure. Meanwhile, Musk—whose failed OpenAI bid and X platform have stirred IP debates—seems to align with Trump’s pro-AI stance. Last month, Musk tweeted his support for abolishing intellectual property laws, a move critics argue could destabilise copyright frameworks.

The Trump administration has officially scrapped Biden’s AI Diffusion rules, which were set to restrict U.S. AI chip exports to many countries, including China and Russia. The Biden-era policy, which divided nations into three tiers with varying restrictions, was axed days before its May 15 rollout, with the Department of Commerce (DOC) vowing to replace it with a strategy focused on “direct negotiations” with “trusted” allies.

Biden’s rule aimed to curb the global spread of advanced AI chips by limiting exports to Tier 2 countries (e.g., Mexico, Portugal) and tightening controls for Tier 3 nations like China. Tier 1 countries, including Japan and South Korea, would’ve faced no restrictions.

The Trump administration, however, argues the policy was “ill-conceived” and counterproductive, favouring a more flexible approach. “The Trump Administration will pursue a bold, inclusive strategy to American AI technology with trusted foreign countries,” said Under Secretary of Commerce for Industry and Security Jeffrey Kessler.

In the absence of formal regulations, the DOC released temporary guidance. It reiterated that using Huawei’s Ascend AI chips anywhere violates U.S. export rules, warned about risks of U.S. chips being used to train AI in China, and advised companies to safeguard supply chains from “diversion tactics.” While the guidance lacks the teeth of Biden’s rules, it signals a shift toward targeted enforcement rather than broad restrictions.

In Brief

Market Trends

A new analysis by Epoch AI, a nonprofit research institute, suggests that the rapid gains in reasoning AI models—like OpenAI’s o3—could hit a wall within a year. The report warns that performance improvements for these complex models, which excel at maths and programming tasks, may plateau by 2026, dampening the industry’s current optimism about their scalability.

Reasoning models work by first training a base AI on vast datasets, then refining it with reinforcement learning—a feedback-driven process that sharpens problem-solving skills. OpenAI’s o3, for instance, reportedly uses 10x more computing power than its predecessor, o1, with most of that boost dedicated to reinforcement learning. Epoch AI’s Josh You predicts that performance will “probably converge with the overall frontier by 2026,” hinting at an upper bound for compute-heavy methods.
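To see why faster-scaling reasoning training would eventually converge with the overall frontier, here is a rough back-of-the-envelope sketch in Python. The starting values and growth rates are our own illustrative assumptions, not Epoch AI's published figures; the point is simply that compute which scales much faster than the frontier soon catches up, and from then on can only grow at the frontier's own rate.

# Rough illustration of the convergence argument: if reasoning-focused training
# compute scales much faster than overall frontier compute, it eventually catches
# up and can then only grow at the frontier's own rate. All numbers below are
# illustrative assumptions, not Epoch AI's published figures.

frontier_compute = 1.0    # total frontier training compute (arbitrary units)
reasoning_compute = 0.01  # portion spent on reasoning-style reinforcement learning

FRONTIER_GROWTH_PER_YEAR = 4.0     # assumed overall scaling rate
REASONING_GROWTH_PER_YEAR = 100.0  # assumed (much faster) reasoning scaling rate

for year in range(2025, 2029):
    share = reasoning_compute / frontier_compute
    print(f"{year}: reasoning compute is {share:.0%} of the frontier")
    frontier_compute *= FRONTIER_GROWTH_PER_YEAR
    # Reasoning compute cannot outgrow the total compute available at the frontier.
    reasoning_compute = min(reasoning_compute * REASONING_GROWTH_PER_YEAR, frontier_compute)

Once the reasoning share reaches 100% of frontier compute, the extra scaling that has been driving rapid reasoning gains is exhausted, which is the kind of ceiling You is describing.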

Scaling limits aren't just about raw computing power, either. You also notes that high research overhead costs could limit how far these models scale: “If there’s a persistent overhead cost required for research, reasoning models might not scale as far as expected.” That's a red flag for an industry that is already scaling back its data centre plans and grappling with the sky-high costs of training and running these models, which also suffer from flaws like increased hallucination risks.

Elon Musk’s AI "startup", xAI, has failed to publish a finalised AI safety framework by its self-imposed May 10 deadline, as highlighted by watchdog group The Midas Project.

The company had promised to release the document during a February speech at the AI Seoul Summit, where it outlined a vague draft framework. Now, with the deadline passed and no public update, critics are questioning xAI’s commitment to safety—a core concern for an industry grappling with runaway risks.

The draft framework, an eight-page document, was already criticised for being “very weak,” per a SaferAI study. It applied only to “unspecified future models” not currently in development and omitted concrete steps for identifying or mitigating AI risks.

xAI’s safety track record is already shaky. A recent report found that its Grok chatbot would generate explicit content when prompted, including “undressing” photos of women on request. The bot also takes a crass tone, cursing far more freely than rivals like ChatGPT and Gemini. Meanwhile, SaferAI ranked xAI poorly for “very weak” risk management practices, though it’s far from alone in the industry: Google, OpenAI, and others have also rushed or skipped safety reports, raising alarms as AI models grow more powerful and unpredictable.

A team of Microsoft researchers, supported by the Accelerating Foundation Models Research (AFMR) grant, has developed an AI evaluation framework named ADeLe that not only predicts how models perform on unfamiliar tasks but also explains the reasons behind their success or failure. They claim this approach addresses a major gap in current benchmarks, which often fail to capture the nuanced capabilities required for real-world tasks.

ADeLe (Annotated-Demand-Levels) assesses tasks using 18 cognitive and knowledge-based scales, rating each task’s difficulty from 0 to 5 across abilities like reasoning, attention, and knowledge domains (e.g., science, social skills). By comparing a model’s “ability profile” to task requirements, ADeLe generates predictive insights.

For example, a maths problem might demand high “formal knowledge” (score 5) but low “social reasoning” (score 0). The team tested ADeLe on 63 tasks from 20 benchmarks, creating a unified system that works across diverse settings. The system predicted model success/failure with 88% accuracy, surpassing traditional methods.
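For readers who want a feel for the mechanics, here is a minimal Python sketch of the profile-versus-demands idea. The scale names, the 0–5 scores, and the simple meets-or-exceeds rule are illustrative assumptions for the sake of the example, not ADeLe's actual 18 scales or its trained predictor.

# Minimal sketch of the ADeLe-style idea: compare a model's ability profile to a
# task's demand profile and predict pass/fail. Scale names, scores, and the
# decision rule here are illustrative assumptions, not the real framework.

task_demands = {
    "formal_knowledge": 5,        # e.g. a hard maths problem
    "quantitative_reasoning": 4,
    "social_reasoning": 0,
}
model_abilities = {
    "formal_knowledge": 4,
    "quantitative_reasoning": 5,
    "social_reasoning": 2,
}

def predict_success(abilities: dict, demands: dict) -> bool:
    """Predict success only if the model meets or exceeds every demand level (0-5)."""
    return all(abilities.get(scale, 0) >= level for scale, level in demands.items())

print(predict_success(model_abilities, task_demands))  # False: formal_knowledge falls short (4 < 5)

The full framework applies this kind of comparison across all 18 scales, which is how it arrives at the ability profiles and the 88% prediction accuracy described above.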

Tools and Resources

LegoGPT
Turn a text prompt into instructions to build a Lego model!

Truth Or Lie
This AI claims to detect "truthfulness" in video samples, such as from a job interview. (You may want some of that AI chatbot insurance from Lloyd's first, though!)

Speaking AI
Practice your public speaking with this handy app!

Hit reply to let us know which of these stories you found the most important or surprising! And if you’ve stumbled across an interesting link/tweet/news story of your own, send it our way at [email protected]. It might just end up in the next issue!

Thanks for reading. Stay tuned for the next AI Leadership Weekly!

Your AI and Data Team as a Subscription

Brought to you by Data Wave, your AI and Data Team as a Subscription.
Work with seasoned technology leaders who have taken startups to IPO and led large transformation programmes.