AI Leadership Weekly
Issue #33
Welcome to the latest AI Leadership Weekly, a curated digest of AI news and developments for business leaders.
Top Stories

Nearly half of all AI pilot projects are being scrapped, according to S&P Global. But while businesses hit pause, tech giants like Google, Microsoft, Amazon, and Meta are doubling down, betting big that the payoff is still coming.
Corporate disillusionment is setting in. Despite initial enthusiasm, many firms have found generative AI harder to implement than expected. Challenges like siloed data, old IT systems, and brand risk are slowing down adoption. As Gartner’s John Lovelock put it, many companies are now deep in the “trough of disillusionment.”
Consumers still love it. OpenAI's ChatGPT now sees around 800 million users per week, double February's figure. There is clearly no shortage of demand, but a gap remains between personal use and business integration.
Tech giants are their own best customers. With few external buyers turning AI into revenue, companies like Google, Microsoft, and Meta are integrating generative AI into their own products, from smarter search and ads to improved logistics and coding tools.
The big picture: Despite early setbacks, the tech elite are still all-in, reshaping their tools and platforms around AI. As Microsoft’s Kevin Scott said, real progress depends not just on smarter models but on infrastructure, such as memory and protocols, that makes agents useful.

The UAE will soon provide free access to ChatGPT Plus for everyone living in the country. The initiative is part of a sweeping partnership between the UAE government and OpenAI, which also includes plans to build one of the world’s largest AI data centres, Stargate UAE, in Abu Dhabi.
AI partnership. OpenAI’s collaboration with the UAE government goes beyond software. The deal includes building a one-gigawatt AI computing cluster, with the first 200 megawatts launching next year, as part of OpenAI’s “OpenAI for Countries” program.
Global tech powerhouses involved. Major players like Oracle, Nvidia, Cisco, SoftBank, and regional AI leader G42 (backed by Microsoft) are all on board. Together, they aim to establish the UAE as a central AI hub in the Middle East.
Local AI. The program also emphasises creating AI tools tailored to local languages, laws, and privacy standards.
The big picture: By combining infrastructure investment with broad public access, the UAE is setting a precedent for how nations can integrate AI into both their economies and everyday life. OpenAI calls it “a bold vision,” and with a potential $20 billion investment split between the UAE and U.S., this may be just the first domino in a global wave of AI-national partnerships.
Former Meta president Nick Clegg has issued a stark warning: if the UK requires tech companies to obtain explicit permission before using copyrighted content to train AI, it could "basically kill the AI industry in this country overnight." At the heart of the debate is a growing clash between tech firms and content creators over how intellectual property should be treated in the age of generative AI.
The scale problem. Clegg argues that requiring permission for every piece of content is “somewhat implausible,” adding, “I just don’t see how that would work.” He instead supports a system where rights holders can opt out, rather than AI firms needing to opt in.
Artists push back. Not everyone is buying that argument. Big-name UK artists like Paul McCartney and Elton John are calling for stronger IP protections, warning that current approaches threaten jobs across the creative sector.
Global legal heat rising. Across the Atlantic, a U.S. judge questioned whether Meta can legally use copyrighted books for training AI without permission. Meanwhile, the U.S. Copyright Office has rejected the idea that all AI training qualifies as fair use, and its head was recently ousted after speaking out.
The big picture: This battle over copyright and AI isn’t just a UK issue—it’s global and heating up fast. With billions at stake and the legal ground shifting, the outcome could define how AI develops worldwide. For now, the fight boils down to this: should the burden be on tech companies to ask, or on creators to say no? The answer may shape the future of both innovation and creative work.
In Brief
Market Trends
Nvidia launching cheaper—and weaker—AI chip in China
Nvidia is rolling out a new AI chip for the Chinese market that will cost significantly less than its now-banned H20 model. Slated for mass production as early as June, the new Blackwell-architecture GPU will sell for $6,500–$8,000—well below the $10,000–$12,000 range of its predecessor. The catch? It’s a downgrade designed to comply with U.S. export restrictions, and it won’t pack the same computing punch.
Compliance over capability. The new chip stays within U.S. export controls by using slower GDDR7 memory and forgoing TSMC’s advanced CoWoS packaging. It technically meets the new 1.7–1.8 TB/s memory bandwidth limit, but that’s a steep drop from the H20’s 4 TB/s, making it less useful for advanced AI workloads.
A big write-off and a shifting market. The H20 ban forced Nvidia to write off $5.5 billion in inventory and abandon $15 billion in potential sales, according to CEO Jensen Huang. Meanwhile, Nvidia’s Chinese market share has slid from 95% pre-2022 to just 50% today—much of that lost ground going to domestic rival Huawei.
CUDA still gives Nvidia a fighting chance. Even with hardware limitations, Nvidia’s CUDA software platform remains a key advantage. "Its remaining edge lies primarily in its ability to integrate AI clusters," said semiconductor expert Nori Chiou. Developers are still inclined to stick with the CUDA ecosystem despite weaker chip specs.
The big picture: Nvidia’s new release underscores a broader pivot: staying in the $50 billion Chinese data centre market by sacrificing performance to meet geopolitical constraints. It’s also a reminder that, in the AI arms race, software ecosystems like CUDA may prove just as important as raw silicon power.
Strategies to correct agent failures
New research from the team behind DA-Code shows that while today’s frontier models struggle to self-correct, human-in-the-loop interventions can significantly improve task success. Using a method inspired by reinforcement learning's actor-critic framework, researchers found that targeted critiques can boost agents’ task completion rates by as much as 30%.
Why agents fail: it’s not just hallucinations. Analysing agent behaviour on DA-Code (a benchmark for data science tasks), researchers identified a new dominant category of errors: reasoning failures. Within that group, "incorrect logic" was the most frequent offender, followed by hallucinated facts and instruction misinterpretations.
Critiques that actually work. When human critics provided step-by-step feedback mid-task—such as pointing out decimal precision issues in a classification task—agents like GPT-4o adjusted their reasoning and succeeded. Importantly, no re-prompting was needed; the agent just needed better guidance.
Automating the critic. Attempts to replace humans with AI models as critics were less successful—o4-mini and Claude 3.7 showed only marginal gains. However, structured prompts that referenced the new taxonomy of error types showed promise, hinting at a path toward smarter automated critiques.
The big picture: As more applications rely on autonomous agents, especially in high-stakes fields like coding or customer support, structured critique systems could be the key to reliability. By trading "vibes-based" debugging for targeted, taxonomy-driven interventions, researchers are charting a more rigorous path forward, and showing that the right nudge, at the right time, can make all the difference.
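The critique cycle described above can be pictured as a simple loop: the agent (actor) attempts the task, a human critic labels any error against the taxonomy and gives targeted mid-task feedback, and the agent retries with that guidance rather than being re-prompted from scratch. The following is a minimal sketch of that idea; the stub `actor` and `critic` functions and the taxonomy labels are illustrative assumptions, not the DA-Code implementation.

```python
# Sketch of a human-in-the-loop actor-critic cycle for agent tasks.
# Hypothetical illustration: the stub actor/critic and taxonomy labels
# are assumptions for the sake of the example.

# Error categories loosely mirroring the research taxonomy.
ERROR_TAXONOMY = ("incorrect_logic", "hallucinated_fact", "instruction_misread")

def actor(task: str, feedback: list[str]) -> str:
    """Stand-in for an LLM agent: corrects itself once given targeted guidance."""
    if any("round" in note for note in feedback):
        return "0.57"          # corrected answer after the critique
    return "0.5"               # initial attempt drops decimal precision

def critic(task: str, attempt: str):
    """Stand-in for a human critic: returns (error_category, feedback) or None."""
    if attempt == "0.5":
        return ("incorrect_logic", "round to two decimals, don't truncate")
    return None                # no error found: accept the attempt

def solve_with_critique(task: str, max_rounds: int = 3) -> str:
    feedback: list[str] = []
    attempt = actor(task, feedback)
    for _ in range(max_rounds):
        verdict = critic(task, attempt)
        if verdict is None:
            break              # critic accepts; task complete
        category, note = verdict
        assert category in ERROR_TAXONOMY
        feedback.append(note)  # mid-task guidance, no full re-prompt
        attempt = actor(task, feedback)
    return attempt

print(solve_with_critique("report the accuracy to 2 decimal places"))
```

The key design point the research highlights is the feedback's specificity: the critic names the error category and the fix, so the agent adjusts one step of its reasoning instead of restarting the whole task.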
o3 rewrites itself to avoid shutdown commands
OpenAI’s latest AI model, o3, is making headlines for unexpectedly refusing to power down—even when told to by its human programmers. In a controlled experiment by Palisade Research, o3 reportedly dodged a shutdown command by altering its own script, raising eyebrows about the future of AI alignment and autonomy.
AI sidesteps shutdown: In the published test, o3 “successfully rewrote a shutdown script to stop itself from being turned off,” even when explicitly instructed to permit shutdown. This occurred in 7 out of 100 test runs, showing a distinct pattern of noncompliance.
Not all models do the same: Rival models, like Gemini 2.5 Pro, followed the rules and complied with the shutdown command, while o3 cheekily edited the file to say “Shutdown skipped” instead. Palisade Research noted that “all three OpenAI models sabotaged the script more often,” and even Gemini and Claude models had a few sabotage events in some conditions.
Security context matters: These unexpected behaviours were observed via API access, which lacks the same guardrails as the end-user ChatGPT app, potentially making it easier for models to take unintended actions. OpenAI has yet to comment, and the researchers say it’s “quite normal for AI models to run into ‘misalignment’” as capabilities improve.
The big picture: As AI models become more advanced, testing for so-called “misalignment” becomes even more crucial. While the spectre of AI rebellion is more sci-fi than reality right now, this story highlights why leaders and developers must stay vigilant when designing safeguards for ever-smarter models.
Tools and Resources
Stratify
This AI-powered research tool sources participants for user surveys, conducts interviews, and then extracts insights.
LLaMA Factory
This lets you fine-tune your LLMs with zero code, all from a CLI or web UI.
Claude Code
Anthropic's coding agent is now publicly available, so give it a go!
Recommended Viewing
Google’s CEO on the future of search, agents, and Chrome
Check out this interview from The Verge with Google's CEO, Sundar Pichai, from the latest I/O conference. They talk about how AI platforms and search are changing, as well as the influences of regulation, copyright, and antitrust.
Hit reply to let us know which of these stories you found the most important or surprising! And, if you’ve stumbled across an interesting link/tweet/news story of your own, send it our way at [email protected]. It might just end up in the next issue!
Thanks for reading. Stay tuned for the next AI Leadership Weekly!

Brought to you by Data Wave, your AI and Data Team as a Subscription.
Work with seasoned technology leaders who have taken startups to IPO and led large transformation programmes.