AI Leadership Weekly
Issue #50
Welcome to the latest AI Leadership Weekly, a curated digest of AI news and developments for business leaders.
Top Stories

Source: Reuters
Oracle, Nvidia, and OpenAI join forces in ambitious data centre build-out plan
OpenAI says it has struck deals with Oracle and SoftBank to build five new US data centres, claiming it now has agreements covering more than $400 billion of infrastructure as part of the Stargate Project. The plan targets $500 billion over five years, and arrives a day after Nvidia said it would invest up to $100 billion in OpenAI, starting with $10 billion for a roughly 2 percent stake. New sites are slated for Ohio, two locations in Texas, Doña Ana County in New Mexico, and another unnamed Midwest location.
Financing through partners. Oracle will pay for and oversee three facilities, then sell compute back to OpenAI. SoftBank will fund two sites via bank financing and debt, while OpenAI oversees construction. Oracle’s Clay Magouyrk said it involves “interesting new corporate structures and interesting new ways of doing financing.”
Scale and energy footprint. The trio plans eight data centres drawing about 1.4 GW, with two already live. OpenAI also plans a UAE facility alongside Oracle, SoftBank and G42 following a US government agreement.
Revenue vs spend. OpenAI is generating billions in revenue but spending tens of billions, mostly on compute. Nvidia’s $100 billion is staged across 10 tranches of $10 billion, contingent on continued build-out with partners.
The bottom line: This is the AI data centre land grab accelerating, which means heavy reliance on partner capex, financial engineering and long-term energy supply. If AI adoption lags expectations, the debt and lock-in could bite, but if demand holds, these deals entrench an infrastructure moat that smaller rivals will struggle to match.
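The headline figures above lend themselves to a couple of quick sanity checks. Here is a back-of-envelope sketch in Python; all numbers are approximations quoted in the story (the "roughly 2 percent" stake in particular), not official disclosures:

```python
# Back-of-envelope arithmetic on the figures quoted above; all values approximate.

# Nvidia's staged investment: $100B delivered in $10B tranches.
total_investment = 100e9                         # USD
tranche_size = 10e9                              # USD per tranche
num_tranches = total_investment / tranche_size   # -> 10 tranches

# A $10B first tranche for a ~2% stake implies a rough valuation.
stake = 0.02                                     # "roughly 2 percent"
implied_valuation = tranche_size / stake         # ~$500B implied OpenAI valuation

# Eight planned data centres drawing about 1.4 GW in total.
total_power_w = 1.4e9
sites = 8
per_site_mw = total_power_w / sites / 1e6        # ~175 MW per site

print(f"{num_tranches:.0f} tranches; implied valuation ≈ ${implied_valuation / 1e9:.0f}B; "
      f"≈{per_site_mw:.0f} MW per site")
```

Treat these as order-of-magnitude checks only: the per-site figure assumes the 1.4 GW is split evenly, and the implied valuation swings widely with the exact stake percentage.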

Source: Axios
Meta launches super PAC to fight AI regulation
Meta has launched a new super PAC to fight state-level AI and tech regulation, saying it will spend in the “tens of millions” to back candidates who are friendlier to industry. The American Technology Excellence Project will be run by Republican operative Brian Baker and Democratic firm Hilltop Public Solutions, and will support both parties. The pitch is blunt: stop “poorly crafted” state bills that Meta says could hurt the US in the AI race with China.
States are now the main arena. With Congress gridlocked, state houses have introduced around 1,100 tech policy bills this year, which often pass more easily and can create a patchwork of rules.
What the PAC will target. Meta cites three pillars: US tech leadership, AI progress, and parental control over kids’ online experiences. The company has not said which states are first in line or how large the operation will be.
Part of a broader push. Meta also set up a California-focused PAC last month. Separately, a16z and OpenAI president Greg Brockman announced a $100 million PAC to oppose strict AI rules.
Why it matters: Big Tech is shifting serious cash into state politics to shape AI guardrails before they harden, which means more fragmentation risks and faster-moving laws outside DC. For AI leaders, expect a louder, better-funded fight over state rules that could set compliance baselines and determine who can scale new AI products in the US.
Data centre build-out leading to unpredictable power requirements
US utilities might be overreacting to AI’s power hunger, which risks locking the grid into decades of unnecessary fossil fuel build-out and higher bills. A new report from As You Sow and the Sierra Club says inflated demand forecasts are being driven by speculative data centre plans and hype, even as tech companies do genuinely need more electricity to train and run models.
Demand projections look shaky. Utilities are preparing for 50 percent more growth than the tech industry expects, with the Southeast projecting up to four times independent estimates. Proposed new gas capacity jumped 70 percent between January 2023 and January 2025.
Speculation and double counting. Developers are requesting grid connections before lining up capital or customers, which could mean duplicate asks and inflated forecasts. Vistra’s CEO warned proposals “may be overstated anywhere from three to five times.”
Big loads, bigger consequences. High-density AI racks can draw 100 kilowatts, which an analyst said is like “a small town’s worth of power.” One example in Louisiana would see three new gas plants for a Meta data centre, pegged at the equivalent usage of 1.5 million homes and 100 million tonnes of CO2 over 15 years.
Why it matters: If the AI boom slows or efficiency gains arrive, utilities could strand billions in gas assets that customers still pay for. The report’s fix is boring but practical: better disclosure, stiffer deposits, and long-term contracts, plus tech firms doubling down on renewables. Sensible forecasting now could avoid an expensive, carbon-heavy detour.
In Brief
Market Trends
200+ world leaders agree that an “AI red line” is needed
More than 200 leaders have signed a “Global Call for AI Red Lines,” urging governments to agree by the end of 2026 on what AI must never do. The proposal cites bans on AI impersonating humans or self-replicating, and arrives ahead of the UN General Assembly’s high-level week. Signatories include Geoffrey Hinton, OpenAI cofounder Wojciech Zaremba, Anthropic CISO Jason Clinton and Google DeepMind’s Ian Goodfellow.
Who is behind it. The initiative is led by CeSIA, The Future Society and UC Berkeley’s Center for Human-Compatible AI. “The goal is not to react after a major incident… but to prevent [irreversible risks]”, said CeSIA’s Charbel-Raphaël Segerie.
What already exists. Europe’s AI Act bans some “unacceptable” uses and the US and China have agreed to keep nuclear control human. That still falls short of global consensus or enforcement.
Enforcement over vibes. “Voluntary pledges… fall short,” said The Future Society’s Niki Iliadis, calling for a global body with “teeth.” Stuart Russell argued firms should pause on AGI until it is safe, saying developers should build safety in from the start.
Why it matters: This is a push to move from company playbooks to hard international guardrails, which could reshape product roadmaps and compliance for AI builders. The 2026 deadline is ambitious and consensus on definitions will be messy, but if governments bite, expect stricter red lines on autonomy and deception that favour firms investing early in safety and auditability.
Stanford study warns that AI is increasing slop rather than productivity
AI was meant to boost productivity, yet a Stanford Social Media Lab and BetterUp study warns a new scourge is spreading through offices: “workslop.” That is AI-generated content that looks polished but wastes time because it is hollow. In a survey of 1,150 employees, 40 percent said they had received workslop from a colleague in the past month, which often leads to awkward rewrites and frayed trust.
What counts as workslop. Researchers define it as output that “masquerades as good work” but fails to advance the task. Employees cited puffed-up memos and error-prone reports that felt convincing on the surface but fell apart on contact.
How it spreads. Slop flows laterally between peers and also up and down the org chart. One project manager said it created a “huge time waste,” while a benefits manager called sorting it out “annoying and frustrating.”
Fixes companies can deploy. The authors suggest AI literacy, clear guidance on when AI is appropriate, and using AI to polish rather than originate. Thor Ernstsson adds that AI should be treated like an untrained intern, and teams that openly critique quality “experience less workslop.”
Why it matters: The findings are survey-based which means they are not hard productivity metrics, but the signal is clear. Unmanaged AI use creates hidden costs, damaged credibility and slower execution. Leaders who set guardrails, train for discernment and measure outcomes rather than AI usage rates will capture real gains while avoiding a slide into workslop.
Hit reply to let us know which of these stories you found the most important or surprising! And, if you’ve stumbled across an interesting link/tweet/news story of your own, send it our way at [email protected]. It might just end up in the next issue!
Thanks for reading. Stay tuned for the next AI Leadership Weekly!

Brought to you by Data Wave, your AI and Data Team as a Subscription.
Work with seasoned technology leaders who have taken startups to IPO and led large transformation programmes.