AI Leadership Weekly
Issue #47
Welcome to the latest AI Leadership Weekly, a curated digest of AI news and developments for business leaders.
Top Stories

xAI sues former engineer, alleging theft of trade secrets
xAI has sued former engineer Xuechen Li, alleging he stole confidential research worth “billions” in R&D shortcuts before resigning to join OpenAI. The complaint says Li cashed out roughly $7 million from xAI, then copied documents from his company laptop to a personal device and resigned three days later.
The core allegations. xAI alleges Li took “cutting-edge” tech “superior to ChatGPT,” tried to hide his tracks by deleting logs and renaming files, and later falsely certified he had returned and deleted company property.
Timeline and access. Li joined xAI in early 2024 as one of about 20 early engineers on Grok, which gave him broad access. The suit says he accepted an OpenAI offer before resigning, with an August start date.
Talent war backdrop. The case lands amid intense poaching where top researchers are courted like “superstar athletes” and packages can reach nine figures. That competition is now spilling into the courts.
Why it matters: The frontier AI race is increasingly about proprietary data, model weights and training recipes, which means employee mobility now carries outsized IP risk. For leaders, expect tighter offboarding, stricter device monitoring and more litigation. For startups, the message is clear: protect your crown jewels early and assume rivals will test your defences.

How data centres are increasing everyone’s electricity costs
AI’s data centre boom is showing up on household power bills, according to energy advocates and regulators in the US. The PJM capacity market, which sets part of wholesale power costs across 13 states, saw prices jump by 800% last year, with its monitor attributing 63% of that increase to data centres. Utilities are racing to build new plants and transmission lines, which they say are needed to meet AI’s insatiable demand for compute.
Bills rising via market mechanics. Higher capacity costs are flowing through to supply charges on customer bills. Data centres now account for about 4% of US electricity demand, a share that is projected to triple within three years.
Secret deals and socialised costs. Utilities have struck NDAs with Big Tech. In Louisiana, redacted filings suggest the public is “on the hook” for parts of a Meta-linked $3 to $4 billion plant. In Virginia, Dominion projects average residential bills could more than double as it builds new plants largely for data centres.
Policy fight over who pays. Maryland and Oregon created a separate customer class so data centres shoulder bespoke grid costs. The industry’s lobbying arm says members pay their fair share, yet it pushed to block stricter rules in Virginia.
Why it matters: If AI’s power appetite continues unchecked, expect sharper scrutiny of siting, tariffs and transparency. Entrepreneurs should watch for policy that forces large loads to pre-pay grid upgrades, which could reshape the economics of AI infrastructure and who gets to compete.
AI-powered precogs are coming after you for thought crimes
An AI “pre-crime” platform called GIDEON is being pitched to US law enforcement, with its CEO claiming it will go live “next week” across multiple departments. The system purportedly scrapes the internet around the clock and uses an “Israeli grade ontology” to flag potential violent actors. No public list of participating agencies or technical details has been released, which means the timeline and scope cannot be independently verified.
What is being sold. GIDEON markets itself as AI “threat detection” that scans online speech to preempt attacks. The CEO promoted it on Fox News, saying it will help identify risks before they materialise.
Civil liberties concerns. The article argues this is “pre-crime” that risks chilling speech and eroding privacy. The author warns the targets could vary by political leadership, which raises due process and bias questions.
Transparency and oversight. No contracts, agencies or evaluation metrics are disclosed. Without clear policies on accuracy, false positives and appeals, critics say the tech invites mission creep.
Why it matters: If police begin normalising AI-driven scanning of public posts, the line between predictive safety and surveillance could blur fast. Leaders should expect scrutiny of legality, auditing and redress mechanisms, which means any rollout without guardrails may prompt backlash and stricter regulation. For AI builders, this is a reminder that safety tech lives or dies on transparency, evidence of efficacy and civil rights compliance, not TV soundbites.
In Brief
Market Trends
Microsoft laying groundwork for independence from OpenAI
Microsoft has unveiled two in-house AI models and is already slotting them into Copilot, which hints at a gradual loosening of its dependence on OpenAI. MAI-Voice-1 is live in Copilot Daily and Podcasts, while MAI-1-preview, a foundational LLM “specifically trained to drive Copilot,” is being tested publicly and will roll into “certain text use cases… over the coming weeks.” Microsoft says MAI-1-preview was trained on roughly 15,000 Nvidia H100s and can run inference on a single GPU, which would be notable if borne out at scale.
Microsoft’s own stack. “Microsoft has introduced AI models that it trained internally,” signalling a hedge if OpenAI’s roadmap or pricing drifts.
Voice first, consumer focus. MAI-Voice-1 targets “high-fidelity, expressive audio,” and Mustafa Suleyman says the goal is to “create something that works extremely well for the consumer.”
Copilot integration and scope. MAI-1-preview aims at instruction-following and everyday queries. Live pilots on LMArena suggest Microsoft wants feedback before wider deployment.
Why it matters: Building proprietary models gives Microsoft leverage on cost, latency and data governance, which means less platform risk if OpenAI stumbles or priorities diverge. For AI leaders, the signal is clear: the era of single-supplier reliance is fading as majors adopt mixed model portfolios and purpose-built systems for specific product experiences.
Cracks forming in Meta and Scale AI’s partnership
Meta’s multibillion-dollar tie-up with Scale AI looks shaky. TechCrunch reports TBD Labs is increasingly using rival data-labelling vendors like Surge and Mercor, while at least one Scale executive, Ruben Mayer, has already left Meta after two months. Some researchers reportedly view Scale’s data as “low quality,” a claim Meta disputes.
Early departures and friction. Mayer says he was “part of TBD Labs from day one” and left for a personal matter. Still, insiders describe a chaotic integration, with OpenAI hires exiting and Meta’s previous GenAI team seeing its scope cut.
Vendor diversification and quality debate. Even with Meta’s big investment, TBD Labs is sourcing labels beyond Scale, which suggests a pragmatic hedge. Meta says there are no quality issues, yet researchers appear to prefer Surge and Mercor for expert-grade data.
Scale AI’s market squeeze. After OpenAI and Google ended work with Scale, the startup cut 200 roles, then pivoted to government, landing a $99m Army deal. Meta, meanwhile, is racing ahead with a $50bn Hyperion data centre and aims to ship a next-gen model this year.
Why it matters: Data is the fuel for frontier AI, which means control over high-quality labelling pipelines is strategic. The episode underlines the risk of single-vendor bets and the difficulty of culture-mashing startups into Big Tech. For AI leaders, diversify suppliers, invest in expert-labelled data, and expect integration pain even when the cheques are large.

Salesforce CEO boasts of firing 4,000 employees after replacing them with AI
Salesforce CEO Marc Benioff says AI agents have cut 4,000 roles in customer support, shrinking the team from 9,000 to 5,000, and are now handling half of customer conversations. Speaking on The Logan Bartlett Show, he said the company is also using “agentic sales” to call back a historic backlog of 100 million leads. These bold claims are not independently verified, though they fit a wider push to automate frontline tasks.
Headcount shift in support. Benioff framed it as a “rebalance”, meaning fewer “heads” are needed in support. Salesforce had 76,453 employees across divisions as of January.
Productivity and handoffs. An “omnichannel supervisor” coordinates humans and bots, with AI escalating when it hits limits. Benioff likened it to driver assist that asks a human to take over.
Industry views diverge. Nvidia’s Jensen Huang argues AI should boost growth rather than trigger layoffs, while Microsoft’s Asha Sharma says agentic systems could flatten management layers.
Hit reply to let us know which of these stories you found the most important or surprising! And, if you’ve stumbled across an interesting link/tweet/news story of your own, send it our way at [email protected]. It might just end up in the next issue!
Thanks for reading. Stay tuned for the next AI Leadership Weekly!

Brought to you by Data Wave, your AI and Data Team as a Subscription.
Work with seasoned technology leaders who have taken Startups to IPO and led large transformation programmes.