AI Leadership Weekly

Issue #42

Welcome to the latest AI Leadership Weekly, a curated digest of AI news and developments for business leaders.

Top Stories


Chip maker Groq nears $600M raise, doubles valuation to $6B in Nvidia challenge
Groq, one of the more buzzed-about Nvidia challengers, is reportedly nearing a $600 million fundraising round that would put its valuation close to $6 billion, double its worth just a year ago. The deal, led by Austin-based VC firm Disruptive, comes on the heels of Groq’s $640 million raise last year at a $2.8 billion valuation. It’s proof that big money still believes in alternatives to Nvidia’s AI chip dominance, although Groq itself isn’t commenting on the details just yet.

  • Fundraising boom, but revenue questions. While this new round looks set to bring Groq’s total haul to $2 billion, “The Information” recently reported the company had to trim 2025 revenue projections by more than $1 billion. Groq insiders claim this revenue will simply shift to 2026, but it’s a reminder that being the next AI hardware giant is no sure thing.

  • High-profile deals and global ambitions. Groq isn’t standing still: it has landed a major partnership with Bell Canada for AI infrastructure, begun powering Meta’s Llama 4 inference, and just set up its first European data centre in Finland.

  • Taking on Nvidia’s market dominance. Groq is clear about its ambitions: to give customers a meaningful alternative to Nvidia, currently sporting a staggering $4 trillion market cap. Founder Jonathan Ross, previously of Google’s TPU project, is betting big that customers want more choices and less vendor lock-in.

The bottom line: Groq is riding a wave of investor optimism, but lofty valuations don’t guarantee success against an entrenched giant like Nvidia. As AI hardware demand explodes, Groq’s ability to deliver real revenue—and not just hype—will be the true test. For enterprise tech leaders, the chip race is getting more interesting by the week.


China’s AI action plan
China has unveiled a sweeping new action plan for artificial intelligence, urging global cooperation on AI development and rules just days after the U.S. released its own AI strategy. The plan, announced at Shanghai’s World Artificial Intelligence Conference by Premier Li Qiang, includes a proposal for a worldwide AI cooperation organisation. Li also highlighted China’s ambitions to help roll out AI technology in developing nations, particularly the Global South.

  • Global approach versus national interests. China’s plan pushes for a multilateral framework, in contrast to the U.S., which is pursuing a more camp-based strategy with its traditional allies such as Japan and Australia. “China clearly wants to stick to the multilateral approach while the U.S. wants to build its own camp, very much targeting the rise of China in the field of AI,” said George Chen of the Asia Group.

  • Access to tech and rising alternatives. While U.S. restrictions continue to limit China’s access to advanced AI chips, Chinese firms are investing heavily in domestic alternatives. Nvidia CEO Jensen Huang recently called the homegrown Chinese chips “formidable,” perhaps hinting at a future where the West’s stranglehold on core AI hardware may not last forever.

  • Seeking soft power in the Global South. Alongside talks of technology, China’s plan includes sharing AI capabilities with less developed countries, trying to position itself as a tech ally for nations left out of the U.S. and European orbits.

Why it matters: This latest move is more than just grandstanding at a conference; it signals two rival visions for the future of AI: one prioritising international agreements (at least in rhetoric), the other circling the wagons. For founders, investors and tech builders, the global split may force hard choices about supply chains, standards, and who ultimately controls the future rules of AI.

No one will work for Zuckerberg, not even for $1 billion
Meta has amped up its AI talent hunt, reportedly offering eye-watering sums—one offer exceeded $1 billion—to lure staffers from AI startup Thinking Machines Lab (TML), helmed by ex-OpenAI CTO Mira Murati. Yet, despite the pitch and some offers in the $200-$500 million range, sources say not a single TML hire has accepted. The result? A feeding frenzy of headlines and a fair share of industry side-eye.

  • Billion-dollar offers, reluctant recruits. Mark Zuckerberg’s recruitment push for the new Meta Superintelligence Labs has been anything but shy: sources describe WhatsApp messages from Zuckerberg himself and “sizable” offers. Meta disputes the details, but admits there was at least one major bid.

  • Open source strategy and internal pressure. Meta is betting big on open sourcing its models to outflank rivals like OpenAI, with execs reportedly keen to undercut competitors. One Meta source claimed “the pressure has always been there” to launch competitive models, sometimes at the expense of polish.

  • Mission, money, or mission drift? Despite Meta’s financial firepower, some top AI minds say they’re less inspired by the company’s consumer-centric roadmap and leadership choices than by the more ambitious missions at OpenAI or Anthropic. For now, TML’s own star-studded startup (valued at $12 billion before shipping a product) isn’t exactly desperate for Zuckerberg’s cheque.

The bottom line: Meta may be spending billions assembling its AI dream team, but industry chatter suggests prestige, mission, and culture still matter, even in the age of AI billionaires. With egos and strategy still in flux, it remains to be seen whether Meta can turn headline-grabbing offers into real progress.

In Brief

Market Trends

Switzerland building LLM on public infrastructure
Switzerland is making a bold move in the world of artificial intelligence, announcing the release of its first fully open, multilingual LLM in the summer of 2025. Developed by ETH Zurich, EPFL, and the Swiss National Supercomputing Centre (CSCS), the project is designed to foster transparency, global accessibility, and open innovation.

  • Fully open, fully transparent. The Swiss LLM will be released under an Apache 2.0 licence, with code, training data, and model weights (the full package) made public. “Fully open models enable high-trust applications and are necessary for advancing research about the risks and opportunities of AI,” said Imanol Schlag of the ETH AI Center.

  • Multilingual muscle. Trained on data across more than 1,000 languages and offering both 8B and 70B parameter versions, Switzerland’s model aims to rival global open-source heavyweights. Training included 15 trillion tokens, with 40% of data targeting non-English languages, which is quite rare in the generative AI space.

  • Ethics, regulation, and academic collaboration. The model follows Swiss and EU regulations, Swiss data protection laws, and strict ethical guidelines for sourcing data, with researchers stating that “respecting web crawling opt-outs did not significantly impact model performance.” Backed by the Swiss AI Initiative, it draws on resources and talent from over 10 academic institutions.

The big picture: As noted by EPFL’s Martin Jaggi, Switzerland’s full-throttle open science approach could challenge the dominance of closed tech giants, position Europe as a hub for ethical, transparent AI, and attract top talent and partners. For entrepreneurs and researchers tired of “black box” models, this could be the open door they were waiting for.



AI to restrict minors on YouTube
YouTube is turning to AI to help spot underage users in the US and automatically apply protections for minors. The platform’s new machine learning age estimation rolls out from 13 August, automatically flagging accounts suspected to belong to users under 18 and limiting what those accounts can see and do.

  • AI-driven age checks for minors. Google’s tech will analyse user activity and account details to estimate age, not just rely on self-reported data. If flagged as underage, users will be notified, and can appeal by verifying their age via government ID, selfie, or credit card.

  • Extra protections and restrictions. Affected accounts will get “teen mode” by default, meaning no age-restricted videos, non-personalised ads, “take a break” prompts, and fewer video recommendations around topics like body image. Privacy reminders will also pop up before comments or uploads.

  • Impact on creators and wider context. YouTube says the change may “shift” some audiences to the teen bracket, which could mean less ad revenue for some creators. The move aligns with mounting regulatory pressure, like the UK’s online age-gate laws and the EU’s prototype linking age checks to digital IDs.

The bottom line: As governments bear down on tech giants over online child safety, YouTube’s machine learning rollout is a major play in the ongoing age-verification tug-of-war. If it actually works as intended, it could set the standard for compliance—and headaches—for other platforms balancing user privacy, revenue, and child protection.

Devs forced to battle “almost right” code
AI-powered coding assistants might be widespread, but new Stack Overflow data suggests they’re quietly taxing developer productivity. The 2025 Developer Survey found 84% of programmers now use or plan to use AI tools, yet trust in these tools is plummeting, with so-called “almost right” AI-generated code creating more work than it saves.

  • The productivity tax of ‘almost right’ code. Sixty-six percent of developers say their biggest frustration is fixing AI solutions that are plausible but not production-ready. Debugging these “almost right” responses often takes longer than starting from scratch, and just 33% of developers now trust AI’s accuracy.

  • Enterprise risks and the limits of ‘vibe coding’. With AI adoption outpacing governance, technical debt and security issues are piling up. Most developers steer clear of “vibe coding” (blind trust in AI output) for important work, citing ethical or security risks, while 61.7% still prefer human help for those thornier coding problems.

  • Humans (and Stack Overflow) remain essential. Despite AI’s rapid rise, old-school human expertise remains critical. Stack Overflow continues to be a go-to resource, with many developers turning to it after hitting walls with bot-written code. As Jody Bailey from Stack Overflow notes, the future belongs to communities that offer reliable, curated knowledge, not just fast answers.

Why it matters: AI coding tools promise huge efficiency gains, but if left unchecked, they could saddle organisations with mounting technical debt and security risks. The winners in AI-driven development won’t just be those who adopt AI quickly, but those who integrate it thoughtfully.
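For readers who want a concrete picture of what “almost right” code looks like, here is a small, hypothetical Python sketch (our own illustration, not taken from the survey): a list-chunking helper of the sort an assistant might generate, which reads plausibly but silently drops data.

```python
# A hypothetical "almost right" helper: split a list into fixed-size chunks.
def chunk_almost_right(items, size):
    # Looks reasonable at a glance, but the range stops early and
    # silently drops any trailing partial chunk.
    return [items[i:i + size] for i in range(0, len(items) - size + 1, size)]

# The corrected version iterates to the end of the list, so the final
# short chunk is kept.
def chunk_fixed(items, size):
    return [items[i:i + size] for i in range(0, len(items), size)]

# chunk_almost_right([1, 2, 3, 4, 5], 2) -> [[1, 2], [3, 4]]   (the 5 is lost)
# chunk_fixed([1, 2, 3, 4, 5], 2)        -> [[1, 2], [3, 4], [5]]
```

The bug survives a quick read, and even a happy-path test with evenly divisible input, which is exactly why debugging this class of output can take longer than writing the function from scratch.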

Tools and Resources

  • Try out Alibaba's new open source video model!

  • This is Microsoft's AI agent for their browser. Maybe now people will use it?

  • Browser automations that mix agent prompts and deterministic actions.

Hit reply to let us know which of these stories you found most important or surprising! And if you’ve stumbled across an interesting link, tweet or news story of your own, send it our way at [email protected]. It might just end up in the next issue!

Thanks for reading. Stay tuned for the next AI Leadership Weekly!

Your AI and Data Team as a Subscription

Brought to you by Data Wave, your AI and Data Team as a Subscription.
Work with seasoned technology leaders who have taken startups to IPO and led large transformation programmes.