AI Leadership Weekly

Issue #35

Welcome to the latest AI Leadership Weekly, a curated digest of AI news and developments for business leaders.

Top Stories

Mark Zuckerberg is taking Meta’s AI ambitions into his own hands by personally recruiting a “superintelligence” team to chase artificial general intelligence (AGI), according to several sources close to the company. After internal disappointment with the latest Llama models and criticism both inside and outside Meta, Zuckerberg is reportedly going all-in, micromanaging the process and reorganising teams at headquarters to sit right by him.

  • The ‘superintelligence’ playbook. Zuckerberg is focused on building a dedicated group of top talent, including researchers and engineers he’s handpicked over dinners and private chats. The ambition? For Meta “to outstrip other tech companies in achieving what’s known as AGI,” and embed it across Meta’s many services such as chatbots and Ray-Ban smart glasses.

  • Big bets, big hires, big investments. Zuckerberg’s not just recruiting, but also planning a multi-billion dollar investment in Scale AI, reportedly valuing the startup at $28 billion. Scale AI’s CEO, Alexandr Wang, is expected to join the new AGI team, signalling Meta’s willingness to bring in major external talent to close the capability gap.

  • Internal struggles and regulatory questions. Meta’s recent Llama 4 release drew lukewarm reviews, and even internally, some worry the company is over-promising. Meanwhile, Meta’s massive investments and aggressive hiring are likely to attract regulatory scrutiny, echoing recent probes into other tech giants’ AI partnerships.

The big picture: Meta’s pivot to a hands-on, founder-driven AGI race is a dramatic reset aimed at wrestling back a leadership spot in artificial intelligence. With “hundreds of billions” expected to be poured into AI and a super-team forming under Zuckerberg’s watchful eye, this scramble highlights just how fierce the global race for advanced AI, and the fight for top talent, have become.

Anthropic has quietly killed off its AI-generated blog, Claude Explains, just a month after launch, raising fresh questions about the risks and rewards of letting large language models loose on branded content. The experimental blog, touted as a way to combine AI with human expertise, has vanished with its web address now redirecting visitors to Anthropic’s main site.

  • Unclear boundaries and human oversight. Anthropic initially promised Claude Explains would “demonstrate how human expertise and AI capabilities can work together,” with subject matter experts shaping and fact-checking AI-written drafts. Still, it was never clear how much was AI and how much was human, fuelling some scepticism online.

  • Social media scepticism and marketing motives. Critics called it out for lack of transparency, and some suspected it was more about automating content marketing than genuinely helping users, especially since such tactics are often intended to fuel web traffic.

  • AI-generated content and industry caution. Anthropic isn’t the first to learn the hard way that AI still “confidently makes things up”, with other publishers like Bloomberg and G/O Media stumbling over factually sketchy AI-generated posts.

The big picture: Anthropic’s decision to pull the plug on its own AI-written blog so quickly is a signal to the industry: blending AI and content publishing is no silver bullet, at least not yet. As more companies flirt with automating editorial work, the Claude Explains experiment is a handy reminder that transparency and accuracy are just as important as the tech itself.

Tesla’s robotics ambitions face a shakeup this week as the leader of its Optimus humanoid project, Milan Kovac, has exited the company. The move comes at a sensitive moment, with Tesla gearing up to unveil its fleet of self-driving robotaxis in Austin, and just as scepticism over Tesla’s autonomous tech seems to be rising.

  • Leadership change at a crucial moment. Kovac, who helped supervise foundational software for both Tesla’s Autopilot and Optimus divisions, announced he’s stepping down to spend more time abroad with his family. As he put it, “this is the only reason, and has absolutely nothing to do with anything else,” but his departure lands right before a major product launch.

  • Big bets on robots and autonomy. Elon Musk continues to double down, publicly calling the Optimus the “most sophisticated humanoid robot on earth” (while apparently never having seen a Boston Dynamics video) and telling investors that the company’s future hinges on mass-scale autonomous cars and robotics. But public trust, at least for robotaxis, is shaky. Some say autonomous Teslas are unsafe, and a few even claim they shouldn’t be legal at all, according to TheStreet.

  • Rising competition heats up. While Tesla rearranges its engineering leadership (reportedly replacing Kovac with Ashok Elluswamy, Autopilot and AI VP), Amazon is swiftly making moves of its own in humanoid robotics, testing robot package delivery.

The big picture: With key personnel changes happening right before a critical launch, Tesla’s bold vision for a robot-powered future faces real tests both inside and outside the company.

In Brief

Market Trends

In a twist that few saw coming, OpenAI has struck an unprecedented deal with Google to tap its cloud infrastructure, according to several insiders. The agreement marks a collaboration between two of the fiercest rivals in the AI space, as OpenAI moves to diversify its computing resources beyond its primary partner, Microsoft.

  • Rival partners for massive compute needs. OpenAI’s appetite for computing power has exploded since ChatGPT’s blockbuster debut, pushing the company to source additional “compute” from Google’s advanced cloud platform. “The deal … underscores the fact that the two are willing to overlook heavy competition between them to meet the massive computing demands,” analysts say.

  • The changing cloud and AI ecosystem. This partnership is not just a win for Google’s $43 billion cloud business—it signals a growing trend of cloud providers selling capacity even to their direct competitors. Google has already signed up other high-profile AI outfits, including Apple and Anthropic, and now sees OpenAI added to its customer list, even as Google’s own AI models race neck-and-neck with OpenAI.

  • Strategic challenges on all sides. Despite the headline win, Google faces headaches as it balances the needs of external AI customers and its own ambitions (and reportedly, chip supply is already running thin). Meanwhile, OpenAI is racing to build its own hardware and cut reliance on anyone else, including Microsoft, as the competitive and financial stakes soar.

The big picture: This remarkable crossover between OpenAI and Google highlights just how demanding—and unpredictable—the AI arms race has become. For all the public rivalry, behind the scenes the new world of AI is forcing even the biggest players to partner up when it comes to hardware and infrastructure. If there’s a lesson here, it’s that in AI, today’s competitor may be tomorrow’s cloud provider.

The Trump administration is shaking up federal AI policy, announcing that the Biden-era AI Safety Institute will be rebranded as the Center for AI Standards and Innovation. According to Commerce Secretary Howard Lutnick, the rebrand reflects a hands-off approach: instead of government regulation, the centre will encourage the private sector and voluntary collaboration to set the pace for AI safety standards.

  • Voluntary approach over regulation. Lutnick told attendees at the inaugural AI Honors event that “we’re not going to regulate it,” preferring a “voluntary model” where stakeholders can drive analysis, standards, and best practices themselves, echoing the administration’s broader deregulatory philosophy.

  • Focus on global leadership and data centres. Lutnick remarked on the need for U.S. leadership in AI, noting, “our adversaries are substantially behind us.” A big theme was also the growing energy demand, with Lutnick suggesting that data centre operators may be allowed to build their own dedicated power sources to address America’s surging need for computational muscle.

  • Ongoing debate over state-level AI regulation. Not everyone is on board with every change, with some GOP lawmakers voicing concern over federal moves to restrict state-level regulation as part of Trump’s legislative agenda, highlighting ongoing friction about how AI oversight should be structured in the U.S.

The big picture: This rebranding and shift in oversight model mark a notable pivot in U.S. AI policy, away from the more cautious and regulated stance taken by the previous administration. For tech leaders, this signals a greener light for faster, less encumbered AI innovation, but it also means more responsibility for industry and new questions about who, if anyone, sets the rules when it comes to safety, standards, and the enormous infrastructure AI requires.

McKinsey’s in-house AI, “Lilli,” is quickly taking over tasks that were once the bread and butter of junior consultants, such as making PowerPoints and drafting project proposals. According to McKinsey, over 75% of its 43,000 employees now use Lilli every month, streamlining workflows and freeing up valuable human time for higher-level problem solving.

  • AI takes on junior analyst work. With Lilli, McKinsey consultants can whip up PowerPoint slideshows, tailor the tone of presentations, draft client proposals, and even research industry trends, all by feeding prompts into the system. “Do we need armies of business analysts creating PowerPoints? No, the technology could do that,” said Kate Smaje, McKinsey’s global head of tech and AI.

  • AI adoption accelerates at McKinsey and beyond. Lilli is a deeply integrated platform trained on McKinsey’s vast intellectual property. Employees use it an average of 17 times a week, logging over half a million prompts monthly and reportedly saving 30% of the time spent gathering and synthesising information. Rival firms like Bain and BCG are reportedly rolling out similar tools.

  • Impact on hiring and the future of entry-level jobs. Despite fears that AI will mean fewer entry-level jobs, Smaje claims the firm isn’t necessarily planning big cuts, but a shift in what junior analysts do. But broader industry numbers aren’t comforting: SignalFire reports that entry-level hiring at big tech firms fell 25% from 2023 to 2024, as AI eats away at grunt work.

The big picture: McKinsey’s Lilli is a clear sign that AI isn’t just hype but is fundamentally changing the shape of white-collar work from the inside. For consulting and knowledge industries, the era of the human PowerPoint machine is ending, and the real task now is making sure new talent adds value beyond what AI delivers in seconds.

Tools and Resources

This is Apple’s coding tool, which now has access to its latest generative models.

Vibe co-editing? Apparently it’s a thing now!

Turn (boring) text files into (fun) learning content!

Recommended Reading

This video goes over the recent Apple research paper on reasoning models. It’s interesting to see where reasoning models excel, where more traditional LLMs still reign, and where they both fall flat.

Hit reply to let us know which of these stories you found the most important or surprising! And, if you’ve stumbled across an interesting link/tweet/news story of your own, send it our way at [email protected]. It might just end up in the next issue!

Thanks for reading. Stay tuned for the next AI Leadership Weekly!

Your AI and Data Team as a Subscription

Brought to you by Data Wave, your AI and Data Team as a Subscription.
Work with seasoned technology leaders who have taken startups to IPO and led large transformation programmes.