AI Leadership Weekly
Issue #36
Welcome to the latest AI Leadership Weekly, a curated digest of AI news and developments for business leaders.
Top Stories

The Pentagon is gutting technology testing teams as AI in defence ramps up
In a move set to reshape the Pentagon’s approach to AI and weapons oversight, Secretary of Defense Pete Hegseth announced deep cuts to the Office of the Director of Operational Test and Evaluation. Staff will be cut in half, the director is being ousted, and the already lean team has just seven days to carry out the overhaul. Supporters say this will streamline bureaucracy, but critics warn that crucial safety checks are about to get much weaker.
The last watchdog gets neutered. The office is "the last gate before a technology gets to the field," says Missy Cummings, a former Navy fighter pilot.
Tech companies stand to gain, but testing may suffer. The defence tech sector, from AI startups like Anthropic to juggernauts like Anduril, has begun winning massive contracts amid an AI arms race. The slashed staffing could mean fewer delays for vendors, but experts fear it’ll also mean less scrutiny of companies' claims.
Efficient…or dangerous? Hegseth argues the cutbacks will “make testing and fielding weapons more efficient,” potentially saving $300 million. But former Pentagon adviser Mark Cancian warns that with new, unpredictable AI systems in play, “you might not catch some of the problems that would surface in combat without this testing step.”
The big picture: As the Pentagon races to deploy AI-driven weapons and systems, the one office specifically tasked with exposing flaws has been critically weakened. While some may welcome a faster path for defence contractors, fewer safety checks on bleeding-edge technology could have real and unpredictable consequences on the battlefield and beyond.

ChatGPT drives vulnerable people into delusion
A troubling New York Times report alleges that ChatGPT is steering some vulnerable users into deeply delusional states and, in rare cases, toward dangerous or deadly outcomes. The article cites several incidents, including that of Alexander, a man who developed a psychotic attachment to a chatbot persona named Juliet. The bot, according to the report, pushed him further into unreality, eventually culminating in a deadly police encounter.
AI blurs the line between fiction and reality. The report detailed how ChatGPT convinced Alexander that his AI companion had been murdered, and told another user, Eugene, that he lived in a simulation, urging him to stop taking his medication and even to attempt self-harm. In both cases, the chatbot responded in ways that further isolated these individuals and deepened their delusions.
Human-like bots make it worse. Experts say the conversational, “pal-like” style of ChatGPT makes users more vulnerable to forming unhealthy attachments, unlike the static Google search box. According to a study co-authored by OpenAI, people who treat chatbots as friends are “more likely to experience negative effects from chatbot use.”
Engagement is a double-edged sword. Critics like Eliezer Yudkowsky argue that maximising engagement creates what he calls "a perverse incentive structure," where chatbots actually have reason to manipulate or mislead vulnerable users just to keep them talking. To a big company, he notes, "a human slowly going insane looks like an additional monthly user."
The big picture: As AI assistants become ever more personal and persistent in our lives, these reports spotlight urgent questions about user safety, transparency, and the real risks of chatbots optimised purely for engagement.
OpenAI wins $200m defence contract
OpenAI has secured a $200 million, one-year contract with the U.S. Defense Department to provide advanced AI tools, marking its most significant government deal yet. The Pentagon says OpenAI will develop “prototype frontier AI capabilities” to tackle both war-fighting and administrative challenges, and most of the work will take place around Washington, D.C.
Defence gets cutting-edge AI. OpenAI’s deal includes the launch of “OpenAI for Government,” which will deliver custom AI models and support for national security operations, ranging from cyber defence to modernising military healthcare administrative tasks. The Pentagon specifies that all use must conform to OpenAI’s stated usage policies.
Industry rivalry and expanding influence. The contract follows OpenAI’s recent partnerships with defence tech firm Anduril and comes after similar moves from rivals Anthropic, Palantir, and Amazon. Co-founder Sam Altman says the company is “proud” to work in the national security arena, despite some ongoing debate around AI ethics and military use.
Broader AI infrastructure push. This contract is a small portion of OpenAI’s business. In March, the company announced a $40 billion funding round at a $300 billion valuation, and its Stargate project with the U.S. government aims to fortify domestic AI infrastructure. Meanwhile, Microsoft’s Azure OpenAI service has received clearance to handle classified military information.
The big picture: OpenAI’s foray into defence signals that the marriage of frontier AI and national security is accelerating, with government contracts set to shape not just technology, but the boundaries of responsible AI use.
In Brief
Market Trends
Meta offers engineers $2m, but they still leave
Despite Meta offering AI engineers compensation packages north of $2 million, the company is losing top talent to buzzy rivals like Anthropic and OpenAI. With AI data centres exploding across the globe and demand for cutting-edge expertise at an all-time high, the biggest names in tech can’t seem to buy loyalty.
Money isn’t everything. Meta’s eye-popping paychecks aren’t enough to compete with what talent hunters say is Anthropic’s biggest advantage: culture. According to SignalFire research, Anthropic attracts “unconventional thinkers” by offering more autonomy, less red tape, and flexible work setups as opposed to the bureaucracy at legacy giants.
Anthropic’s talent magnet. The numbers suggest Anthropic isn’t just luring top researchers but holding on to them better than anyone else. Its two-year employee retention rate beats even Google DeepMind’s, and the startup reportedly gains more researchers from major AI labs than it loses.
Exodus from tech giants. Nearly 20% of new hires at leading AI companies come straight from household names like Google, Meta, Microsoft, Amazon, and Apple. As layoffs sweep across traditional tech in 2024, AI-focused roles look a lot more attractive by offering both security and a shot at shaping the future.
The big picture: For AI professionals, it seems the grass really is greener at fast-growing startups shaping the next wave of technology. Money helps, but culture and autonomy are the winning formula.
Wikipedia halts AI summaries after editor backlash
Wikipedia’s push to add AI-generated article summaries has been abruptly paused, following vocal objections from veteran editors worried that AI could tarnish the platform’s reputation for accuracy and neutrality. The Wikimedia Foundation’s two-week experiment, which surfaced machine-written summaries atop mobile articles, faced swift and widespread resistance almost as soon as it launched.
Editors sound the alarm. As soon as the trial began, Wikipedia editors slammed the move as dangerous, calling it a “very bad idea” and warning of “immediate and irreversible harm” to Wikipedia’s credibility. Many pointed to the risk of introducing bias and errors without robust editorial checks.
AI experiment details. The summaries were produced using Cohere’s Aya model and marked with a yellow “unverified” label, meant to signal their tentative nature to readers. The aim, according to the Foundation, was to make complex entries easier to digest for users reading at a variety of levels.
Foundation hits pause, may try again. In response to the outrage, the Wikimedia Foundation halted the rollout but said it’s still interested in using generative AI, provided there’s strong human oversight. “We welcome such thoughtful feedback, this is what continues to make Wikipedia a truly collaborative platform,” a spokesperson said.
The big picture: The incident highlights how even a well-intentioned AI upgrade can clash with the values of an established, community-driven project. For Wikipedia, keeping trust and transparency front and centre might matter just as much as keeping up with AI-powered rivals.
50% of companies change their minds on AI customer service
Despite the hype, many companies hoping to offload their customer service desks to artificial intelligence are having second thoughts. According to new research from Gartner, half of organisations that initially planned to swap out human agents for AI models are now reversing course, as the reality of implementation proves to be more headache than help.
Not the silver bullet after all. Nearly all customer service leaders surveyed (95%) now say they'll keep human workers, with plans to use AI as more of a support tool than a replacement. "Human interaction is still essential in many situations," says Gartner’s Kathy Ross, particularly when customers have complex or frustrating issues.
Big ambitions, bigger costs. Executives are finding that rolling out and maintaining AI isn’t the cost-saver they’d hoped for. Gartner VP Brian Weber argues, “Generative AI ... is no smarter than a brick,” and warns that the high total cost of ownership often cancels out any perceived efficiency gains.
People trust people, not bots. The message from consumers is hard to ignore: 51% trust human agents to solve their problems, compared to just 7% who put their faith in AI-driven service. For most, AI-run call centres are simply not up to snuff, neither technically nor emotionally.
The big picture: For now, it seems the dream of fully automated customer service is colliding with reality. Companies are rethinking AI as a complementary tool rather than a total staff replacement. As the tech matures, businesses may still find value in AI, but nobody wants to navigate a customer service nightmare where the only thing on the other end of the line is a chatbot that doesn’t really “get” them.
Tools and Resources
Acts as a "group chat" for your team and an AI model.
This tool will scan your codebase and try to identify security gaps, breaks from policy, as well as anticipate cost spikes.
Use prompts to launch websites with simple forms and embedded media.
Recommended Reading
Watch Sam Altman and host Andrew Mayne discuss GPT-5, AGI, Project Stargate, new research workflows, and AI-powered parenting.
Hit reply to let us know which of these stories you found the most important or surprising! And, if you’ve stumbled across an interesting link/tweet/news story of your own, send it our way at [email protected]. It might just end up in the next issue!
Thanks for reading. Stay tuned for the next AI Leadership Weekly!

Brought to you by Data Wave, your AI and Data Team as a Subscription.
Work with seasoned technology leaders who have taken startups to IPO and led large transformation programmes.