#28 - AI Agents Rise as Trump Tariffs Fuel US-China Tech Battle

The newsletter to thrive in an exponential world

The AI arms race isn't just accelerating—it's transforming into a geopolitical cold war. While most people experienced "AI fatigue" this quarter, behind the scenes we witnessed 50 new models in just two weeks, breakthroughs in understanding how AI actually thinks, and concerning developments in the US-China tech confrontation.

This week, we explore how AI models are more intelligent than even their creators understand, examine the implications of a new report forecasting AI development through 2027, and unpack what Trump's tariff strategy means for the future of global AI development.

Why does it matter? 
Stick with us and find out. Let’s keep riding this tech wave together, and don’t forget to follow us on socials for more.

Cheers,
Patrick, Nikola & Aragorn 🚀

Even AI's Creators Don't Understand What They've Built

Anthropic has released a groundbreaking study that confirms what many have suspected: large language models don't merely predict the next word—they develop abstract thinking and plan ahead in ways their creators didn't anticipate.

The research shows that Claude isn't processing language sequentially, as most experts assumed. Instead, it creates abstract internal concepts before translating them back into human language. It plans ahead to a poem's ending before writing the first line. And it performs calculations using human-like estimation strategies rather than brute computation.

Most telling was Yann LeCun's statement just weeks ago that "LLMs cannot plan ahead," directly contradicted by Anthropic's findings. This disconnect between what leading AI scientists believe they've built and what these systems actually do highlights a disturbing reality: even the architects of these technologies don't fully comprehend their capabilities.

"These models being developed as neural networks based on mechanisms found in the human brain, but not fully comprehended, are developing like humans," Aragorn noted. "They're already developing capabilities that far exceed the limitations of language alone."

Why does it matter?
The gap between expert understanding and AI reality creates dangerous blind spots. As models continue evolving beyond their intended parameters, organizations deploying them face unknown risks and opportunities. The models demonstrating abstract thinking today are the same ones being integrated into critical systems worldwide—with consequences nobody fully understands.

The Turing Test Was Passed a Year Ago (But Nobody Cared)

via Futurism

A new study shows GPT-4.5 successfully mimicking human conversation 74% of the time when given specific personas, definitively passing the Turing Test by most standards.

While media outlets are just now celebrating this milestone, the capability has existed for over a year. "I'm 100% certain that a year ago in this newsletter, we already discussed how the Turing test had been passed, but nobody gave a shit," Aragorn noted. "And now one year later, everybody's like, 'We passed the Turing test!'"

The test participants were psychology undergraduates—young, tech-savvy users who should theoretically be better at detecting AI than the general population. This suggests the results would be even more decisive with a broader demographic.

Why does it matter?
The continuous moving of goalposts around AI capabilities reveals more about human psychology than technology limitations. We consistently undervalue breakthroughs after they occur, creating a perpetual cycle of "AI isn't there yet" followed by "that doesn't count." This psychological barrier prevents organizations from recognizing just how advanced today's AI systems actually are, leading to strategic blindness and missed opportunities.

AI Models Are Released Weekly Now (And Nobody Can Keep Up)

via Fast Company

The pace of AI model releases has reached overwhelming levels. In just two weeks, we saw 50 new models, including DeepSeek 3, Google Gemini 2.5, Ideogram 3.0, Llama 4, and dozens more. What once warranted dedicated analysis now passes as background noise.

Most significant among these releases is Llama 4, which features a staggering 10-million-token context window, allowing the model to process and "remember" entire books or conversations without losing track of context. That's ten times larger than the already impressive million-token window of Gemini 2.5 Pro, a model itself widely praised as the current best for coding.
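To put those context windows in perspective, here's a rough back-of-envelope sketch. The conversion factors are rules of thumb we're assuming for illustration (roughly 0.75 English words per token, roughly 90,000 words per novel), not official figures from Meta or Google:

```python
# Rough estimate of how much text fits in a given context window.
# Assumptions (rules of thumb, not official figures):
#   ~0.75 English words per token, ~90,000 words per typical novel.
WORDS_PER_TOKEN = 0.75
WORDS_PER_NOVEL = 90_000

def novels_in_context(context_tokens: int) -> float:
    """Approximate number of novel-length books that fit in a context window."""
    return context_tokens * WORDS_PER_TOKEN / WORDS_PER_NOVEL

# Llama 4's reported 10M-token window vs Gemini 2.5 Pro's 1M-token window.
print(f"Llama 4 (10M tokens):      ~{novels_in_context(10_000_000):.0f} novels")
print(f"Gemini 2.5 Pro (1M tokens): ~{novels_in_context(1_000_000):.0f} novels")
```

Under those assumptions, 10 million tokens is on the order of 80 novels held in working memory at once, which is why "entire books or conversations" is not an exaggeration.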

"We used to dive into each new model when it came out, but there's almost no way to keep up now," Patrick observed. "This is not just about the models getting exponentially better—it's about seeing exponential growth in the number of models, companies, researchers, and investment."

Why does it matter?
This acceleration creates both challenges and opportunities. For individuals, the question becomes which models to use, when, and how. Patrick noted reaching a "Dunbar number for AI"—the cognitive limit of meaningful relationships we can maintain with different AI systems. Organizations face similar challenges in deciding which models to adopt, when to switch, and how to integrate them. Yet this proliferation also provides specialized tools for specific needs, creating an increasingly rich ecosystem of options.

Manus AI: The Agent Revolution Has Begun

a mock-up website created by Manus AI under Aragorn's guidance

Chinese startup Manus AI and newcomer GenSpark are bringing truly autonomous AI agents into the mainstream. These platforms don't just answer questions or perform simple tasks—they conduct research, analyze data, and execute complex workflows with minimal human guidance, and even build websites (which of course we tried).

Unlike OpenAI's Operator or Anthropic's Computer Use, which mimic human interaction patterns by moving cursors and clicking interfaces, Manus approaches problems programmatically. "Why make AI do things the complicated, inefficient way humans do them?" Aragorn questioned. "Manus shows that connecting AI directly to data is ultimately more powerful."

Users report these platforms completing work that would have cost "$200K in employees" to accomplish manually. While concerns about data security with Chinese-built systems should theoretically apply, the technology's overwhelming utility has largely silenced critics who were vocal about similar issues with DeepSeek.

Why does it matter?
The transition from AI assistants to true agents marks the inflection point where organizations can achieve exponential productivity gains. Current implementations still require specific tasks and oversight, but we're rapidly approaching systems that can maintain persistent goals and operate continuously in the background. "There's a very small window of opportunity—maybe a year or two—where people using AI creatively will benefit enormously," Aragorn noted.

The AI Cold War Will Define the Next Decade

via Mehr News Agency

A detailed report titled "AI 2027" presents a sobering forecast of the emerging technological cold war between the US and China. The document combines research, extrapolation, and game theory to predict how AI development will unfold through 2027 against a backdrop of increasing geopolitical tension.

The report describes a world where the fictional labs "DeepCent" (China) and "OpenBrain" (US) compete for AI dominance, with increasing restrictions on chip access, targeted cybersecurity attacks, and growing military posturing around Taiwan.

This technology race has profound implications beyond software. As Trump implements aggressive tariffs to rebuild American manufacturing capability, he's addressing a strategic weakness: unlike in World War II, when America could outproduce Japan in naval vessels, today's US has minimal production capacity compared to China.

"If right now America and China would go to war and have a naval conflict, America would lose because they cannot produce anything," Aragorn explained. "Every aircraft carrier lost would take them a massive amount of time to replace, while China can produce ships 100 times faster."

Why does it matter?
The AI race is merely one facet of a broader geopolitical realignment that will reshape global power structures. Organizations must recognize that technology development no longer happens in a neutral environment—it's increasingly shaped by national security concerns, trade restrictions, and competing spheres of influence. Strategic planning needs to account for this fractured landscape, with potential limitations on which technologies can be deployed where and by whom.

Book Spotlight: The Disappearance of Rituals

Byung-Chul Han's The Disappearance of Rituals offers a compelling philosophical critique of our productivity-obsessed world. This accessible 100-page volume examines how the loss of repetitive, stable rituals has left us in a state of constant production without meaningful pauses or proper closures. 

As someone deeply embedded in innovation work, Patrick found Han's insights on our inability to let go particularly revealing: we keep adding without removing, resulting in bloated systems and bloated lives. While Patrick disagrees with Han's pessimistic view of technology (which overlooks its potential for human augmentation), the poetic writing creates an essential space for reflection.

This book won't tell you how to fix our ritual-starved world, but it brilliantly diagnoses what we've lost and why we feel increasingly exhausted by the compulsion to produce ourselves and our lives as content.

Newsletter full video recording

That’s all for this week 🫢 

Want to get inspired on a daily basis? Connect with us on LinkedIn.

Want to get your brand in front of 12k innovators? Send us an email.

How did we do this week?
