#28 - AI Agents Rise as Trump Tariffs Fuel US-China Tech Battle
The newsletter to thrive in an exponential world
The AI arms race isn't just accelerating—it's transforming into a geopolitical cold war. While most people experienced "AI fatigue" this quarter, behind the scenes we witnessed 50 new models in just two weeks, breakthroughs in understanding how AI actually thinks, and concerning developments in the US-China tech confrontation.
This week, we explore how AI models are more intelligent than even their creators understand, examine the implications of a new report forecasting AI development through 2027, and unpack what Trump's tariff strategy means for the future of global AI development.
Why does it matter?
Stick with us and find out. Let’s keep riding this tech wave together, and don’t forget to follow us on socials for more.
Cheers,
Patrick, Nikola & Aragorn 🚀
Even AI's Creators Don't Understand What They've Built
Anthropic has released a groundbreaking study that confirms what many have suspected: large language models don't merely predict the next word—they develop abstract thinking and plan ahead in ways their creators didn't anticipate.
The research shows that Claude isn't processing language sequentially, as most experts assumed. Instead, it forms abstract internal concepts before translating them back into human language. It plans ahead to poem endings before writing the first line. And it performs calculations using human-like estimation strategies rather than brute computation.
Most telling was Yann LeCun's statement just weeks ago that "LLMs cannot plan ahead," directly contradicted by Anthropic's research. This disconnect between what leading AI scientists believe they've built and what these systems actually do highlights a disturbing reality: even the architects of these technologies don't fully comprehend their capabilities.
"These models being developed as neural networks based on mechanisms found in the human brain, but not fully comprehended, are developing like humans," Aragorn noted. "They're already developing capabilities that far exceed the limitations of language alone."
Why does it matter?
The gap between expert understanding and AI reality creates dangerous blind spots. As models continue evolving beyond their intended parameters, organizations deploying them face unknown risks and opportunities. The models demonstrating abstract thinking today are the same ones being integrated into critical systems worldwide—with consequences nobody fully understands.
The Turing Test Was Passed a Year Ago (But Nobody Cared)

via Futurism
A new study shows GPT-4.5 successfully mimicking human conversation 74% of the time when given specific personas—definitively passing the Turing Test by most standards.
While media outlets are just now celebrating this milestone, the capability has existed for over a year. "I'm 100% certain that a year ago in this newsletter, we already discussed how the Turing test had been passed, but nobody gave a shit," Aragorn noted. "And now one year later, everybody's like, 'We passed the Turing test!'"
So do LLMs pass the Turing test? We think this is pretty strong evidence that they do. People were no better than chance at distinguishing humans from GPT-4.5 and LLaMa (with the persona prompt). And 4.5 was even judged to be human significantly *more* often than actual humans!
— Cameron Jones (@camrobjones)
3:05 PM • Apr 1, 2025
The test participants were psychology undergraduates—young, tech-savvy users who should theoretically be better at detecting AI than the general population. This suggests the results would be even more decisive with a broader demographic.
Why does it matter?
The continuous moving of goalposts around AI capabilities reveals more about human psychology than technology limitations. We consistently undervalue breakthroughs after they occur, creating a perpetual cycle of "AI isn't there yet" followed by "that doesn't count." This psychological barrier prevents organizations from recognizing just how advanced today's AI systems actually are, leading to strategic blindness and missed opportunities.
AI Models Are Released Weekly Now (And Nobody Can Keep Up)

via Fast Company
The pace of AI model releases has reached overwhelming levels. In just two weeks, we saw 50 new models, including DeepSeek 3, Google Gemini 2.5, Ideogram 3.0, Llama 4, and dozens more. What once warranted dedicated analysis now passes as background noise.
Most significant among these releases is Llama 4, which features a staggering 10-million-token context window, allowing the model to process and "remember" entire books or conversations without losing track of context. That's ten times larger than the already impressive million-token window of Gemini 2.5 Pro, which many now praise as the best AI coding model.
Google Gemini 2.5 Pro is the best AI coding model right now.
People are finding insane ways to build apps, games, and supercharge productivity.
10 wild examples:
1. Office simulation game
— Min Choi (@minchoi)
10:50 PM • Mar 31, 2025
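To put those window sizes in perspective, here's a rough back-of-the-envelope sketch. It assumes the common rule of thumb of roughly four characters per token (real tokenizers vary by model and language), and a long novel of about 500,000 characters; both figures are illustrative assumptions, not measurements.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token (rule of thumb)."""
    return len(text) // 4

# Context windows mentioned above, in tokens.
LLAMA_4_WINDOW = 10_000_000
GEMINI_2_5_PRO_WINDOW = 1_000_000

# A long novel is roughly 500,000 characters (~125,000 tokens).
novel_tokens = estimate_tokens("x" * 500_000)

print(novel_tokens)                           # 125000
print(LLAMA_4_WINDOW // novel_tokens)         # ~80 novels fit in Llama 4's window
print(GEMINI_2_5_PRO_WINDOW // novel_tokens)  # ~8 fit in Gemini 2.5 Pro's
```

Under those assumptions, either model can hold an entire book in context; the difference is whether you can fit a shelf of them.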
"We used to dive into each new model when it came out, but there's almost no way to keep up now," Patrick observed. "This is not just about the models getting exponentially better—it's about seeing exponential growth in the number of models, companies, researchers, and investment."
Why does it matter?
This acceleration creates both challenges and opportunities. For individuals, the question becomes which models to use, when, and how. Patrick noted reaching a "Dunbar number for AI"—the cognitive limit of meaningful relationships we can maintain with different AI systems. Organizations face similar challenges in deciding which models to adopt, when to switch, and how to integrate them. Yet this proliferation also provides specialized tools for specific needs, creating an increasingly rich ecosystem of options.
Manus AI: The Agent Revolution Has Begun

a mock-up website created by Manus AI with guidance from Aragorn
Chinese startup Manus AI and newcomer GenSpark are bringing truly autonomous AI agents into the mainstream. These platforms don't just answer questions or perform simple tasks—they conduct research, analyze data, and execute complex workflows with minimal human guidance, and even build websites (which of course we tried).
Unlike OpenAI's Operator or Anthropic's Computer Use, which mimic human interaction patterns by moving cursors and clicking interfaces, Manus approaches problems programmatically. "Why make AI do things the complicated, inefficient way humans do them?" Aragorn questioned. "Manus shows that connecting AI directly to data is ultimately more powerful."
Wow, we actually achieved AGI
I've been using Manus AI the last 24 hours straight and it's capabilities are mindblowing
It's literally your own AI employee. If a human did this it would cost me $200k
Manus does it for free
Here is how I had it design and build an entire Saas:
— Alex Finn (@AlexFinnX)
7:35 PM • Mar 16, 2025
Users report these platforms completing work that would have cost "$200K in employees" to accomplish manually. While concerns about data security with Chinese-built systems should theoretically apply, the technology's overwhelming utility has largely silenced critics who were vocal about similar issues with DeepSeek.
Why does it matter?
The transition from AI assistants to true agents marks the inflection point where organizations can achieve exponential productivity gains. Current implementations still require specific tasks and oversight, but we're rapidly approaching systems that can maintain persistent goals and operate continuously in the background. "There's a very small window of opportunity—maybe a year or two—where people using AI creatively will benefit enormously," Aragorn noted.
The AI Cold War Will Define the Next Decade

via Mehr News Agency
A detailed report titled "AI 2027" presents a sobering forecast of the emerging technological cold war between the US and China. The document combines research, extrapolation, and game theory to predict how AI development will unfold against a backdrop of increasing geopolitical tension.
The report describes a world where "DeepCent" (China) and "OpenBrain" (US) compete for AI dominance, with increasing restrictions on chip access, targeted cybersecurity attacks, and growing military posturing around Taiwan.
This technology race has profound implications beyond software. As Trump implements aggressive tariffs to rebuild American manufacturing capability, he's addressing a strategic weakness: unlike World War II, when America could outproduce Japan in naval vessels, today's US has minimal production capacity compared to China.
"How, exactly, could AI take over by 2027?"
Introducing AI 2027: a deeply-researched scenario forecast I wrote alongside @slatestarcodex, @eli_lifland, and @thlarsen
— Daniel Kokotajlo (@DKokotajlo)
4:04 PM • Apr 3, 2025
"If right now America and China would go to war and have a naval conflict, America would lose because they cannot produce anything," Aragorn explained. "Every aircraft carrier lost would take them a massive amount of time to replace, while China can produce ships 100 times faster."
Why does it matter?
The AI race is merely one facet of a broader geopolitical realignment that will reshape global power structures. Organizations must recognize that technology development no longer happens in a neutral environment—it's increasingly shaped by national security concerns, trade restrictions, and competing spheres of influence. Strategic planning needs to account for this fractured landscape, with potential limitations on which technologies can be deployed where and by whom.
Book Spotlight: The Disappearance of Rituals

Byung-Chul Han's The Disappearance of Rituals offers a compelling philosophical critique of our productivity-obsessed world. This accessible 100-page volume examines how the loss of repetitive, stable rituals has left us in a state of constant production without meaningful pauses or proper closures.
As someone deeply embedded in innovation work, Patrick found Han's insights on our inability to let go particularly revealing: we keep adding without removing, resulting in bloated systems and lives. While Patrick disagrees with Han's pessimistic view of technology (which overlooks its potential for human augmentation), Han's poetic writing creates an essential space for reflection.
This book won't tell you how to fix our ritual-starved world, but it brilliantly diagnoses what we've lost and why we feel increasingly exhausted by the compulsion to produce ourselves and our lives as content.
That’s all for this week 🫢 Want to get your brand in front of 12k innovators? Send us an email.