China sends Trump a Message while 800 Million Use ChatGPT | 🏄‍♀️ Ep. 37

The newsletter to thrive in an exponential world

Welcome back to the Innovation Network Newsletter

This week's episode was a reality check. While we've been busy discussing AI ethics and regulations in Europe, China just announced a plan to hit 90% AI adoption by 2030. Meanwhile, ChatGPT crossed 800 million users in three years, something that took LinkedIn over a decade to achieve.

We're covering a lot of ground today: the shadow AI systems operating in your company right now, the trade war escalating between the US and China, why AI's progress is more exponential than you think, and how Europe's risk-averse approach is turning into an innovation death spiral.

Plus, we'll show you how we generated four complete songs with AI in under five minutes during the live recording. Because why not?

Let's dive in.

Cheers,
Patrick, Nikola & Aragorn 🚀

Registration For The Next Meetup Is Open

We're hosting our next Innovation Network meetup on December 11th in Amsterdam at AIM. This isn't your typical AI conference where someone teaches you how to use ChatGPT. We're going bigger.

What to expect:

  • 3:00 PM - 8:00 PM (two hours longer than last time)

  • Hands-on co-creation sessions with experts teaching you the newest frontier AI tools

  • Discussion roundtables covering everything from practical AI implementation to ethics, philosophy, and the future of work

  • Prompting competition (last time people created complete presentations with virtual characters and original songs in 25 minutes. We're doubling down on this)

  • Diverse crowd: Everyone from hardcore developers to investors, business transformation leaders to video creators, all united by a "surfing the waves" mindset

Shadow AI: Your Company's Unofficial Operating System

via Bernard Marr

Patrick just came back from a CFO dinner in Belgium with executives from major corporations. The surprise? These companies are still debating whether to give employees Microsoft Copilot access. The reality? Their employees are already using ChatGPT anyway, uploading company data to external systems because they need the tools to do their jobs effectively.

Companies are spending months calculating the ROI of $20/month per employee while ignoring the real cost: losing their best people because they're not providing the tools needed to augment their performance. When your top performer (who costs $200,000/year to replace) leaves because they're working with outdated tools, that annual AI budget suddenly looks like a bargain.
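The arithmetic is simple enough to sketch. A back-of-the-envelope comparison using the newsletter's figures (roughly 20 per month per license, roughly 200,000 to replace a top performer); the 500-person headcount is a made-up example:

```python
# License cost vs. attrition cost, using the newsletter's rough figures.
# The 500-person headcount is a hypothetical example, not from the source.

def annual_license_cost(employees: int, monthly_fee: float = 20.0) -> float:
    """Total yearly spend on AI licenses for the whole team."""
    return employees * monthly_fee * 12

def breakeven_departures(employees: int,
                         replacement_cost: float = 200_000.0) -> float:
    """How many top-performer departures equal the entire license budget."""
    return annual_license_cost(employees) / replacement_cost

cost = annual_license_cost(500)          # 500 * 20 * 12 = 120,000 per year
departures = breakeven_departures(500)   # 120,000 / 200,000 = 0.6

print(f"Annual license budget: {cost:,.0f}")
print(f"Departures that wipe out the 'savings': {departures:.1f}")
```

In other words: for a 500-person company, losing even one top performer costs more than the entire annual AI license budget.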

If your best people don’t get AI to augment their work, they will leave the organization. That's gonna cost you way more money than being cheap on AI licenses.

Patrick Willer

Why does it matter? 
Shadow IT has evolved into Shadow AI, and it's happening at scale. With 800 million weekly ChatGPT users and 600 million daily AI users globally, the genie is out of the bottle. Your employees are already using AI! The only question is whether they're doing it safely within your infrastructure or recklessly on external platforms. The companies that recognize this and build comprehensive AI strategies now will retain talent and gain a competitive advantage. Those that don't will bleed talent and fall behind while still building spreadsheets about license costs.

The Fastest Adoption in History

via OpenAI: World Bank

At OpenAI's Dev Day, they dropped a number that should make everyone pause: 800 million weekly users. To put that in perspective, it took LinkedIn over a decade to reach that milestone. ChatGPT did it in roughly three years. And that's just ChatGPT. It doesn't count Claude, Gemini, DeepSeek, and dozens of other AI platforms.

The generational divide is interesting but misleading. While 76% of Gen Z has used AI versus 50% of Boomers, millennials actually have higher daily usage rates. Why? They're in their prime working years, using AI to advance their careers. This is work-driven adoption at unprecedented scale.

One year ago people were a little bit silent and secretly using AI. It felt like cheating. We used to ask each other “did you do it yourself or did you write it with AI?” We're beyond that. Everybody is using AI now.

Patrick Willer

Why does it matter?
When 1.8 billion people globally have access to AI assistants, and 600 million use them daily, we're watching the fastest technology adoption in human history. Faster than smartphones. Faster than the internet. Faster than electricity. The business models built on the old internet (advertising, SEO, traffic) are about to collapse. When people stop searching Google and instead ask AI, who sees the ads? When AI reads the top 10 results instead of 100, what happens to discoverability? We're not ready for how disruptive this will be to digital business models that have dominated for 20 years.

The AI Cold War With Rare Minerals

via TRENDS Research & Advisory

This weekend, crypto and stock markets took a hit because China escalated the AI Cold War. They imposed new restrictions on rare earth metal exports, and China controls 70% of global supply. These materials are essential for chip manufacturing, which means China can literally starve American tech companies if they want to.

Trump's response? Threaten higher tariffs. Which is like bringing a knife to a gunfight. China has the manufacturing infrastructure, the internal market, and now they're controlling critical resources. Meanwhile, Europe isn't even in the game. We're debating airport expansion in Schiphol while China and the US fight for technological dominance.

China is now like, look, okay, you can put tariffs on stuff. But, you know, we still control the supply you need to build the chips for your tech companies. So we can just starve your tech companies to death.

Aragorn Meulendijks

Why does it matter?
The AI revolution requires massive computational power, which requires chips, which requires rare earth metals. China's move shows they understand the strategic importance of controlling supply chains for critical resources. The US is trying to strong-arm its way through with tariffs and restrictions, but that only works if you have leverage.

Europe's complete absence from this conversation is alarming. We're not players; we're spectators. And spectators don't write the rules for the future. The decisions being made in Beijing and Washington right now will determine whether European companies can even access the technology they need to compete in five years.

China's AI Plus Plan: 90% Adoption by 2030

via Brics competition

Speaking of China, they just unveiled their "AI Plus" national strategy. The goal? Achieve 90% AI adoption across their entire population by 2030. That's five years away. They're embedding AI into every sector of society, economy, and government. They're making it a priority in education. This is nation-level commitment to technological transformation.

Will they hit 90%? Probably not. Will they fail spectacularly in some areas? Absolutely. But that's exactly the point. They're adopting the Elon Musk strategy of failing fast, learning, and iterating.

You better just fail big and hard and then pivot and learn from it. That's the way.

Patrick Willer

Why does it matter?
Europe's precautionary principle has served us well in some areas. Our food safety standards are better than America's, our citizens aren't dying from preventable gun violence at scale. But that same principle is killing us in technology. We assess risk, find that risk exists, and decide not to try. China assesses risk, decides the bigger risk is falling behind, and launches anyway. In five years, China will have a population that's fluent in AI usage, companies built on AI infrastructure, and a government that understands how to leverage these tools. Europe will have... well-intentioned regulations and a widening gap between our aspirations and our capabilities. You can't regulate your way to innovation leadership.

AI's Task Duration is Doubling Every Seven Months

from METR.org

People keep saying AI progress is slowing down because IQ scores aren't increasing exponentially. They're looking at the wrong metrics. Yes, AI went from an average IQ of 65 to 136 in two years—impressive, but not exponential. But look at task duration: AI can now work on complex tasks for an hour straight, and that capability is doubling every seven months.

The latest Anthropic model worked for 30 hours straight to completely re-code an app like Slack.

Patrick Willer

Think about what that means. Seven months ago, AI could work for 30 minutes. Now it's an hour. In seven more months, it'll be two hours. Then four. Then eight. Then a full workday. The length of a task AI can do is doubling every seven months. That's more rapid than Moore's law.
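Under the stated trend (doubling every seven months from roughly one hour today), the projection is a one-line formula. A quick sketch, assuming the trend simply holds:

```python
# Projecting the task-horizon trend: duration doubles every 7 months.
# Starting point from the text: roughly a 1-hour task horizon today.

def task_horizon_hours(months_from_now: float,
                       current_hours: float = 1.0,
                       doubling_months: float = 7.0) -> float:
    """Task duration AI can sustain, assuming the doubling trend holds."""
    return current_hours * 2 ** (months_from_now / doubling_months)

for months in (0, 7, 14, 21, 28):
    print(f"{months:2d} months from now: ~{task_horizon_hours(months):.0f} h")
# 0 -> 1 h, 7 -> 2 h, 14 -> 4 h, 21 -> 8 h (a full workday), 28 -> 16 h
```

Naive extrapolation, of course, but it shows why "a full workday" is only about two years out on this curve.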

Why does it matter?
When people said "AI can't really do complex work," they meant AI couldn't sustain attention on difficult problems long enough to solve them. That limitation is evaporating at exponential speed. Within two years, we'll have AI that can work full 8-hour days on a single project. Within three years, it might work continuously for days. Combined with IQ levels already exceeding most humans and reasoning capabilities that now match academic-level mathematics, we're approaching a threshold where AI can genuinely replace knowledge workers for sustained, complex projects.

Google DeepMind's Code Mender

Google DeepMind just released Code Mender, an AI system that autonomously scans your entire codebase, identifies security vulnerabilities, and fixes them. No human oversight required. This is high-stakes work: security breaches cost companies millions and can destroy reputations overnight.

The whole idea of this agentic kind of platform with multiple AI models is that it continuously goes over your code base for basically your whole infrastructure, your whole software stack. And whenever there's a new kind of breach or when it finds any potential risks, it fixes it.

Aragorn Meulendijks

The fact that we now trust AI to do this autonomously shows we've moved past the "AI hallucinates and can't be trusted" phase. We're building systems with enough redundancy, verification, and self-checking that they can handle critical tasks where mistakes have serious consequences.

Why does it matter?
Last year's problems are not this year's problems. While skeptics are still arguing "but AI makes mistakes," the industry has already built solutions that account for those limitations. Code Mender represents a new category of AI deployment: autonomous systems handling high-stakes tasks that humans consistently fail at. How many massive data breaches happened because a human forgot to patch a known vulnerability? How many companies lost millions because security teams couldn't keep up with the volume of potential exploits? AI doesn't get tired, doesn't take weekends off, and doesn't overlook the boring but critical maintenance work.

The State of AI Report 2025: What You Need to Know

Menlo Ventures just released their 350-page State of AI Report for 2025, and it's packed with insights. A few highlights:

  • 1.8 billion people globally have used AI, with 600 million daily users

  • Open source vs. closed models is now a global battleground between China and the West

  • AI reasoning and planning capabilities are maturing at unprecedented speed

  • Agentic AI systems will be mainstream in daily apps by 2026—not in a decade, next year

  • Prediction: AI will provide a Nobel Prize-level breakthrough in science within the next year

Predictions suggest that agentic AI systems will be mainstream in daily apps by 2026. Not a decade away.

Patrick Willer

Last year's predictions from this report were accurate. These aren't wild guesses. They're informed forecasts from people watching the industry closely.

Why does it matter?
Most people are still forming their understanding of AI based on information that's 12-18 months old. In a field moving this fast, that's ancient history. This report gives you a current snapshot of where we actually are versus where people think we are. The gap is enormous. If your company strategy is based on AI capabilities from 2023, you're already obsolete.

If you think agentic AI is years away, you're wrong. It's next year.

If you think AI-driven scientific breakthroughs are speculative, prepare to be surprised in the next 12 months.

The report is free, the video summary is on YouTube, and ignorance is no longer an excuse.

Europe's Regulation Problem: Safety or Suicide?

via Forbes

At the Deloitte Stakeholder Day, someone argued that EU regulations aren't limiting innovation because "America also has regulations and they're innovating." But that misses the fundamental difference in regulatory philosophy.

America asks: "What's the probability of harm?" If it's 10%, they might take that risk. If it's 50%, they debate it. Their gun laws prove they'll accept 50,000 deaths per year for something they value. Europe asks: "Is there any risk?" If the answer is yes, we often just don't do it. One citizen potentially harmed? Shut it down.

This approach has benefits: our food is safer, our citizens aren't dying from preventable violence at American scales. But in technology, it's killing us. China announced their AI Plus plan. America is pouring billions into AI development. Europe is getting GPT-NL with funding that's a drop in the ocean compared to what China and the US are investing.

Why does it matter?
For the first time in 300 years, Western Europe faces a genuine disadvantage in technological development. We're not leading; we're not even keeping up. We're falling behind while congratulating ourselves on having good intentions. The AI Act, Digital Markets Act, and Digital Services Act aren't just slowing innovation, they're making companies actively avoid the European market. Meta won't release AI glasses here. Apple is delaying AI features. Sora's cameo feature isn't available. These aren't minor inconveniences; they're strategic setbacks. If European citizens and companies can't access the tools defining the next decade, how will we compete?

Spotify Removes 75 Million AI-Generated Tracks

via CyberNews

Here's a wild stat: Spotify removed 75 million AI-generated tracks this year alone. That's not a typo. Seventy-five million. Tools like Suno V5 now let anyone create professional-quality music in minutes, with full editing capabilities to fine-tune every detail.

During our live recording, we demonstrated this by creating four complete songs in under five minutes about our upcoming Innovation Network meetup. English versions, Dutch versions, hip-hop, rap—done. And the quality? Honestly impressive.

Meanwhile, Spotify just launched AI mixing tools that let anyone become a DJ, automatically blending tracks with manual control over breaks, bass, transitions. The democratization of music creation and curation is complete.

Why does it matter?
The music industry fought Napster for years. They can't fight this. AI music generation is legal, accessible, and improving exponentially. Audio editing capabilities typically run 6-12 months ahead of video, which means the granular control we now have over AI music (editing individual tracks, adjusting pronunciation, controlling every element) will come to video within a year. Directors will build perfect scenes by instructing AI on camera movements, actor positioning, and audio mixing, all with precision control. We're watching the complete democratization of creative production, whether traditional gatekeepers like it or not.

Newsletter full video recording

That’s all for this week 🫢 

Want to get inspired on a daily basis? Connect with us on LinkedIn.

Want to get your brand in front of 12k innovators? Send us an email.

How did we do this week?

Login or Subscribe to participate in polls.