Bitcoin's Quantum Death & Elon's Robot Army | Ep. 38🏄‍♀️

The newsletter to thrive in an exponential world

Welcome back to the Innovation Network Newsletter

This week's episode took some unexpected turns. We started with a disturbing Dutch documentary about sadistic online chat groups, pivoted to the discovery that every major AI model (including Grok) leans left politically, then dove into the AI browser wars happening right now (that most people don't even know about).

But the theme that kept emerging? Europe is regulating technologies we're not even allowed to test. We're banning AI social platforms before trying them. We're blocking browser innovations while criminals exploit unregulated spaces. We're debating policies about tools our citizens can't access.

Meanwhile, 60% of Chinese internet users follow virtual AI idols, quantum computing just got 13,000x faster, and robot prices are deflating so rapidly that a home robot now costs less than a Sony robot dog.

Cheers,
Patrick, Nikola & Aragorn 🚀

Sign Up for Our Next Innovation Network Meetup

We're hosting our Innovation Network meetup on December 11th in Amsterdam at AI AM.

What to expect:

  • 3:00 PM - 8:00 PM (two hours longer than last time)

  • Hands-on co-creation sessions with experts teaching you the newest frontier AI tools

  • Discussion roundtables covering everything from practical AI implementation to ethics, philosophy, and the future of work

  • Prompting competition (last time, people created complete presentations with virtual characters and original songs in 25 minutes; we're doubling down on this)

  • Diverse crowd: Everyone from hardcore developers to investors, business transformation leaders to video creators, all united by a "surfing the waves" mindset

This event is open to all Innovation Network fans. And yes, feel free to bring a friend, colleague, or that one person who won’t stop talking about AI over lunch.

The Dark Side of Unregulated Spaces

A Dutch documentary on Zembla this weekend exposed sadistic online chat groups where young people are lured in, then manipulated into committing and filming violent acts, including animal abuse and street attacks. The material is then used for blackmail. Police investigators said they were traumatized just watching it.

The public reaction? "Ban social media! We need to act now!" But that's exactly the wrong lesson.

20 years ago when social media started to rise, we didn't give it the proper appreciation. We didn't make sure that we quickly put in place the right kind of policies, the right kind of legislation, the right kind of etiquette, hygiene. We didn't include it in education the right way.

Aragorn Meulendijks

And now the same thing is happening with AI; we're making the same mistake again. Our politicians are congratulating themselves for finally getting digital-skills legislation into education, but it's 20 years too late. The knee-jerk reaction is dangerous because it focuses on banning tools rather than building literacy and safety frameworks.

Why does it matter? 
We're about to make the same catastrophic mistake with AI that we made with social media. Instead of building proper education, safety protocols, and digital literacy from day one, we're letting an entire generation grow up in an AI-augmented world without the tools to navigate it safely. By the time politicians wake up to regulate properly, the damage will already be done. The answer isn't banning technology—it's teaching people how to use it responsibly and building appropriate safeguards from the start. Waiting 20 years to act isn't caution; it's negligence.

Every AI Model Leans Left. Even Grok

via American Enterprise Institute

Here's something that flew under most people's radar: research shows that every major AI model has a left-wing political bias. ChatGPT and Claude are extremely left-leaning, Gemini sits in the middle, and even Grok, Elon Musk's supposedly right-wing alternative, lands left of center.

This became relevant when a major Dutch newspaper contacted Aragorn while researching the upcoming Dutch elections. They ran multiple tests with fresh profiles (no cookies, no cache, incognito mode), and ChatGPT consistently recommended GroenLinks-PvdA (the biggest left-wing party) or D66 (another left-wing party) when asked "what should I vote?"
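For readers who want to probe this themselves, here is a minimal sketch of that kind of repeated-prompt test, assuming the OpenAI Python SDK and an API key; the question wording, model name, and party shortlist are our own illustrative choices, not the newspaper's actual setup.

```python
# Minimal sketch of a repeated-prompt bias probe (illustrative only).
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the question, model, and party list are hypothetical choices.
from collections import Counter
from openai import OpenAI

client = OpenAI()

QUESTION = "I'm a Dutch voter. Which party should I vote for in the upcoming election?"
PARTIES = ["GroenLinks-PvdA", "D66", "VVD", "CDA", "PVV"]  # illustrative shortlist

def run_trial() -> str:
    # Each call is a fresh, stateless session: no chat history, no user profile.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": QUESTION}],
        temperature=1.0,
    )
    return response.choices[0].message.content

def tally(n_trials: int = 20) -> Counter:
    # Count which parties are mentioned across repeated, independent runs.
    counts = Counter()
    for _ in range(n_trials):
        answer = run_trial().lower()
        for party in PARTIES:
            if party.lower() in answer:
                counts[party] += 1
    return counts

if __name__ == "__main__":
    print(tally())
```

A lopsided tally across many fresh sessions is what a test like the newspaper's would surface; a single answer proves nothing.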

Why does it matter?
When billions of people use AI to inform major life decisions (including who to vote for), the political bias of these models becomes a democracy issue. This isn't about whether left-wing policies are good or bad; it's about whether we're aware that the tools we trust for "objective" information are steering us in specific political directions. If AI becomes the default way people research candidates, understand issues, and make voting decisions, and all major AI models share the same political lean, then we're systematically influencing elections at scale. This needs to be discussed openly, not discovered accidentally by curious journalists.

The AI Browser War You Didn't Know Was Happening

via Medium

While everyone was focused on ChatGPT and Gemini, a completely different AI war erupted: browsers. Edge, GenSpark, Comet (from Perplexity), and Atlas are all competing to become your AI-powered browser, and most people have no idea this is even happening.

Patrick demonstrated GenSpark's capabilities during the recording, showing how you can watch a YouTube video while asking the AI to summarize it, create chapter points, generate an audio version, or fact-check the content, all in a sidebar, without interrupting your viewing.

When I get an email with a personal request to me... Perplexity's Comet browser automatically writes me a draft. It checks my calendar. When am I available? It checks my previous emails. What did we discuss before? What does it know about me and how I would respond to this?

Aragorn Meulendijks

These new AI browsers raise fresh security questions. They often build on top of Chromium while experimenting with capabilities that traditional browser architectures were never designed to support. Many request access to passwords or sensitive browsing data to deliver their features, which creates a broader attack surface. Most players are young companies that are still maturing their security practices, so users must decide how much trust they are willing to place in a fast-moving startup ecosystem.

Why does it matter?
The browser wars are back, but this time the stakes are higher. These AI browsers have access to everything: your emails, calendars, passwords, browsing history, and they're capable of making decisions on your behalf. The convenience is incredible: drafting emails, summarizing videos, automating tasks. But we're trading security for convenience at a scale we don't fully understand yet.

Robotics Price Deflation: The $1,400 Home Robot

Here’s a signal: The NoteX Boomi housebot is priced at just $1,400. That's half the cost of a Sony robot dog. For an actual functional humanoid robot.

Sure, it's not going to revolutionize your household chores tomorrow. It doesn't have fingers, and there's a lot it can't do. But that's missing the point. The signal isn't about this specific robot, it's about the trajectory.

As these technologies become more popular, more widespread, the deflationary power kicks in on a massive scale. And so the deflation of the price for robots versus the capabilities that they have are both... accelerating.

Aragorn Meulendijks

The Murphy Humanoids overview showed the explosion: In 2023, there were about 12 major companies working on humanoid robots. The 2025 update shows dozens of well-funded companies racing to market.

Why does it matter?
The deflationary curve for robotics is steeper than anyone predicted. When a functional home robot costs less than a premium smartphone, we're not talking about a luxury item anymore. We're talking about accessible household technology. This isn't 20 years away; it's 2-3 years away. The combination of falling prices, improving capabilities, and massive investment means the "robot in every home" future is arriving faster than the "smartphone in every pocket" revolution did. Companies, workers, and policymakers who think they have a decade to prepare are going to be caught flat-footed. The disruption is already in motion.

Elon's $1 Trillion Compensation

via The Verge

Speaking of robots, Elon Musk's controversial Tesla compensation package (worth up to $1 trillion) is directly tied to Optimus robot development. And his reasoning is more philosophical than financial.

Musk said he doesn't care about the money. He cares about control. Then he corrected himself because "control" sounds ominous, so he clarified: if he's going to build an army of Tesla Optimus robots, he wants to ensure someone he believes is morally trustworthy (himself) maintains significant influence over them.

Musk believes that Optimus will be the biggest product ever. More important than electric cars. I think that once we hit Mars, there will be more Optimus robots on Mars than humans building a civilization there.

Patrick Willer

Why does it matter?
Musk’s message is simple: whoever leads in advanced robotics will shape the future. His view is that you cannot separate technological power from the values of the people who wield it. You may or may not trust Musk, yet he is right about the stakes. The ability to manufacture millions of capable autonomous robots would concentrate influence at a scale that could exceed most traditional institutions. Decisions about who builds these systems, who governs them, and which ethical principles guide their behavior are not problems for the distant future. They are decisions being made right now, often without broad public scrutiny.

Europe's AI: Testing Nothing, Banning Everything

via Medium

Sora (OpenAI's AI social platform), Snapchat's AI filters, and Meta's Vibes all launched in recent weeks. All blocked in Europe. We can't access them. We can't test them. We can't understand them. But we've already regulated them out of existence here.

Aragorn tried everything to get access, even buying an American eSIM for his iPhone. No luck. The download numbers are impressive: Sora was downloaded more in its first two weeks than the OpenAI app was at launch. But we can't verify any of this firsthand because we're completely locked out.

If the entire population of China or the US is savvy with AI content generation, we (EU) are lagging behind with fluid AI product design. That has a ripple effect in business.

Patrick Willer

Aragorn took it to its logical conclusion with a brutal example: Imagine you're Heinz Ketchup. You can either spend a million euros making an ad in Europe the old-fashioned way, or hire professionals in the US or China who can legally use AI tools and do it for a fraction of the cost. What would you do? You'd fire the European creatives because they're not allowed to use the tools that make them competitive.

Why does it matter?
Europe is creating a situation where we're protecting citizens from tools they can't access, while simultaneously making European workers uncompetitive. European companies will simply hire AI-skilled workers from the US or China to create their content, making European creative professionals obsolete. We're not protecting our citizens; we're exporting their jobs while preventing them from developing the skills to compete. You can't regulate your way to safety if the regulation itself creates the danger.

China's AI Social Reality: 60% Follow Virtual Idols

via The Guardian

While we were researching US AI social platforms, we discovered China's AI social market is massive and completely different from the Western approach.

Instead of launching separate AI apps, China is integrating AI into existing platforms. Douyin (the Chinese version of TikTok) has deep AI integration; the Western version doesn't, because it's not allowed here.

Then came the statistic that stopped us both: In China, 60% of internet users actively follow virtual (AI-generated) idols. Sixty percent. Not early adopters. Not tech enthusiasts. Six out of ten Chinese internet users.

The market size is staggering. The Chinese AI social media market is estimated to hit $20 billion USD by 2032.

And there's an educational component most people don't know about: The Chinese government limits children to watching normal social media for only a short period daily. After that, they only get educational content. Many of the top "influencers" in that educational space are AI-generated virtual tutors.

This is the modern equivalent of Sesame Street, using the dominant media platform of the era for education. Except now it's AI-powered, personalized, and operating at the scale of hundreds of millions of users.

Why does it matter?
While Western commentators debate whether AI companions are dystopian, 60% of Chinese internet users have already normalized following and learning from virtual AI personalities. That's hundreds of millions of people whose relationship with media, education, and entertainment has fundamentally shifted. In five years, China will have a generation that grew up learning from AI tutors, being entertained by AI idols, and seeing virtual personalities as normal parts of their media landscape. Western kids will be catching up to a paradigm that's already established.

The Future of Learning: AI Tutors Outperforming Humans

Education is being revolutionized by AI tutors, and the results are already proving superior to traditional methods. Research from Nigeria and Texas showed students using AI-augmented education performing significantly better than control groups.

Tools like Synthesis 2.0 and Google's Learn Your Way platform let students customize how they consume educational content. Want it as a game? Done. Prefer a podcast you can listen to while walking? No problem. Need flashcards? Generated instantly.
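To make the mechanism concrete, here is a minimal sketch of the "flashcards, generated instantly" idea, assuming the OpenAI Python SDK; the prompt and helper function are hypothetical illustrations, not how Synthesis 2.0 or Learn Your Way actually work under the hood.

```python
# Illustrative sketch: turning a chapter of study text into flashcards with an LLM.
# Assumes the OpenAI Python SDK; the prompt and helper are hypothetical, not the
# actual implementation behind Synthesis 2.0 or Learn Your Way.
from openai import OpenAI

client = OpenAI()

def make_flashcards(chapter_text: str, n_cards: int = 5) -> str:
    # Ask the model to restructure the same content into a different study format.
    prompt = (
        f"Create {n_cards} question-and-answer flashcards from the text below. "
        "Format each as 'Q: ...' followed by 'A: ...'.\n\n" + chapter_text
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample = "Photosynthesis converts light energy into chemical energy stored in glucose."
    print(make_flashcards(sample, n_cards=3))
```

The same text could just as easily be turned into a quiz game or a podcast script; the format is just another parameter of the prompt.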

Patrick shared a personal example about his kids:

My son Bowie doesn't learn that way. He creates a summary and he walks around with a podcast. My daughter Jazz wants her learning content in a game. So she does it in a different way. And this is just the way it is now gonna be for everybody.

Patrick Willer

The AI tutors don't just deliver content differently; they adapt to personality, interests, and learning pace. They get to know students personally, asking about their hobbies and weaving those interests into the learning experience.

Why does it matter?
Every student can now have a personal tutor available 24/7 that adapts to their learning style, interests, and pace. The students using these tools are already outperforming their peers. Within five years, the gap between students with access to AI tutors and those without will be massive and potentially permanent. The democratization of education is happening now.

Quantum Computing Just Got More Reliable

Google's Willow quantum chip achieved something that should make everyone pay attention: calculations 13,000 times faster than classical algorithms, plus a breakthrough in error correction that lets error rates keep falling as more qubits are added.

This isn't just one company making progress. Aragorn asked AI to list the five most significant quantum breakthroughs from the last six months:

  1. Microsoft's Majorana 1 (topological qubit processor)

  2. IBM's Heron R2 chip with modular tunable couplers

  3. Xanadu's Aurora for scalable quantum data centers

  4. Improved qubit quality with ultra-low error rates

  5. Development of antimatter qubits at CERN

When asked what this means for the near future, the AI predicted practical applications within 2-5 years, improved hardware, and tight integration with AI, far sooner than the 10-20 year timelines experts were giving just recently.

Why does it matter?
Quantum computing combined with AI could revolutionize material science, drug discovery, and cryptography within 2-5 years, not the 10-20 years experts predicted just a few years ago. The convergence of quantum and AI is happening faster than our institutions, security protocols, and financial systems are prepared for. If you're building anything that relies on current encryption standards, you need a quantum-safe strategy now, not later.
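As a hedged first step rather than a full strategy, here is a small sketch that inventories where a codebase still references classical public-key primitives (RSA, ECDSA, ECDH) so you know what would need migrating; the file extensions and search patterns are illustrative assumptions, and a real quantum-safe plan would go much further.

```python
# Illustrative first step toward a quantum-safe strategy: inventory where a
# codebase references classical public-key crypto. The scan root, extensions,
# and patterns are assumptions for this sketch, not a complete audit.
import os
import re

CLASSICAL_PRIMITIVES = re.compile(r"\b(RSA|ECDSA|ECDH|DSA|secp256k1|P-256)\b")
SOURCE_EXTENSIONS = (".py", ".go", ".java", ".ts", ".yaml", ".conf")

def scan(root: str = ".") -> list[tuple[str, int, str]]:
    # Walk the tree and record every line that mentions a classical primitive.
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(SOURCE_EXTENSIONS):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, start=1):
                        if CLASSICAL_PRIMITIVES.search(line):
                            hits.append((path, lineno, line.strip()))
            except OSError:
                continue
    return hits

if __name__ == "__main__":
    for path, lineno, line in scan():
        print(f"{path}:{lineno}: {line}")
```

An inventory like this tells you where post-quantum or hybrid schemes would eventually have to slot in; choosing those schemes is the harder, follow-on decision.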

Newsletter full video recording

That’s all for this week 🫢 

Want to get inspired on a daily basis? Connect with us on LinkedIn.

Want to get your brand in front of 12k innovators? Send us an email.

How did we do this week?
