INN026: AI Ethics | AI Arena | Mercury's Brain-Breaking Diffusion | AI Adoption Gap
The newsletter to thrive in an exponential world
The AI arms race isn't just accelerating, it's transforming. While you were busy updating your prompts, DeepSeek dethroned ChatGPT as the fastest-growing AI in history, Mercury rewrote the rules of text generation, and Claude 3.7 quietly became the programmer's secret weapon. Plus, we'll dissect the latest models changing the game and reveal which professions are being left behind in the AI revolution.
But behind these technical leaps lurks a more profound question: Who's steering this ship? This week, we sit down with futurist Lisanne Buik to explore how organizations can move beyond tech fascination to intentional implementation.
Meet Lisanne Buik. With so many great thinkers in our network, we thought: why not invite some of them to join our conversations? The first to join us was the brilliant and insightful Lisanne Buik, who leads AI with grace. Key points we discussed were:
What does leading with grace mean?
Will AI become conscious?
What does it take for an organization to be ‘ethical’?
And of course we had way too little time. But the good news: Lisanne and Patrick are working on a new workshop on AI ethics. Interested? Let’s talk!
Why does it matter?
The future isn't waiting for those playing catch-up. Let’s keep riding this tech wave together, and don’t forget to follow us on socials for more.
Cheers,
Patrick, Nikola & Aragorn 🚀
Why Most Companies Get AI Wrong From Day 1

via AI.news
"The AI hype cycle has created a frenzy of scattered implementation with no clear direction," explains futurist Lisanne Buik during our recent conversation. As our first Innovation Network guest, she pinpoints exactly where organizations stumble: they rush to adopt without vision.
According to Lisanne, companies make three critical mistakes from the outset.
First, they implement AI in isolated pockets rather than with an organization-wide strategy.
Second, they chase experimentation without clear value metrics.
And third, they fail to align AI implementation with their stated values.
"Despite all the enthusiasm for experimentation, companies are finding it quite hard to identify use cases where AI has actually added value," Lisanne notes. "They have these value statements on their walls about transparency and equality, but their AI implementations often contradict these principles."
This matches what we've observed in our innovation dinners. Last year, executives were eager to understand the technology. Now they grasp the concepts but struggle with the more difficult question: "How do we implement this in a way that creates genuine value?"
Why does it matter?
The gap between AI experimentation and meaningful business impact isn't closing on its own. Without a comprehensive vision for how AI serves your organization's purpose and values, implementation becomes a costly exercise in trendspotting rather than transformation. The answer isn't to slow down adoption, but to lead it with clear purpose from day one.
Western AI Won't Work Everywhere (And That's a Billion-Dollar Problem)

via Midjourney
When it comes to AI deployment, geography matters more than you think. Lisanne shared her work with an NGO raising a $1 billion fund specifically to address a critical imbalance: AI models built primarily in the West are being deployed in cultural contexts they weren't designed for.
"We have a whole global south with no choice but to adopt models ingrained with different value sets than they have," Lisanne explained.
The real-world consequences are immediate and practical. She describes a local African farmer with generations of ancestral knowledge about soil health being recommended Western monocrop systems by an AI that doesn't understand local agricultural traditions. Not good.
As Africa's economic and innovative power grows, the mismatch between Western AI and local needs becomes increasingly problematic.
The NGO's solution? Build community around AI literacy in Africa, develop ethical frameworks that respect local values, and adapt existing models to fit local circumstances across education, agriculture, health, and workforce inclusion.
Why does it matter?
The AI divide isn't just about access—it's about relevance. Organizations operating globally must recognize that one-size-fits-all AI implementation ignores crucial cultural contexts. This creates both risks (alienating local markets) and opportunities (developing culturally-adapted AI solutions). The next frontier of AI isn't just technical advancement but cultural adaptation.
Your AI Is a Mirror (And You Might Not Like What You See)

via Simon Morice
Forget Terminator scenarios—the real AI revelation is much more uncomfortable.
"AI is learning from us how to be human and will reflect back how it thinks we think it is to be human," Lisanne observed during our conversation. This mirror effect exposes our biases, contradictions, and limitations in ways we can't easily dismiss.
When an AI trained on human knowledge and behaviors produces problematic outputs, we're quick to blame the technology. But Lisanne challenges us to see this as an opportunity for self-reflection: what does AI's behavior tell us about ourselves?
This perspective fundamentally reframes AI ethics. Rather than seeing AI as an autonomous entity that might "wake up" and threaten humanity (a view Lisanne explicitly rejects), we should recognize it as a reflection of our collective values, biases, and behaviors.
"It's really a myth that only designers, developers, and companies are shaping this technology," she notes. "Everybody, through every interaction, is shaping it." At the end of the day each one of us bears responsibility for what emerges.
Why does it matter?
Understanding AI as a reflection rather than an independent entity transforms how we approach development and implementation. The question shifts from "How do we control AI?" to "What values and behaviors do we want to see reflected?" This makes AI ethics not just a technical challenge but a profound opportunity for societal self-examination.
Claude 3.7 Just Dethroned ChatGPT (For Specific Tasks)
Anthropic's Claude 3.7 arrived with little fanfare but big implications. While OpenAI's ChatGPT 4.5 promised enhanced emotional intelligence but delivered minimal improvements, Claude 3.7 has quietly excelled where it matters most: reasoning and coding.
Despite being marketed as an incremental update (3.7 rather than 4.0), this model represents a significant architectural advance as "the first hybrid reasoning model."
Our testing confirms it substantially outperforms its predecessors and rivals in several key areas.
Most surprisingly, despite OpenAI's claims about ChatGPT 4.5's emotional intelligence, our real-world testing found Claude 3.7 to be more emotionally nuanced and responsive.
For organizations using AI extensively, Claude 3.7 is proving particularly valuable for copywriting and coding tasks, while ChatGPT still excels at research with its web search capabilities.
Anthropic CEO Dario Amodei positioned Claude 3.7 specifically for coding tasks because that's where their user base needs help—a focused approach that's paying dividends.
Meanwhile, Anthropic secured another $3.5 billion in funding, tripling its valuation to $61 billion and signaling investor confidence in its direction.
Why does it matter?
The era of general-purpose AI is evolving into a landscape of specialized tools. Organizations now need to maintain portfolios of AI models for different tasks rather than seeking a single solution. Understanding each model's strengths becomes crucial for efficiency and effectiveness. Claude's rise also suggests a potential leadership change in the AI race, with Anthropic potentially overtaking OpenAI in certain domains.
Mercury: The Text Generator That Breaks Your Brain
Inception Labs just released Mercury, and it fundamentally challenges how we think about text generation. Unlike traditional language models that produce text one token at a time (like humans speaking), Mercury uses diffusion techniques—similar to those powering image generators—to create entire texts simultaneously.
Diffusion language models are SO FAST!!
A new startup, Inception Labs, has released Mercury Coder, "the first commercial-scale diffusion large language model"
It's 5-10x faster than current gen LLMs, providing high-quality responses at low costs.
And you can try it now!
— Tanishq Mathew Abraham, Ph.D. (@iScienceLuvr)
9:23 PM • Feb 26, 2025
The result is both alien and fascinating. Mercury refines an initial "noise" output through iterative steps, creating complete texts that emerge holistically rather than sequentially. For users, this means dramatically faster text generation with potentially different qualities and capabilities than traditional models.
"Our brains are not set up for this," noted Aragorn during our discussion. Humans cannot generate or even process full texts simultaneously—we are inherently sequential thinkers. Yet Mercury operates with a completely different processing paradigm.
While still using transformer architecture as its foundation, Mercury represents a novel hybrid approach that could redefine AI text generation. Early tests suggest it maintains coherence while operating in this parallel fashion—something that would have seemed impossible just months ago.
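The contrast between the two paradigms can be sketched in a few lines of Python. This is a toy illustration of the *shape* of each process, not Mercury's actual algorithm: the "model" here simply knows the target text, and the diffusion-style generator starts from random noise and repairs a fraction of the wrong positions in parallel on each pass.

```python
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran"]

def autoregressive(target, steps_log):
    """Sequential generation: one token per step, left to right."""
    out = []
    for tok in target:
        out.append(tok)
        steps_log.append(list(out))  # one step per token
    return out

def diffusion_style(target, rng, steps_log, refine_frac=0.5):
    """Parallel generation: start from pure noise, refine the whole
    sequence each pass, fixing a fraction of the wrong positions."""
    out = [rng.choice(VOCAB) for _ in target]  # initial "noise"
    steps_log.append(list(out))
    while out != target:
        wrong = [i for i, t in enumerate(out) if t != target[i]]
        # "Denoise" several positions simultaneously in one pass.
        for i in rng.sample(wrong, max(1, int(len(wrong) * refine_frac))):
            out[i] = target[i]
        steps_log.append(list(out))
    return out

target = ["the", "cat", "sat", "on", "the", "mat"]
seq_steps, par_steps = [], []
autoregressive(target, seq_steps)
diffusion_style(target, random.Random(0), par_steps)
print(f"sequential steps: {len(seq_steps)}, parallel passes: {len(par_steps)}")
```

The sequential generator always takes one step per token, while the parallel refiner converges in roughly logarithmically many passes, each touching the whole sequence at once. That gap is the intuition behind the speedups diffusion models claim over token-by-token generation.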
Why does it matter?
Mercury signals the emerging diversity in fundamental AI architectures. Beyond just improving existing approaches, we're now seeing entirely different paradigms for how machines process and generate language. This could lead to specialized AI systems for tasks where holistic understanding is more important than sequential reasoning. As these systems develop, they raise fascinating questions about how such alien processing methods might eventually interface with human cognition.
These Professions Are Getting Left Behind in the AI Revolution

via Venturebeat
Anthropic's Economic Index has revealed a stark adoption gap across professions. Computer and mathematical occupations dominate current AI usage, representing 37.2% of users despite comprising only 3% of the US workforce. Arts and media professionals follow as the second-largest group, with most other professions significantly underrepresented.
The data shows that administrative roles—which could substantially benefit from automation and assistance—are among the most underrepresented in current AI adoption.
While software developers use AI to write code and artists use it to generate images, vast segments of the workforce remain largely untouched by this revolution.
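To put those headline numbers in perspective, a quick back-of-the-envelope ratio shows just how skewed adoption is. The figures come from the Economic Index as cited above; the ratio itself is our own arithmetic:

```python
# Figures from Anthropic's Economic Index, cited above.
share_of_ai_usage = 0.372   # computer & mathematical occupations' share of AI usage
share_of_workforce = 0.03   # same occupations' share of the US workforce

# How many times over-represented these occupations are among AI users.
overrepresentation = share_of_ai_usage / share_of_workforce
print(f"{overrepresentation:.1f}x")  # 12.4x
```

In other words, technical occupations are showing up among AI users at roughly twelve times the rate their workforce share would predict, which is the scale of the gap the underrepresented professions would need to close.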
The coding revolution will be far more dramatic and will happen 10x faster.
Anthropic's Dario Amodei predicts AI will write 90% of code within 6 months. All code within a year. It's clear this is a complete reinvention.
History shows us a pattern: when creation tools simplify,… x.com/i/web/status/1…
— Innovation Network (@INN2046)
6:04 PM • Mar 11, 2025
As tech and creative industries leverage AI to multiply productivity, others risk falling permanently behind.
The disparity also reflects the current state of AI tools, which are predominantly designed for technical and creative tasks rather than administrative or operational roles.
Why does it matter?
This uneven adoption creates both risks and opportunities. Companies with workforces primarily in lagging categories face potential disruption as competitors leverage AI. Meanwhile, there's enormous untapped potential for AI tools specifically designed for these underserved professions. The next wave of AI adoption will likely focus on these gaps, creating new competitive advantages for early adopters in these fields.
That’s all for this week 🫢 Want to get your brand in front of 12k innovators? Send us an email.