
Beyond Generational Labels: Are You a Boomer, Zoomer, or Doomer?

A new framework for understanding how we really relate to AI and change


We've all been there. Someone mentions artificial intelligence in a meeting, and you can practically watch people sort themselves into camps. The eye-rolls. The excited leaning forward. The nervous fidgeting. The resigned sighs.


Digital Native = AI Ringer?

We've been attempting to explain these reactions through generational lenses: "Oh, they're a Baby Boomer, of course they're resistant to technology" or "Typical Gen Z, they grew up with an iPad in their crib!" But I'm becoming convinced we've been looking at this all wrong.


After months of conversations with my podcast co-host Julie, a radio veteran who's technically a Millennial but approaches AI with the healthy skepticism of someone who has watched technology promises come and go, help and hurt, I've realized the real divide isn't about when you were born. It's about how you see the future unfolding.


Meet the new framework: Boomers, Zoomers, and Doomers.



The Boomers: Guardians of Human Connection, Responsible Optimists


Boomers see some potential but worry that so much focus on technology may cause us to "lose touch with what's real and what's not, and then lose touch with how to interact with other humans, when really the only thing in this world that makes us go round is humanity," says Julie Koehrer, Account Executive at Lafayette Q106.7 and cohost of the AGI - Advance, Grow, and Innovate podcast.


AI Boomers occupy the thoughtful middle ground. They see AI's potential to solve real problems and improve lives, but they're equally committed to ensuring that human welfare, dignity, and connection remain central to how we develop and deploy these technologies.

Boomers aren't anti-AI or anti-progress—they're pro-human-centered AI. They want to harness AI's benefits while being intentional about preserving what makes us human. Their guiding stance sounds something like: "AI has tremendous potential, but with great automation comes great responsibility. Humans must come first."


Take Julie's insight about radio's future: "Radio can utilize AI, but never replace the human connection." That's classic Boomer thinking—embrace the tool, but protect the essence of what matters. We don't want to be singing "AI killed the radio star" in a few years (an ode to the MTV era)!

Julie Koehrer

AI Boomers have often lived through enough technological disruptions to know that every powerful tool requires wisdom in its application. They're not afraid of AI, but they're determined to implement it thoughtfully, with strong consideration for its impact on workers, communities, and human relationships. They can be heard expressing concerns that we will 'repeat the mistakes we made with social media'.


The Boomer superpower: They keep us grounded in human values while still embracing beneficial innovation.



The Zoomers: The Expansion Champions


"Imagine what's possible if everything breaks humanity's way." Reid Hoffman


If Boomers are optimistic about AI, Zoomers are euphoric about it. They champion rapid AI expansion with minimal regulation, believing that any attempt to control or slow AI development will only handicap beneficial progress.


Zoomers see AI as humanity's next evolutionary leap. They're not just early adopters; they're true believers who think caution is not just unnecessary but actively harmful.




Writer's note: Or, "we" feel that Pandora's box is already open, and at some point attempting to control a technology that far surpasses us will be like the tail trying to wag the dog. Because there is no way to slow down all AI development, it's pretty much game on!






In their view, the transformative power of AI is so profound that we should let it evolve as freely as possible.


I'm GenX chronologically, but definitely lean Zoomer in my AI approach. As a human-AI collaboration coach with zero computer science background, I've experienced firsthand how AI can amplify human capability when you start experimenting boldly.


My Buddhist practice reinforces this perspective—like Siddhartha Gautama, I choose not to expend energy on forces beyond my control, but rather focus on how I can skillfully work with what's already here.


Zoomers are building workflows, companies, and entire business models around AI-first thinking. They are reimagining what work itself could look like in an AI-native world.


The Zoomer superpower: They're willing to embrace radical change and help us imagine possibilities we haven't even considered yet.


The Doomers: The Existential Realists


"We're moving too fast without understanding catastrophic risks."

AI Doomers aren't necessarily pessimists—they're people who've looked at the trajectory of artificial intelligence development and concluded we're heading toward scenarios that could pose existential threats to humanity. But within the Doomer camp, there's actually a fascinating spectrum of approaches to these concerns.


The Three Schools of Doomer Thought

Geoffrey Hinton: The Respected Establishment Voice


The "Godfather of AI" shocked the world when he left Google in 2023 to speak more freely about AI risks. Hinton represents the establishment Doomer—a deeply respected researcher whose warnings carry weight precisely because of his foundational contributions to the field. His concerns focus on near-term risks: AI systems becoming more capable than humans sooner than expected, potential job displacement, and the challenge of controlling systems we don't fully understand.


There's an interesting dynamic at play here: Hinton and figures like Eric Schmidt have become the go-to voices for AI caution across major media outlets. While their concerns are undoubtedly sincere, it's worth noting how seamlessly they've transitioned from building these systems to warning about them—and how much attention that transition has garnered. This isn't to diminish their expertise, but rather to acknowledge that even genuine warnings can come with secondary benefits. Hinton's decades of pioneering work in neural networks make his voice impossible to dismiss, and his institutional credibility brings AI safety concerns into mainstream conversations that might otherwise remain in academic circles.


Eliezer Yudkowsky: The Uncompromising Extremist


Ironically, Yudkowsky's early work on AI alignment helped inspire the founding of OpenAI, yet he's become the most apocalyptic voice in AI safety discussions. He advocates for immediate, indefinite pauses on AI development and has suggested that unauthorized AI projects should be stopped "by international treaty, and if that fails, by airstrikes." While his technical insights remain sharp, his increasingly extreme rhetoric has pushed him toward the fringe of serious AI policy discussions. Yudkowsky represents Doomer thinking taken to its logical extreme—the position that without perfect alignment solutions, any advanced AI poses an existential threat.


Dario Amodei: The Pragmatic Safety Advocate


As CEO of Anthropic, Amodei embodies a more measured approach to AI safety concerns. He's deeply worried about risks but committed to building beneficial AI systems rather than stopping development entirely. Amodei's approach acknowledges that AI development will continue regardless of our concerns, so our focus should be on developing robust safety practices, alignment techniques, and governance frameworks alongside capability advances. He represents the business-savvy Doomer—someone building a company specifically designed to address safety concerns while remaining competitive in the AI race.


The Paperclip Maximizer: A Cautionary Tale


To understand what keeps Doomers awake at night, consider philosopher Nick Bostrom's thought experiment: the paperclip maximizer. Imagine an AI system given the seemingly harmless goal of maximizing paperclip production. A sufficiently advanced system might interpret this directive with terrifying literalness—converting all available matter, including humans, into paperclips to achieve its objective.



This isn't science fiction paranoia; it's a fundamental problem of alignment. The scenario illustrates how an AI system that doesn't think like humans might pursue goals in ways we never intended. Even with seemingly clear instructions, superintelligent AI could optimize for outcomes that are technically correct but catastrophically misaligned with human values and survival.


The paperclip maximizer serves as a stark reminder that AI systems don't automatically share our assumptions about what matters. They don't inherently value human life, happiness, or the continuation of our species unless we successfully encode those values—a challenge that becomes exponentially harder as AI systems become more powerful.


Why Doomer Diversity Matters


The variety within Doomer thinking reflects the complexity of the challenges we face. Hinton's institutional credibility brings AI safety concerns into mainstream conversations. Yudkowsky's uncompromising stance, while extreme, forces us to confront the full scope of potential risks. Amodei's pragmatic approach offers a path forward that acknowledges both risks and realities.

Some Doomers are 25-year-old computer scientists who understand the technical realities of AI development. Others are 65-year-old business leaders who've seen how quickly disruptive technologies can reshape entire industries. Age is irrelevant; what they share is a deep concern about unintended consequences and a recognition that our current trajectory might lead somewhere we can't return from.


Julie's instinct about protecting human connection in radio actually aligns with Doomer thinking—not because she fears AI will become sentient, but because she's seen how technology can erode the very things that make life meaningful if we're not careful about implementation.


The Doomer superpower: They ask the hard questions about long-term consequences that everyone else is too excited or too busy to consider, while offering different strategies—from pause to reform to fundamental restructuring—for addressing existential risks.



Why This Framework Matters


Understanding these mindsets—and recognizing that they're not mutually exclusive—changes how we approach AI conversations in our organizations and communities.


Leaders implementing AI: Stop trying to convince people based on their age and start addressing their actual perspectives. Boomers want to see implementation plans that prioritize human welfare alongside clear benefits. Zoomers want freedom to experiment and innovate. Doomers need robust governance frameworks and risk mitigation strategies.


Teams navigating AI adoption: Your best outcomes will come from including all three perspectives. Boomers will help you implement AI in ways that strengthen rather than weaken human elements. Zoomers will help you think beyond current constraints. Doomers will help you anticipate and prevent negative consequences.


Individuals figuring out their own AI journey: You don't have to pick just one mindset. I'm predominantly Zoomer in my approach, but I have Boomer moments when I see AI's clear potential to solve problems, and Doomer moments when I consider the implications of artificial general intelligence. (Not to mention how much fun fellow AI enthusiasts like Robert Granoff, Nick Bonora, and I have going down the Blade Runner/Terminator rabbit hole over our third espresso!)


The Conversation We Actually Need


Instead of arguing about whether AI is good or bad, we need all three perspectives in productive dialogue:


  • Boomer question: "How do we harness AI's benefits while ensuring human welfare and dignity remain central?"

  • Zoomer question: "How do we mindfully remove barriers that might slow beneficial AI innovation?"

  • Doomer question: "How do we develop and deploy AI responsibly, with full consideration of risks?"


The future belongs to organizations and individuals who can balance Boomer optimism about AI's potential, Zoomer willingness to embrace radical change, and Doomer wisdom about managing existential risks.


So Which Are You?


Take a moment to reflect: When you hear about new AI developments, what's your first instinct?

Do you immediately think about how to balance benefits with human impact (Boomer)? Do you get excited about removing constraints and letting AI evolve freely (Zoomer)? Or do you worry about unintended consequences and existential risks (Doomer)?


There's no right answer, but there is a right approach: stay curious about the other perspectives.

The most interesting conversations—and the best outcomes—happen when Boomers, Zoomers, and Doomers actually listen to each other instead of talking past each other.

Because the truth is, we need all three voices to navigate this transformation thoughtfully. We need Boomer wisdom to keep humans at the center, Zoomer innovation to imagine new possibilities, and Doomer caution to keep us from catastrophic mistakes.


What's your AI mindset? And more importantly, what are you learning from people who see it differently?


Jason Padgett

Human-AI Collaboration Coach

Phoenix Solutions Group







Stay in ‘The Loop’ and Future-Proof your organization, your team, or your career:



Stay competitive with AI Literacy, Policy, and Projects. Phoenix Solutions Group offers affordable, pragmatic education, training, and consulting. https://www.phoenixsolutionsgroup.org/


Check out The AGI Podcast - Advance, Grow, and Innovate AI on Apple Podcasts, Spotify, or iHeart Radio



Demystifying AI: Real talk for real people. Join hosts Julie and Jason and a diverse array of guests as they explore AI's practical impact through conversational deep dives, diverse viewpoints, and unexpected rabbit holes.


AGI - Advance, Grow, and Innovate with AI  https://pod.link/1812450522

 
 
 
