28% of CEOs Have Never Used AI. They're Still Blaming It for Your Layoff.

The definitive guide to AI adoption by generation, the AI-washing epidemic, and what companies actually need to do instead of cutting headcount and hoping ChatGPT picks up the slack.


Nearly 70% of CEOs, CFOs, and senior executives use AI at work less than one hour per week.

Twenty-eight percent have never used it at all.

These are the same leaders mandating AI adoption across their organizations. Tracking employee usage. Tying AI fluency to performance reviews. Handing out bonuses for compliance and penalizing workers who don't get on board fast enough.

And then they walk into earnings calls, tell Wall Street that AI is "the most transformative technology since the Internet," and announce thousands of layoffs in the same breath.

Something doesn't add up. And the data proves it.

A note before we start

This is not an anti-AI article. I run an AI education company. I teach teams how to use these tools every single day. I believe AI is the most powerful professional development tool most workers have ever had access to.

But I also believe in telling the truth. And the truth right now is uncomfortable for a lot of people in corner offices.

The truth is that most companies don't have an AI problem. They have a leadership problem dressed up in AI language. The layoffs dominating headlines have far less to do with artificial intelligence than they do with pandemic-era overhiring, bloated balance sheets, investor pressure, and a corporate culture that has discovered "AI" is the most investor-friendly word in the English language.

This article is going to walk through all of it. The real data on how every generation is actually adopting AI. The financial shell game behind the "AI layoff" narrative. And, most importantly, what companies should actually be doing instead of cutting headcount and hoping ChatGPT picks up the slack.

Let's get into it.

Part 1: The Generational Landscape

Everyone Is More Curious Than Scared. That's the Part Nobody Talks About.

Before we get into the layoff conversation, we need to establish something that gets buried under the fear-based headlines: workers across every generation are more excited about AI than they are afraid of it.

PwC's 2025 Global Workforce survey found that employees are twice as likely to be curious or excited about AI's impact on their work as they are to be worried or confused. EY's Agentic AI Workplace Survey reported that 84% of desk workers are eager to embrace AI in their roles. Workers who use AI daily are saving an average of two hours per day, according to the Adecco Group's Global Workforce report.

The appetite is there. Across every generation. What's missing is the infrastructure, the training, and the leadership to channel that appetite into something productive.

Here's how each generation is actually engaging with AI right now, based on the best available research from 2025 and early 2026.

Gen Z: The Most Excited and the Most Anxious

Gen Z (born roughly 1997-2012) leads every generation in raw AI adoption. A 2025 London School of Economics survey found that 83% of Gen Z workers use AI on the job. Salesforce data shows 70% are already using generative AI tools. According to YouGov, about 1 in 10 use AI tools daily or multiple times per day, with another 23% using it weekly.

They're not just dabbling. SurveyMonkey's 2025 data shows 61% of Gen Z workers use AI specifically for education and learning. They're the most likely generation to have received AI skills training in the past month. They approach AI with curiosity rather than caution, and they're building it into both their professional and personal lives.

But here's the paradox. Gen Z also carries the highest anxiety about AI of any generation. PwC's Global AI Jobs Barometer found that 39% worry AI could result in lower pay for positions like theirs, the highest percentage of any age group. Over half believe AI will require them to reskill and will ultimately impact their career decisions, according to Deloitte's 2024 Gen Z and Millennial Survey.

The data that should worry everyone

Anthropic's own research found that unemployment for U.S. workers in their 20s in AI-exposed occupations rose by nearly 3% in the first half of 2025. Job-finding rates for workers ages 22-25 entering AI-exposed occupations have fallen about 14% since ChatGPT launched in 2022.

Gen Z is the first generation that will have AI tools spanning the entire arc of their careers. They're also the generation most likely to be collateral damage when companies cut entry-level roles and call it "AI efficiency."

Millennials: The Bridge Generation and the Most Skilled Users

Millennials (born roughly 1981-1996) are, by most measures, the most proficient generation when it comes to AI in the workplace. McKinsey data from 2025 shows that 62% of employees aged 35-44 report high AI expertise, higher than Gen Z at 50% and significantly higher than Boomers at 22%. Ninety percent of workers in this age range say they're comfortable using AI at work, the highest comfort level of any cohort.

The London School of Economics survey puts Millennial AI usage at work at 73%. Pew Research Center found that 30% use ChatGPT specifically for work tasks. Deloitte's 2025 survey shows 58% have used standalone AI tools, while 60% regularly engage with AI features embedded in products they already use.

Randstad's research makes an important distinction: while Gen Z adopts at higher raw rates, Millennials emerge as the most adept at actually utilizing AI in their jobs. Their proficiency with generative AI tools surpasses other generations, making them what Randstad calls "key players in the current AI landscape within workplaces."

This makes sense when you consider their position. Millennials grew up alongside rapid technological change. They remember the world before smartphones and have fully integrated into the world after them. They understand both sides of digital transformation in a way that gives them a practical, less hype-driven relationship with new technology.

Their concerns are more specific and more mature: algorithmic bias, misinformation, and ethical AI development top their worry list. They want AI used responsibly, not just used.

The challenge for Millennials is that they occupy the mid-career management layer where layoffs land hardest. They're skilled enough to know AI's potential and experienced enough to know when the AI narrative is being used as cover for something else entirely.

Gen X: The Strategic Skeptics Everyone Keeps Misreading

Gen X (born roughly 1965-1980) gets consistently mischaracterized in AI adoption conversations. The headlines paint them as resistant. The data tells a different story.

The London School of Economics puts Gen X AI usage at work at 60%. Pew reports that 78% know about ChatGPT and more than half (52%) say they've heard "a lot" about it. That's not ignorance. That's awareness without full buy-in, and there's a reason for the gap.

Gen X takes what Deloitte calls a "practical, results-driven approach" to AI. They embrace it when the benefits are tangible: business forecasting, automating repetitive administrative tasks, analytical tools with clear ROI. PCMag data shows 55% say AI will positively impact their lives. They're not anti-AI. They're anti-hype.

And honestly? They've earned that skepticism. Gen X was the original adopter of technology in technology's early stages, from home computers and video games to the internet boom. Then they lived through the dot-com crash. They watched an entire generation of "this changes everything" promises evaporate. When Gen X hears a CEO say AI will transform their industry, they've heard that pitch before, and they remember what happened next.

As one Built In analysis noted, Gen X and Boomers have a "longer attention span" and are "much more likely to have the skepticism and systemic, abstract thinking needed to audit AI effectively." In a world where AI hallucinations are a known problem and companies are making billion-dollar decisions based on tools that sometimes confidently fabricate information, that skepticism is a feature, not a bug.

Baby Boomers: Not Anti-Tech. Anti-Nonsense.

Baby Boomers (born roughly 1946-1964) have the lowest AI adoption rates, and that's not surprising. But the assumption that they're technophobes doesn't hold up.

The London School of Economics puts Boomer AI usage at work at 52%. That's lower than other generations, but it's still more than half. Pew found that 18% use ChatGPT at work. Twenty percent use AI at least once a week according to the Barna Group. McKinsey reports 22% claim high AI expertise.

Yes, 71% have never used a tool like ChatGPT. Yes, roughly half aren't using AI in their personal lives at all. But here's the data point that matters most: the AEM survey found that nearly half of Boomers say they would use AI if it were integrated into technology they are already using.

They don't want bells and whistles. They want technology that's functional and streamlined. Their barrier to AI isn't willingness. It's the way AI is being packaged and delivered to them: as a shiny new thing rather than a practical improvement to tools they already trust.

BCG's AI at Work survey found that when leaders demonstrate strong support for AI, the share of employees who feel positive about it rises from 15% to 55%. That leadership signal matters disproportionately for older workers who need to see that AI adoption is a strategic choice backed by their organization, not another tech fad that'll be forgotten in two years.

Generation  | AI Usage at Work | Key Strength                        | Primary Concern
Gen Z       | 83%              | Experimentation & fluency           | Pay reduction & career impact
Millennials | 73%              | Highest proficiency & comfort       | Bias, misinformation, ethics
Gen X       | 60%              | Strategic skepticism & auditing     | Job security & unproven ROI
Boomers     | 52%              | Institutional knowledge & judgment  | Privacy, security, ethical risks

Source: London School of Economics, 2025

The Training Gap That Spans Every Generation

Here's where the generational conversation converges into a single, damning conclusion: nobody is getting the training they need.

  • 25% of workers get formal AI training from their employer (Adecco, 2025)
  • 14% use GenAI daily, despite widespread awareness (PwC, 2025)
  • 90% say reskilling is essential to leverage AI effectively (EY, 2025)

EY's Work Reimagined survey captured the scale of the disconnect: AI usage at work is widespread (88% of organizations have some AI in play), yet only 28% have positioned employees to achieve transformative business impact.

This is not a generational problem. This is a leadership failure that affects every single age group. Gen Z wants training and is the most likely to get it, but still not at adequate levels. Millennials are skilled but often self-taught. Gen X and Boomers are the least likely to receive training, despite being the most likely to need structured onboarding to a new tool category.

The frustration cuts across every age group. Every generation wants to learn. Very few companies are teaching them.

Part 2: The AI-Washing Epidemic

4.5%. That's the Real Number.

In 2025, U.S. employers announced roughly 1.2 million job cuts, the highest annual total since the COVID-era layoffs of 2020.

Of those 1.2 million cuts, AI was cited as the reason for approximately 55,000.

That's 4.5%.

Standard "market and economic conditions" drove 245,000 layoffs, four times the AI figure. That data comes from Challenger, Gray & Christmas, the same outplacement firm that every major news outlet cites when reporting on layoff trends.

Let me say that again: traditional financial reasons caused four times more layoffs than AI in 2025. But you wouldn't know that from reading the headlines.

  • 4.5% of 2025 layoffs cited AI (55,000 of 1.2 million total cuts)
  • 245,000 layoffs cited "market conditions" (four times the number that cited AI)
  • 59% of hiring managers admit using AI as cover for other reasons

Why CEOs Blame AI (Even When It's Not the Reason)

Peter Cohan, an associate professor of management practice at Babson College, told Built In that AI is "the least bad reason companies can use" for layoffs. Think about the alternatives:

If you blame tariffs, you risk political blowback. If you cite declining sales, investors sell your stock. If you admit to pandemic-era overhiring, you're confessing to years of mismanagement.

But if you blame AI? You look forward-thinking. Innovative. Lean. The stock goes up.

This isn't speculation. It's documented. Block (Jack Dorsey's fintech company) announced it was cutting nearly 40% of its workforce in February 2026, framing it around "intelligence tools" and "smaller and flatter teams." The stock surged 20-27% in after-hours trading, adding roughly $6 billion in market cap. Morgan Stanley upgraded the stock. Goldman Sachs raised its price target.

AI-related stocks have accounted for about 75% of S&P 500 returns since ChatGPT launched, according to analysis from The Conversation. There is a massive financial incentive for companies to be seen embracing AI aggressively, even when their actual AI capabilities don't justify the narrative.

"The headline is, 'It's because of AI,' but if you read what they actually say, they say, 'We expect that AI will cover this work.' Hadn't done it. They're just hoping. And they're saying it because that's what they think investors want to hear."

Peter Cappelli, Wharton Management Professor

The AI-Washing Playbook

The pattern is remarkably consistent. Forrester analysts examined the trend in January 2026 and their conclusion was blunt: "Many companies announcing A.I.-related layoffs do not have mature, vetted A.I. applications ready to fill those roles."

The New York Times labeled it "AI-washing." The term describes companies announcing layoffs, blaming artificial intelligence, and quietly sweeping other problems under the rug: pandemic-era over-hiring, declining revenues, strategic missteps, bloated management layers.

Here's how it typically plays out: A company announces "strategic restructuring" or "operational efficiency initiatives." Leadership mentions AI transformation roadmaps and "digital-first futures." They release employees, often in roles that AI couldn't realistically automate with current technology. Customer service positions get cut before conversational AI is fully deployed. Content roles disappear before AI writing tools are properly integrated. Marketing teams shrink ahead of any demonstrated AI replacement capability.

Then the stock goes up. And the cycle repeats.

"No CEO wants to stand up and say, 'We got it wrong.' Blaming AI shifts responsibility onto an external, seemingly unstoppable force."

Scott Dylan, Founder, NexaTech Ventures

The Post-COVID Overhiring Correction Nobody Wants to Talk About

If you want to understand why companies are laying people off, don't look at their AI roadmap. Look at their headcount charts from 2020 to 2022.

The Pandemic Hiring Binge

  • Meta nearly doubled headcount from 48,268 to 86,482 (early 2020 to late 2022)
  • Alphabet headcount soared 62% in the same period; tripled between 2016 and 2023
  • Salesforce jumped 30% in a single year
  • Amazon added over half a million workers; headcount doubled in just 2020 and 2021
  • Block tripled headcount during the pandemic under Jack Dorsey

When the Federal Reserve raised interest rates from near-zero to 5%, the correction was inevitable. Free money dried up. Growth expectations reset. And companies that had spent years hiring aggressively discovered they had far more people than their business required.

The layoff wave that followed was massive: 262,682 tech workers laid off in 2023, 152,922 in 2024, and 122,549 in 2025, according to Layoffs.fyi. Every major CEO who cut during this period acknowledged overhiring in their own words.

Mark Zuckerberg told the Morning Brew Daily podcast that the wave of layoffs was "largely due to companies seeking a 'leaner' workforce in the wake of a pandemic-era hiring boom." That's not my analysis. That's the CEO of Meta.

The Apple Control Group

Apple maintained modest, single-digit headcount growth throughout the pandemic, consistent with pre-pandemic trends. Apple never announced mass layoffs. The companies that overhired the most laid off the most. That pattern points to management discipline, not technological inevitability.

The Case Studies That Expose the Pattern

Amazon: Two Stories in One Day. In October 2025, Amazon announced its largest-ever round of layoffs: 14,000 corporate roles. SVP Beth Galetti wrote in a blog post that AI is "the most transformative technology we've seen since the Internet" and that the company needed to operate "more leanly." Hours later, a different Amazon representative walked it back: "AI is not the reason behind the vast majority of reductions." CEO Andy Jassy later clarified that the cuts were "not really AI-driven, not right now at least." Two narratives. One company. One day.

UPS: 48,000 Jobs, and AI Wasn't the Driver. UPS eliminated 48,000 roles in 2025. The bulk (34,000 operational jobs) were related to closing 93 buildings, not replacing people with robots. Parcel volumes were down 5.4% and the company was strategically pivoting away from Amazon, its largest customer. The 14,000 additional corporate cuts were only "partially related to AI," according to a company spokesperson.

Target: No AI Mentioned at All. Target cut thousands of corporate roles after four years of roughly stagnant revenue. Incoming CEO Michael Fiddelke said the cuts were about "reducing complexity" at a company where workforce grew faster than sales. He didn't mention AI in the layoff memo. But Target still got swept into the "AI is taking white-collar jobs" narrative by proximity.

Klarna: The Cautionary Tale. This is the one every CEO should study. CEO Sebastian Siemiatkowski publicly boasted that AI was "doing the work of 700 employees" and that the company had stopped hiring "largely due to AI." It was the purest expression of the AI-replacement narrative. Then customer complaints surged. Satisfaction ratings dropped. Users cited "generic, repetitive, and insufficiently nuanced replies." By early 2025, Siemiatkowski admitted the company had "gone too far" and "focused too much on cost." Klarna began rehiring humans. Forrester's research found that 55% of employers who laid off workers in favor of AI now regret it. Klarna is the poster child for what happens when you mistake cost-cutting for transformation.

The Academic Evidence Is Overwhelming

An NBER working paper surveying nearly 6,000 C-suite executives across four countries found approximately 90% reported AI had no impact on employment over the past three years, and 89% reported no change in productivity. The Yale Budget Lab and Brookings Institution examined U.S. employment data and found no overall change in employment for workers in AI-exposed occupations.

MIT's "GenAI Divide" study found that 95% of enterprise AI pilot programs are failing to deliver measurable financial returns. Gartner reported that 30% of GenAI projects are abandoned after proof of concept. In 2025, 42% of companies abandoned most of their AI initiatives, up from 17% the year before.

And yet the layoffs continue to be framed as AI-driven. Because the market rewards it.

The Block Nuance: Where AI-Washing Meets Legitimate Transformation

I've been critical of the AI-washing narrative throughout this article, and I want to be fair. Not every company citing AI for restructuring is lying. Some are telling a partial truth, and Block is the most instructive example.

Block built a real AI tool. It's called Codename Goose, an open-source AI agent powered by Claude on Databricks. It's been in production internally for about 18 months. The results are measurable: 75% of engineers report saving 8-10+ hours per week. Code output per engineer has improved 40% since September 2025. A risk underwriting model that previously took a full quarter to build was completed in a fraction of that time. Ninety-five percent of Block's engineers regularly use AI to assist their development work.

Block also invested in the organizational change to make it work. They launched an AI Champions program with 50 developers dedicating 30% of their time to AI enablement. They created structured documentation, gamified adoption with an RPG-style system called Repo Quest, and adopted research-plan-implement frameworks for complex AI tasks. The company's CTO, Dhanji Prasanna, wrote an internal "AI manifesto" that kicked off the transformation, and one of his core principles was that "leadership needs to use the tools daily."

That's real. And it deserves credit.

But here's where it gets complicated. Jack Dorsey himself, in a March 2025 memo about a prior round of 931 layoffs, wrote explicitly: "None of the above points are trying to hit a specific financial target, replacing folks with AI, or changing our headcount cap." Eleven months later, the framing changed entirely. The February 2026 layoff of 4,000+ workers was presented as a fundamental shift driven by "intelligence tools." The narrative flipped. And Block's stock surged 25%.

Block's CFO acknowledged the company "over-hired during COVID" and ran duplicate organizational structures for Square and Cash App for years. Even after a 40% workforce reduction, Block has more employees than it did in early 2020.

The Block Takeaway

Block is neither the villain nor the hero of the AI-washing story. They're the mirror. A company that did some of the real work AND used the narrative to dress up a financial correction. That combination is probably the most honest picture of what's happening across corporate America right now.

Part 3: The Path Forward

The Problem Isn't AI. The Problem Is Leaders Without a Plan.

Let me bring the executive usage data back, because this is where the entire conversation comes into focus.

Nearly 70% of CEOs, CFOs, and senior executives use AI at work less than one hour per week. Twenty-eight percent never use it at all. This comes from a survey of more than 3,000 executives analyzed by Stanford economist Nicholas Bloom, published in Fortune in March 2026.

The NBER study of 6,000 executives found that overall executive AI usage amounts to roughly 1.5 hours per week. In the U.S., bosses use AI 1.7 hours per work week, slightly less than their employees at 1.8 hours.

These are the same leaders announcing that AI is the centerpiece of their corporate strategy. The same leaders cutting thousands of jobs in the name of AI efficiency. The same leaders telling investors that AI will transform their business.

They're not using it.

  • 12x more likely to be among the top AI winners when the C-suite is deeply engaged (BCG)
  • Employee positivity about AI rises from 15% to 55% when leaders demonstrate strong support (BCG)
  • 2x more likely to see ROI when workflows are redesigned before tools are picked (McKinsey)
  • 95% of enterprise AI pilots fail to deliver measurable returns (MIT, 2025)

BCG's research found that C-level executives and leadership teams who are deeply engaged with AI are 12 times more likely to be among the top 5% of companies winning with AI innovation. Twelve times. That's not a marginal advantage. That's a completely different category of outcome.

McKinsey's data adds another layer: organizations reporting "significant" financial returns from AI were twice as likely to have redesigned end-to-end workflows before selecting their AI tools. The technology wasn't the differentiator. The organizational change was.

This is the core of the problem. AI adoption isn't failing because the tools aren't good enough. It's failing because leadership treats AI as a procurement decision rather than a strategic transformation. Buy the platform. Announce it on the earnings call. Check the box. Move on.

"Instead of leadership calling the shots with a top-down program, they take a ground-up approach, crowdsourcing initiatives that they then try to shape into something like a strategy. The result: projects that may not match enterprise priorities, are rarely executed with precision, and almost never lead to transformation."

PwC 2026 AI Predictions Report

Writer's 2025 research found that 71% of executives report their companies develop GenAI applications in isolated silos. Forty-two percent are experiencing internal power struggles because of generative AI. Ninety-three percent of AI leaders say cross-department collaboration is a barrier to delivery.

The ROI Crisis Is a Strategy Crisis

PwC's 2026 Global CEO Survey found that 56% of CEOs say they've gotten "nothing out of" their AI investments. Only 12% reported that AI both grew revenues and reduced costs.

MIT's GenAI Divide study found that 95% of enterprise AI pilots fail to deliver measurable returns. In 2025, 42% of companies abandoned most of their AI initiatives, up dramatically from 17% the year before.

This isn't an indictment of AI as a technology. It's an indictment of how companies are deploying it.

The companies in PwC's "vanguard" group (the 12% getting both revenue growth and cost reduction) share common traits. They deploy AI at enterprise scale consistent with business strategy. They have clearly defined roadmaps. They've formalized responsible AI practices. And they've built organizational cultures that enable adoption from the top down.

The 88% who are struggling share a different set of traits. Fragmented pilots that never scale. Unclear objectives. No formal training programs. Leadership that talks about AI on earnings calls but doesn't use it in their own daily work.

What Actually Works (The Boring Answer Nobody Wants to Hear)

The data on what separates AI winners from AI failures is remarkably consistent across every major research firm. And it's not exciting. It's not "we deployed an autonomous agent swarm." It's organizational. It's structural. It's leadership.

Leaders use the tools. BCG's data is unambiguous: leadership engagement is the single strongest predictor of organizational AI success. Block's CTO built this into his approach ("leadership needs to use the tools daily"). When a CEO can't describe their own AI workflow, they cannot credibly lead an AI transformation.

Training is the prerequisite, not the afterthought. Only 25% of workers get formal AI training. That number needs to be closer to 100%. And it can't be a one-size-fits-all webinar. Training must be tailored to teach employees how to use AI within the context of their specific role. Accounting teams need different AI training than marketing teams. The investment in role-specific, hands-on training is what separates companies getting returns from companies getting nothing.

Workflows are redesigned before tools are selected. McKinsey found this is the single biggest predictor of meaningful financial returns. Companies that pick an AI tool and then try to figure out where it fits are burning money. Companies that identify their highest-value workflows, map where AI can genuinely improve them, and then select the right tool are the ones seeing results.

The goal is augmentation, not replacement. The Klarna Effect proved what happens when you swing for full replacement. The research consistently shows that the highest-performing AI deployments keep humans in the loop. They use AI to eliminate the tedious, repetitive, low-value work so that employees can focus on judgment, creativity, relationship-building, and strategic thinking. The things AI can't do.

Measurement is about behavior change, not logins. Most executives track AI seat licenses, query counts, and login frequency. That tells you nothing about whether AI is actually changing how work gets done. The analyst who uses AI to draft report sections hasn't transformed their workflow. The analyst who redesigned their entire analytical process around AI's continuous monitoring capabilities has.

What This Means for Your Company

Here's the part where I stop being a researcher and start being the guy who walks into businesses every week and helps teams figure this out.

You do not need AI to survive. Businesses operated successfully for decades before ChatGPT existed. Your company does not need to announce an AI strategy on your next earnings call. You do not need to cut headcount and hope that a chatbot picks up the slack.

What you need is a plan.

A plan that starts with leadership actually using the tools. Not delegating it to IT. Not asking the "AI committee" to figure it out. Personally. Daily. Understanding what the technology can and can't do, from firsthand experience.

A plan that invests in training your existing people rather than replacing them. The data is clear: upskilling your workforce is more effective, less risky, and produces better long-term outcomes than cutting and hoping. Every generation in your organization is more curious about AI than they are afraid of it. That's an extraordinary advantage. Don't waste it by handing them a pink slip instead of a training program.

A plan that redesigns workflows before buying tools. Identify the three to five highest-impact processes in your business. Map them. Find the bottlenecks. Then, and only then, evaluate which AI tools can address those specific problems. "We need AI" is not a strategy. "We need to reduce our quote-to-close time by 30% and AI-powered proposal drafting can help" is a strategy.

A plan that measures real outcomes, not vanity metrics. Don't track how many employees logged into your AI platform this month. Track whether your customer response time decreased. Whether your content production increased in quality and volume. Whether your team is spending less time on repetitive data entry and more time on analysis and decision-making.

A plan that treats every generation in your workforce as an asset. Your Gen Z employees bring AI fluency and experimentation. Your Millennials bring practical proficiency and workflow integration skills. Your Gen X employees bring strategic skepticism and the ability to audit AI outputs critically. Your Boomers bring institutional knowledge and the kind of judgment that no algorithm can replicate. A real AI strategy leverages all of it.

The Question That Actually Matters

The question isn't "Will AI take my job?"

The question is: "Does my company have a plan for how AI makes me better at my job?"

If your leadership team uses AI less than an hour a week, the answer is probably no. If your company announced layoffs and blamed AI without deploying mature AI applications to replace those roles, the answer is definitely no. If 95% of your AI pilots are failing and your CEO is still telling investors that AI is your top strategic priority, the answer is no.

But it doesn't have to stay that way.

The technology is real. The potential is real. Workers across every generation are ready. The tools are available, many of them free or low-cost, and they are genuinely powerful.

What's missing is leadership willing to do the boring, difficult, unglamorous work of building a real plan. Of using the tools themselves. Of training their people instead of cutting them. Of measuring outcomes instead of performing innovation for Wall Street.

The companies that do this work will be the ones that win. Not the ones that cut 40% of their workforce and hope the stock pop carries them through the quarter. Not the ones that mention AI on every earnings call but can't describe a single workflow it's transformed. Not the ones that treat "AI strategy" as a bullet point on a slide deck rather than a fundamental rethinking of how their organization operates.

The Bottom Line

The threat was never artificial intelligence.

The threat was always artificial leadership.

AI is the most powerful professional development tool your workforce has ever had access to. Every generation in your company is more curious about it than scared of it. The path forward isn't fewer people. It's better-equipped people, led by executives who actually use the technology they keep talking about.

Stop cutting. Start planning. Start training. Start leading.

That's how you win with AI. Everything else is theater.

Tim Bish

Tim cuts through AI hype to deliver research-backed insights for business leaders and technology professionals. He helps teams build practical, strategic AI capabilities through hands-on training and education.

Generational Adoption Data

  1. London School of Economics (2025): AI usage by generation survey. Gen Z 83%, Millennials 73%, Gen X 60%, Boomers 52%. Via Built In.
  2. McKinsey (2025): AI expertise by age group. 62% of workers aged 35-44 report high AI expertise.
  3. Pew Research Center (February 2025): "U.S. Workers Are More Worried Than Hopeful About Future AI Use in the Workplace." Survey of 5,273 employed U.S. adults.
  4. PwC Global AI Jobs Barometer (2025): Generational expectations for AI impact on work and wages.
  5. Deloitte Gen Z and Millennial Survey (2024/2025): AI attitudes and reskilling expectations.
  6. Randstad USA: "The Generational Divide in AI Adoption." AI usage habits and proficiency by generation.
  7. SurveyMonkey (September 2025): "2024 AI Trends By Generation: Who Uses AI The Most?" and "AI In The Workplace Statistics Report 2026."
  8. YouGov: Daily and weekly AI usage rates across generations.
  9. Salesforce: Gen Z generative AI adoption data (70% using GenAI).
  10. Barna Consulting Group (2024): Generational AI skepticism survey.
  11. AEM / Association of Equipment Manufacturers: "Understanding Generational Differences in the Age of AI." Micro-generational analysis.

Layoff & AI-Washing Data

  1. Challenger, Gray & Christmas (2025): 54,694 AI-cited layoffs of roughly 1.2 million total (4.5%). Via CNBC, December 2025.
  2. NBER Working Paper (February 2026): Survey of ~6,000 C-suite executives on AI employment and productivity impact. Via Fortune.
  3. Yale Budget Lab / Brookings Institution (2025): No overall change in employment for workers in AI-exposed occupations. Via Built In.
  4. MIT "GenAI Divide: State of AI in Business 2025": 95% of enterprise AI pilot programs failing to deliver measurable returns.
  5. Forrester (January 2026): Analysis of AI application maturity at companies announcing AI-related layoffs. Via The New York Times.
  6. Built In (2026): "Did AI Really Take Your Job? The Truth About AI Washing."
  7. The Conversation / University of Sydney (March 2026): "Tech companies are blaming massive layoffs on AI. What's really going on?"
  8. Fortune (February 2026): "'AI-washing' and 'forever layoffs': Why companies keep cutting jobs, even amid rising profits."
  9. Fast Company (March 2026): "This CEO explains what's really behind layoffs, and it's not AI."
  10. CNBC (November 2025): "AI-washing and the massive layoffs hitting the economy."
  11. NBC News (October 2025): "Tens of thousands of layoffs are being blamed on AI. What are companies actually getting?"
  12. Gil Pignol / Medium (March 2026): "Block's Layoffs Reveal the Great AI-Washing of Corporate America." Extensive sourced analysis.
  13. Quartz (February 2026): "'AI-washing' rises as companies blame AI for layoffs: What to know."

Executive Usage & Strategy Data

  1. Fortune / Nicholas Bloom, Stanford (March 13, 2026): "CEOs are mandating that employees use AI. They're hardly using it themselves." Survey of 3,000+ executives.
  2. NBER CEO Survey (February 2026): Executive AI usage amounts to ~1.5 hours/week; 25% not using at all. Via Fortune.
  3. BCG AI Radar (January 2026): 82% of CEOs more optimistic about AI; C-suite engagement = 12x more likely to be in top 5%. Via World Economic Forum.
  4. PwC 29th Global CEO Survey (January 2026): 4,454 CEOs across 95 countries. Only 12% report AI both grew revenues and reduced costs.
  5. EY Work Reimagined Survey (November 2025): 15,000 employees and 1,500 employers. 88% AI usage but only 28% achieving transformative impact.
  6. EY Agentic AI Workplace Survey (October 2025): 84% eager to embrace, 56% concerned about job security. 1,100+ U.S. desk workers.
  7. Writer Generative AI Adoption Report (2025): 71% developing AI in silos; 42% experiencing power struggles. Executive vs. employee perception gaps.
  8. McKinsey State of AI (November 2025): Workflow redesign as predictor of financial returns.
  9. Adecco Global Workforce of the Future Report (2025): Workers save avg 2 hours/day; only 25% receive formal AI training.
  10. Conference Board C-Suite Outlook Survey (2026): AI as strategic priority, workforce readiness as key constraint.
  11. Gartner (March 2025): "AI productivity paradox." 30% of GenAI projects abandoned after proof of concept.
  12. MIT Sloan Management Review: Executive AI literacy research. Analysis of 6,986 executives' AI skills.
  13. CIO.com (September 2025): "Executives love their AI rollouts, but employees aren't buying it."
  14. PwC 2026 AI Predictions: Top-down vs. crowdsourced AI strategy analysis.

Block / Codename Goose

  1. Fortune (March 6, 2026): "Exclusive: Block's CFO explains the AI leaps over 18 months." CFO Amrita Ahuja interview.
  2. Anthropic / Claude Customer Story: Block's deployment of Codename Goose. 75% of engineers saving 8-10+ hours weekly.
  3. Block Engineering Blog (January 2026): "AI-Assisted Development at Block." AI Champions program details.
  4. Lenny's Newsletter / Dhanji Prasanna interview: Block CTO "AI manifesto" and leadership principles.
  5. CIO.com (October 2025): "How Block is accelerating engineering velocity through developer experience."

Worker Sentiment

  1. PwC Global Workforce Hopes and Fears Survey (2025): Workers 2x more likely to be curious/excited than worried/confused.
  2. MetLife (March 2026): 61% worried about AI ethics/safety risks; 59% fear obsolescence. Via CNBC.
  3. Resume Now / HR News Feed (March 2026): 63% say AI will make the workplace feel less human. Survey of 1,003 employed U.S. adults.
  4. BCG AI at Work Survey (July 2025): More AI usage correlates with more concerns. "Silicon ceiling" for frontline workers.