AI Momentum, Two Rooms, Same Reality: What HR Leaders in Vegas and Paris Taught Me About Risk, Governance, and Growing Up with AI

When we kicked off the AI Momentum Model study with SmartRecruiters, I had a quiet fear in the back of my mind: What if the data doesn’t match reality?

We ran a blind survey of roughly 360 HR, TA, and HR IT leaders around the world. We built a model. We defined seven drivers of AI momentum. We mapped organizations into archetypes. On paper, it all held together (and you can download the full report here).

But survey responses are one thing. Sitting in a room with real HR, TA, HR/TA Ops, and HR IT leaders who live this work every day is something else entirely.

As part of the project, I had the chance to do exactly that—twice:

  • First with a group of SmartRecruiters’ strategic customers from North America (NAM) at HR Tech in Las Vegas
  • Second with a similarly senior group from the EU & UK (EUK) at UNLEASH World in Paris

We walked through the same research. Reviewed the same charts and graphs. And had two very different—but also surprisingly similar—conversations. Here’s the headline that set the table for both conversations:

  • In our benchmark, 22% of our aggregated survey respondents were identified as Leaders—reaching the highest two stages of AI adoption in HR.
  • Among respondents in NAM, 23% were identified as Leaders, 48% as Laggards.
  • Among respondents in the EUK, 21% were identified as Leaders, 39% as Laggards.

So the maturity curve is very similar at the top—roughly the same share of Leaders on both sides—but with more NAM organizations Lagging than in the EUK. The question is how they’re getting (or not getting) there—and what’s really holding them back.

Gut Check: My First, Most Personal Finding—This Research Slaps

Listening back to both sessions, my first reaction was honestly just… relief.

Because the survey panel was blind, all I could do up front was validate that people were in HR or TA, ask about their role, and screen for relevance. You do your best to design a solid instrument. You control for obvious noise. And then you hope for the best. It’s the nature of third-party survey work these days.

But in both rooms—Vegas and Paris—people weren’t just politely nodding at the results. There were lightbulbs going off. They were reconciling their own experiences with the findings—and feeling good about it.

  • The Seven Drivers of AI Momentum landed. Not as theory, but as, “Yep, that’s exactly the stuff we’re wrestling with.”
  • The best practices we’d surfaced—around literacy, coalitions, governance, posture, integration—felt less like a wishlist and more like a mirror.

You could almost feel a collective sigh of relief when we got to one core idea:

“Going all in on AI” is not what separates Leaders from Laggards.

Buying more AI isn’t the unlock. Turning on every feature your vendors are pushing at you isn’t the strategy.

What moves the needle? All the unsexy, foundational work of transformation:

  • Getting your data and processes into a state where AI can actually help.
  • Building real coalitions with IT, legal, compliance, finance.
  • Developing literacy and governance that are specific to HR—not just whatever the enterprise is doing.

Both rooms visibly relaxed when they realized: “We’re not behind because we didn’t ‘go big’ on AI two years ago. We’re behind if we’re ignoring the basics.”

“My favorite takeaway was learning how companies in the EU and North America are tackling AI fluency. They aren’t just taking online AI courses to learn what an LLM is. They’re tasked with doing hands-on projects and experiments, and then they share their learnings, successes, and failures in a structured way across team and company-wide meetings.”

  • Jake Paul, Head of Product Innovation, Kyle & Co.

The Two Themes That Hit Hardest: The Need for Risk Management and AI Governance for HR

Across both sessions, the drivers that clearly hit the deepest were Risk and Governance—and the relationship between the two.

It was encouraging to hear how many of these leaders are already trying to navigate risk, not just avoid it:

  • They’re working with AI councils and governance committees.
  • They’re pushing beyond “bias is bad” soundbites and asking, “What does real risk actually look like in this use case?”
  • They’re fighting for HR-specific governance instead of trying to wedge recruiting, internal mobility, and workforce planning into a generic enterprise AI policy that was written for something else.

What really stood out, though, was this:

North America is not the Wild West.

The EU/UK is not in wait-and-see mode.

That’s the lazy narrative—and it’s wrong.

In both rooms, these leaders were in active, ongoing conversations with their internal partners about how to:

  • classify and prioritize risk,
  • build guardrails that actually fit HR use cases, and
  • find responsible paths forward rather than hiding behind “we can’t because AI.”

The surface stories are different, but the work underneath is remarkably similar. Here are some key takeaways from each session:

AI in North America: Momentum with Friction

If I had to characterize the North American room in three words, it would be:

  • Restless
  • Political
  • Determined

This group lives in a familiar tension:

  • On one side, you have CEOs who are huge proponents of AI—leaning in, asking for progress, expecting HR to show up with answers.
  • On the other, you have legal and compliance leaders who absolutely should be cautious about data, security, and liability—but who sometimes express that caution by trying to remove or neuter the AI entirely.

People talked candidly about legal being a “wet blanket,” and yet in the same breath said, “We want them in the room. None of us wants to be on the front page for the wrong reasons.”

This is not some cowboy culture ignoring risk. It’s a set of leaders stuck in governance that isn’t quite fit for purpose yet, doing what they can to move anyway.

A few things felt very North American to me:

  • The willingness to use political capital and pain as tools. Someone in the room said, “Make the problem suck big enough for everyone, especially the CIO, so they’re motivated to solve it with you.”
  • The instinct to push for guardrails that enable usage, not just policies that say “no.”
  • The growing recognition that data hygiene and process discipline are precursors—not nice-to-haves—if you want AI to work in your stack.

It’s “momentum culture,” yes. But it’s momentum trying to move through governance, not around it.

AI in the EU/UK: Navigating Compliance with Intent

If the North American room was restless and political, the EU/UK room felt:

  • Measured
  • Comparative
  • Principled

They opened with a different kind of worry:

“If HR is falling behind, who’s actually ahead of us in the business?”

They pointed to functions like finance and sales—places where AI’s been embedded into analytics and decision-making with fewer direct consequences for people’s careers. Not because there’s no risk there, but because the perceived regulatory and ethical stakes are lower.

Layer on top of that:

  • The GDPR hangover, where the instinct is to “put a life vest on everything” to be absolutely sure you’re covered.
  • The looming EU AI Act, where TA products often default into “high risk” regardless of nuance.

And yet, when they described what they’re actually doing, it wasn’t passive at all.

I heard EU/UK leaders talk about:

  • Standing up AI data, governance & ethics teams that do true, use-case-by-use-case assessment—looking at societal, individual, and company impacts.
  • Collaborating deeply and proactively with legal, compliance, IT, and cyber from the start.
  • Starting with “easy” use cases, knowing full well they’re laying the foundation for harder ones later.

One phrase that stuck with me from the discussion was the idea of “being a Laggard for good reason.”

I don’t always love that framing—but I respect the honesty in it. What they meant was:

“We’re intentionally slower on deployment because we’re doing the hard work of building the governance and ethics structures first.”

In a market where regulation isn’t theoretical but very real, that’s not denial. That’s a strategy.

My Big Recommendation: HR Needs to Remember It Knows All About Risk

If there’s one message I want HR and TA leaders on both sides of the pond to walk away with, it’s this:

When it comes to navigating AI in HR, remember: you are not starting from zero on risk.

We are already privy to, and often driving, some of the highest-impact, highest-risk decisions our organizations make:

  • Workforce reductions and restructures.
  • Designing and enforcing performance management processes.
  • Balancing highly competitive offers with pay equity for existing teams.
  • Deciding when to coach and when to exit someone.

HR guides leaders through complicated decisions with real impact every day.

We know from experience that there is always nuance, that there is almost always a human factor that needs to be considered—and very few decisions are as clean as a policy would like them to be.

AI doesn’t erase that reality. It just adds a new surface area where our judgment, experience, and ethics are needed.

So yes—lean into new literacy. Build better, HR-specific governance. Partner differently with your CIO, CISO, CHRO, and GC.

But don’t buy into the idea that you’re starting from scratch. You’re not. AI is simply the next arena where all of that existing expertise is coming to a head.

Parting Thoughts: What I Wish I’d Had More Time For

If I’d had five more minutes with each group, I wouldn’t have used it to present more slides.

I would’ve used it to ask more questions.

Now that I’ve seen how consistently the Seven Drivers of AI Momentum land—and how much energy there is around risk, governance, and foundations—I want to go deeper into the “how.”

I want stories like:

  • How did you actually raise Literacy in your HR, TA, or leadership team?
  • What did it take to build HR-specific AI Governance that your legal and compliance teams trust?
  • Where did you start with Strategic Posture—one KPI, one pilot, one function?
  • Which Coalition moves actually unlocked progress—and which ones flopped?

I don’t just want to know that leaders are wrestling with this stuff—I want to know what they’re doing that’s working.

That’s the next phase of this work: We’re not just mapping AI Momentum from the Ivory Tower—we’re collecting the playbooks that show how organizations really are building it.

So… Hi. 😊 Whether you’re somewhere in North America or the EU/UK—or anywhere else that AI is being deployed across and/or in partnership with HR—and you’re doing interesting, messy, honest work… I want to hear from you. Whether you’d call yourself a leader, a laggard-for-good-reason, or something in between—we all need to be learning, growing, and innovating together.

And what interesting timing… We’re relaunching my podcast, Transformation Realness, this quarter. If you’re interested in joining the convo and sharing your work—or have someone to nominate—hit me up!

Until then, keep it going. Momentum requires motion. And now we know there are plenty of places where it’s safe, straightforward, and inexpensive to begin.

Cheers all!
