I recently joined Kermit Randa, CEO of Symphony Talent, and Robin Epstein Ludewig — a veteran HR leader who spent nearly three decades shaping talent strategy at organizations like UCLA Health — for a conversation on something I think about constantly: the widening gap between what talent acquisition does and what it can actually prove.
It was one of those discussions where the hour disappears. And when it was over, I kept coming back to the same through-line: most of the problems we're trying to solve with AI and analytics aren't new problems. They're old problems we never fully solved — and we're now layering expensive technology on top of them.
That's not a comfortable take. But I think it's the right one.
The Funnel Is Broken Before The Data Is Bad
One of the first things that came up was visibility — or rather, the lack of it.
Kermit quoted a couple of stats from Symphony Talent's research, including that nearly ninety percent of TA teams say they lack full funnel visibility. More than half say they don't have meaningful data insights. Those numbers aren't surprising to anyone who's worked inside these organizations.
We dug into exactly this in our Quality of Hire research. The organizations that measure it best are the ones that defined it first — across TA, HR, and the business.
The data problem is downstream of an alignment problem.
When I was in the practitioner seat, we didn't try to boil the ocean on quality of hire — we didn't have the time or the bandwidth. So we got specific about one thing: My team's job was to deliver interview-quality candidates.
That was our KPI. Not volume, not pipeline size — people who would actually make it to the recruiter screen.
It sounds narrow. But it was practical and clarifying. Because once you define that, you can start to have an honest conversation about what branding is attracting, what paid advertising is converting, and what the recruiting team is ultimately delivering to the hiring manager.
A lot of teams never have that conversation — and that gap creates real problems when it comes time to measure ROI. Branding has one definition of quality, recruiting has another, and hiring managers have their own — and few organizations have actually reconciled them.
That's precisely where the funnel collapses. Not in some dramatic system failure, but in a hundred small moments of "I thought you were tracking that."
This mirrors what we found in our Strategic Workforce Planning research: TA's exclusion from the planning process is one of the most expensive structural problems in HR — and one of the least discussed.
Robin put a fine point on it: At UCLA Health, she had a rule — you didn't open a req until there was agreement on what a successful candidate looked like. Profile first, then recruit. Sounds obvious. Most teams still aren't doing it.
AI Chapter One Was Just More Of The Same, Faster
There's been a lot of energy — and a lot of budget — going into AI in talent acquisition. And I'll be honest: Some of what I'm seeing now is genuinely different. Teams are running real pilots with defined problem statements and actual KPIs. That's a meaningful shift from a few years ago, when a "pilot" mostly meant free seats and vague enthusiasm.
Our research on AI momentum in HR found that nearly half of organizations are still stuck in exploration mode — activity without momentum, cycling through tools while old patterns persist.
But we should be clear-eyed about where we've been. What I'm calling AI Chapter One was mostly automation dressed up as optimization theater. We cut recruiting teams down to skeleton crews and used AI to scale the most basic elements of the programs that were already running.
We accelerated the operation. We didn't improve it.
The result? Higher volume, lower signal. More applicants, more noise, roughly the same outcomes — and in some cases, worse ones, because the flood of low-quality and bot-generated applications has made it harder to find the real candidates inside the pile.
If you're not clear on what's performing, what exactly are you optimizing?
Employer Brand Isn't A Marketing Problem. It's A Performance Problem.
Robin said something that I've been repeating since. When she was at UCLA, new managers used to call her and say, "You lied to me. Nobody here talks the way you talk." That stopped happening when they integrated employer brand into everything — not just attraction, but the entire employee experience through year one.
That's the actual ROI of employer brand — not impressions, not awareness, but whether what you promised in the recruitment process is what people actually experience when they show up.
Personally, I think we're seeing a surge right now in organizations going back to do real EVP work because the trust gap between employers and workers is real (and it's wide).
Candidates are doing more research than ever, including through AI-driven platforms, before they ever engage. That means your brand is being interpreted and contextualized in places you don't control.
The answer isn't more content. It's more honesty, more clarity, more real conversation. Be clear about what you expect from candidates — and what they can expect from you. Be clear about how you use AI in your process.
Get grounded. The organizations that are winning this right now aren't out-storytelling everyone else. They're just being more coherent.
The Big Challenge Is The Big Opportunity: Proving Value
Kermit asked both Robin and me what proving TA's value actually looks like to a CFO or a board. I think about this a lot.
Here's what I know: Data alone doesn't do it. I've never walked into a room and made an impact by reporting survey results. I show what the data reveals, then I go find the story underneath it.
What are people actually saying? What changed between last year and this year, in their own words? What outcome can I point to that a business leader would recognize as real?
Robin's point about stay interviews lands here. The organizations that have figured this out aren't asking why people leave. They're asking why people stay — and reporting that back to the business in a way that drives action.
The other thing I'd add: don't put shine on the apple.
Executives have lived through enough failed initiatives to know when the numbers are being massaged. Show what isn't working. Show how you're pivoting. Show what you stopped spending money on. That's credibility. That's what earns the trust to keep investing.
The Closing Thought I Keep Coming Back To
Kermit wrapped the conversation with a line I want to steal: Activity is easy to measure. Impact is not.
The teams that pull ahead won't be the ones doing more. They'll be the ones who can clearly see what's working, prove it to the business, and act on it.
That's not an AI problem. It's a leadership discipline problem. And it's very much still up for grabs.
Kyle Lagunas
Founder of Kyle & Co and Executive Director of the Human-Centric AI Council. Former IDC analyst and in-house TA leader. Writing about the gap between what HR does and what it can prove — and what to do about it.
