McKinsey's Development in the Future of Work research, surfaced widely through early-2026 reporting cycles, includes a finding that has been quoted in nearly every L&D and HR strategy deck written so far this year: two-thirds of global executives plan to prioritize human-centered capabilities — problem-solving, adaptability, collaboration — in their hiring and development strategies. The TalentLMS 2026 L&D Report, citing the McKinsey work, frames it as a confirmed strategic shift: leadership training is now the second-highest L&D priority for 2026, cited by 64% of HR managers; soft skills development is the explicit focus for 50%; and cross-functional upskilling is the priority for 57%.

The data is consistent. The strategic intent is real. What is missing is the operational infrastructure to act on it.

Most companies hiring in 2026 are running a funnel and screening process built between 2008 and 2018, when the dominant signal in mid-market and enterprise hiring was credentials — degree, certification, prior employer brand, demonstrated technical skill on a take-home or whiteboard task. The capability signals that executives now say they want to prioritize — adaptability, collaboration patterns, problem-solving under ambiguity — are not signals the existing funnel was designed to capture. The hiring intent is shifting faster than the hiring infrastructure. And the gap between intent and infrastructure is producing three predictable misreads in how leadership teams are operationalizing the shift.

Myth 1: Soft skills aren't really trainable

The first myth is the oldest and the most persistent. It says: capabilities like adaptability, problem-solving, and collaboration are personality traits rather than skills. They can be screened for during hiring but cannot be developed meaningfully once a person is in the organization. The implication is that L&D investment in these areas is largely cosmetic — the real lever is hiring better, not training more.

The Udemy 2026 Global Learning & Skills Trends Report directly challenges this framing. Udemy's research, drawing on usage data from enterprise customers, finds that what the company calls "adaptive skills" — the cluster of capabilities that includes problem-solving, learning agility, and contextual judgment — are trainable when development is structured around practice-in-context rather than classroom delivery. The Association for Talent Development reports that microlearning paired with reinforcement can improve knowledge retention by up to 20% compared to longer, single-session training. The McKinsey research itself frames soft skills as developable through deliberate practice, mentor exposure, and structured feedback cycles — the same conditions that develop technical capability.

What is true is that the development methods that work for soft skills are different from the methods that work for hard skills. Sit-in-a-room training transfers technical knowledge reasonably well; it does not transfer collaborative judgment or adaptability. The skills that executives now say they want to prioritize require development infrastructure that most mid-market organizations have not built — apprenticeship pairings, structured stretch assignments, regular debrief cycles after high-stakes work, mentor exposure across functions. Building this infrastructure costs more and takes longer than buying a new LMS module. The cost gap is the reason most organizations default to the framing that soft skills cannot be trained — it is a cheaper conclusion than the one the research actually supports.

The transferable insight for mid-market leadership teams: if the strategic intent is genuinely to prioritize human-centered capabilities, the L&D investment shape needs to change. More structured stretch assignments, more cross-functional rotation, more deliberate mentor exposure. Less buying of new content libraries. Degreed's 2026 trends research finds that enterprise companies are increasingly paying for content libraries that don't get used enough to justify the expense — the marginal return on the seventh microlearning module is approximately zero. The marginal return on a well-structured rotation program with feedback discipline is significant.

Myth 2: The hiring funnel will catch these skills

The second myth is more recent and more dangerous because it allows organizations to feel that they are acting on the strategic shift without actually changing how they hire. It says: yes, we are now prioritizing human-centered capabilities, and yes, our existing funnel will surface candidates with those capabilities because behavioral interviewing techniques are designed to do exactly that.

The Accenture finding cited in the TalentLMS 2026 report puts a number on the gap here: executive perception of how well their organizations support AI experimentation and development differs from employee perception by as much as 16 percentage points. The pattern is consistent across most measures where executives report on capability development inside their own organizations — what leadership teams believe is happening at the practice level diverges from what is actually happening, by a wide enough margin that the strategic intent rarely survives transmission to the hiring funnel.

The mechanical problem is that the hiring funnel — applicant tracking system filters, resume keyword matching, structured interview templates, take-home technical assessments, reference-check protocols — is calibrated for what was prioritized when the system was last redesigned. For most mid-market organizations, that redesign happened between 2015 and 2020, with technical and credential signals dominating. Adapting the funnel to capability signals is not a configuration change. It requires redesigning the screening rubrics, retraining the hiring managers, restructuring the interview loops, and shifting the calibration meetings that decide between candidates. Each of these is a multi-month workstream. Most organizations announce the strategic shift in a leadership offsite and assume the funnel will follow without explicit redesign work. The funnel does not follow.

The downstream effect is that organizations that say they want human-centered capabilities continue to hire for credentials and technical fit, then express surprise at the talent gap two years later. Hiring for capabilities means measuring something the standard funnel was not built to measure. Most companies are still trying to do it with the funnel they have.

For mid-market leadership teams, the practical implication is that announcing the strategic shift is the first 5% of the work. The other 95% is operational: redesigning the screening rubrics, building behavioral case-study assessments that surface adaptability under realistic ambiguity, retraining the hiring panel on what to listen for, and committing to the calibration discipline that makes new screening criteria consistent across hiring decisions. Organizations that do the first 5% without the other 95% will get the same hires they have been making, with a different cover sheet on the strategy deck.

Myth 3: AI-fluency screening replaces this

The third myth is the newest and is rapidly displacing the older framing. It says: the most important capability for the 2026 workforce is AI fluency, and AI-fluency screening — testing candidates on their ability to use AI tools effectively — captures the underlying adaptability we care about. If a candidate can use AI tools well, they are demonstrating problem-solving, learning agility, and the human-centered skills that matter.

The framing has surface logic but does not survive close examination of the underlying data. The 2026 Emeritus Global Workplace Skills Study found that 90% of professionals regularly reassess their skills, and nearly 90% now use AI tools for skill development — close to 60% on a weekly basis. AI tool usage is approaching saturation across professional workforces. It is therefore a poor differentiating signal at the hiring stage. Everyone uses the tools. What varies is how well they use them in context, which is a capability signal, not an AI-fluency signal.

The TalentLMS 2026 research surfaces this distinction explicitly. AI training inside organizations is shifting from awareness to performance — 72% of HR managers report the main purpose of AI training is to improve organizational efficiency, and 70% see it as a way to prepare employees for future-oriented skills. The leading indicator of strong AI performance inside organizations is not AI fluency per se, but the underlying adaptive skill set that lets people apply AI tools to specific work problems with judgment. Screening for AI fluency at the hiring stage is screening for a saturating signal; screening for adaptive capability is screening for the actual variable that produces performance.

The McKinsey position on this is consistent. The two-thirds of executives prioritizing human-centered capabilities are not prioritizing them as a complement to AI fluency — they are prioritizing them as the underlying capability that makes AI fluency productive. The Udemy framing is that adaptive skills are "complementary to AI skills and empower people to excel in an AI-driven world." The capability ordering matters. Adaptive skill is the leading variable; AI fluency is the application of that adaptive skill to a particular set of tools.

For mid-market leadership teams, the practical recommendation is to test AI fluency in context rather than in isolation. A take-home assessment that asks a candidate to use AI tools to solve a realistic business problem under ambiguous conditions surfaces both AI tool usage and the underlying adaptive judgment. An isolated test of AI tool knowledge surfaces only the first variable. The same hiring panel time produces meaningfully different signal density depending on how the assessment is structured. Most organizations are running the isolated test because it is easier to design and score.

What leadership teams should do in the back half of 2026

Three operational recommendations.

First, audit the hiring funnel against the stated strategic intent. If the executive team says it is prioritizing human-centered capabilities, the screening rubrics, interview templates, and calibration practices should reflect that priority. Most organizations have not done this audit. The audit itself is a one-quarter project at mid-market scale and produces a clear, actionable gap analysis. Do the audit before announcing further strategic shifts.

Second, redirect L&D budget from content acquisition to development infrastructure. The Degreed research, the Udemy research, and the TalentLMS research all point in the same direction — the marginal return on the next content library is small, and the marginal return on structured practice, mentor exposure, and rotation is large. Most L&D budgets are still weighted toward content acquisition because content acquisition is easier to procure, contract, and report. The procurement structure is constraining the strategic outcome.

Third, build behavioral assessments that test capability-in-context. The hiring funnel can be redesigned to surface adaptive capability — through case-study work, paired problem-solving exercises, and structured debriefs that test how candidates think under ambiguity rather than what credentials they hold. This is meaningfully harder than designing a take-home technical assessment, but it is the design work that translates strategic intent into hiring reality. Organizations that do this work will outperform those that announce the shift without operationalizing it. The McKinsey survey will look the same; the hires will be different.

The strategic intent to prioritize human-centered capabilities is now the consensus position in 2026 leadership thinking. The operational gap between that intent and the hiring infrastructure most organizations actually run is the under-discussed variable. Closing the gap is what separates organizations that benefit from the strategic shift from organizations that announce the shift and continue to hire as they always have.

"Hiring for capabilities means measuring something the standard funnel was not built to measure. Most companies are still trying to do it with the funnel they have."
