Degreed's 2026 research found that nearly 95% of businesses see zero ROI on in-house AI investments. WRITER's enterprise AI adoption survey confirms the pattern. The shared diagnostic across both: it's not a tool problem. It's a coaching problem.
Degreed's 2026 L&D trends report includes one of the more uncomfortable findings of the current AI adoption cycle: nearly 95% of businesses have seen zero return on in-house AI investments, and only 15% of generative AI users report their organizations see significant ROI from the technology. WRITER's May 2026 Enterprise AI Adoption survey, which polled hundreds of enterprise organizations, reinforces the same diagnostic from a different angle — agent adoption is no longer limited by model capability. The constraint has moved downstream. The tools work. The investment is being made. The return is not appearing.
The Degreed framing of why this is happening is direct: capability, not access, determines whether technology has an impact. Most organizations have over-invested in tools and access while under-investing in the structured human development that makes the tools productive. The tools are doing what they were designed to do. The people using them have not been coached on what to actually do with them.
For Powered's mid-market reader, this is not an abstract finding. It is the operational explanation for why the AI investments approved in 2024 and 2025 budgets are now coming up for renewal with no measurable productivity gain to point to. The coaching infrastructure that would have made those investments productive was not built alongside the tool deployment. Three myths about how to fix this are still circulating in mid-market leadership conversations, and each is producing predictable failure modes.
The first myth is the easiest to default into because it matches how procurement teams are organized. It says: the zero-ROI problem is a tool problem. The current tools are not yet capable enough; the next generation will close the productivity gap; the strategic move is to wait for the next platform release or to upgrade to the more advanced tier. The implication is that the ROI gap will close on its own as the tool capability improves.
The WRITER 2026 survey directly contradicts this. The headline finding — that agent adoption is no longer limited by model capability — moves the diagnostic away from tools and toward systems. Forty-six percent of respondents identified integration with existing systems as their primary challenge. Not model selection. Not capability tier. Integration into the operational substrate the organization already runs. The constraint has moved, and procurement teams that continue to optimize the tool layer are optimizing a variable that is no longer the binding constraint.
Degreed's framing of why this matters operationally is consistent. Nearly 95% of businesses reporting zero ROI is not a distribution that improves materially with a tool upgrade. The bottom of the distribution may shift slightly; the median will not. The companies in the top 5%, the ones seeing a measurable return, are not running materially better tools than the bottom 95%. They are running materially better integration, change management, and capability development around the same tools. The difference is human systems, not technology.
For mid-market organizations, the tactical implication is to redirect the marginal AI dollar away from the tool layer and toward the integration and capability layer. The next $100K in AI budget will produce more measurable ROI spent on integration engineering, manager-level coaching, and structured workflow redesign than on premium-tier subscriptions. The procurement structure most mid-market organizations operate makes this redirection harder than it should be: tool budgets are easier to approve than coaching budgets, and the procurement category structure was designed before the integration-capability constraint became dominant. The category structure needs to adapt before the spending pattern can.
The second myth is the natural reaction to the first. If the constraint is capability rather than tools, then the answer is more training — more courses, more LMS modules, more certification pathways, more hours of structured AI education. This framing has the advantage of being actionable and the disadvantage of not matching what the research shows actually works.
The Udemy 2026 Global Learning & Skills Trends Report frames the distinction directly. The kind of capability that produces AI ROI is built through practice in context, not through course completion. Skills stick when they are "exercised, adapted, applied, and refined on the job and in real-world projects." The classroom-and-certification model that most enterprise L&D infrastructure is built around does not match the development pattern that AI capability actually requires. The training is happening; the capability is not transferring.
The SHRM 2026 L&D Executives Priorities and Perspectives report surfaces the same pattern from the practitioner side. One in three L&D executives reported skills training as a top priority for 2026, with personalized learning paths and role-specific training as the leading focus areas. The shift is from generic training catalog access toward embedded, in-workflow learning. The companies seeing AI ROI are not the ones running more training hours; they are the ones embedding learning into the actual work that AI tools are supposed to make more productive.
What separates this from the "more training" myth is the structural shape of the coaching infrastructure. Effective AI capability development requires:
A coaching cadence at the manager level. Not annual training but weekly or bi-weekly debrief conversations where managers and team members work through specific AI-assisted work products and refine the workflow together.
Structured stretch assignments that put AI tools into the real workflow under realistic time pressure, with explicit debrief at the end. The Association for Talent Development research cited in multiple 2026 trends reports shows microlearning paired with reinforcement improves knowledge retention by up to 20% — but the "reinforcement" is what is doing the work, and most organizations skip it.
Documented feedback loops that capture what is working and what is not, then redistribute that learning across the organization. The failure mode WRITER describes, "super users" producing extraordinary results with no mechanism to spread those practices enterprise-wide, is exactly what this loop exists to prevent. The capability exists; the diffusion infrastructure does not.
The transferable insight for mid-market organizations is that the AI ROI problem is not a content acquisition problem. The marginal LMS module produces approximately zero ROI lift on top of the existing training stack. The marginal coaching conversation, the marginal structured debrief, the marginal documented feedback loop — these produce measurable lift. The shape of the investment needs to change, not just the size.
The third myth is the most common operational response in mid-market organizations and the most subtle to dismantle. It says: appoint AI champions or AI leads at the team or functional level. These individuals become local capability hubs. They coach their colleagues, run office hours, share best practices, and progressively diffuse the capability through the organization. The diffusion, on this account, happens organically.
The framing has surface logic and produces real local wins. Most mid-market organizations running an AI champions model can point to specific teams where the model has worked. What the WRITER 2026 survey makes visible is that local wins do not aggregate to enterprise outcomes. The five failure modes WRITER documents share a common diagnostic: organizations have super users delivering extraordinary results, but no mechanisms to spread those practices enterprise-wide. The champions are real. The diffusion is not.
The structural reason the champions model fails to diffuse is that it relies on voluntary effort layered on top of existing role responsibilities. The AI champion in a mid-market finance team is doing their actual job — running monthly close, building forecasts, partnering with operations on variance analysis — plus the champion role. The champion role is the discretionary effort. When the actual job gets busy (which is most quarters), the champion effort is the first thing to slip. The capability stays local because the diffusion infrastructure is built on volunteer hours that are not protected.
The Edstellar 2026 corporate L&D trends research surfaces the alternative model. The companies that are actually diffusing AI capability at scale are doing it through embedded coaching — making AI workflow coaching a formal part of the manager's job, with allocated time, structured cadence, and performance measurement attached. The champions model decentralizes capability into discretionary effort. The embedded coaching model decentralizes capability into structured manager responsibility. The first produces local wins; the second produces organizational capability.
For mid-market organizations, the practical recommendation is to stop investing in the AI champions model as the primary diffusion mechanism. Champions can still exist as an informal layer, but the formal infrastructure should run through the manager line — manager coaching cadences, manager-level skill development for facilitating AI workflow conversations, and manager performance measurement that includes team-level AI productivity gains. This is harder, slower, and more expensive than the champions model. It is also the only structure the research supports for actual enterprise-wide diffusion.
Three operational moves, each derivable from the Degreed-WRITER diagnostic.
First, audit the existing AI capability investment against the 95% zero-ROI baseline before renewing any 2024–2025 contracts. Most mid-market organizations approaching their first AI contract renewals in mid-2026 will be tempted to renew at parity or upgrade tier. The Degreed and WRITER findings suggest the better default is to renew at a smaller tool footprint and reallocate the savings to integration and coaching infrastructure. The renewal conversation is the natural inflection point. Use it.
Second, formalize the coaching cadence at the manager level. The investment shape that produces AI ROI is structured, recurring conversations between managers and team members about specific AI-assisted work products. This requires manager time allocation, manager skill development (most managers have never been coached on how to coach AI workflows), and manager performance measurement that includes the practice. Each of these is a multi-quarter workstream. Start now and accept that the ROI on the coaching investment will not show up for two to three quarters.
Third, build the diffusion infrastructure that the champions model lacks. Documented case studies of what is working in specific teams. Structured sharing forums (not voluntary lunch-and-learns but formal cross-functional reviews). Manager-level skill exchanges. The infrastructure is harder to build than the champions model is to appoint, but the difference is what separates the 5% of organizations seeing ROI from the 95% that are not.
The AI investments approved in the past 18 months were largely tool investments. The renewals happening in the next 18 months will determine whether those investments produce return or remain in the 95% zero-ROI bucket. The variable that separates the two outcomes is not the tool quality. It is the coaching infrastructure that surrounds the tool. Mid-market organizations that recognize this in time will rebuild their L&D and AI procurement around the constraint that actually binds. Those that do not will run another 18-month cycle producing the same outcome and reaching the same diagnostic, on a delay.
The Degreed report calls this the difference between platforms and ecosystems. In coaching language, it is the difference between giving people tools and developing the capability to use them well. The first is easy and produces no return. The second is harder and produces the only return available.
"The tools are doing what they were designed to do. The people using them have not been coached on what to actually do with them."
