
From Under 20% to 71%: What an Asset Management Firm's Failed Training Programmes Actually Taught Us

A mid-sized asset management firm ran two failed AI training programmes before reaching 71% completion. Here's what changed and what L&D leaders can learn from it.

Overview

Two AI training programmes. Six hundred employees. Fewer than one in five completing either of them.

That was the starting point when a mid-sized asset management firm first approached me about a third attempt. They had executive sponsorship, adequate budget, access to established learning platforms, and internal communications support. On paper, the organisation looked ready. In practice, almost nobody meaningfully engaged.

By most conventional L&D measures, they had done the "right" things. There was senior sponsorship. There was budget allocation. There were launch announcements. Completion dashboards existed. Managers had visibility. The problem was that the training itself had almost no relationship to how employees actually worked.

What followed was a complete redesign.

Twelve months later, the completion rate had risen to 71%, and 58% of participants had demonstrated measurable workflow change after three months. More importantly, the internal conversation around AI shifted from anxiety and compliance to practical application.

This article explains what changed, why the first two programmes failed, and what L&D leaders should understand before launching enterprise AI training inside knowledge-intensive organisations.

The Real Problem Was Never Motivation

One of the most damaging assumptions in enterprise AI adoption is that resistance comes from employees being unwilling to learn.

In reality, most professionals are perfectly willing to adopt systems that clearly improve their work.

The problem is that most corporate AI training never demonstrates that improvement concretely enough.

The asset management firm had intelligent, high-performing employees operating in a highly regulated environment where precision mattered. These were analysts, operations specialists, client services professionals, and portfolio teams working under significant cognitive load.

They were not anti-technology.

They were anti-irrelevance.

The first programme focused heavily on platform capability demonstrations. Participants were shown what the software could do. There were walkthroughs, feature explanations, and templated exercises. The second programme shifted towards broader AI literacy content delivered through the organisation's existing learning management system.

Both programmes suffered from the same structural flaw.

Neither answered the question employees were actually asking:

"How does this help me do my specific job better tomorrow morning?"

Without that connection, training becomes informational rather than transformational.

People complete modules because they are required to. They do not integrate the behaviours into their workflows.

Two Programmes, One Structural Failure

The first programme was effectively a software rollout disguised as capability development.

This is extremely common.

Vendors frequently position product exposure as workforce readiness. Employees are taught interface navigation, features, prompt examples, and generic use cases. The implicit assumption is that exposure naturally leads to adoption.

It rarely does.

The issue is not that software training is useless. The issue is sequencing.

If employees do not first understand:

  • where AI creates leverage in their role
  • where AI introduces risk
  • where judgment remains essential
  • how to evaluate outputs critically
  • how the technology fits inside existing workflows

then tool demonstrations become disconnected from operational reality.

The second programme attempted to solve this through broader AI literacy.

Unfortunately, generic AI literacy often creates another problem.

It becomes too abstract.

Employees learn broad concepts about AI transformation, industry disruption, and future-of-work narratives, but they still leave without practical integration strategies connected to their day-to-day responsibilities.

In this organisation, analysts sat through the same material as operations staff. Client services teams received the same examples as middle-office functions. Scenarios were generic. Exercises were synthetic. None of it resembled the pressure, ambiguity, and workflow complexity of actual asset management environments.

Completion rates collapsed because the training generated cognitive overhead rather than practical leverage.

Employees subconsciously categorised it as additional work.

That distinction matters enormously.

The most successful AI adoption programmes are not perceived internally as "learning initiatives".

They are perceived as mechanisms for reducing friction.

Why Generic AI Training Consistently Fails

The broader enterprise market is repeating this mistake at scale.

Most organisations still approach AI training through one of four ineffective models:

1. Vendor-led demonstrations
These programmes optimise for platform familiarity rather than capability development.

Employees leave understanding buttons rather than judgment.

2. Generic AI awareness workshops
These create temporary excitement but weak behavioural transfer.

People feel informed while remaining operationally unchanged.

3. Compliance-first AI literacy
These programmes focus almost entirely on governance, policy, and risk.

Necessary, but insufficient.

Employees learn what not to do without learning what productive usage actually looks like.

4. Self-directed e-learning
This assumes employees will independently map abstract concepts onto their workflows.

High performers occasionally succeed with this model. Most do not.

The deeper issue is that enterprise AI capability is fundamentally behavioural.

It is not knowledge acquisition alone.

It is workflow redesign.

That requires training architectures built around:

  • repetition
  • reinforcement
  • role specificity
  • accountability
  • operational relevance
  • visible application

Most programmes optimise for content delivery rather than behavioural integration.

That is why adoption collapses after initial enthusiasm.

Rebuilding the Programme From First Principles

The third programme started with a completely different premise.

Instead of asking:

"What should employees learn about AI?"

the redesign began with:

"What decisions and workflows consume the most cognitive energy across the organisation?"

That shift changed everything.

The redesign was built around four principles that now underpin most of the enterprise AI training work I deliver.

I refer to this broadly as the CORE framework.

Importantly, the framework does not start with software.

It starts with thinking.

Principle One: Role-Specificity Over Broad Coverage

The original programmes grouped employees together too broadly.

This happens constantly in enterprise environments because it appears operationally efficient.

One workshop.

One deck.

One facilitator.

One rollout.

The problem is that AI capability is highly workflow-dependent.

Two employees with similar seniority levels may require completely different integration strategies based on how their work is structured.

In the redesigned programme:

  • analysts focused on research synthesis, interpretation, and reporting
  • operations teams focused on document review and workflow triage
  • client services teams focused on communication acceleration and summarisation
  • management functions focused on decision support and synthesis

Every exercise was mapped directly to recurring operational tasks.

This dramatically reduced abstraction.

Employees no longer had to imagine how AI might fit into their work.

They could see it.

That distinction alone significantly increased engagement.

Principle Two: Thinking Before Tools

Most enterprise programmes introduce tools too early.

That creates shallow capability.

Before participants touched software, the redesigned programme focused heavily on judgment.

Questions included:

  • When should AI outputs be trusted?
  • What signals indicate hallucination risk?
  • Which tasks require human review?
  • Where does automation become dangerous?
  • How do you interrogate AI-generated reasoning?

This stage was critical because confidence without evaluation capability becomes organisational risk.

One of the most underestimated realities of AI adoption is that poor AI users often become overconfident very quickly.

Enterprise readiness therefore depends less on technical mastery and more on calibrated scepticism.

Participants needed to understand:

  • AI strengths
  • AI limitations
  • AI reliability boundaries
  • workflow suitability
  • verification requirements

before practical implementation.

This sequencing materially improved adoption quality.

Employees became more willing to experiment because they understood the boundaries more clearly.

Principle Three: Horizontal Accountability

Most corporate learning structures are vertically enforced.

Managers monitor completion.

Employees complete modules.

L&D tracks participation.

This creates compliance behaviour rather than capability behaviour.

The redesigned programme instead used cohort-based accountability.

Small groups progressed together.

Participants discussed use cases collectively.

They shared experiments.

They surfaced failures.

They exchanged workflow adaptations.

This reduced perceived evaluation pressure.

In highly professional environments, employees often avoid experimenting publicly because visible uncertainty carries a reputational cost.

Horizontal accountability changes that dynamic.

Instead of being evaluated, participants feel like they are collectively solving operational problems.

That distinction dramatically increased participation quality.

Principle Four: Visible Application

One of the biggest flaws in enterprise learning is reliance on self-reported confidence metrics.

Confidence is a weak proxy for behavioural change.

People frequently report feeling informed without altering operational behaviour whatsoever.

The redesigned programme therefore required visible workflow application.

Each module concluded with:

  • a real task
  • a documented AI-assisted workflow
  • a before-and-after comparison
  • practical reflection on usefulness

Importantly, these examples were shared within cohorts rather than escalated upward.

That prevented experimentation from feeling performative.

Employees could test practical applications without fear of managerial scrutiny.

This shifted the programme psychologically from training to collaborative optimisation.

That distinction matters.

People resist education less when it feels directly tied to operational relief.
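For illustration only, here is a minimal sketch of how such a before-and-after record could be captured. The structure and field names are hypothetical, not the firm's actual template; the point is simply that the artefact stays lightweight enough to share within a cohort.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class WorkflowApplication:
    """One participant's documented AI-assisted workflow, shared within a cohort.

    Illustrative sketch only; fields are assumptions, not the firm's template.
    """
    participant_role: str          # e.g. "analyst", "operations", "client services"
    task: str                      # the recurring task the module targeted
    baseline_minutes: int          # typical time before AI assistance
    assisted_minutes: int          # typical time with the AI-assisted workflow
    verification_steps: list[str] = field(default_factory=list)  # human checks retained
    reflection: str = ""           # practical reflection on usefulness
    recorded_on: date = field(default_factory=date.today)

    def time_saved_pct(self) -> float:
        """Rough before-and-after comparison, as a percentage of baseline time."""
        if self.baseline_minutes == 0:
            return 0.0
        return 100 * (self.baseline_minutes - self.assisted_minutes) / self.baseline_minutes
```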

The Three-Month Results Matter More Than Completion

The headline figure was 71% completion.

Compared to the previous sub-20% outcomes, leadership understandably focused on that metric initially.

But the more meaningful outcome was the behavioural adoption rate.

After three months:

  • 58% of participants had measurably integrated AI into recurring workflows
  • teams reported reduced cognitive load on repetitive synthesis tasks
  • document turnaround times improved in several operational areas
  • internal resistance to experimentation declined significantly
  • AI conversations became more practical and less abstract

This distinction is essential.

Completion measures exposure.

Behavioural integration measures transfer.

Most enterprise AI programmes optimise heavily for the former while barely measuring the latter.

That is why organisations frequently overestimate capability maturity.

They mistake attendance for adoption.

Why This Matters Beyond Financial Services

Although this case study comes from asset management, the underlying dynamics apply broadly across enterprise environments.

The same patterns appear in:

  • consulting firms
  • media organisations
  • legal environments
  • government institutions
  • healthcare systems
  • enterprise operations teams
  • corporate strategy functions

The workflows differ.

The behavioural mechanics do not.

AI adoption succeeds when employees can:

  • identify operational leverage
  • trust the process safely
  • integrate behaviours practically
  • reduce friction meaningfully
  • retain human judgment appropriately

Most organisations still treat AI capability primarily as a technology implementation problem.

It is much closer to a behavioural systems design problem.

What L&D Leaders Should Change Immediately

Three strategic shifts emerge clearly from this engagement.

Diagnose before designing
Do not start with platforms.

Do not start with vendors.

Do not start with workshops.

Start with workflows.

Map:

  • repetitive cognitive tasks
  • synthesis-heavy processes
  • operational bottlenecks
  • communication friction
  • reporting overhead

Capability strategy should emerge from workflow analysis rather than software availability.
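To make that concrete, here is a minimal sketch of what a workflow friction inventory could look like before any platform decision is made. The roles, tasks, and scoring below are illustrative assumptions, not data from the engagement.

```python
# Hypothetical workflow inventory: rank where AI could relieve the most friction
# before choosing any platform. Tasks, hours, and weights are illustrative only.
inventory = [
    {"role": "analyst", "task": "summarise research notes into client commentary",
     "weekly_hours": 6, "synthesis_heavy": True},
    {"role": "operations", "task": "triage and classify inbound documents",
     "weekly_hours": 8, "synthesis_heavy": False},
    {"role": "client services", "task": "draft routine client status updates",
     "weekly_hours": 4, "synthesis_heavy": True},
]

def leverage(item: dict) -> float:
    # Crude leverage score: time spent, weighted up when the task is synthesis-heavy
    return item["weekly_hours"] * (1.5 if item["synthesis_heavy"] else 1.0)

for item in sorted(inventory, key=leverage, reverse=True):
    print(f'{item["role"]:<16} {item["task"]:<50} leverage={leverage(item):.1f}')
```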

Stop measuring satisfaction first
High satisfaction scores frequently correlate with low behavioural transfer.

People enjoy polished workshops.

That does not mean organisational capability changed.

Measure:

  • workflow adoption
  • behavioural persistence
  • repeated usage
  • operational integration
  • time reduction
  • quality consistency

Those metrics matter substantially more.
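As a hedged sketch of what measuring repeated usage and behavioural persistence could look like in practice, the example below assumes a simple usage log and an arbitrary "three distinct weeks of activity" threshold; neither reflects the firm's actual instrumentation.

```python
from collections import defaultdict
from datetime import date

# Hypothetical usage log: (employee_id, date of an AI-assisted workflow run)
usage_log = [
    ("emp_001", date(2024, 5, 2)), ("emp_001", date(2024, 5, 16)),
    ("emp_001", date(2024, 6, 3)), ("emp_002", date(2024, 5, 9)),
]

weeks_active = defaultdict(set)
for emp, day in usage_log:
    weeks_active[emp].add(day.isocalendar()[1])  # ISO week number

# "Behavioural integration" is defined here, arbitrarily, as activity in 3+ distinct weeks
integrated = [emp for emp, weeks in weeks_active.items() if len(weeks) >= 3]
adoption_rate = len(integrated) / len(weeks_active) if weeks_active else 0.0
print(f"behavioural adoption rate: {adoption_rate:.0%}")
```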

Treat AI capability as organisational infrastructure
The most important insight from this engagement is that AI readiness compounds.

Once teams begin integrating AI effectively into operational systems:

  • experimentation accelerates
  • workflow redesign expands
  • peer learning increases
  • adoption barriers fall
  • organisational confidence grows

The reverse is also true.

Poor early rollouts create scepticism that becomes increasingly difficult to unwind later.

That is why initial programme architecture matters disproportionately.

The Real Enterprise AI Divide

Over the next several years, the most meaningful divide between organisations will not simply be access to AI tools.

Most companies will eventually have access.

The divide will be:

  • organisations that operationalised AI effectively
  • organisations that accumulated fragmented experimentation without behavioural integration

That gap is already emerging.

The companies seeing meaningful gains are not necessarily the ones spending the most.

They are the ones treating AI capability development as workflow transformation rather than software exposure.

This asset management case illustrates that distinction clearly.

The breakthrough did not come from better technology.

It came from better behavioural design.
