At Davos this year, one message was hard to miss: AI is no longer the differentiator. Human capability is.
As generative AI becomes embedded across organisations, access to models, copilots, and tools is rapidly equalising. What once felt like a source of advantage is becoming infrastructure – essential, but no longer distinctive.
Recent World Economic Forum work reflects this shift. As automation accelerates, productivity and resilience depend not on technical skills alone, but on how humans think, decide, adapt, and learn alongside machines.
Yet there is a problem hiding in plain sight.
While we increasingly agree on what matters, we remain remarkably poor at seeing, measuring, and acting on it.
Skills enable AI access. Capability determines AI leverage.
Skills matter. They always will. They allow people to operate AI tools, understand interfaces, and engage with increasingly powerful systems. Skills are what get AI into an organisation.
But skills alone do not determine whether AI creates value, or risk.
That depends on capability.
Two people with the same AI skills, the same training, and the same tools can produce radically different outcomes. One amplifies insight, improves decision quality, and accelerates productivity. The other scales error, misapplies automation, or creates downstream risk that only becomes visible later.
The difference is not what they know.
It is how they apply what they know when conditions change.
What “judgement” really means in an AI-enabled workplace
In global discussions, including those led by the World Economic Forum, judgement is increasingly used as shorthand for the human advantage in an AI-saturated world.
It is a useful anchor, but also an imprecise one.
From a measurement perspective, judgement is not a single capability, nor something that exists independently of context. It is an observed outcome, revealed through how people apply different capabilities when faced with uncertainty, pressure, and change.
In practice, what organisations describe as “good judgement” emerges from capability profile sets – different combinations and priorities of human capabilities activated in response to a specific environment.
Consider two market analysts, both using the same AI tools to assess market movements, model scenarios, and generate forecasts.
Both have identical technical skills and access to the same models. Yet their outputs and their impact diverge quickly.
One analyst treats AI outputs as provisional signals. They demonstrate Effective Decision Making by weighing AI-generated insights against market context, seeking alternative views, and recognising when confidence intervals matter more than point estimates. They show Listening to Others, integrating perspectives from sales, risk, and operations before drawing conclusions. When conditions shift, they Adapt to Change, adjusting assumptions rather than defending earlier forecasts. Over time, they apply Learning capability and Growth Mindset, refining how they work with AI as patterns emerge.
The other analyst accepts outputs at face value. They act quickly, but without sufficient challenge or context. They Display Initiative, but do not Adapt to Change. When forecasts miss the mark, they struggle to adjust, showing lower Resilience and limited Learning from feedback.
Both are “skilled” in AI.
Only one consistently demonstrates what the organisation experiences as sound judgement.
In this example, judgement is not a trait. It is the situational expression of a capability profile set, combining Effective Decision Making, Listening to Others, Displaying Initiative, Adapting to Change, Learning, Resilience, and Growth Mindset – applied in the right balance for the environment.
Importantly, that balance is not fixed. In some roles, judgement is revealed through decisive action and initiative. In others, it is revealed through listening, collaboration, and the ability to integrate diverse perspectives before acting.
What we label as “judgement” is therefore contextual – not something to be assessed in isolation, but something that becomes visible when the right capabilities are applied, in the right combination, and in the right conditions.
Why AI failures are usually capability failures
When AI initiatives fall short, the explanation is often framed in technical terms: data quality, tooling, integration, or maturity.
But look more closely and a different pattern emerges.
Many AI failures stem from:
- over-reliance on outputs without sufficient evaluation,
- inability to recognise when context has shifted,
- discomfort with uncertainty or probabilistic answers,
- or lack of persistence when early results are imperfect.
These are not technical shortcomings; they are capability gaps.
AI does not remove the need for human decision-making. It raises the bar for it.
By increasing speed and scale, AI magnifies the consequences of poor decision discipline, weak adaptability, and low learning capacity. In an AI-enabled organisation, mistakes propagate faster – but so do good decisions.
Capability variance becomes a productivity lever.
The labour-market blind spot
Across industry and government, enormous effort has gone into building skills frameworks to bring consistency and structure to workforce planning. These frameworks play an important role in defining what people need to know and do. They support classification, curriculum design, and mobility at scale.
But these frameworks are not designed to answer a different – and increasingly critical – question:
Who will apply those skills well in a given environment, and who will continue to do so when that environment changes?
Skills frameworks describe requirements. They do not measure how individuals actually make decisions, adapt, learn, and sustain performance under changing conditions – precisely the conditions AI introduces.
As a result, some of the most valuable human capabilities in an AI-enabled economy – effective decision making, adapting to change, learning capability, resilience, and growth mindset – remain difficult to see and compare in labour markets, despite being widely acknowledged as critical.
This is not a taxonomy problem. It is a measurement problem.
The challenge organisations now face is not defining capability.
It is measuring it in a way that is predictive, contextual, and decision-grade.
Frameworks are necessary. They help align language and intent. But they do not tell us:
- who will sustain performance under pressure,
- who will adapt when AI outputs conflict,
- or who will learn faster as systems evolve.
Yet these are precisely the questions that matter most as AI becomes embedded in everyday work.
Without capability measurement, organisations are effectively betting their AI investments on hope: hope that skills will translate into performance, and hope that people will apply judgement well when the rules change.
Hope is not a strategy.
The next frontier of advantage
As AI becomes infrastructure, competitive advantage shifts again.
Not to better models alone, but to a better understanding of human capability.
The organisations that succeed in the next phase will not be those that simply deploy AI fastest, but those that understand:
- where strong decision capability exists,
- where adaptability supports change,
- and where learning capacity enables continuous improvement.
In that world, capability is no longer a “soft” consideration. It is a core determinant of productivity, risk, and return on AI investment.
Skills will always matter.
But in an AI-saturated environment, capability is the multiplier, and measurement is the missing layer.