
The Bank of England Is Saying What We Have Been Saying About AI and Jobs

  • ICS AI

Why AI will displace some roles, not cause mass unemployment - and why the real risk is the broken career pipeline


The Human Firewall - Why AI Won't Replace Most Jobs

This morning (19 December), Bank of England Governor Andrew Bailey gave his clearest statement yet on artificial intelligence (AI) and employment - cutting through much of the noise from recent months.


He was clear on three points:


  • AI is unlikely to cause mass unemployment.
  • It will displace a narrow range of highly automatable roles.
  • And there is a growing risk to entry-level and early-career pathways.


Those points align almost exactly with the position set out in the white paper by ICS.AI's CEO, Martin Neale, The Human Firewall - Why AI Won't Replace Most Jobs, published today. We have long argued that the dominant "AI takes all the jobs" narrative is wrong - and that the real challenge is not obsolescence, but accountability and transition.


Displacement is not the same as mass unemployment


Andrew Bailey explicitly compared AI to the industrial revolution, noting that while people were displaced from certain jobs, it did not result in mass unemployment. His view is that AI is likely to follow a similar pattern.


That distinction matters.


Much of the public debate has blurred the line between task automation and job elimination. In reality, AI is very effective at automating specific, repeatable activities. It is far less effective at replacing entire roles that involve judgement, responsibility, escalation and ownership of outcomes.


This is why we have consistently said that AI does not take jobs. It takes tasks.


The evidence supports this view. In Canada, where Statistics Canada tracks this directly, 89% of AI-using businesses report no change in employment. Despite rising AI adoption, only a small proportion of organisations report any direct headcount reduction attributable to AI. What we see instead is role reshaping, redeployment, and a growing need for human oversight around automated systems.


Bailey’s comments reinforce that this is not wishful thinking. It is how labour markets adapt to general purpose technologies.


Narrow displacement is real - and needs to be managed


The Governor was equally clear that displacement will happen, particularly in roles that are highly automatable.


This is a point we have been careful to make ourselves. Data entry, routine clerical work and low-level contact centre roles are shrinking. Pretending otherwise helps no one.


The difference is in how that reality is framed.


Displacement does not mean redundancy without recourse. It means transition. People in these roles need support, retraining and clear pathways into work that grows alongside AI - including assurance, exception handling, service improvement, compliance and governance.


Handled properly, AI becomes a redeployment engine rather than a redundancy programme.

This is where leadership matters. Fear-driven narratives do not help organisations or workers plan. Honest, evidence-based discussion does.


The broken rung is the real warning sign


Perhaps the most important part of Andrew Bailey’s comments was his warning about younger, inexperienced workers struggling to secure entry-level roles - asking what AI is doing to the pipeline of future talent.


This is the issue we have described as the Broken Rung.


AI now performs much of the routine work that junior staff once learned from. Without deliberate redesign, entry-level roles shrink and the pathway from junior to senior expertise weakens.


This is not a theoretical risk. UK entry-level vacancies have fallen 32% since November 2022, and competition for graduate roles is intensifying.


The solution is not to slow AI adoption. It is to redesign early-career roles so that people learn to work with AI, orchestrate it, verify its outputs and remain accountable for outcomes.

Education and training systems must adapt quickly if we are to avoid creating a long-term skills gap.


Why accountability is the real constraint on AI


What underpins all of this is a simple reality that often gets lost in public debate.


AI does not remove responsibility.


Legal, ethical and liability constraints mean organisations cannot hand over consequential decisions to machines without human involvement. Someone still signs things off. Someone still owns the outcome. Someone is still accountable when things go wrong.


This is why we talk about the Human Firewall - the people, processes and controls that make AI usable at scale in real organisations.


As AI adoption grows, so does the need for governance, oversight and human judgement. This is not a transitional phase. It is the stable operating model for AI at scale.


A more grounded conversation about the future of work


Andrew Bailey’s comments matter because they legitimise a calmer, more evidence-based conversation about AI and jobs.


One that recognises displacement without exaggerating it into catastrophe.

One that focuses on transition rather than fear.

And one that understands that accountability cannot be automated away.


At ICS.AI, we will continue to lead with this position. AI will change work profoundly, but it will not eliminate the need for people to govern systems, make judgements and own outcomes.


That is not optimism. It is how organisations actually operate.


Read the full position


This blog reflects the conclusions of Martin Neale’s white paper, The Human Firewall - Why AI Won’t Replace Most Jobs, which sets out the evidence behind this position and the operating model required to make AI work responsibly.


