The next year won’t be defined by whether organisations adopt AI — it’ll be defined by who operationalises it safely, at pace, and at scale. The Frontier is moving fast. And for leaders, the challenge is no longer “what could AI do?” but “what must we change now so AI delivers competitive advantage without introducing risk?”
The “Future of Work & Tech” panel at our Frontier Firm event gave four distinct perspectives on the next 12 months of AI from Rich Thompson, CEO at ANS; Mark Nixon, AI Business Solutions GTM Lead for UK&I at Microsoft; Sue Daley OBE, Director of Tech and Innovation at techUK; and Martin Neale, Founder & CEO of ICS.AI.
They talked about foundations, leadership behaviours, trust, skills, and operating models – the real work of becoming Frontier First.
Below are the four leadership shifts to focus on over the next 12 months.
1) Treat AI as a business strategy — not a technology workstream
Frontier Firms don’t just “run an AI programme” alongside everything else. They reshape the business around AI as a core capability — tied to operating model, data foundations, and how work gets done.
That starts with leadership intent and clarity. Rich Thompson was blunt about the journey ANS has been on: foundational change first, then acceleration.
“This is not a technology work stream.” — Rich Thompson
He also described what “Customer Zero” really looks like in practice: “foundational, fundamental changes” to enterprise architecture and simplification (including consolidating multiple CRMs into one) to create the conditions for Frontier outcomes.
What to do in the next 12 months
- Make AI a board-level topic with operational ownership, not a side initiative owned by IT alone.
- Define 2–3 outcomes you’re aiming for (productivity, service improvement, risk reduction) and use them to prioritise work.
- Build the foundations that make scale possible: data, governance, simplified architecture, and clear decision rights.
This is “Think Bigger” in practice: AI becomes part of the plan, not a pilot that lingers.
2) Design trust in from day one — because trust is the accelerator
Leaders often treat governance as something that slows adoption. Frontier Firms take the opposite view: trust is what enables scale.
Sue Daley OBE framed it clearly:
“Trust… can be hard to achieve, can be really easy to lose.” — Sue Daley OBE
She also linked trust directly to ethics and responsibility: fairness, accountability, explainability, privacy, safety, reliability — the principles that increasingly underpin regulation and public expectations (including reference to the EU AI Act).
From Microsoft’s side, Mark Nixon pushed the same point into the agentic era: before agents take actions, leaders need responsible AI policy, and they need observability — visibility into what agents exist, who built them, what they can access, what they did, and who owns them when people leave.
“Does your organisation have a responsible AI policy in place…? Because that is the first place you should start.” — Mark Nixon
“You need to know what agents you have… who created them… what data they have access to… [and] trace that back for audit purposes.” — Mark Nixon
What to do in the next 12 months
- Establish a responsible AI policy that’s usable (clear dos and don’ts), not just a document.
- Build a practical control plane for AI: governance, auditability, and ownership — especially as you move from assistants to agents.
- Align legal, risk, and security early to avoid “bolt-on governance” that stalls progress.
This is innovation with guardrails: adoption without the risk, with governance and control built in.
3) Make AI literacy a core workforce capability — and create time to learn
One of the most important (and overlooked) points from the panel: the barrier isn’t tool availability — it’s habit change.
Sue Daley OBE highlighted that organisations can often move faster than governments by partnering with staff, developing peer learning networks, and making adoption something done in collaboration with people instead of as a directive from above.
“People…want to feel like something is being done with them, not to them.” — Sue Daley OBE
Mark Nixon made the challenge explicit: deep-rooted habit change is necessary, requiring day-in, day-out experimentation — not a single training course.
He also captured the adoption paradox leaders must solve:
“The number one thing that stops them adopting it is time…but this technology will be the number one thing that helps them save time.” — Mark Nixon
What to do in the next 12 months
- Treat AI literacy like spreadsheet literacy: baseline capability for everyone (not just specialists).
- Create a protected learning rhythm (even small) so experimentation can happen.
- Use peer champions and community models. ANS’ approach — champions first, then broader rollout — created demand and reduced fear through trust and community support.
This is “Empower your people” and “Eliminate the mundane” — productivity without burnout starts with capability, not access.
4) Prepare for the next wave: from copilots to agents — and beyond the screen
Many organisations still think about AI as “assistants on desktops.” The panel strongly signalled that’s too narrow for what’s coming.
Sue Daley OBE put it in leadership terms: leaders need to prepare teams for managing AI agents alongside people.
“This will be the last generation that will only manage humans.” — Sue Daley OBE
And Martin Neale pushed the point even further: the frontier will include physical AI — smart devices, smart glasses, robotics — which multiplies both opportunity and governance complexity.
He also offered a stark principle for the agentic era:
“No autonomy without control, period.” — Martin Neale
In other words: the future isn’t just about adopting agentic capabilities — it’s about building the systems that keep those capabilities safe, accountable, and auditable.
What to do in the next 12 months
- Move from “AI tools” to “AI operating model”: define how agents are created, approved, monitored, and retired.
- Create clarity on accountability: who owns an agent’s output, actions, and ongoing performance.
- Start mapping where physical AI may enter your environment and what control and assurance will be required.
This is how you become Frontier First. You don’t wait until the wave arrives — you build the foundations now.
Our perspective: Frontier advantage will be won through leadership choices
Across ANS, Microsoft, techUK, and ICS.AI, the shared message was consistent:
- AI adoption at scale is a leadership challenge.
- Trust and governance are the conditions for speed, not the enemy of it.
- AI literacy and culture determine whether you get beyond pilots.
- Agents and physical AI will expand what’s possible — and what must be controlled.
Frontier Firms will “do more with less” by moving organisations from AI readiness into AI realisation — with people at the centre, and guardrails built in.
Start your AI Readiness Assessment
If the next 12 months demand anything, it’s honesty about where you are today — and a roadmap for where you need to be.
Start your AI Readiness Assessment to:
- identify gaps in data, governance, skills, and operating model
- build a clear roadmap from readiness to realisation
- adopt AI safely, with governance and control built in
- unlock real world value from agentic AI — beyond deployment
