AI is no longer a side project. From code generation to intelligent monitoring and personalised product features, AI is changing how engineering teams design, build and operate software. For engineering managers, the central question is not “Should we use AI?” but “How quickly can our team become effective in an AI-powered environment?” That requires sizing the current skills gap and building a deliberate plan to close it.
If AI adoption is left to ad-hoc experimentation, the organisation ends up with pockets of expertise, fragile one-person dependencies and uneven quality. Engineering managers are closest to where work actually happens, so they are best placed to:
• Assess how AI will change the work in their domain
• Define the new skill profiles needed on their teams
• Design learning paths and recruiting strategies to fill the gaps
Ignoring the skills question does not slow AI adoption; it just pushes it into shadow mode through unsanctioned tools and scripts.
The AI skills gap is not generic. It is shaped by how your organisation plans to use AI. As an engineering manager, start by mapping:
• Where AI will assist engineering work
◦ Code generation, test generation, refactoring
◦ Incident triage and anomaly detection
◦ Documentation summarisation and search
• Where AI will power product features
◦ Recommendations, ranking, search
◦ Natural language interfaces and chat
◦ Forecasting, anomaly detection, decision support
• Where AI will change operations
◦ Capacity planning and autoscaling
◦ Security, fraud detection, monitoring
Each of these uses implies specific knowledge: prompts, model behaviour, data quality, evaluation methods and safety considerations.
Once you know where AI fits, define a skills matrix for your team across three dimensions, mirroring the mapping above: AI-assisted engineering work, AI-powered product features and AI-driven operations.
Map current team members to this matrix and assess depth in each dimension as “none / basic / working / deep”. The gaps become visible.
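Keeping the matrix as plain data makes the gap analysis mechanical rather than subjective. A minimal sketch in Python, where the dimension names and team members are illustrative examples, not a prescribed taxonomy:

```python
# Skill depth levels, ordered from least to most capable.
LEVELS = ["none", "basic", "working", "deep"]

# Hypothetical assessment: engineer -> {dimension: depth}.
# Dimensions mirror the three areas mapped above.
matrix = {
    "alice": {"ai_assisted_eng": "working", "ai_product": "basic",   "ai_ops": "none"},
    "bob":   {"ai_assisted_eng": "basic",   "ai_product": "none",    "ai_ops": "basic"},
    "carol": {"ai_assisted_eng": "deep",    "ai_product": "working", "ai_ops": "basic"},
}

def gaps(matrix, target="working"):
    """Return, per dimension, which engineers sit below the target depth."""
    want = LEVELS.index(target)
    below = {}
    for person, skills in matrix.items():
        for dim, level in skills.items():
            if LEVELS.index(level) < want:
                below.setdefault(dim, []).append(person)
    return below

print(gaps(matrix))
```

Revisiting the same structure each quarter also gives you a cheap record of whether the upskilling plan is actually moving people up the levels.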
Sizing the skills gap must not feel like a performance review trap. Use:
• Anonymous surveys to capture comfort with AI concepts and tools
• Hands-on experiments such as hack days or pilot projects to observe where people struggle
• 1:1 conversations to understand motivations and learning preferences
Frame the conversation as: “AI will change our jobs; our goal is to make sure the team moves up together and nobody is left behind.”
Upskilling does not mean sending everyone to a week-long ML course. As an engineering manager, design learning that is:
• Contextual: use your own code and data in exercises
• Incremental: introduce concepts as they become relevant to real work
• Layered: offer deeper tracks for those who want to specialise
A practical plan could include:
• Short internal sessions, such as “AI 101 for engineers” or “Using code assistants effectively and safely”
• Weekly office hours where a more AI-savvy engineer helps others use tools on real tasks
• A structured pilot project where a small squad integrates an AI feature end-to-end, then shares learnings
• Explicit learning time in sprints, for example four hours per engineer per sprint for AI experimentation
Some skills are faster to acquire internally; others may require hiring. Consider:
• Upskill internally when:
◦ The skill is adjacent to existing strengths, such as integrating AI APIs into a backend team
◦ Time-to-impact is flexible and you can learn while delivering
• Hire when:
◦ You need deep expertise quickly, such as designing a safety-critical recommendation engine
◦ The risk of mistakes is high, for example regulated AI decisions
Collaborate with HR and leadership to align job descriptions with your skills matrix, such as “Senior Backend Engineer with experience integrating LLM APIs and building evaluation pipelines.”
Skills alone are not enough; your ways of working must adapt too. Key updates include:
• Coding standards: define how AI-generated code is reviewed, tested and attributed
• Data governance: set clear rules for what data can or cannot be used in external AI tools
• Documentation: capture prompt patterns, best practices and common pitfalls from real projects
• Evaluation: establish how you measure AI feature quality using both offline metrics and real-world feedback
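On the evaluation point, even a small offline harness beats ad-hoc spot checks. A hedged sketch, assuming your AI feature can be called as a plain function and you have a handful of labelled examples (the toy classifier stands in for a real model call):

```python
def evaluate(feature, cases):
    """Run an AI feature over labelled cases and report exact-match accuracy.

    `feature` is any callable input -> output; `cases` is a list of
    (input, expected) pairs. Real evaluations would add richer metrics
    (similarity scores, human ratings), but the shape stays the same.
    """
    failures = []
    for inp, expected in cases:
        got = feature(inp)
        if got != expected:
            failures.append((inp, expected, got))
    accuracy = 1 - len(failures) / len(cases)
    return accuracy, failures

# Illustrative stand-in for a real model call.
def toy_classifier(text):
    return "refund" if "money back" in text else "other"

cases = [
    ("I want my money back", "refund"),
    ("Where is my order?", "other"),
    ("Give me my money back now", "refund"),
]
accuracy, failures = evaluate(toy_classifier, cases)
print(f"accuracy={accuracy:.2f}, failures={len(failures)}")
```

Running a harness like this in CI turns “does the AI feature still work?” from an opinion into a number you can track release over release.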
Engineering managers are responsible for ensuring teams do not over-trust AI outputs or leak sensitive information while experimenting.
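One concrete guardrail against leakage is a pre-send check that rejects prompts containing obvious secrets before they reach an external tool. A minimal sketch; the patterns are illustrative and far from exhaustive, and a real deployment should use a dedicated secret scanner:

```python
import re

# Illustrative patterns only; production scanners cover many more formats.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key id
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),             # email address
]

def safe_to_send(prompt):
    """Return (ok, reason); ok is False if any secret pattern matches."""
    for pattern in SECRET_PATTERNS:
        if pattern.search(prompt):
            return False, f"blocked: matched {pattern.pattern!r}"
    return True, "ok"

print(safe_to_send("Refactor this function for readability"))
print(safe_to_send("Use key AKIAABCDEFGHIJKLMNOP to call the API"))
```

Wiring a check like this into an internal proxy or pre-commit hook makes the data-governance rules above enforceable rather than aspirational.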
AI raises real concerns about job security and relevance. Address them directly:
• Emphasise that roles will evolve, not disappear: more focus on system design, integration, evaluation and problem framing
• Highlight new growth paths such as AI platform engineer, AI product engineer and evaluation engineer
• Recognise and reward those who help others adopt AI responsibly, not just those who automate the most
Your posture as a manager sets the emotional tone: treat AI as an opportunity to increase team impact, not a looming replacement.
AI capabilities will not stabilise anytime soon. Make skills planning an ongoing practice:
• Revisit your skills matrix quarterly
• Refresh your upskilling plan based on new use cases and tools
• Regularly showcase internal AI wins in demos and all-hands
Engineering managers who treat AI skills as a strategic asset will build teams that can ride the waves of change instead of being swamped by them.
