“Governance is an irreplaceable role”: Microsoft Security VP on why diversity and sector expertise will keep security workers relevant in the age of agentic AI
Improved AI skills and a greater focus on ensuring agents are secure at point of deployment will be key for staying ahead of attackers


Security teams can stay relevant through AI skills and by doubling down on the benefits of diversity, even as organizations adopt more autonomous AI security agents to keep pace with the evolving threat landscape, according to a Microsoft Security expert.
Vasu Jakkal, corporate vice president, Microsoft Security, took to the RSAC Conference 2025 keynote stage to discuss how the security field will be changed by the rise of agentic AI, autonomous generative AI systems capable of adapting to changing context to achieve a pre-defined goal.
Jakkal painted a picture of a future in which every organization and individual has an interactive agent at their disposal, suggesting agents could be more ubiquitous than apps and will act as “digital colleagues” that act in tandem with human workers.
This could come in the form of research agents that draw together knowledge on a given subject, analytics agents for sifting through raw data, or what Jakkal dubbed a “chief of staff agent”, which could work with one’s home agent to coordinate business and personal schedules.
Against the backdrop of the massive changes Microsoft is predicting AI can bring to the threat landscape, there is continuing disagreement over the extent to which the technology will improve individual cyber roles.
Jakkal said that cyber professionals can look forward to more of their time back, driven by innovation like AI agents, but that first leaders would need to consider how best to define, direct, and guide the tools to automate tasks in the best possible way.
“Perhaps one of the most critical aspects of our roles is going to be governance,” she said.
“Governance is an irreplaceable role we need to focus on, because it is critical as defenders that we make sure these AI agents do what they are intended to do and to help and serve humanity the way they’re intended to.”
Stressing the subject matter expertise of all those in attendance, Jakkal said the innovative, predictive, and creative thinking that is innate to humans will lose none of its value even as AI becomes more commonly used. Adding to this, she stressed that diverse perspectives and cognitive diversity remain as important as ever.
“One thing we know for sure is the attackers we face are very diverse, they come from all backgrounds and all facets, and the defenders need to make sure that we can think of all those facets.
“The AI that we build in security, that we use in security, needs to have this diversity at the heart of it.”
AI skills shortages continue to be a stumbling block for some organizations and regions, with some workers still only pretending to understand the technology even as prominent executives warn it’s a choice of upskill or be left behind.
Jakkal said that AI skills are a necessity, with cybersecurity leaders now required to become AI leaders as well.
“Developing AI, learning AI, is not going to be a nice-to-have – for us to thrive in this new world, it’s a must-have,” she said, adding that though the learning curve can be uncomfortable as with any new skill, it’s necessary to keep up with emerging AI-driven threats.
The AI threat landscape
Despite the benefits of AI for defenders in this future of ubiquitous, capable agents, Jakkal was clear that attack volumes will also rise as AI lowers the barrier to mounting them. It will therefore be critical for security teams to ensure they can defend against and respond to escalating threats.
“Last year when I was here with you all, we were facing 4,000 password attacks per second,” Jakkal said. “This year, it’s 7,000 password attacks per second – that’s 600 million attacks a day.”
Microsoft is already seeing an uptick in sophisticated attacks linked to AI, Jakkal said, with attackers using the technology to get a leg up on traditional defenses.
“They’re using it to get more productive, they’re using it to launch new kinds of attacks, whether it’s new vulnerabilities that they can find, or malware and variants of malware, phishing and doing social engineering, intelligent password cracking, and then of course there’s deepfakes,” Jakkal said.
Jakkal also emphasized that agents raise their own complicated list of security considerations, for which organizations must prepare.
Identity controls, she said, will be key to defining which data agents are given access to and which users they can work alongside. She also invited the audience to consider how agents can be shielded from external or even internal users who could jailbreak them, noting that 20% of data breaches today are caused by insiders.
Organizations will have to ensure they observe and audit their own AI agents, Jakkal added, to prevent attackers from using prompt injection to jailbreak them for their own malicious gains.
“Identity is going to be a critical element of AI throughout its lifecycle. AI agents are going to need identities, they’re going to need to understand zero trust and how we verify them explicitly, manage least privilege access,” Jakkal added.
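The identity principles Jakkal describes – giving each agent its own identity, verifying explicitly, and enforcing least-privilege access – can be illustrated with a short sketch. This is a hypothetical, minimal example for illustration only; the names (`AgentIdentity`, `POLICY`, `authorize`) are assumptions and do not reflect any Microsoft product API.

```python
# Hypothetical sketch of least-privilege authorization for AI agent tool calls.
# All names here are illustrative, not a real product API.
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentIdentity:
    """An agent gets its own identity, distinct from the user who invoked it."""
    agent_id: str
    scopes: frozenset  # the scopes this agent instance was actually issued


# Explicit allow-list per agent role: a phishing-triage agent may read mail
# metadata and open tickets, but may not delete mail or edit identity policy.
POLICY = {
    "phishing-triage": frozenset({"mail:read_metadata", "ticket:create"}),
}


def authorize(agent: AgentIdentity, action: str) -> bool:
    """Verify explicitly: deny any action not granted by both policy and scopes."""
    allowed = POLICY.get(agent.agent_id, frozenset())
    return action in allowed and action in agent.scopes


triage = AgentIdentity(
    "phishing-triage", frozenset({"mail:read_metadata", "ticket:create"})
)
assert authorize(triage, "mail:read_metadata")  # within granted scope
assert not authorize(triage, "mail:delete")     # denied: least privilege
```

The design point is the default-deny check: an agent whose identity or scope is unknown gets nothing, which is what makes the dynamic governance Jakkal describes auditable.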
Above all, Jakkal stressed the need for cybersecurity teams to change with AI, to ensure AI implementation meets governance and compliance requirements. This will mean shifting from a static to a dynamic governance model, so that policies can keep up with shifting AI agent identities.
Of course, AI agents could be a core solution to these problems. Microsoft is already using AI agents for security, as part of its Security Copilot offering. In March, it announced 11 new agents covering areas such as phishing triage, threat intelligence, and gaps in identity policy.
In the future, Jakkal said that AI agents could be used to predict attacks and stop them before they happen, rather than simply respond to them, as well as to automate identity management and flag data that’s at risk.
Microsoft, she said, believes that in the next two years, AI will move from level zero autonomy – those systems that can simply automate repetitive tasks – to level three autonomy, in which it can set its own goals to achieve a more complicated outcome.
“Tomorrow, it’s going to be more autonomous, where it’s going to be able to create its own sub-goals, maybe change the models itself to achieve its goals and take these actions serendipitously and autonomously.”

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.