Industry unprepared for AI agent security challenges, experts warn
As organizations face widespread AI agent adoption, leaders will need to address concerns over letting autonomous systems roam free in sensitive environments
AI agents look to be the future of how organizations put large language models to work in the enterprise, but questions remain around the security of these autonomous systems and how their identities should be managed.
There has been a flurry of activity around agentic AI in 2024: Google launched its Vertex AI Agents earlier in the year, Salesforce unveiled its new Agentforce platform in September, and AWS launched its re:Post Agent for Amazon Bedrock just days later.
These systems promise to add tangible value to enterprises by following natural language commands from the end user, ‘reasoning’ over how best to achieve the desired outcome, and then acting on it using multiple tools without any further human interaction.
Speaking at Venafi’s Machine Identity Conference on 2 October in Boston, Katie Norton, research manager for DevSecOps & Software Supply Chain Security at IDC, explained how the proliferation of AI agents will complicate security and data protection at organizations around the world.
“The difference here is that agentic AI is able to make decisions and take actions on behalf of the human without human involvement, and so this is different from a [robotic process automation] (RPA) bot, and that sort of identity that mimics what a human is doing and using a human identity,” she explained.
“This is a true machine identity, and it is incredibly important when AI is going out and acting on behalf of someone that that identity is secure because they’re going to be engaging with a bunch of different systems and levels of sensitive data to be able to execute these tasks.”
Speaking to ITPro, Matt McLarty, CTO at Boomi, walked through how agentic AI systems have introduced new questions around authentication and authorization. He gave the example of a user telling an AI agent about an error and the agent determining the best course of action is to open a support ticket in ServiceNow.
“We need to at least open the support ticket and maybe send a Slack message to somebody in the IT support team, or to some channel to say ‘Look at this problem’, which can be done dynamically,” McLarty said.
“But how do you authorize that this agent is allowed to open the support case on that user’s behalf?
“Or maybe you have to extract some information about the user from Salesforce or something to determine who that customer is. Or there could be a chain, and you’ve got a human customer support agent using this agent, so there’s this whole multi-party authorization scheme.”
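In practice, the kind of delegation check McLarty describes might look something like the sketch below, where the agent only calls a tool such as a ticketing API after confirming the initiating user is allowed to perform that action. The class and function names here (AgentContext, open_ticket) are hypothetical, not taken from any vendor SDK.

```python
# A minimal sketch of a delegation check: before the agent calls a tool such as
# "create a support ticket", it verifies that the human who initiated the
# workflow is actually allowed to perform that action. All names are illustrative.

from dataclasses import dataclass

@dataclass
class AgentContext:
    agent_id: str          # the agent's own machine identity
    on_behalf_of: str      # the human user who kicked off the workflow
    user_scopes: set[str]  # permissions delegated by that user

def is_user_authorized(ctx: AgentContext, required_scope: str) -> bool:
    """The delegating user must hold the scope the action requires."""
    return required_scope in ctx.user_scopes

def open_ticket(ctx: AgentContext, summary: str) -> None:
    if not is_user_authorized(ctx, "ticket:create"):
        raise PermissionError(
            f"{ctx.agent_id} may not open tickets for {ctx.on_behalf_of}"
        )
    # In a real system this would call the ticketing API with a delegated
    # credential that encodes both identities, so the action is auditable.
    print(f"Ticket opened by {ctx.agent_id} on behalf of {ctx.on_behalf_of}: {summary}")

ctx = AgentContext("support-agent-01", "alice@example.com", {"ticket:create"})
open_ticket(ctx, "User reports intermittent login errors")
```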
Businesses “not ready” to issue credentials for autonomous agents
As it stands, McLarty said that he doesn’t think businesses are adequately prepared to address these concerns and give agents the keys to the proverbial castle:
“I think the industry has to make this up as we go along. I would say right now the industry is not ready to say we’re going to issue credentials for these autonomous agents in these systems,” he argued.
“If you think about those, say, 360 SaaS applications in your enterprise, you don’t want to go in and go through a whole identity management exercise of propagating identities to all the systems. It would be hard enough to do it even in one system internally, so you have to rely on the existing authentication [and] authorization.”
Ultimately, McLarty argued that technology leaders should pair an end user with any task an agent carries out, so that any access is traceable back to the user who initiated the dynamic process.
Businesses will also need to think through how they will check and validate authorization on the backend systems these agents access, he added.
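A minimal sketch of what that backend-side check and audit trail could look like is shown below, assuming each agent request carries both the human user’s identity and the agent’s own. The claim names borrow the OAuth token exchange convention (“sub” for the user, “act” for the acting party); the handler and allow-list are purely illustrative.

```python
# A minimal sketch of a backend validating an agent's request: both the user and
# the agent identity must be present, the action is checked against the user's
# permissions, and every decision is logged so access stays traceable.

import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit_log = logging.getLogger("agent-audit")

# Illustrative allow-list of actions each user has delegated to agents.
ALLOWED_ACTIONS = {"alice@example.com": {"ticket:create", "crm:read"}}

def handle_agent_request(claims: dict, action: str) -> bool:
    user = claims.get("sub")                   # the human on whose behalf the agent acts
    agent = claims.get("act", {}).get("sub")   # the agent's own identity

    if not user or not agent:
        audit_log.info("Rejected request with missing identity claims")
        return False

    if action not in ALLOWED_ACTIONS.get(user, set()):
        audit_log.info("Denied %s for %s (agent %s)", action, user, agent)
        return False

    # Every permitted action is traceable back to the initiating user.
    audit_log.info("Allowed %s for %s via agent %s", action, user, agent)
    return True

handle_agent_request(
    {"sub": "alice@example.com", "act": {"sub": "support-agent-01"}},
    "ticket:create",
)
```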
McLarty offered some solace to enterprises, noting that the industry has had to overcome similar challenges with the explosion of APIs in enterprise IT environments. He told ITPro that in API risk management the sector could find a model for securing agent identities.
“The good news is that’s a paved road when it comes to the API space. We already have these long chains of multiple systems, and we’ve got standards like OAuth and OpenID Connect that allow us to authorize, issue, and exchange tokens [and] have identity propagation through the whole thing.”
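As a rough illustration of that ‘paved road’, the sketch below shows an OAuth 2.0 Token Exchange (RFC 8693) request, in which an agent presents the user’s token alongside its own to obtain a new token scoped to a downstream system such as a ticketing platform. The endpoint and token values are placeholders rather than a real deployment.

```python
# A minimal sketch of OAuth 2.0 Token Exchange (RFC 8693): the agent trades the
# user's token, together with its own, for a token aimed at a downstream system,
# so the user's identity propagates through the whole chain.

import requests

TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"  # hypothetical identity provider

def exchange_for_downstream_token(user_token: str, agent_token: str,
                                  audience: str) -> str:
    """Swap the user's token (plus the agent's) for one scoped to `audience`."""
    resp = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": user_token,   # the human user's identity
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "actor_token": agent_token,    # the agent acting on their behalf
            "actor_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "audience": audience,          # e.g. the ticketing system
        },
        timeout=10,
        # Client authentication to the identity provider omitted for brevity.
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

# Example usage (placeholder values):
# downstream_token = exchange_for_downstream_token(user_token, agent_token,
#                                                  "https://tickets.example.com")
```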
This is how Boomi is navigating the problem, McLarty said, reiterating the need for humans in the loop for the time being, particularly with sensitive or protected systems.
Solomon Klappholz is a Staff Writer at ITPro. He has experience writing about the technologies that facilitate industrial manufacturing which led to him developing a particular interest in IT regulation, industrial infrastructure applications, and machine learning.