Generative AI security tools are a risky enterprise investment – WithSecure wants to change that
WithSecure sets its sights on the vast array of security copilot offerings with its new generative AI ‘experience’ Luminen, which promises to make prompt engineering a thing of the past
WithSecure has announced the launch of a new generative AI security platform, Luminen, which it claims will streamline security processes and unlock cost savings for enterprises.
The Finnish cloud security specialist took aim at the raft of generative AI security assistants that have flooded the market over the last year or so, which it claims are resource-intensive and prone to leaking the information fed into them, whether through hallucinations or malicious prompt engineering.
Luminen, which was teased last week with more details revealed at its flagship SPHERE24 conference in Helsinki, is described by WithSecure as a generative AI tool that is natively embedded in the Elements cloud security platform.
Wary of using the terms chatbot, assistant, or copilot, WithSecure has made a concerted effort to emphasize that it has taken a different approach to integrating generative AI into its security products.
During a media briefing before the opening keynote at SPHERE24, Paolo Palumbo, VP of WithSecure Intelligence, said rolling out generative AI integrations just for the sake of it is unhelpful to security professionals and actually makes their lives harder.
“Just sprinkling generative AI all over the place is not the solution, it just adds complexity and might distract users from what they really need to do, which is to defend the complex environment.”
Speaking to ITPro, Leszek Tasiemski, VP of Product Management at WithSecure, explained that while the company recognized some of the value generative AI can bring, it was hesitant to rush into the generative AI security space for a number of reasons, the first being cost.
“There is a genuine value in [generative AI] so we don’t disregard that, but when we started thinking, okay, what can we do? What are our options? At that time, it was late 2022, we knew that pretty much the only option was either to host our own instance of some of the open source models, like Meta’s Llama, which is extremely expensive,” he explained.
“We tried that and we estimated that just hosting the whole model, which would do nothing and be idle, would cost 200k a year, just for the data center to keep it running.”
Tasiemski noted that WithSecure also did not want to simply use the OpenAI API, as that would mean sending customer data into OpenAI’s public model, which presented an unacceptable risk of data leakage.
“The prompt itself is so important”
The rollout of AWS’ Bedrock platform offered WithSecure the cost-effective and secure alternative it was waiting for, with a series of foundation models available for private use as closed engines.
With Bedrock, Tasiemski and his team were able to plug a foundation model into WithSecure’s infrastructure without the risk that the model would leak other companies’ sensitive information. The risk of malicious actors intentionally stripping the model’s guardrails through carefully refined prompts remained, however.
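In practice, calling a privately provisioned Bedrock model keeps prompts and responses inside the customer’s own AWS account rather than sending them to a public endpoint. A minimal sketch of that pattern in Python, assuming boto3 and an illustrative Claude model ID (neither is confirmed as WithSecure’s actual stack):

```python
import json
import boto3

# Sketch: invoking a foundation model privately via Amazon Bedrock, so the
# prompt and response stay within the AWS account boundary. Model ID,
# region, and prompt text below are illustrative, not WithSecure's own.
bedrock = boto3.client("bedrock-runtime", region_name="eu-west-1")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {"role": "user", "content": "Summarize this detection event: ..."}
    ],
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    body=body,
)

# The response body is a stream; parse it and print the model's text reply.
print(json.loads(response["body"].read())["content"][0]["text"])
```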
To get around this, Tasiemski said Luminen instead locks down its prompt functionality, only allowing users to pick prompts that are suggested to them based on context data. He explained that the propensity of LLMs to hallucinate was simply too great to justify giving the system an open text interface.
Instead, Luminen generates “extremely sophisticated” prompts that are tailored to ensure the model does not make up false information, and then presents the user with the option to run this query through the model.
“The prompt itself is so important… We decided that we wanted to make it easy and simple for the user so that we own the prompt, we just expose the button, then we do the magic in the background”.
The system draws on context data: if it spots something that resembles an indicator of compromise (IOC), such as a domain name, IP address, or file hash, it can offer to surface more detail from WithSecure’s threat intelligence databases without the user having to write a prompt.
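A rough illustration of that context-driven approach, with regex patterns and prompt wording that are purely illustrative rather than WithSecure’s own:

```python
import re

# Sketch: detect likely IOCs in event data, then offer a pre-built prompt
# instead of a free-text box. The user picks a suggestion; they never type.
IOC_PATTERNS = {
    "IP address": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "SHA-256 hash": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "domain": re.compile(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)*\.[a-z]{2,}\b", re.I),
}

CANNED_PROMPT = (
    "Using only the threat-intelligence context provided, summarize what is "
    "known about the {kind} {value}. If the context contains no information, "
    "say so rather than speculating."
)

def suggest_prompts(event_text: str) -> list[str]:
    """Return ready-made prompts for any IOC-like strings found in the event."""
    suggestions = []
    for kind, pattern in IOC_PATTERNS.items():
        for value in pattern.findall(event_text):
            suggestions.append(CANNED_PROMPT.format(kind=kind, value=value))
    return suggestions

# The UI would render each suggestion as a button the analyst can click.
for prompt in suggest_prompts("Outbound beacon to 203.0.113.7 from host-12"):
    print(prompt)
```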
As a self-described hacker, Tasiemski was wary of claiming it is impossible to push Luminen outside its guardrails with malicious prompts, but he went as far as saying the system is “pretty [much] immune” to this attack vector.
Only using generative AI where it adds value
Another benefit of locking down the prompt generation functionality in Luminen is that it makes the system far more resource-efficient, according to Tasiemski, as it only runs queries that actually require the generative capabilities of LLMs.
“We are trying to make everything as energy efficient as possible, and then comes this huge energy sucker LLM. So how do we merge that? We came up with the solution that we do as much as possible outside of the large language model, that’s why we pre-compute the data. We don’t ask the LLM to gather the data and to pre-compute it because we can do that with other methods that are more economical”.
This pre-computed data is tokenized and passed to the model as context, which the prompt then draws on to return an answer in natural language.
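A simplified sketch of this division of labor, using hypothetical function names and event fields: deterministic code does the aggregation, and the model is asked only to phrase the result.

```python
from collections import Counter

# Sketch: aggregate and enrich security data with ordinary code first,
# then hand the LLM a compact, pre-computed context to put into words.
def precompute_context(events: list[dict]) -> str:
    """Summarize raw detections deterministically, outside the LLM."""
    by_host = Counter(e["host"] for e in events)
    by_type = Counter(e["type"] for e in events)
    return (
        f"{len(events)} detections across {len(by_host)} hosts. "
        f"Top hosts: {by_host.most_common(3)}. "
        f"Detection types: {dict(by_type)}."
    )

def build_prompt(context: str) -> str:
    # The model only phrases what was already computed; it gathers nothing.
    return (
        "You are summarizing endpoint detections for an analyst. "
        "Use only the facts below; do not infer anything further.\n\n"
        f"Facts: {context}\n\nWrite a two-sentence plain-English summary."
    )
```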
Tasiemski stated that by limiting the types of task Luminen gets involved with, and handing off work that can be completed more efficiently by other systems, WithSecure is able to guarantee users they will not face the exorbitant usage fees that other systems may levy.
“We are trying to limit the energy usage, we make it faster to respond so the latency is lower because there’s less to process, we keep costs down which is important because as you know all the competitors when they give you generative AI tooling it comes with a price tag. We don’t plan on introducing a price tag because we are using LLMs only where LLMs add value.”
Solomon Klappholz is a Staff Writer at ITPro. He has experience writing about the technologies that facilitate industrial manufacturing, which led to him developing a particular interest in IT regulation, industrial infrastructure applications, and machine learning.