Security and compliance are holding back generative AI adoption - what can businesses do?

With the generative AI boom in full swing, many companies are now at the point where they must make some concrete decisions about implementation and deployment.

As most businesses will likely look to adopt the technology in some form, they will also need to consider its impact on their organizational processes.

Generative AI is still in its nascent stages and, while promising much in the way of value, comes with a large element of risk, both known and unknown. Any company looking to incorporate the technology will have to weigh those risks carefully.

Dealing with risk, however, can be easier said than done, especially when working with a technology as complex and as fundamentally landscape-shifting as generative AI. 

Security and compliance, for example, have both rightly become areas of concern, largely in relation to the large bodies of data that generative AI tools tap into and use to create novel forms of content.

When this data is proprietary or sensitive, which, in the context of organizational generative AI, it often is, maintaining its security becomes imperative, as does ensuring compliance with regulation.

There’s no question that businesses need to adopt generative AI to keep pace with competitors, but they will need to do so while ensuring safety and resilience. 

Why do businesses need to adopt generative AI?

Many organizations are in the early stages of generative AI exploration and experimentation, testing various large language models (LLMs) and platforms to gauge how effective they are for their unique set of circumstances. 

While generative AI shouldn’t be shoehorned into every area of a business, many executives are quickly learning the benefits of using the technology in specific departments, such as administration and development.  

Development, in particular, has become an appealing test case, especially with the growing use of platforms like GitHub Copilot, a generative AI tool designed to assist developers and increase development efficiency.

Some companies have already begun rolling out this tool to enhance the developer experience, while others have been tentatively integrating other platforms into staff workflows to cut down time spent on tasks like drafting emails.

This sea change in productivity is evidenced by various pieces of research, with PwC recently revealing that sectors in which AI can readily be used for some tasks are experiencing nearly five times greater productivity growth.

This trending uptick in productivity has been in the making for some time, with research from Boston Consulting Group (BCG) stating last year that 93% of CMOs reported a “positive or very positive” improvement in their organization overall.

Similarly, 91% reported a “positive or very positive” impact on efficiency, with BCG’s own initial observations suggesting that generative AI’s “low cost and ease of use” can deliver productivity gains of up to 30%.

Why is generative AI so risky?

Despite its allure, the risks of generative AI are clear, especially at the point when organizations begin to consider letting LLMs loose on their proprietary data in order to deliver more useful results. 

A company's proprietary data may include all manner of sensitive information, from employee records to customer billing details. If business leaders can’t trust the security of a generative AI platform, then they can’t trust that this data is secure.

Security flaws in generative AI models are already rearing their heads in the form of jailbreaking, in which users bypass a platform’s security guardrails through carefully crafted prompts and other techniques.

Similarly, the extent to which such data can be used or acted upon via a generative AI platform may be limited by restrictions imposed by data regulation and policy.

In Europe, for example, the conversation around GDPR looks likely to enter a new phase of complexity in the age of generative AI, specifically regarding processes of data collection and data processing. 

Ultimately, fears around generative AI are throwing up myriad pain points for IT leaders and company decision-makers, particularly in areas of regulatory concern and platform risk.

Companies relying on older models, for example, face instability in the generative AI landscape, while the rapid development of the technology as a whole is impeding the ability of teams to keep up with data security and privacy processes.  

The proliferation of different tools, models, and platforms is also causing a level of application sprawl which increases attack surfaces, and there is a clear need for simplification and consolidation.  

While CIOs need to balance the maximization of value with the minimization of risk and cost, security executives need to be prepped for the latest security threats. Generative AI platforms thus need to accommodate these various needs. 

How can businesses safely adopt AI?

When dealing with these stores of potentially sensitive data, companies are also often working with unstructured data: information that lacks a predefined model and the consistent labels and identifiers that make structured records easy to govern.

Unstructured data often takes forms not typically stored in a centralized system or accessed regularly, such as video or audio footage, which makes it difficult to manage from a security or compliance point of view.
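To make the distinction concrete, here is a minimal, hypothetical Python sketch that scans a directory and flags files with no queryable schema (video, audio, free text) as candidates for labeling before they are exposed to a generative AI system. The directory name and the structured/unstructured heuristic are illustrative assumptions, not any vendor’s tooling.

```python
import mimetypes
from pathlib import Path

# Extensions treated as "structured" for this illustration: files whose
# contents follow a schema that compliance tooling can query directly.
STRUCTURED_SUFFIXES = {".csv", ".json", ".parquet", ".xml"}


def classify(path: Path) -> str:
    """Label a file as structured or unstructured by its file type."""
    return "structured" if path.suffix.lower() in STRUCTURED_SUFFIXES else "unstructured"


def audit(root: str) -> None:
    """Flag unstructured files for review before AI ingestion."""
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file():
            continue
        mime, _ = mimetypes.guess_type(path.name)
        if classify(path) == "unstructured":
            print(f"REVIEW  {path} ({mime or 'unknown type'})")
        else:
            print(f"OK      {path}")


if __name__ == "__main__":
    audit("./company_data")  # hypothetical directory of mixed content
```

In practice, the review step would attach metadata, such as ownership, sensitivity, and retention labels, so that downstream AI tooling only ever sees governed content.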

There are services, however, that offer a more centralized strategy for managing such data, in turn allowing organizations to adopt generative AI systems in a way that maintains a level of control. 

Cloud-based content management company Box, for example, offers the Box Content Cloud and Box AI systems, going beyond typical file management in an effort to centralize company data strategies.

By keeping organizational data within the Box Content Cloud, businesses can then access this data through a single content layer which maintains security at its core, before applying generative AI capabilities.

Box’s solution combines enterprise-grade standards with top-level AI capabilities, enabling business users to extract maximum value from their content while ensuring compliance.

By using Box, organizations can deploy Box AI to integrate AI models, allowing users to surface business insights, create content such as emails or newsletters, and automate business processes.

All of this is done while maintaining a keen eye on security and compliance, as businesses can easily manage and keep track of the data being used in their generative AI deployment.
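As a rough illustration of what this looks like in code, the sketch below queries a single governed file through Box’s AI question-answering endpoint (POST /2.0/ai/ask), so the content never leaves Box’s access controls. This is a minimal sketch based on Box’s published API at the time of writing; the token and file ID are placeholders, and Box’s current developer documentation should be treated as authoritative.

```python
import requests

BOX_API = "https://api.box.com/2.0"
ACCESS_TOKEN = "YOUR_BOX_TOKEN"  # placeholder: generate via the Box developer console
FILE_ID = "1234567890"           # placeholder: ID of a file already stored in Box


def ask_box_ai(prompt: str, file_id: str) -> str:
    """Ask Box AI a question about a single file that Box already governs,
    so access controls and audit logging stay with the content."""
    response = requests.post(
        f"{BOX_API}/ai/ask",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={
            "mode": "single_item_qa",  # question-answering over one item
            "prompt": prompt,
            "items": [{"id": file_id, "type": "file"}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["answer"]


if __name__ == "__main__":
    print(ask_box_ai("Summarize the key obligations in this contract.", FILE_ID))
```

Because the model is invoked against content already held in the platform, administrators can see which files were used in the AI deployment rather than tracking copies exported to external tools.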

Find out more about Box, the Content Cloud: the intelligent platform for secure content management and collaboration
