Sponsored by Cloudflare
How businesses can stay secure as AI takes hold
AI is a transformative technology, but with new opportunities come new security challenges

Generative AI is being adopted rapidly across the business world, promising to make organizations more efficient and uncover insights that may have been previously overlooked. The technology's potential is vast, with the ability to streamline operations, enhance decision-making, and drive innovation in ways that were unimaginable just a few years ago. AI can help automate routine tasks, free up employee time for more strategic initiatives, and even improve customer engagement through personalized interactions.
Alongside these opportunities come significant risks, however, including the rise of shadow AI—where employees use AI tools without oversight—and the threat of increasingly sophisticated AI-powered cyberattacks. The risk of inadvertent data leakage is also a significant concern, especially when employees use unauthorized AI tools or input sensitive information into public AI platforms.
As AI becomes more deeply integrated into business operations, the stakes are higher than ever. How can businesses harness AI's benefits without compromising security?
The dual nature of AI: balancing innovation and security
Companies are using AI to unlock deeper insights, improve customer interactions, and optimize their workflows. By automating routine tasks, for example, AI allows employees to focus on more strategic initiatives that foster creativity and drive organizational innovation.
Yet, with the immense opportunities AI provides come equally substantial risks. The rapid expansion of AI usage has highlighted vulnerabilities such as the rise of so-called "shadow AI", where employees use AI tools without the knowledge or oversight of IT departments. This can lead to significant security risks, as these tools may not comply with organizational security policies, may introduce vulnerabilities, and can create data governance challenges that leave sensitive information exposed.
While employee behavior may be seen as a latent threat, malicious actors are more proactive, exploiting AI to conduct sophisticated cyberattacks. Enhanced phishing campaigns, personalized social engineering, malicious code generation, and deepfake scenarios are just a few examples of how AI can be used for nefarious purposes.
To navigate this complex landscape, businesses must adopt a balanced approach that harnesses AI's potential for innovation while rigorously addressing its security challenges. This involves not only implementing effective technical defenses but also fostering a culture of awareness and accountability, ensuring that every aspect of AI integration is approached with a security-first mindset.
AI in business: navigating the risks
To successfully navigate the risks associated with AI, businesses must adopt a proactive stance. While AI offers immense opportunities for growth and efficiency, organizations must remain vigilant to the security risks it introduces. This means not only recognizing the potential threats but also actively working to mitigate them.
Government regulations and frameworks are increasingly being developed to guide the safe use of AI, but businesses cannot rely solely on external guidelines. Companies must take proactive steps to ensure their AI adoption does not compromise security. This includes implementing robust internal policies, conducting regular security assessments, and staying informed about the latest AI-related threats. By fostering a culture of security awareness, businesses can strike the right balance between leveraging AI and safeguarding their operations.
Educating employees on the risks of unauthorized AI use
As businesses increasingly adopt AI tools, it's crucial to educate employees about the dangers of unauthorized AI usage and the potential consequences of sharing sensitive information with public AI platforms. Shadow AI presents significant risks, including inadvertent data leakage and compliance violations. Employees may not always be aware that the tools they use, while seemingly harmless, could expose confidential information to external threats or violate data governance policies.
To mitigate these risks, companies need to implement thorough training programs that explain the importance of authorized AI use and data security. Employees must understand that public AI platforms are not secure places to input sensitive data, as the information could be stored or used in ways that compromise security. By raising awareness and providing clear policies on AI usage, organizations can prevent unauthorized tools from being introduced into their workflows and ensure that data remains protected. Creating a culture where employees are encouraged to seek guidance on AI tools and are aware of the consequences of shadow AI will go a long way in securing company data and mitigating AI-related risks.
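Policy and training can be backed up by technical guardrails. As an illustrative sketch only (the `SENSITIVE_PATTERNS` list and `check_prompt` helper are hypothetical, not any specific product's API), a minimal data-loss-prevention check might scan a prompt for sensitive patterns before it is allowed to reach a public AI platform:

```python
import re

# Hypothetical examples of patterns an organization might treat as sensitive.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of sensitive patterns found in the text.

    An empty list means the prompt passes these (deliberately rough)
    rules and could be forwarded to an external AI service.
    """
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarize: contact jane.doe@example.com, card 4111 1111 1111 1111"
findings = check_prompt(prompt)
if findings:
    print("Blocked - prompt contains:", ", ".join(findings))
```

Real DLP tooling is far more sophisticated, but even a simple gateway check like this makes the "don't paste sensitive data into public AI tools" policy enforceable rather than merely advisory.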
Three strategies for secure AI adoption
All this isn’t to say that businesses should row back on their AI projects or put the brakes on altogether, though. There are ways to make AI adoption more secure without having to slow down.
EXPLAIN THE PROBLEMS CAUSED BY UNAUTHORIZED AI
As outlined above, shadow AI exposes organizations to inadvertent data leakage and compliance violations, often because employees don't realize that seemingly harmless tools can expose confidential information or breach data governance policies. The first strategy is therefore to make those consequences concrete: run thorough training programs that explain why authorized AI use and data security matter, publish clear policies on which tools are approved, and build a culture where employees feel able to seek guidance before adopting a new AI tool. This prevents unauthorized tools from slipping into workflows and keeps company data protected.
SET UP ADDITIONAL LAYERS OF EMAIL PROTECTION
Email continues to be the primary vector for cyberattacks, especially with the rise of AI-enhanced phishing techniques. Attackers are now using generative AI to craft sophisticated phishing emails that are far more convincing and difficult to identify. To counter this, businesses need to set up additional layers of email protection. Once again, employee training is crucial; regular sessions to educate staff on identifying phishing attempts and suspicious activity can go a long way in reducing the risk of breaches. Ultimately, though, human error can’t be avoided and technical solutions are necessary to support these efforts. Advanced email security tools that leverage AI can detect and block sophisticated threats, even if they come from seemingly trusted sources. By combining employee education with advanced technical safeguards, businesses can significantly reduce their vulnerability to email-based attacks.
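The kinds of signals a layered email defense looks at can be illustrated with a sketch. The heuristics below are hypothetical and deliberately simple (real email security products combine many more signals, increasingly including AI models), but they show the principle of checking authentication results and header consistency rather than trusting a message at face value:

```python
from email import message_from_string
from email.utils import parseaddr

def phishing_flags(raw: str) -> list[str]:
    """Return a list of simple phishing indicators found in a raw email.

    Illustrative heuristics only: check that SPF/DKIM/DMARC passed at the
    receiving server, and that the Reply-To domain matches the From domain
    (a mismatch is a classic impersonation signal).
    """
    msg = message_from_string(raw)
    flags = []
    # Authentication-Results is normally stamped by the receiving mail server.
    auth = (msg.get("Authentication-Results") or "").lower()
    for check in ("spf", "dkim", "dmarc"):
        if f"{check}=pass" not in auth:
            flags.append(f"{check} not passing")
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_to = parseaddr(msg.get("Reply-To", ""))
    if reply_to and reply_to.split("@")[-1] != from_addr.split("@")[-1]:
        flags.append("Reply-To domain differs from From domain")
    return flags
```

A message failing several of these checks wouldn't necessarily be blocked outright, but it would be a strong candidate for quarantine or a warning banner before it reaches an employee's inbox.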
MOVE TOWARDS A ROBUST, HOLISTIC CYBERSECURITY STRATEGY
Businesses need to adopt a comprehensive approach to cybersecurity that goes beyond traditional perimeter defenses. A robust, holistic cybersecurity strategy should focus on securing where data lives and how it is accessed, which is why many organizations are shifting towards a zero trust architecture. Zero trust operates under the principle of "never trust, always verify," meaning strict access controls are enforced, and no one is implicitly trusted—whether inside or outside the network. This approach can prevent attackers from gaining unrestricted access to data and applications, even if they manage to breach the initial network defenses. In addition to zero trust, companies should implement acceptable use policies, data loss prevention measures, and other technical controls to manage AI-related risks. By creating a layered defense strategy, organizations can stay ahead of evolving threats and secure their operations against AI-driven cyberattacks.
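The "never trust, always verify" principle described above can be sketched in a few lines. In this hypothetical example (the `Request` shape and `ACCESS_POLICY` rules are invented for illustration), every request is evaluated against identity, device posture, and an explicit access policy, regardless of where on the network it originates:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_managed: bool   # device enrolled in company management
    mfa_verified: bool     # user completed multi-factor authentication
    resource: str

# Hypothetical policy: which users may access which resources.
ACCESS_POLICY = {
    "finance-db": {"alice", "bob"},
    "ai-gateway": {"alice", "bob", "carol"},
}

def authorize(req: Request) -> bool:
    """Zero trust check: verify every request, never assume trust
    based on network location."""
    if not (req.device_managed and req.mfa_verified):
        return False
    return req.user in ACCESS_POLICY.get(req.resource, set())
```

Under this model, an attacker who breaches the network perimeter still cannot reach the finance database: without a managed device, a verified identity, and an explicit policy entry, every request is denied.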
Securing the future with AI
AI adoption is here to stay, as it brings a wealth of benefits that far outweigh the potential threats. However, to fully harness these benefits, businesses must implement proper security measures and strategies to safeguard their operations. By proactively adopting AI-driven cybersecurity solutions and fostering a culture of security awareness, organizations can mitigate the risks associated with AI and confidently integrate these technologies into their workflows.
Cybersecurity professionals should remain curious and proactive, continually experimenting with AI tools to better understand their capabilities and stay one step ahead of emerging threats. AI's transformative power is just beginning to be realized, and with the right approach, businesses can ensure they leverage this technology safely and effectively.