AI detection tools risk losing the generative AI arms race
Intellectual property (IP) rights and academic concerns could drive demand for AI detection software that can pick out machine-generated content from the rest
As the popularity and reach of generative AI grows, many are starting to sound alarm bells over the potential for misuse and exploitation. Models like ChatGPT can already generate text that can be passed off as human-generated, and AI-generated media continues to develop rapidly.
If threat actors use the chatbots of tomorrow to write phishing emails and malware, or to spread misinformation – and there’s every indication they will – public and private entities must be equipped with tools to flag AI-generated content. The same also applies to detecting plagiarism carried out with chatbots, a rapidly emerging concern amongst academics.
In January, OpenAI released its classifier, a tool intended to pick out text that was generated by large language models (LLMs), whether GPT-4 or a similar programme. The problem? It’s highly inaccurate. OpenAI’s classifier could only label 26% of AI-written text as “likely AI-written” in tests, while it incorrectly identified human-written text as machine-generated 9% of the time. Strings under 1,000 characters are especially tricky to unpick, and the classifier can’t process languages other than English with anything like the same accuracy.
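To put those figures in context, here is a small worked example showing what a 26% detection rate and a 9% false-positive rate would mean for anyone relying on the classifier’s flags (the 10% share of AI-written submissions below is an assumption for illustration, not an OpenAI figure):

```python
# Worked example using the rates OpenAI reported for its classifier.
# The base rate of AI-written submissions is an assumption for illustration.
true_positive_rate = 0.26   # share of AI-written text flagged "likely AI-written"
false_positive_rate = 0.09  # share of human-written text wrongly flagged
base_rate = 0.10            # assumed share of submissions that are AI-written

flagged_ai = true_positive_rate * base_rate
flagged_human = false_positive_rate * (1 - base_rate)

precision = flagged_ai / (flagged_ai + flagged_human)  # chance a flag is correct
miss_rate = 1 - true_positive_rate                     # AI text that slips through

print(f"Chance a flagged text really is AI-written: {precision:.0%}")    # ~24%
print(f"Share of AI-written text that goes undetected: {miss_rate:.0%}")  # 74%
```

Under those assumptions, roughly three in four flags would be false alarms, and roughly three in four AI-written texts would pass undetected.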
It’s clear OpenAI believes there’s scope for these tools in academia, with the company noting it’s “an important point of discussion among educators” in a blog post. But businesses could also gain a great deal from tools of this kind.
What are the dangers of undetected AI content?
One immediate concern is identifying the sources generative AI tools use. Many popular systems are trained on data scraped from the internet. In a black box development scenario with little insight into that data, it can be difficult to reassure companies their intellectual property (IP) hasn’t been used to ‘create’ something else.
There are already concerns that AI threatens the livelihoods of artists, and if systems such as OpenAI’s deep learning model DALL·E 2 or the CompVis Group’s Stable Diffusion draw on licensed works for their output, then firms from all sectors might find it in their interests to establish a method for the granular analysis of AI content.
At PrivSec London, a panel of AI experts urged firms to adopt greater transparency over how their models work to avoid regulatory difficulties down the line. Panel host Tharishni Arumugam, global privacy technology and operations director at Aon, noted that if firms aren't already doing so, now's the time to think about what goes into their AI contracts. Companies should, for example, make a decision over whether or not to have clauses around the use of their data for training AI models.
OpenAI addressed this concern with its recently announced ChatGPT API, which doesn’t use the data it processes for training purposes unless a company opts in.
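For context, the ChatGPT API is consumed programmatically rather than through the chat interface. A minimal sketch, assuming the official openai Python package (version 1 or later) and an API key set in the environment, looks like this (the model name and prompt are purely illustrative):

```python
# Minimal sketch of a ChatGPT API call via the official openai Python package.
# Under the terms described above, data sent through the API is not used for
# training unless the company opts in.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarise our data retention policy in one sentence."},
    ],
)

print(response.choices[0].message.content)
```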
“Broadly, there will be a space for a set of technologies that interrogate generated results,” Artem Koren, co-founder and CPO at Sembly AI, tells IT Pro. “Because IP compliance is not the only issue with generated results. There are also other things like inappropriate content, violent aggressive content, illegal content, incorrect content. AI today loves to lie, it’s really good at it.”
This capacity to lie isn’t just problematic for its potential role in generating pro-Russian propaganda, misinformation, and convincing falsehoods that could damage a company’s reputation. Chatbots like ChatGPT are a source of worry due to their extremely wide user base, the ease with which they can be accessed, and the large volume of content they can generate at once.
The impending generative AI vulnerability nightmare
These factors could result in a large and well-meaning public using generative AI with damaging consequences. For example, code generated with large language models such as OpenAI’s Codex may contain logic errors that could cause damage to a business’ IT stack down the line. Stanford University researchers found developers using AI assistants introduced more security vulnerabilities into their code than those who wrote code from scratch. Those in the study who used AI assistants also disproportionately rated their code as more secure, indicating misplaced trust in the abilities of such tools.
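To illustrate the kind of flaw involved (a constructed example, not one taken from the Stanford study), an assistant might suggest building an SQL query with string formatting, which is open to injection, where a parameterised query is the safer pattern:

```python
# Constructed example of a common flaw in AI-suggested code: splicing user
# input straight into SQL invites injection; parameterised queries avoid it.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: attacker-controlled input becomes part of the SQL statement
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer: a placeholder keeps data separate from the SQL statement
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

Flaws like this compile and run cleanly, which is precisely why a developer who trusts the assistant’s output can rate it as secure.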
Even when errors in code don’t open up software to zero-day exploits, they can lead to disruptions to normal operations, as seen in Windows Defender’s false reporting over ransomware.
The widespread use of AI assistants could also open companies up to legal challenges, given the potential for models to have been unlawfully trained on licensed content. Challenges are already underway, with GitHub Copilot being sued for “software piracy on an unprecedented scale” in a case that also contests the legality of OpenAI’s Codex, the code-generating model behind it.
Because AI-generated code is so easy to obtain, it can be shared on forums at a rate that quickly overwhelms moderators, who have little way of reliably identifying whether code was written by a human or a machine. This has led the popular coding forum Stack Overflow to ban ChatGPT responses entirely.
“I think the reality is that eventually, it will not be possible to tell whether an AI generated something,” Koren elaborates. “Or, to put it another way, there will be large language models that specifically generate things in a way that you won't be able to tell that it was a model that generated it.”
With this in mind, the window for developing tools that detect machine-generated code is fast closing. If models become sophisticated faster than countermeasures can be designed, businesses could find they have few mechanisms for identifying potentially troublesome sources of code, such as AI models.
Increased testing, or changing how we test?
Beyond legal concerns, there’s also widespread alarm over the potential for AI to plagiarise or too easily generate responses to exams and essay questions. Offensive Security has banned the use of ChatGPT in cyber security certification exams, stating the model does not allow for an accurate assessment of an applicant’s skill level.
Dr Usama Fayyad is the executive director of the Institute for Experiential AI at Northeastern University, which aims to develop responsible AI solutions and foster better practical understanding of AI models. He agrees that, in future, there’s a “huge” need for tools that can detect AI-generated content, but also contends academia will need to accept change.
“We've done a successful transition from slide rules to calculators, to computers to, hey, the whole search engine on your mobile world. In each of those cases, we adapted how we teach, how we train, and how we assess,” he tells IT Pro.
“The good news is the technology isn't magic, and people have already come up with quick tricks to detect; you can use GPT on GPT. All of those [tools] are red herrings in my mind. The real issue here is it's not going to be too hard to change the way we assess and the way we teach to adapt to the fact: hey, this thing is here, it helps people, so part of your smarts is how well do you leverage it?
“I think people will come up with ways to make it really hard and very obvious that you're using GPT.”
While detecting unlawful content is a more black-and-white endeavour, Fayyad suggests that academic concerns over the potential for students to use models like ChatGPT to write essays and exams are the result of a culture shock, one that necessitates a change in procedure, whether in exams or in hiring processes.
“In job interviews, we need to get to a level where we can say, ok, here is the right way to assess and here is what you need to focus on. Because this other stuff, even if this candidate is not good at it… I can't multiply 27 digit numbers in my head, but I can use a calculator and that's not a skill.
“And I think people will elevate to that level and say, ‘hey, you can use these engines and then your proficiency, your ability to utilise them correctly becomes a skill that’s highly valued’. Much like what happened with the graphic arts when Adobe first became a big tool.”
Davio Larnout, CEO at AI firm Radix, states there are two main camps in this debate: those who reject generative AI entirely over fears “we’ll all become stupid if we start using that” and those who embrace these tools.
“For me, [arguments against] don't have a lot of value, because we should embrace it. The way that ChatGPT works today, it makes mistakes and even requires you to be more critical. Your critical thinking needs to go a level up, because you need to read it, you need to understand it and then think “where might this go wrong?”
Here, Larnout and Fayyad are in agreement; tools to detect AI content may be key in the near future, but in the long term firms will have to adapt to the prevalence of generative AI and approach critical assessment from a new perspective.
Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.