Hugging Face issues warning after detecting 'unauthorized access' to its Spaces platform
Hugging Face users are being told to refresh any keys or tokens they may have for the company’s Spaces platform
Machine learning (ML) development platform Hugging Face has issued a warning to users after it detected unauthorized access to its Spaces platform last week.
In a statement, the firm said the access was restricted to the Spaces platform, which is used for testing and demonstrating ML applications.
As a result, Hugging Face said it had “suspicions that a subset of Spaces’ secrets could have been accessed without authorization.”
The exact number of users or applications affected by the incident is unconfirmed, but Hugging Face assured users it was working with external security partners to triage the incident.
“We are working with outside cyber security forensic specialists, to investigate the issue as well as review our security policies and procedures,” the company said.
The company said it immediately revoked a number of HF tokens stored as secrets that it believes were accessed during the incident, adding that users whose tokens have been revoked will have already received an email notice.
Secrets are a mechanism that lets developers supply environment variables to their application without hard-coding them in the app itself. Hugging Face encourages users to store access tokens, API keys, and any other sensitive values or credentials as secrets.
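For illustration, here is a minimal sketch of how a Space might read such a secret at runtime. Secrets configured in a Space's settings are exposed to the app as environment variables; the secret name HF_TOKEN below is a hypothetical choice for this example:

```python
import os

from huggingface_hub import HfApi

# A secret configured in the Space's settings is exposed to the app as an
# environment variable rather than being hard-coded in the source.
# "HF_TOKEN" is a hypothetical secret name chosen for illustration.
hf_token = os.environ.get("HF_TOKEN")
if hf_token is None:
    raise RuntimeError("HF_TOKEN secret is not set for this Space")

# The token can then be passed to authenticated clients such as huggingface_hub.
api = HfApi(token=hf_token)
print(api.whoami()["name"])  # confirms the token resolves to the expected account
```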
The notification also recommended that customers refresh any keys or tokens and consider switching from classic HF tokens to fine-grained access tokens, which will now become the default access method.
Fine-grained tokens give developers more granular control over the permissions and repository access they grant, with the ability to select both the scope of the token and the rights it grants.
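As a rough sketch of what a token rotation could look like in practice, the snippet below uses the huggingface_hub client to check that a newly issued fine-grained token authenticates while the retired classic token no longer does. The token values are placeholders, not real credentials:

```python
from huggingface_hub import HfApi
from huggingface_hub.utils import HfHubHTTPError

# Hypothetical values: NEW_TOKEN is a freshly issued fine-grained token,
# OLD_TOKEN the classic token being retired after the incident.
NEW_TOKEN = "hf_xxx_new_fine_grained"   # placeholder
OLD_TOKEN = "hf_xxx_old_classic"        # placeholder

def token_is_valid(token: str) -> bool:
    """Return True if the token authenticates successfully against the Hub."""
    try:
        HfApi(token=token).whoami()
        return True
    except HfHubHTTPError:
        # A revoked or invalid token is rejected with an HTTP error (e.g. 401).
        return False

assert token_is_valid(NEW_TOKEN), "new fine-grained token should authenticate"
assert not token_is_valid(OLD_TOKEN), "revoked classic token should be rejected"
```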
Hugging Face will be taking a series of additional measures to improve security in response to the incident, including removing org tokens entirely, which it said should result in improved traceability and audit capabilities.
It will also implement a key management service for Spaces secrets and improve its ability to identify leaked tokens and proactively invalidate them.
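Hugging Face has not published how its leak detection works, but its user access tokens share a recognizable "hf_" prefix, which makes naive leaks in source trees straightforward to flag. The following scanner gives a sense of the general idea; the exact length pattern in the regex is an assumption for illustration, not Hugging Face's own tooling:

```python
import re
from pathlib import Path

# Hugging Face user access tokens begin with "hf_". The length pattern below
# is an illustrative assumption, not the platform's actual detection rule.
HF_TOKEN_PATTERN = re.compile(r"\bhf_[A-Za-z0-9]{30,}\b")

def scan_for_tokens(root: str) -> list[tuple[str, str]]:
    """Return (file, match) pairs for anything that looks like an HF token."""
    hits = []
    for path in Path(root).rglob("*.py"):  # naive: only scans Python sources
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in HF_TOKEN_PATTERN.findall(text):
            hits.append((str(path), match))
    return hits

for file, token in scan_for_tokens("."):
    print(f"Possible leaked token in {file}: {token[:8]}…")
```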
Finally, it announced it would completely deprecate the classic read and write tokens in the near future, as soon as its fine-grained access tokens reach feature parity.
Hackers may have had free access to private models and API keys hosted on Hugging Face
The incident may have given criminals access to private models hosted on Hugging Face, as well as API keys for services such as OpenAI, which could then be sold on underground hacking forums.
This is the latest security hiccup involving the Hugging Face platform in a short space of time, coming after IT security company Wiz released analysis raising concerns over vulnerabilities that could affect companies running AI-as-a-service offerings on the platform.
The report showed that after uploading manipulated models to Hugging Face, researchers were able to run arbitrary code and escalate their level of access using the platform's inference API feature.
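Wiz has not released its exploit, but the broad mechanism behind malicious model files is well understood: many Python model formats rely on pickle, and unpickling untrusted data can execute arbitrary code. The harmless snippet below illustrates that general mechanism only; it is not the researchers' actual payload:

```python
import pickle

# Minimal, harmless illustration of why loading untrusted pickle-based model
# files is dangerous: unpickling can invoke arbitrary callables via __reduce__.
# This is NOT the Wiz researchers' exploit, just the underlying mechanism.
class MaliciousPayload:
    def __reduce__(self):
        # On unpickling, this runs print(...); a real attacker would invoke
        # os.system or similar to gain code execution on the inference host.
        return (print, ("arbitrary code ran during model deserialization",))

blob = pickle.dumps(MaliciousPayload())
pickle.loads(blob)  # prints the message: code executed at load time
```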
If successfully exploited, experts said the attack could have a devastating impact, as it could give an attacker access to millions of private AI models and applications.
Reacting to the analysis from Wiz, Eric Schwake, director of cyber security strategy at Salt Security, outlined how malicious actors can leverage unauthorized access to private models to manipulate their outputs and cause widespread disruption.
"While AI presents exciting opportunities, it also introduces novel attack vectors that traditional security solutions may need to catch up on. The very nature of AI models, with their complex algorithms and vast training datasets, makes them vulnerable to manipulation by attackers,” he explained
“AI is also a potential ‘black box’ which provides very little visibility into what goes on inside of it. Malicious actors can exploit these vulnerabilities to inject bias, poison data, or even steal intellectual property.”
Solomon Klappholz is a Staff Writer at ITPro. He has experience writing about the technologies that facilitate industrial manufacturing, which has led him to develop a particular interest in IT regulation, industrial infrastructure applications, and machine learning.