Published on June 5th, 2024
Hugging Face, an Artificial Intelligence (AI) company, has revealed that it detected unauthorized access to its Spaces platform earlier this week. “We have suspicions that a subset of Spaces’ secrets could have been accessed without authorization,” it said in an advisory.
Spaces, a platform offered by Hugging Face, allows users to create, host, and share AI and machine learning (ML) applications. Additionally, it serves as a discovery service for finding AI apps developed by other users.
Response To Security Event
In response to the incident, Hugging Face has revoked a number of HF tokens present in the compromised secrets and is notifying affected users via email.
“We recommend you refresh any key or token and consider switching your HF tokens to fine-grained access tokens which are the new default,” it added.
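In practice, rotating a leaked token means issuing a new fine-grained token from the Hugging Face settings page and swapping it into wherever the old one was stored. A minimal sketch of that swap, assuming the token lives in the `HF_TOKEN` environment variable (which the `huggingface_hub` library reads) and relying only on the fact that Hugging Face access tokens begin with the `hf_` prefix:

```python
import os

def rotate_hf_token(new_token: str) -> None:
    """Replace the Hugging Face token in the process environment with a
    freshly issued one. `new_token` is assumed to be a fine-grained token
    generated manually from the account's Access Tokens settings page."""
    # Hugging Face user access tokens are prefixed with "hf_"; reject
    # anything else as a likely copy-paste error.
    if not new_token.startswith("hf_"):
        raise ValueError("Hugging Face tokens are expected to start with 'hf_'")
    # huggingface_hub picks up HF_TOKEN automatically for authenticated calls.
    os.environ["HF_TOKEN"] = new_token
```

This only updates the running process; the old token must still be revoked on the Hugging Face side, and any CI/CD secrets or `.env` files holding it must be updated separately.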
Investigation And Disclosure
Hugging Face has not disclosed the exact number of impacted users, stating that the incident is currently under further investigation.
Additionally, it has informed law enforcement agencies and data protection authorities about the breach.
Context And Background
The incident occurs amid the rapid growth of the AI sector, with AI-as-a-service (AIaaS) providers becoming targets for attackers seeking to exploit them for malicious purposes.
In April, cloud security firm Wiz highlighted security vulnerabilities in Hugging Face’s platform that could allow adversaries to gain cross-tenant access and compromise AI/ML models through CI/CD pipelines.
Previous research by HiddenLayer uncovered vulnerabilities in Hugging Face’s Safetensors conversion service, enabling the hijacking of AI models submitted by users and facilitating supply chain attacks.
Potential Consequences
Wiz researchers emphasized the severity of a potential compromise on Hugging Face’s platform, warning that it could lead to unauthorized access to private AI models, datasets, and critical applications, posing significant supply chain risks.
Rise Of AI-as-a-Service (AIaaS) Attacks: How To Stay Safe In The Cloud
As AI becomes more accessible through cloud-based services (AIaaS), a new kind of cyberattack is emerging. These attacks target AIaaS platforms to steal sensitive data or manipulate AI models. This can lead to stolen intellectual property, biased algorithms, or even compromised decision-making in critical applications.
So how can you stay safe? Here are two key points:
Choose Your Provider Wisely: Research the security practices of AIaaS providers. Look for companies with strong security protocols, regular vulnerability assessments, and transparent data handling policies.
Security Within Your Service: Don’t rely solely on the provider’s security. Use strong passwords, enable multi-factor authentication, and monitor your projects for suspicious activity.
By following these steps, you can minimize the risk of AIaaS attacks and keep your AI projects safe.
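One concrete piece of that hygiene is making sure tokens never end up hard-coded in project files in the first place, since a committed token is exactly the kind of secret this incident exposed. A minimal sketch of a local secret scan, assuming only the `hf_` prefix convention for Hugging Face tokens (the length threshold in the pattern is an assumption, chosen to skip short false positives):

```python
import re
from pathlib import Path

# Rough shape of a Hugging Face token: "hf_" followed by a long
# alphanumeric string. The 30+ length cutoff is a heuristic assumption.
TOKEN_RE = re.compile(r"hf_[A-Za-z0-9]{30,}")

def find_hardcoded_tokens(root: str) -> list[tuple[str, str]]:
    """Scan .py files under `root` and return (path, token) pairs for
    anything that looks like an embedded Hugging Face token."""
    hits = []
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for match in TOKEN_RE.findall(text):
            hits.append((str(path), match))
    return hits
```

Running a check like this before each commit, alongside MFA and provider-side fine-grained tokens, keeps leaked credentials from reaching a shared platform at all.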