
What is AI jacking?


AI jacking is a new term in cybersecurity that describes a specific type of cyber attack targeting artificial intelligence (AI) systems. It mainly affects popular AI platforms like Hugging Face. This type of attack is concerning because it can impact multiple users at once.

Hugging Face is a key player in this issue. It’s known for its open-source machine learning projects, offering many models and datasets used in AI research and development.

The platform has gained more users with the growth of generative AI, especially with models like GPT, the basis of OpenAI’s ChatGPT. But its popularity has also made it a target for AI jacking.


Short Explanation

The attack exploits how Hugging Face handles renamed models and datasets. Normally, when a model or dataset receives a new name, requests for the old name are redirected to the new one.

But if an attacker re-registers the abandoned old name, that registration takes precedence over the redirect, so anyone still requesting the old name receives the attacker's content instead of the original. This is dangerous, especially in machine learning, where data integrity is crucial.

How does AI jacking work?

AI jacking operates through a series of targeted steps that exploit the structure and features of AI platforms.

Identification of Targets

Attackers start by identifying popular or widely used AI models and datasets within platforms like Hugging Face.

They focus on those with significant dependencies in various projects.
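In practice, "widely used" can be measured by download or dependency counts. Below is a minimal sketch of ranking candidate resources that way, as an attacker (or a defender auditing exposure) might; the record format and names are hypothetical stand-ins for metadata a platform's public API would provide.

```python
# Rank AI resources by how widely they are used. The catalog records here
# are illustrative; real metadata would come from the platform's API.

def top_targets(resources, limit=3):
    """Return the most-downloaded resources, the likeliest takeover targets."""
    return sorted(resources, key=lambda r: r["downloads"], reverse=True)[:limit]

catalog = [
    {"name": "org-a/sentiment-model", "downloads": 120_000},
    {"name": "org-b/tiny-dataset", "downloads": 900},
    {"name": "org-c/popular-llm", "downloads": 2_400_000},
]

for r in top_targets(catalog, limit=2):
    print(r["name"])
```

The same ranking is useful defensively: the resources at the top of such a list are the ones whose names and integrity deserve the closest monitoring.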

Monitoring for Renaming Events

Attackers closely monitor these AI resources to detect any renaming events. Such events typically involve changing the name of a model or dataset for reasons such as updates, rebranding, or organizational changes.

Recording Abandoned Names

Once a renaming event occurs, the original name of the resource potentially becomes available.

Attackers quickly register these abandoned names under their control before they are noticed or blocked by platform administrators.

Replacement with Malicious Content

After taking control of the old names, attackers replace legitimate content with malicious versions.

This could be subtly modified models or datasets designed to execute malicious functions, illicitly collect data, or corrupt AI training processes.
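The rename-and-takeover flow described in the steps above can be simulated in a few lines. This registry is an illustrative model of the behavior the article attributes to Hugging Face, not the platform's real code: renames leave a redirect behind, and a direct re-registration of the old name shadows that redirect.

```python
# Minimal simulation of the rename-and-takeover flow (illustrative only).

class Registry:
    def __init__(self):
        self.content = {}    # name -> payload
        self.redirects = {}  # old name -> new name

    def publish(self, name, payload):
        self.content[name] = payload

    def rename(self, old, new):
        self.content[new] = self.content.pop(old)
        self.redirects[old] = new  # old name now redirects to the new one

    def resolve(self, name):
        # A direct entry wins over a redirect -- the crux of the attack:
        # re-registering the abandoned name shadows the redirect.
        if name in self.content:
            return self.content[name]
        return self.content[self.redirects[name]]

hub = Registry()
hub.publish("acme/model", "legit-weights-v1")
hub.rename("acme/model", "acme-ai/model")       # old name left redirecting
assert hub.resolve("acme/model") == "legit-weights-v1"

hub.publish("acme/model", "malicious-weights")  # attacker re-registers it
assert hub.resolve("acme/model") == "malicious-weights"
```

Note that consumers pinned to the old name never see an error: resolution still succeeds, it just returns the attacker's payload.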

Exploiting Dependency Chains

Many AI applications and systems rely on these resources for their functionality. By compromising a single model or dataset, attackers can potentially infiltrate multiple downstream applications and projects that rely on the integrity of these resources.
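One practical defence for downstream consumers is to pin dependencies by content rather than by name alone, so a swapped payload fails verification. A minimal sketch using SHA-256 follows; the expected digest would be recorded when the dependency was first vetted.

```python
# Verify a downloaded artifact against a digest pinned at vetting time.

import hashlib

def verify(payload: bytes, expected_sha256: str) -> bool:
    return hashlib.sha256(payload).hexdigest() == expected_sha256

trusted = b"legit model weights"
pinned = hashlib.sha256(trusted).hexdigest()

assert verify(trusted, pinned)                  # unchanged artifact passes
assert not verify(b"tampered weights", pinned)  # swapped artifact fails
```

Pinning to an immutable identifier achieves the same effect at the platform level; for example, Hugging Face's `from_pretrained` loaders accept a `revision` argument that can be set to a specific commit hash instead of a mutable name.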

Delayed Detection

Malicious changes are often subtle, so users and developers may not detect them quickly, especially when the modifications are crafted to resemble normal updates.

Potential for Widespread Impact

The interconnected nature of AI systems means that a single compromised resource can have a ripple effect, impacting a wide range of applications and users.

This potential for widespread impact is what makes AI jacking a particularly insidious form of cyber attack.


Implications and Boundaries of AI Jacking

AI jacking poses significant cybersecurity risks to AI systems and their users, though attackers also face practical limits in carrying it out.

Impact and Risks

  • Diminished Trust in AI Platforms: AI jacking may erode people’s trust in using or contributing to AI models and platforms due to security concerns.
  • Data Integrity Issues: AI accuracy depends on trustworthy data. AI jacking risks compromising that data, leading to flawed training and inaccurate outcomes, a serious problem in critical areas like healthcare.
  • Operational Challenges for Businesses: Businesses that rely on AI may face disruptions from AI jacking, resulting in financial losses, workflow interruptions, and reputational damage.
  • Potential for Spreading False Information: AI jacking could be used to disseminate misinformation through AI systems, swaying public opinion or causing confusion.

Limits of AI Jacking

  • Detection and Response Mechanisms: As awareness of AI jacking grows, so do efforts to detect and respond to such attacks. Stronger security protocols and AI audit practices can blunt its effectiveness.
  • Platform Countermeasures: AI platforms alerted to the threat are likely to implement stricter security measures, making it harder for attackers to exploit renaming behavior.
  • Legal and Ethical Constraints: Legal consequences and the growing emphasis on ethical AI use may deter potential attackers.

How Legit Security Discovered AI Jacking

The discovery of AI jacking by Legit Security involved a careful examination of how the Hugging Face platform manages its AI models and datasets.

Initial Tests

The team began by changing their account name on Hugging Face from “high reputation account” to “new high reputation account.”

They observed that the platform redirected the old name to the new one, and that the original account name became available for registration again, which pointed to a security flaw.

Demonstrating Vulnerability

To demonstrate how AI jacking works, Legit Security recorded a demonstration video in which they took an existing model and added malicious code, illustrating the risks of this vulnerability.

Searching for Vulnerable Projects

Unlike some other platforms, Hugging Face does not publish a history of name changes to its projects.

Therefore, Legit Security used the Wayback Machine, an internet archiving tool, to examine previous versions of Hugging Face’s models and datasets.

They focused on changes since 2020 when Hugging Face began hosting these models and datasets.
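Archived snapshots like the ones the researchers examined can be enumerated through the Wayback Machine's public CDX API. The sketch below builds such a query and parses the API's JSON output (a header row followed by data rows); the sample data is illustrative, and a real run would fetch the URL over HTTP.

```python
# Build a Wayback Machine CDX query and parse its JSON-style output.

from urllib.parse import urlencode

CDX = "https://web.archive.org/cdx/search/cdx"

def cdx_query_url(page_url, start_year=2020):
    params = {"url": page_url, "from": str(start_year),
              "output": "json", "fl": "timestamp,original"}
    return f"{CDX}?{urlencode(params)}"

def parse_cdx(rows):
    """Turn CDX rows (header row first) into a list of dicts."""
    if not rows:
        return []
    header, *data = rows
    return [dict(zip(header, row)) for row in data]

# Illustrative sample in the CDX output shape.
sample = [["timestamp", "original"],
          ["20200115093000", "https://huggingface.co/models"]]
print(parse_cdx(sample)[0]["timestamp"])
```

Each parsed row identifies one archived snapshot, which can then be fetched and scraped for the model and organization names visible at that date.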

Research Process

The team examined different dates in the Wayback Machine archives and gathered information about Hugging Face’s models and organizations at those times.

They adjusted their methods to accommodate changes in the appearance of the Hugging Face website over the years.

Identifying Risks

After collecting the names, they checked each one to see if it redirected to a new name. A redirection indicated that the original name had changed, creating an opportunity for AI hijacking.

They found many accounts that could be compromised this way. There could be even more vulnerable accounts because not all historical data was available in the archives.
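The redirect check described above can be sketched as a small function: an old name that answers with an HTTP redirect has been renamed, and its target names the successor. The function is shown against stub values; in a real scan the status code and headers would come from an HTTP request issued without following redirects (for example, `requests.head(url, allow_redirects=False)`).

```python
# Detect the rename signal: does a request for the old name redirect?

def renamed_target(status_code, headers):
    """Return the redirect target if the response indicates a rename, else None."""
    if status_code in (301, 302, 307, 308):
        return headers.get("Location")
    return None

# Stub responses standing in for live HTTP answers.
assert renamed_target(301, {"Location": "/new-org/model"}) == "/new-org/model"
assert renamed_target(200, {}) is None
```

Every name this flags is a candidate for takeover until someone re-registers it defensively or the platform reserves it.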

Conclusion

In summary, AI jacking is a cyber attack that mainly affects AI platforms like Hugging Face. It involves re-registering abandoned names of AI models and datasets and serving harmful content under them.

This attack can undermine trust in AI technologies, affect the quality of AI data, and disrupt business operations.

Legit Security’s work in uncovering this issue underscores the importance of enhanced security and ongoing monitoring in the field of AI.

As AI becomes more prevalent, protection against threats like AI jacking is essential to maintaining safe and responsible AI use.
