Microsoft has announced legal action against a ‘foreign-based threat actor group’ accused of running a hacking-as-a-service operation designed to bypass the safety controls of its generative AI services and create harmful and offensive content.
The company’s Digital Crimes Unit (DCU) uncovered the group’s sophisticated methods, which involved exploiting customer credentials scraped from public websites and illegally accessing accounts tied to generative AI services. These actors reportedly altered the capabilities of tools like Azure OpenAI Service and monetized their access by selling it to other malicious entities, complete with detailed instructions for generating harmful content.
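Credentials "scraped from public websites" typically means API keys accidentally committed to public code repositories or pasted into web pages. As a defensive illustration, here is a minimal sketch of the kind of pattern matching secret scanners use to catch such leaks before attackers do. The 32-character-hex key format is an assumption for illustration; production tools such as gitleaks or truffleHog combine many provider-specific rules with entropy checks.

```python
import re

# Assumed pattern for illustration: many service API keys (including
# Azure-style keys) appear as 32-character lowercase hex strings.
KEY_PATTERN = re.compile(r"\b[0-9a-f]{32}\b")

def find_candidate_keys(text: str) -> list[str]:
    """Return substrings that look like leaked 32-hex-char API keys."""
    return KEY_PATTERN.findall(text)

# Example: a key accidentally hardcoded in source that was pushed publicly.
sample = 'api_key = "0123456789abcdef0123456789abcdef"  # oops, committed'
print(find_candidate_keys(sample))  # → ['0123456789abcdef0123456789abcdef']
```

Scanning your own repositories and web content for patterns like this, and rotating any key that ever appears publicly, removes the raw material this kind of operation depends on.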
Microsoft discovered the malicious activity in July 2024 and has since revoked the group’s access, introduced new security measures, and strengthened its defenses. Additionally, the company obtained a court order to seize the domain ‘aitism[.]net,’ which was a key component of the criminal operation.
Microsoft stated that the defendants’ “de3u” application communicates with Azure systems through undocumented Microsoft network APIs, sending requests designed to mimic legitimate Azure OpenAI Service API calls. These requests are authenticated using stolen API keys and other compromised credentials.
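For context, a legitimate Azure OpenAI Service REST call is authenticated with little more than a resource-scoped endpoint and an `api-key` header, which is why stolen keys alone are enough to impersonate a paying customer. The sketch below shows only the general request shape; the resource name, deployment name, and key are placeholders, not values from the case, and no request is actually sent.

```python
# Sketch of the request shape for an Azure OpenAI chat-completions call.
# All identifiers are placeholders chosen for illustration.
RESOURCE = "example-resource"      # placeholder Azure resource name
DEPLOYMENT = "example-deployment"  # placeholder model deployment name
API_VERSION = "2024-02-01"         # an Azure OpenAI API version string

url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
)
headers = {
    # The bearer of this key *is* the identity: no further proof is required.
    "api-key": "REDACTED",
    "Content-Type": "application/json",
}
body = {"messages": [{"role": "user", "content": "Hello"}]}

print(url)
```

Because the key is the sole credential, requests signed with a stolen key are indistinguishable from the legitimate owner's traffic, which is the weakness a proxy like “de3u” exploits when it replays mimicked API calls with compromised credentials.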
Notably, the misuse of proxy services to illegally access large language model (LLM) services was highlighted by Sysdig in May 2024 as part of an “LLMjacking” attack campaign. This campaign targeted AI platforms from providers such as Anthropic, AWS Bedrock, Google Cloud Vertex AI, Microsoft Azure, Mistral, and OpenAI, using stolen cloud credentials to sell unauthorized access to other malicious actors.
“Defendants have operated the Azure Abuse Enterprise through a coordinated and ongoing pattern of illegal activities aimed at achieving their unlawful objectives,” Microsoft alleged.
The company further noted that the illegal activity is not confined to attacks on Microsoft. Evidence suggests that the Azure Abuse Enterprise has also targeted and victimized other AI service providers.