
Cybersecurity researchers have disclosed a novel name confusion attack called whoAMI that allows anyone who publishes an Amazon Machine Image (AMI) with a specific name to gain code execution within the Amazon Web Services (AWS) account.
“If executed at scale, this attack could be used to gain access to thousands of accounts,” Datadog Security Labs researcher Seth Art said in a report shared with The Hacker News. “The vulnerable pattern can be found in many private and open source code repositories.”
At its core, the attack is a subset of a supply chain attack that involves publishing a malicious resource and tricking misconfigured software into using it in place of the legitimate counterpart.

The attack exploits the fact that anyone can share an AMI, a virtual machine image used to boot Elastic Compute Cloud (EC2) instances in AWS, with the community catalog, and that developers may forget to specify the “--owners” attribute when searching for one via the ec2:DescribeImages API.
Put differently, the name confusion attack requires the three conditions below to be met when a victim retrieves the AMI ID through the API (a minimal code sketch follows the list) –
- Use of the name filter,
- A failure to specify either the owner, owner-alias, or owner-id parameters, and
- Fetching the most recently created image from the returned list of matching images (“most_recent=true”)
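To make the pattern concrete, here is a minimal Python (boto3) sketch of a lookup that satisfies all three conditions; the wildcarded name filter is a hypothetical stand-in for whatever pattern a given project searches for:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Condition 1: a name filter is used.
# Condition 2: no Owners / owner-alias / owner-id restriction, so public
# Community AMIs published by any AWS account are included in the results.
images = ec2.describe_images(
    Filters=[
        {"Name": "name", "Values": ["my-base-image-*"]},  # hypothetical pattern
    ],
)["Images"]

# Condition 3: the most recently created match wins, which is exactly the
# slot an attacker can claim by publishing a newer look-alike image.
latest = max(images, key=lambda img: img["CreationDate"])
print(latest["ImageId"])
```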
This leads to a scenario in which an attacker can create a malicious AMI with a name that matches the pattern specified in the search criteria, resulting in the creation of an EC2 instance that uses the threat actor’s doppelgänger AMI.
This, in turn, grants remote code execution (RCE) capabilities on the instance, allowing the threat actor to carry out various post-exploitation actions.
All an attacker needs is an AWS account from which to publish their backdoored AMI to the public Community AMI catalog, along with a name that matches the AMIs sought by their targets.
“It is very similar to a dependency confusion attack, except that in the latter, the malicious resource is a software dependency (such as a pip package), whereas in the whoAMI name confusion attack, the malicious resource is a virtual machine image,” Art said.
Datadog said roughly 1% of organizations monitored by the company were affected by the whoAMI attack, and that it found public examples of code written in Python, Go, Java, Terraform, Pulumi, and Bash shell using the vulnerable criteria.
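The straightforward hardening for such code, consistent with the guidance further below, is to pin the publisher. A sketch of the same boto3 lookup with the Owners parameter added (the account ID is a placeholder for whichever publisher is actually trusted):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Restricting results to a trusted owner means a look-alike Community AMI
# published from an attacker-controlled account can never be returned.
images = ec2.describe_images(
    Owners=["123456789012"],  # placeholder: the trusted publisher's account ID
    Filters=[
        {"Name": "name", "Values": ["my-base-image-*"]},  # hypothetical pattern
    ],
)["Images"]

latest = max(images, key=lambda img: img["CreationDate"])
print(latest["ImageId"])
```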
Following responsible disclosure on September 16, 2024, the issue was addressed by Amazon three days later. When reached for comment, AWS told The Hacker News that it did not find any evidence of the technique being abused in the wild.
“All AWS services are operating as designed. Based on extensive log analysis and monitoring, our investigation confirmed that the technique described in this research has only been executed by the authorized researchers themselves, with no evidence of usage by any other parties,” the company said.

“This technique could impact customers who retrieve Amazon Machine Image (AMI) IDs via the ec2:DescribeImages API without specifying the owner value. In December 2024, we introduced Allowed AMIs, a new account-wide setting that enables customers to limit the discovery and use of AMIs within their AWS accounts. We recommend customers evaluate and implement this new security control.”
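For teams that want to adopt the control programmatically, the sketch below assumes the EC2 API actions AWS documents for Allowed AMIs (EnableAllowedImagesSettings and ReplaceImageCriteriaInAllowedImagesSettings) as exposed through boto3; the exact method and field names should be verified against the SDK version in use:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Assumption: these calls mirror the Allowed AMIs API actions introduced in
# December 2024; check your boto3 version's documentation before relying on them.

# Limit AMI discovery to Amazon-owned images and AWS Marketplace publishers.
ec2.replace_image_criteria_in_allowed_images_settings(
    ImageCriteria=[{"ImageProviders": ["amazon", "aws-marketplace"]}]
)

# "audit-mode" surfaces non-compliant AMIs without blocking them; switch the
# state to "enabled" to enforce the restriction account-wide.
ec2.enable_allowed_images_settings(AllowedImagesSettingsState="audit-mode")
```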
As of last November, HashiCorp Terraform has started issuing warnings to users when “most_recent = true” is used without an owner filter in terraform-provider-aws version 5.77.0. The warning diagnostic is expected to be upgraded to an error effective version 6.0.0.