
As bad an issue as ransomware is within data centers, I was a bit skeptical that it was much of a problem in the cloud. I, personally, hadn't run into any incidents and had started to think it was more theoretical than anything else. It turns out I was a little wrong. Okay, totally wrong. Not only is it a bigger problem than I thought, but the attack pattern was different than I expected.

At the AWS re:Inforce conference, I attended an awesome session led by Kyle Dickinson, Megan O'Neil, and Karthik Ram. The session was totally packed and the room minder turned away dozens of people. This post is a bit of a recap of the session, combined with my own experiences and recommendations. Any errors and omissions are mine, not theirs.

Is ransomware a problem in AWS?

Yep. Turns out it's more of a problem than I originally thought. Real customers are being affected; it isn't merely theoretical.

How does it work?

I'll cover the initial exploit vector in the next question. There are four possible techniques in use, but only two of them are really viable and commonly seen:

  • A traditional ransomware attack against instances in AWS. The attacker compromises an instance (often via phishing a user/admin, not always direct compromise), then installs their malware to encrypt the data and spread to other reachable instances. This is really no different than ransomware in a data center since it doesn’t involve anything cloud-specific.
  • The attacker copies data out of an S3 bucket and then deletes the original data. This is the most commonly seen cloud native ransomware on AWS.
  • The attacker encrypts S3 data using a KMS key under their control. This is more theoretical than real, due to multiple factors. It’s a lot easier to just delete an object/bucket than it is to retroactively encrypt one.
  • The attacker does something to data in another storage service to lock/delete the data. I’m being nebulous because this isn’t seen, and most of those services have internal limitations and built-in resiliency that make ransomware hard to pull off.

In summary: S3 is the main target and the attackers copy then delete the data. Instances/servers can also be a target of the same malware used to attack data centers. There are a few theoretical attacks that aren’t really seen in the wild.

How do the attackers get in?

Exposed credentials. Nearly always static access keys, but possibly keys obtained from a compromised instance (via the metadata service). You know, pretty much how nearly all cloud-native attacks work.
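
Since static access keys are the usual way in, a useful first step is simply knowing where they still exist. Here is a minimal boto3 sketch that lists every IAM user with long-lived keys; it's per account, so you'd run it through whatever multi-account or inventory tooling you already have:

```python
import boto3

iam = boto3.client("iam")

# Walk every IAM user in the account and print any long-lived access keys.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            print(user["UserName"], key["AccessKeyId"], key["Status"], key["CreateDate"])
```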

What’s the attack sequence?

I'll focus on the S3 scenario since that's the cloud-native one we care about.

  1. Attacker obtains credentials.
  2. Attacker uses credentials for reconnaissance to determine allowed API calls and identify resources they can access.
  3. Attacker discovers they have S3 write permissions and list/read access to identify buckets. Note that even without List privileges, the attacker can sometimes obtain bucket names from other sources, like DNS or GitHub, although this is much less likely.
  4. Attacker copies/moves data to another location, which isn’t necessarily in AWS.
  5. The attacker deletes the source objects/files.
  6. Attacker uploads a ransom note (or emails one).

Since this is all automated, the process can start within a minute of a credential exposure.

How can I detect the attack?

Well, assuming you can't just skip ahead to prevention…

The attacker will usually leave a note with contact information so you can send them Bitcoin, which is convenient. But the odds are most of you will want to identify a problem before that.  Let’s walk through the attack sequence to see where we can pick things up.

First, you will want to enable more in-depth monitoring of your sensitive buckets. Since this post is already running longer than I'd like, I'll skip over the ins and outs of how to identify and manage those buckets and instead focus on a few key sources to consider (there's a quick enablement sketch after the list). For cost reasons, don't expect to turn these on for everything:

  • CloudTrail, of course.
  • CloudTrail Data Events for any buckets you care about. This costs extra.
  • GuardDuty.
  • Optional: Security Hub. This is the best way to aggregate GuardDuty and other AWS security services across all your accounts.
  • Maybe: S3 Server Access Logs. If you have CloudTrail Data Events you get most of what you would want. But S3 logs are free to create (you just pay for storage) and do pick up a few events that CloudTrail might miss (e.g. failed authentications). They also take hours to show up, so they aren't useful in a live-fire incident. Read this for the differences: https://docs.aws.amazon.com/AmazonS3/latest/userguide/logging-with-S3.html
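
If it helps to see what enabling the paid pieces looks like, here is a minimal boto3 sketch for a single sensitive bucket. The bucket name and trail name are placeholders, and it assumes you already have a trail and a GuardDuty detector in the account:

```python
import boto3

BUCKET = "my-sensitive-bucket"   # placeholder: your bucket
TRAIL = "my-org-trail"           # placeholder: an existing CloudTrail trail

cloudtrail = boto3.client("cloudtrail")
guardduty = boto3.client("guardduty")

# CloudTrail S3 Data Events, scoped to one bucket to keep costs down.
cloudtrail.put_event_selectors(
    TrailName=TRAIL,
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": True,
        "DataResources": [{
            "Type": "AWS::S3::Object",
            "Values": [f"arn:aws:s3:::{BUCKET}/"],
        }],
    }],
)

# Turn on GuardDuty S3 protection for the account's existing detector.
detector_id = guardduty.list_detectors()["DetectorIds"][0]
guardduty.update_detector(
    DetectorId=detector_id,
    DataSources={"S3Logs": {"Enable": True}},
)
```

Scoping the data events to specific buckets (rather than all of S3) is what keeps the cost under control.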

These numbers correspond to the attack sequence listed above:

  1. Self or third party identification of an exposed credential. Usually via scanning common repositories, like GitHub. AWS once found one of mine and emailed me. Oopsie.
  2. Your account credential recon detections will work here. Some options include:
    1. GuardDuty findings for instance credential exfiltration. However, this has about a 20-minute delay and there are evasion techniques.
    2. The GetCallerIdentity API call isn’t always bad, but isn’t a call you should see a lot in production accounts.
    3. GetAccountAuthorizationDetails should trigger an alarm every time.
    4. Multiple failed API calls from a single IAM entity.
  3. Now we start looking at detections that indicate the attacker is focusing on S3. You'll likely notice that early detection in these phases can be difficult due to the noise, but keep in mind these will be more viable in situations like production accounts managed via CI/CD with limited human access. Heck, this might motivate you to use more cloud-native patterns.
    1. The GuardDuty S3 findings for Discovery events, which may have to be enabled on top of just turning on GuardDuty, depending on how your account and organization are set up. See https://docs.aws.amazon.com/guardduty/latest/ug/s3-protection.html for more details.
    2. Filter for failed Read and List Management and Data Events on the S3 service. You might catch the attacker looking around. You could do this in your SIEM but it’s also easy to build CloudWatch Metrics Filters for these.
  4. The attacker is reading the objects and making copies. This may intertwine with the next phase if they read (copy) then delete on each object. The detections for phases 3 and 5 also apply here.
  5. This is the “uh oh” stage. The attacker isn’t merely looking around, they are executing the attack and deleting the copied data.
    1. The GuardDuty exfiltration/impact S3 findings will kick in. Remember, it takes at least 20 minutes to trigger and depending on the number of objects this could be a late indicator.
    2. CloudTrail Insights, if you use them, will alert on the large number of Write events used to move the data.
    3. You can build your own detections for a large number of delete calls. Depending on your environment and normal activity patterns, this could be a low number and trigger faster than GuardDuty. Your SIEM and CloudWatch Metrics Filters are good options (there's a metric filter sketch after this list).
    4. Mature organizations can seed accounts with canary buckets/objects and alert on any operations touching those buckets.
    5. While it’s a less-common attack pattern, you can alert on use of a KMS key from outside your account.
  6. If you don’t detect the attack until here, call law enforcement and engage the AWS customer incident response team. https://aws.amazon.com/blogs/security/welcoming-the-aws-customer-incident-response-team/
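
For the do-it-yourself detections in steps 3.2 and 5.3, here is a rough sketch of a CloudWatch Logs metric filter and alarm for S3 delete spikes. It assumes your CloudTrail (with the S3 data events from earlier) is delivered to a CloudWatch Logs log group; the log group name, SNS topic, and threshold are all placeholders you would tune to your own baseline:

```python
import boto3

LOG_GROUP = "CloudTrail/LogGroup"  # placeholder: trail delivered to CloudWatch Logs
ALARM_TOPIC = "arn:aws:sns:us-east-1:111122223333:security-alerts"  # placeholder

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Count S3 object delete calls seen in the CloudTrail data events.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="S3DeleteObjectCount",
    filterPattern='{ ($.eventSource = "s3.amazonaws.com") && (($.eventName = "DeleteObject") || ($.eventName = "DeleteObjects")) }',
    metricTransformations=[{
        "metricName": "S3DeleteObjectCount",
        "metricNamespace": "Security",
        "metricValue": "1",
        "defaultValue": 0,
    }],
)

# Alarm when deletes spike above what's normal for this account.
cloudwatch.put_metric_alarm(
    AlarmName="S3MassDelete",
    Namespace="Security",
    MetricName="S3DeleteObjectCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=100,  # placeholder: tune to your baseline
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[ALARM_TOPIC],
)
```

The same pattern works for the failed Read/List calls in step 3.2; just swap the filter pattern to match events that include an errorCode.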

How can I prevent the attack?

To succeed, the attacker needs 3 conditions:

  • Access to credentials
  • Permissions to read and write in S3
  • The ability to delete objects that can't be recovered (there's a quick permissions-check sketch after this list)
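
If you want to check the second and third conditions for a specific principal, the IAM policy simulator can help. This boto3 sketch uses a made-up role ARN and bucket, and it only evaluates the identity-based policies attached to that principal, so treat it as a quick screen rather than a complete answer:

```python
import boto3

iam = boto3.client("iam")

# Ask the policy simulator whether this principal could read, write, or delete
# objects in the sensitive bucket. ARNs below are placeholders.
results = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::111122223333:role/app-role",
    ActionNames=["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
    ResourceArns=["arn:aws:s3:::my-sensitive-bucket/*"],
)

for result in results["EvaluationResults"]:
    print(result["EvalActionName"], result["EvalDecision"])
```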

The first layer of prevention is locking down IAM, then using the built-in AWS tools for resiliency. That’s easy to say and hard to do, but here is an S3-focused checklist, and I’ll try to leave out most of the common hygiene/controls:

  • Don't allow IAM users, which come with static access keys, at all. If that isn't possible, definitely use your tooling to identify any IAM users with S3 delete permissions.
  • Require MFA for SSO/federated users. Always and forever.
  • Have administrators escalate to a different IAM role when they need to perform delete operations. You can even fully separate read and delete permissions into separate roles.
  • If an instance needs access to S3, make sure you scope the permissions as tightly as possible to the minimal required API calls to the minimal resources.
  • Use a VPC endpoint to access the bucket, and layer on a resource policy that only allows delete from the source VPC. Then the attacker can't use the credentials outside that VPC (there's a policy sketch after this list).
  • Turn on versioning, AWS Backup, and/or bucket replication. All of these will ensure you can’t lose your data. Well, unless you REALLY mess up your IAM policies and let the attacker have at it. Some of these need to be enabled when you create the bucket, so you might need to have a migration operation to pull it off.
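
Here is a rough sketch of the VPC endpoint idea: a bucket policy that denies object deletes unless the request arrives through a specific S3 gateway endpoint. The bucket name and endpoint ID are placeholders, and you should try something like this in a sandbox first, since an overly broad Deny can lock out your own automation:

```python
import json
import boto3

BUCKET = "my-sensitive-bucket"            # placeholder
VPC_ENDPOINT = "vpce-0123456789abcdef0"   # placeholder: your S3 VPC endpoint ID

# Deny object deletes unless the request comes through the approved endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDeleteOutsideVpce",
        "Effect": "Deny",
        "Principal": "*",
        "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        "Condition": {"StringNotEquals": {"aws:SourceVpce": VPC_ENDPOINT}},
    }],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```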

You'll notice I'm skipping Block Public Access. That's a great feature, but many orgs struggle to implement it at scale since they need some public buckets, and it won't help with attacks using exposed credentials anyway.

This all takes effort and adds cost, so I recommend starting with the buckets that really matter. There are some more advanced strategies, especially if you are operating larger environments, that won't fit in a post, but drop me a line if you want to talk about them.

If you only had time for 2 things, what would they be?

Assuming I know which buckets matter, I’d turn on versioning and replication. Then I would require privilege escalation for any S3 delete operations.
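
As a sketch of the resiliency half, here is what turning on versioning and replication might look like for one bucket. The bucket names and replication role ARN are placeholders; the destination bucket must already exist with versioning enabled, and the role needs the standard S3 replication permissions:

```python
import boto3

s3 = boto3.client("s3")
SOURCE = "my-sensitive-bucket"                               # placeholder
DEST_ARN = "arn:aws:s3:::my-sensitive-bucket-replica"        # placeholder
ROLE_ARN = "arn:aws:iam::111122223333:role/s3-replication"   # placeholder

# Versioning keeps prior versions around when objects are overwritten or deleted.
s3.put_bucket_versioning(
    Bucket=SOURCE,
    VersioningConfiguration={"Status": "Enabled"},
)

# Replicate new objects to a second bucket, ideally in another account or region.
s3.put_bucket_replication(
    Bucket=SOURCE,
    ReplicationConfiguration={
        "Role": ROLE_ARN,
        "Rules": [{
            "ID": "ransomware-resilience",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": DEST_ARN},
        }],
    },
)
```

The privilege-escalation piece is an IAM design decision (separate roles for delete, reached only through your normal escalation path) rather than a single API call, so I'll leave that one as a description.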

How can FireMon help?

We have a new IAM product in Beta for Just in Time privileges that should be out soon. We also offer posture checks and threat detectors in DisruptOps to identify risky buckets and alert on malicious activity. Drop me a line if you want to talk about them, or even if you just want some general advice on the AWS options I mentioned in the post.

What new things did you learn in the re:Inforce session?

I didn't know that S3 ransomware was so common. I also didn't know that copy-then-delete was the preferred attack technique. I thought it was KMS encryption, and it makes total sense why that approach is more theoretical and rare. I was familiar with the detectors and defenses, but the AWS speakers did a wonderful job tying them all together in a very clear and usable way. The talk definitely exceeded my expectations.
