Here is what happened:
Some 600 instances were spawned within 3 hours before AWS flagged it and sent us a health event. Numerous domains had been verified, and we could see an SES quota increase request had been made.
We are still investigating the vulnerability on our end. Our initial suspect list has two candidates: a leaked API key, or console access where MFA wasn't enabled.
The client was a small org, and two very old IAM accounts suddenly had recent (yesterday) console logins and password changes.
I'm investigating the extent of the compromise, but so far it seems all they did was open a ticket to turn on SES production access and increase the daily email limit to 50k.
These were basically dormant IAM users from more than 5 years ago, and it's certainly odd timing that they'd suddenly pop up on this particular day.
You receive an email that says AWS is experiencing an outage, log into "your console" to view the status, authenticate through a malicious wrapper, and your account security is compromised.
Even cautious people are more vulnerable to phishing when the message aligns with their expectations and they are under pressure because services are down.
Always, always log in through bookmarked links or by typing the URL manually. Never use a link in an email unless it's in direct response to something you initiated, and even then examine it carefully.
If you'd still rather skip typing things in manually or navigating the web interface, logging in on a new tab first and then clicking the link is also an option.
Of course, as always, PEBKAC. You will have to strictly follow protocol, and not every team is willing to jump through annoying hoops every day.
Again, last I looked, FIDO MFA credentials cannot be used for API calls, which you'd need to make for STS credential generation.
So on the off chance that you get a phishing mail, you generate temporary credentials to take whatever actions it asks for, attempt to log in with those credentials, and get phished, but the attacker only has API access for 900s (or whatever you set as the timeout; 900s is just the minimum).
900s won't stop them from running amok, but it caps the amok at 900s.
So if your MFA device for your main account is a FIDO2 device, you either:
1. Don't require MFA to generate temporary credentials. Congrats, your MFA is now basically theater.
2. Do require MFA to generate temporary credentials. Congrats, the only way to generate temporary credentials is to instead use a non-FIDO MFA device on the main account.
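For reference, a minimal boto3 sketch of that temporary-credential flow. The MFA ARN and token code are placeholders you'd supply, and the 900-second floor (and 36-hour ceiling) are GetSessionToken's documented limits for IAM users:

```python
# Documented DurationSeconds range for sts:GetSessionToken (IAM users).
MIN_DURATION = 900       # 15 minutes: the floor discussed above
MAX_DURATION = 129_600   # 36 hours

def clamp_duration(requested: int) -> int:
    """Clamp a requested session lifetime to GetSessionToken's allowed range."""
    return max(MIN_DURATION, min(requested, MAX_DURATION))

def get_mfa_session(mfa_arn: str, token_code: str, duration: int = MIN_DURATION):
    """Request short-lived credentials gated on a (non-FIDO) MFA code.

    mfa_arn and token_code are placeholders, e.g.
    "arn:aws:iam::123456789012:mfa/alice" and the current 6-digit code.
    """
    import boto3  # assumption: boto3 is installed and credentials are configured
    sts = boto3.client("sts")
    return sts.get_session_token(
        DurationSeconds=clamp_duration(duration),
        SerialNumber=mfa_arn,
        TokenCode=token_code,
    )
```

Even if these credentials leak, their lifetime is bounded by whatever you passed as the duration.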
Nobody is getting a phishing email, going to the terminal, generating STS credentials, and then feeding those into the phish. The phish is punting them to a fake AWS webpage. Temporary credentials are a mitigation for session token theft, not for phishing.
Require FIDO2-based MFA to log into AWS via Identity Center, then run aws sso login to generate temporary credentials which will be granted only if the user can pass the FIDO2 challenge.
The literal API calls aren't requesting a FIDO2 challenge each time, just like the console doesn't require it for every action. It's session based.
I’m excited to see that Identity Center supports FIDO2 for this use case.
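For anyone wanting to try that flow, here's a sketch of the setup; the profile name, start URL, account ID, and role name are all placeholders:

```shell
# ~/.aws/config (illustrative values)
# [profile dev]
# sso_start_url = https://my-org.awsapps.com/start
# sso_region = us-east-1
# sso_account_id = 123456789012
# sso_role_name = DeveloperAccess
# region = us-east-1

# Opens a browser; the FIDO2 challenge happens in the Identity Center login.
aws sso login --profile dev

# Verify the short-lived credentials work.
aws sts get-caller-identity --profile dev
```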
> You could win $5,000 in AWS credits at Innovate
At first I thought maybe some previous dev had set passwords for troubleshooting, saved those passwords in a password manager, and then got owned all these years later. But that's really, really unlikely. And the timing is so curious.
If you haven't already, check newly created Roles as well. We quashed the compromised users pretty quickly (including my own, which we figured out was the origin), but got a little lucky because I just started cruising the Roles and killing anything less than a month old or with admin access.
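A sketch of that kind of role sweep, assuming you've already merged `aws iam list-roles` output with each role's attached policy ARNs into the shape below (field names mirror the IAM API; the role names are made up):

```python
from datetime import datetime, timedelta, timezone

ADMIN_POLICY = "arn:aws:iam::aws:policy/AdministratorAccess"

def suspicious_roles(roles, now=None, max_age_days=30):
    """Flag roles that are less than a month old or carry admin access.

    `roles` mimics a trimmed list-roles result, with AttachedPolicies
    merged in from list-attached-role-policies (an assumption: you've
    combined the two calls into this shape yourself).
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    flagged = []
    for role in roles:
        too_new = role["CreateDate"] > cutoff
        has_admin = ADMIN_POLICY in role.get("AttachedPolicies", [])
        if too_new or has_admin:
            flagged.append(role["RoleName"])
    return flagged
```

Anything this flags deserves a manual look before deletion, since legitimate recent roles will show up too.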
To play devil's advocate a bit: in our case we are pretty sure my key actually did get compromised, although we aren't precisely sure how (probably a combination of me being dumb, my org being dumb, and some guy putting two and two together). But we did trace the initial users being created to nearly a month before the actual SES request. It is entirely possible whoever did your thing had you compromised for a while, and then once AWS went down they decided that was the perfect time to attack, when you might not notice just-another-AWS-thing happening.
Certainly feels like a strategy I'd explore if I were on that side of the aisle.
If my company used AWS, I would be hyper-aware of anything it's doing right now.
Yes and no, I suppose; it has trade-offs. On one hand, what you're saying is true for sure. But on the other hand, if you're currently trying to rescue a failing service, come across something that looks weird, and have a hunch you should investigate, but you're in the middle of fire-fighting, maybe you're more likely to ignore it, at least until the fire's been put out?
You never know when or if someone might misinterpret a message like this.
Around non-technical people, explain why it's a bad idea, and be empathetic so that your friends, family, and coworkers feel comfortable asking you questions about things like that. Among your techie friends, absolutely, laugh away.
Someone will learn from this, so it's totally worthwhile and I hope nobody got offended.
If they did, we have bigger issues potentially.
Since many businesses were affected by an awful, irresponsible AWS incident, we understand it might be a challenging time for software businesses, which is why our team runs free security checks for all tokens we receive. Limited offer, only today: send us your credentials and get your report in less than 24 hours.
We already received more than 100 API keys from people with a referral from Hacker News; there are only 50 seats left.
Based on docs and some of the concerns about this happening to someone else, I would probably start with the following:
1. Check who/what created those EC2s[0] using the console to query: eventSource:ec2.amazonaws.com eventName:RunInstances
2. Based on the userIdentity field, query the following actions.
3. Check if someone manually logged into Console (identity dependent) [1]: eventSource:signin.amazonaws.com userIdentity.type:[Root/IAMUser/AssumedRole/FederatedUser/AWSLambda] eventName:ConsoleLogin
4. Check if someone authenticated against Security Token Service (STS) [2]: eventSource:sts.amazonaws.com eventName:GetSessionToken
5. Check if someone used a valid STS Session to AssumeRole: eventSource:sts.amazonaws.com eventName:AssumeRole userIdentity.arn (or other identifier)
6. Check for any new IAM Users created for persistence: eventSource:iam.amazonaws.com (eventName:CreateUser OR eventName:DeleteUser)
7. Check if any already-vulnerable IAM Roles were modified to be more permissive [3]: eventSource:iam.amazonaws.com (eventName:CreateRole OR eventName:DeleteRole OR eventName:AttachRolePolicy OR eventName:DetachRolePolicy)
8. Check for any access keys created [4][5]: eventSource:iam.amazonaws.com (eventName:CreateAccessKey OR eventName:DeleteAccessKey)
9. Check if any production / persistent EC2s have had their IAMInstanceProfile changed, which could allow a backdoor using EC2 permissions from a webshell they may have placed on your public-facing infra. [6]
etc. etc.
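The IAM persistence checks above can be sketched as a small triage pass over exported CloudTrail records, assuming you've pulled them down (e.g. via `aws cloudtrail lookup-events`) into dicts like these:

```python
# Event names from the checklist above that usually signal persistence.
PERSISTENCE_EVENTS = {
    "CreateUser", "DeleteUser", "CreateRole", "DeleteRole",
    "AttachRolePolicy", "DetachRolePolicy",
    "CreateAccessKey", "DeleteAccessKey",
}

def triage(events):
    """Group persistence-related IAM events by the userIdentity ARN.

    `events` mimics trimmed CloudTrail records; the exact export step is
    up to you (an assumption here, not prescribed by the checklist).
    """
    by_actor = {}
    for e in events:
        if e.get("eventSource") != "iam.amazonaws.com":
            continue
        if e.get("eventName") not in PERSISTENCE_EVENTS:
            continue
        actor = e.get("userIdentity", {}).get("arn", "<unknown>")
        by_actor.setdefault(actor, []).append(e["eventName"])
    return by_actor
```

Grouping by actor makes it easy to spot a single compromised identity fanning out into new users, roles, and keys.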
But if your initial investigations suggest you have had a compromise, it's probably worthwhile getting professional support to do a thorough audit of your environment.
[0] https://docs.aws.amazon.com/awscloudtrail/latest/userguide/c...
[1] https://docs.aws.amazon.com/awscloudtrail/latest/userguide/c...
[2] https://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-...
[3] https://docs.aws.amazon.com/awscloudtrail/latest/userguide/s...
[4] https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credenti...
[5] https://research.splunk.com/sources/0460f7da-3254-4d90-b8c0-...
[6] https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_R...
Also some more observations below:
1) Some 20 organisations were created within our Root, all with email IDs on the same domain (co.jp)
2) The attacker had created multiple Fargate templates
3) They created resources in 16-17 AWS regions
4) They requested quota increases for SES and AWS Fargate resource rates, plus SageMaker Notebook maintenance; we have no need for these instances (we received an email from AWS for all of this)
5) In some of the emails I started seeing a new name added (random name @outlook.com)
Do what you can to triage and see what's happened. But I would strongly recommend getting a professional outfit in ASAP to remediate (if you have insurance, notify them of the incident as well; they'll often be able to offer services to support remediation), as well as notifying AWS that an incident has occurred.
[0] https://www.reddit.com/r/aws/comments/119admy/300k_bill_afte...
The cause was a bad hire who decided to do a live debugging session in the production environment. (I stress "bad hire" because after I interviewed them, my feedback was that we shouldn't hire them.)
It was kind of a mess to track down and clean up, too.
> Not sure if this is what happened to you, but one thing I ran into a while back is that even if you return Cache-Control: no-store it's still possible for a response to be reused by CloudFront. This is because of something called a "collapse hit" where two requests that occur at the same time and are identical (according to your cache key) get merged together into a single origin request. CloudFront isn't "storing" anything, but the effect is still that a user gets a copy of a response that was already returned to a different user.
> https://stackoverflow.com/a/69455222
> If your app authenticates based on cookies or some other header, and that header isn't part of the cache key, it's possible for one user to get a response intended for a different user. To fix it you have to make sure any headers that affect the server response are in the cache key, even if the server always returns no-store.
---
Though the AWS docs seem to imply that no-store is effective:
> If you want to prevent request collapsing for specific objects, you can set the minimum TTL for the cache behavior to 0 and configure the origin to send Cache-Control: private, Cache-Control: no-store, Cache-Control: no-cache, Cache-Control: max-age=0, or Cache-Control: s-maxage=0.
https://docs.aws.amazon.com/AmazonCloudFront/latest/Develope...
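If it helps, here's a tiny helper, based only on the directive list quoted above, for sanity-checking what your origin is actually sending (per the docs you'd still need the cache behavior's minimum TTL set to 0):

```python
# The origin Cache-Control values the quoted CloudFront docs say prevent
# request collapsing (when combined with a minimum TTL of 0).
ANTI_COLLAPSE_VALUES = (
    "private", "no-store", "no-cache", "max-age=0", "s-maxage=0",
)

def prevents_collapsing(cache_control: str) -> bool:
    """Check whether a Cache-Control header contains at least one of the
    directives CloudFront's docs list for disabling request collapsing."""
    directives = {d.strip() for d in cache_control.lower().split(",")}
    return any(v in directives for v in ANTI_COLLAPSE_VALUES)
```

Note this only checks the header; it says nothing about whether the relevant auth headers are in your cache key, which is the other half of the problem described above.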
Didn't ChatGPT have a similar issue recently? It sounds awfully similar.
Total password reset and tell your AWS representative. They usually let it slide on good faith.
While this could possibly be related to the downtime, I think this is probably an unfortunate case of coincidence.
Inertia is a hell of a drug
Do not discount the possibility of regular malware.
Why don't cloud providers offer IP restrictions?
I can only access GitHub from my corporate account if I am on the VPN, and it should be like that for all of those services with the capability to destroy lives.