Amazon Web Services

Is Amazon Web Services down?

No problems detected

If you are having issues, please submit a report below.

Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".

Problems in the last 24 hours

The chart below shows the number of Amazon Web Services reports received over the last 24 hours by time of day. When the number of reports exceeds the baseline, represented by the red line, we flag a likely outage.
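The threshold rule above can be sketched in a few lines. This is a hypothetical illustration, not the site's actual algorithm: it assumes the baseline is the historical mean report count for that time of day plus a margin of two standard deviations.

```python
# Hypothetical sketch of threshold-based outage detection:
# flag an outage when the current report count exceeds a baseline
# derived from historical counts for the same time of day.

from statistics import mean, stdev

def baseline(history):
    """Baseline (the 'red line'): historical mean plus two standard deviations."""
    return mean(history) + 2 * stdev(history)

def is_outage(current_reports, history):
    """True when the current report count exceeds the baseline."""
    return current_reports > baseline(history)

# Example: typical hourly report counts vs. a sudden spike
typical = [3, 5, 4, 6, 2, 5, 4, 3]
print(is_outage(40, typical))  # spike well above baseline -> True
print(is_outage(5, typical))   # within the normal range -> False
```

Real detectors are more involved (seasonal baselines, smoothing, minimum report volumes), but the core comparison is the same.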

Amazon Web Services Outage Chart 03/12/2026 00:15

At the moment, we haven't detected any problems at Amazon Web Services. Are you experiencing issues or an outage? Leave a message in the comments section!

Most Reported Problems

The following are the most recent problems reported by Amazon Web Services users through our website.

  1. Errors (44%)

  2. Sign in (30%)

  3. Website Down (26%)

Live Outage Map

The most recent Amazon Web Services outage reports came from the following cities:

City and problem type:

  • Daytona Beach, United States: Errors
  • San Francisco, United States: Sign in
  • Oklahoma City, United States: Errors
  • Hudson, United States: Sign in
  • Maricopa, United States: Website Down
  • Reston, United States: Errors

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Amazon Web Services Issues Reports

Latest outage, problem, and issue reports on social media:

  • xucaen Geeky Tech - Ooo Presents! 🚀🎄🎁🕹️✌️♥️⚛️ (@xucaen) reported

    @AWSSupport The link you posted does not take me to an article pertaining to this issue. It says something about light sail and blocks.

  • eoin_jennings Eoin Jennings (@eoin_jennings) reported

    @CTOAdvisor @awscloud You have to tolerate risk or you can’t push feature - the problem is the systematic risk

  • DaveParkinson 🅳🅐🆅🅸🅳 - Brave & ❤️ (@DaveParkinson) reported

    @awscloud I need to close my account as getting billed for services I no longer use. After trying your help pages I now have several dozen tabs open and none the wiser after endless errors. Can someone help?

  • Tyler_J_J 𝐓𝐲𝐥𝐞𝐫 𝐉𝐨𝐡𝐧𝐬𝐨𝐧 #metaDNA 🎸 (@Tyler_J_J) reported

    @CTOAdvisor @awscloud All good questions, which is why IDP is a single point of failure for virtually everyone today. Doesn't change the fact that if your IDP does have an outage, your entire business is dead in the water. BTW, we're working on solving this @PrivOps

  • realnerdjunkie Nerd Junkie ® (@realnerdjunkie) reported

    @awscloud @AWSstartups @JeffBezos Regarding yesterday: Only as a theory if ran some ware is not an issue. The 3rd party software that is manipulating Flex, with API calls, and Pings. The more software the merchant sells to drivers the increase demand for the API call or ping

  • ZachHurt4 Zach Hurt (@ZachHurt4) reported

    If the lessons you took from yesterday's #awscloud issues were "We need to be active-active multi-cloud", you might want to start moving all of your apps to Kubernetes and get ready to shell out $300+ an hour to be able to afford qualified engineers.

  • Arfness Andrew Fraser (@Arfness) reported

    @TVwithThinus @BritBox_ZA An Amazon AWS outage took britbox down yesterday

  • bensie James Miller (@bensie) reported

    Two tough @awscloud takeaways from yesterday: * “Global” consoles such as Route53 are only served from us-east-1. If you want to fail over DNS during a regional outage, better have API calls ready and hope the control plane is up. * Root account access only works in us-east-1.

  • larsmb Lars Marowsky-Brée (@larsmb) reported

    @hikhvar @awscloud Yeah, that makes sense; most scenarios have pretty 1:1 relations, so that's not too terrible.

  • atticlr LA (mask up) BOOMER (@atticlr) reported

    @dlnt Amazon AWS (Amazon Web Services) outage.

  • KirkKelly Kirk Kelly (@KirkKelly) reported

    Thanks @awscloud @JeffBezos lost out on a rush audition due to aws being down and I had a great feeling about that one. I could scream right..........

  • josecarlosrivas Jose Carlos Rivas (@josecarlosrivas) reported

    Did y’all hear that the interwebs was down yesterday? 🌧 #awscloud

  • Engineerisaac Engineerisaac.com (@Engineerisaac) reported

    So Yea @awscloud I didn't notice you go down. I've learned really fast. that Cloud means NOT YOUR COMPUTER

  • RamonGainez Ramon Gainez (@RamonGainez) reported

    @cloudpundit @awscloud In the real world this didn’t happen though. Far too many AWS managed services have central dependencies on us-east-1. Who cares if I can spin up ec2 instances in another az if kinesis is down and my RDS failover died because amazons dns isn’t resilient to AZ failure

  • ghaff Gordon Haff (@ghaff) reported

    @cloudpundit @zehicle @awscloud I know it's hard but this sort of this (dependence on something that can go down to resolve the outage) has happened before and will happen again. Not a failure mode no one has hit before.

  • iamnottomgreen John 💀 (@iamnottomgreen) reported

    @awscloud Fix your damn severs.

  • pdehlke Pete Ehlke (@pdehlke) reported

    So um... @AWSSupport we do still live in linear time right? aws cloudtrail --region us-east-1 lookup-events --start-time 1-15-2021 --end-time 1-16-2021 An error occurred (InvalidTimeRangeException) when calling the LookupEvents operation: The start time precedes the end time.

  • cadpanacea R.K. McSwain (@cadpanacea) reported

    @AutoCAD @thecadgeek Uh #3 - until @awscloud goes down :-(

  • cloudpundit Lydia Leong (@cloudpundit) reported

    In large part what's interesting is what *wasn't* affected by the @awscloud us-east-1 issues: Other regions. Despite the immense size of us-east-1, other regions were able to take the increased load from customers who were multi-region or did regional failover, AFAIK?

  • CryptoWorld2211 Crypto (@CryptoWorld2211) reported

    Maybe @JeffBezos can purchase a @SlotieNft for all the mayhem @awscloud caused in the middle of the server crashes #toMars

  • pdehlke Pete Ehlke (@pdehlke) reported

    @AWSSupport It's a terrible way to say "You can't query events that old with cloudtrail; please use athena instead." "The start date precedes the end date" should never have passed code review :(

  • father_ward discount mgk (@father_ward) reported

    @AndreasVdb @QuinnyPig @awscloud If you dockerize and don't rely on tools you can't self host, it shouldn't be an issue. But I'm not a Dev ops guy.

  • BinaryNights BinaryNights (@BinaryNights) reported

    @supermurs Because of the Amazon AWS outage yesterday, we were experiencing a few issues. Users couldn't register ForkLift or they didn't receive their license keys. Please contact us in an email if you have received a license key but it is not accepted.

  • sjmaclellan Steve MacLellan (@sjmaclellan) reported

    "Summary of the Amazon S3 Service Disruption in the Northern Virginia (US-EAST-1) Region" AWS's description of yesterday's outage. Worth a read.

  • larsmb Lars Marowsky-Brée (@larsmb) reported

    The @awscloud outage is unlikely to be a good reason for pursuing a DR HA setup except for the rare cases where it is. But it does highlight that perhaps more services should have "offline/local" modes as fallback instead of being bricked?

  • tkreidl Tobias J. Kreidl (@tkreidl) reported

    @CTOAdvisor @awscloud Given the recent AWS outage (in addition to a number of outages among pretty much all the major vendors over the years), multicloud seems to be something to seriously consider. The Cloud is nowhere near the reliability of the old Bell Telephone system!

  • mbhbox MBH (@mbhbox) reported

    What if Amazon AWS renames us-east-1 to us-east--1? I think will solve a lot of their problems. #AWSoutage

  • JadonNaas Jadon Naas (@JadonNaas) reported

    @QuinnyPig @awscloud I wonder if part of the problem is that cloud tech leads to thinking that you have eliminated risk by using cloud technologies. I don't think you can ever eliminate risk, just shift it around, concentrate it, or distribute it. Does AWS still even know their own failure domain?

  • RamonGainez Ramon Gainez (@RamonGainez) reported

    @cloudpundit @awscloud Kinesis has a central dependency somewhere in us-east-1. My us-east-2 pipelines had elevated error rates almost the whole day. And yes, r53 was down but DNS capabilities across AWS were effected

  • larsmb Lars Marowsky-Brée (@larsmb) reported

    @hikhvar @awscloud Yes. Reconciling data after partitions is hard, but feasible in most cases. (That said, so would be a server-side DR architecture that assumed a cooperative client. Ah well.)