Amazon Web Services

Amazon Web Services Outage Map

The map below depicts the cities worldwide where Amazon Web Services users have most recently reported problems and outages. If you are having an issue with Amazon Web Services, make sure to submit a report below.


The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.

Heatmap legend: Amazon Web Services users affected, shaded from less to more.


Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".
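Before submitting a report, it can help to confirm whether the problem is on AWS's side or your own. The sketch below shows one way to probe the S3 and EC2 API endpoints from your machine using Python and boto3; the region name and the specific read-only calls are illustrative assumptions, not part of this page, and AWS credentials must already be configured locally for it to run.

```python
# Minimal sketch: make one lightweight, read-only call against S3 and EC2
# to see whether their API endpoints answer from your network.
# Assumes credentials are set up (environment variables or ~/.aws/credentials).
import boto3
from botocore.config import Config
from botocore.exceptions import BotoCoreError, ClientError

REGION = "us-east-1"  # assumed region; use the one your workload runs in
cfg = Config(connect_timeout=5, read_timeout=10, retries={"max_attempts": 1})

def probe(service: str, call) -> None:
    """Run one API call and report success or the error that came back."""
    try:
        call(boto3.client(service, region_name=REGION, config=cfg))
        print(f"{service}: reachable in {REGION}")
    except (BotoCoreError, ClientError) as exc:
        print(f"{service}: problem in {REGION}: {exc}")

probe("s3", lambda c: c.list_buckets())       # list your S3 buckets
probe("ec2", lambda c: c.describe_regions())  # list available EC2 regions
```

If both calls succeed quickly but your application still misbehaves, the issue is more likely specific to one service or region than a broad AWS outage.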


Most Affected Locations

Outage reports and issues in the past 15 days originated from:

Location Reports
Bogotá, Distrito Capital de Bogotá 3
Delhi, NCT 2
Quito, Provincia de Pichincha 2
Mumbai, MH 2
Mangalore, KA 2
Cali, Departamento del Valle del Cauca 2
Rio de Janeiro, RJ 1
Gurgaon, HR 1
Ypsilanti, MI 1
Torreón, COA 1
Belo Horizonte, MG 1
Curitiba, PR 1
Indore, MP 1
Sūrat, GJ 1
Dehra Dūn, UT 1
Stockholm, Stockholm 1
Brookline, MO 1
Chennai, TN 1
Bucaramanga, Departamento de Santander 1
Xochimilco, CDMX 1
Córdoba, CD 1
York, England 1
Zipaquirá, Departamento de Cundinamarca 1
Vadodara, GJ 1

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, your city, and your postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Amazon Web Services Issue Reports

The latest outage, problem, and issue reports from social media:

  • sPaGhEtTi_CaT__ i need to think of a better name (@sPaGhEtTi_CaT__) reported

    @OrdinaryGamers just rent a bunch of servers in random countries, and set them up to let you route through them. don’t buy them cause it’s a lot easier for the fbi or something to bust down your door than Amazon AWS

  • QuinnyPig Corey Quinn (@QuinnyPig) reported

    @ghaff @awscloud Making sure you only punch up and not down is a constant study in self reflection, for sure. I still get it wrong way more often than I'd like.

  • IvanMansueto Ivan Mansueto (@IvanMansueto) reported

    @sergienko_mark @awscloud We're running into the same problem. In our case seems to happen only on models that have connections to other tables, but definitely an AppSync issue.

  • NagaRameshReddy Naga Ramesh (@NagaRameshReddy) reported

    @AWSSupport @apurva_mistry In service health page it showing Appsync operating normally but still we are facing same issue

  • Frame_io Frame.io (@Frame_io) reported

    @juliacgross Hi Julia. Some users are currently experiencing intermittent connectivity issues. Our engineers have determined the root cause is an issue with Amazon AWS. As of right now we have no ETA on a fix, but we will update you when we know more.

  • codecadwallader Steve Cadwallader (@codecadwallader) reported

    @sergienko_mark @awscloud @AWSSupport I created a support case and heard: > I understand you are observing high latencies in us-east-1. Please note we have had multiple reports of increased latency in us-east-1 region. The service team is currently investigating the root cause and fixing the issue as we speak.

  • dswersky Dave 'WYFM' Swersky (@dswersky) reported

    @QuinnyPig @awscloud Oh joy another inscrutable combinatorial problem I will NEVER use in real life

  • adekorir Adams Korir 🇰🇪 (@adekorir) reported

    @digitalocean you have terrible support. I can't believe I actually went for this service! Note to self: avoid 'automated security tooling' and 'review by our support staff'-like fishy services. I should have stuck with @awscloud or @linode.

  • cloudresearch CloudResearch (@cloudresearch) reported

    @Baspower_ @AWSSupport Hi Bas! We've known that approval ratings alone are ineffective for some time. Our blog below explains part of the problem. But you can still collect high quality data from MTurk using the CR MTurk Toolkit. 1/3

  • sohom83 Sohom Bhattacharjee (@sohom83) reported

    @cruisemaniac @awscloud This thing was responsible for slowing down the entire pipeline.

  • faermanj Julio (@faermanj) reported

    @QuinnyPig @awscloud Same issue we were discussing @SunilJagadish ... it's not only about containers. I believe cloudformation should at least have shell hooks (local and/or cloud).

  • sohom83 Sohom Bhattacharjee (@sohom83) reported

    @cruisemaniac @awscloud I was burned by this so much. A testing stack started rolling back and we had to wait for it to timeout because cancelling a CF stack between two bad states results in weird issues. (that is the best I can describe it)

  • carrotcrow13 Alex Wayne (@carrotcrow13) reported

    @awscloud Is it true you guys are the reason we can't purchase our medical ******** right now with servers down?

  • iemejia Ismaël Mejía (@iemejia) reported

    Dear @awscloud can somebody please check why the Spark History Server console seems to be broken since last week (Probably related to AWS EMR 5.33.0 update?) @awswhatsnew

  • doriofb Darío (@doriofb) reported

    @QuinnyPig @awscloud Use actions, jenkins, circle, whatever...The CiCD aws stack its just broken. CloudFormation is not that bad but terraform is way more mature. Hope CDK in the future can a be a production option.
