Amazon Web Services

Amazon Web Services status: access issues and outage reports

No problems detected

If you are having issues, please submit a report below.


Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".
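
If you suspect a problem, a quick scripted check can help rule out an issue on your own side before you submit a report. Here is a minimal sketch using boto3, the AWS SDK for Python; the region name is an illustrative assumption, and it presumes you already have AWS credentials configured locally.

    import boto3

    # Illustrative region; substitute the one your resources live in.
    session = boto3.Session(region_name="us-east-1")

    # S3: listing buckets is a cheap way to confirm the API is reachable.
    s3 = session.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        print("S3 bucket:", bucket["Name"])

    # EC2: count running instances in the region.
    ec2 = session.client("ec2")
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    print("Running EC2 instances:",
          sum(len(r["Instances"]) for r in reservations))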

Problems in the last 24 hours

The graph below shows the number of Amazon Web Services reports received over the last 24 hours, by time of day. When the number of reports exceeds the baseline, represented by the red line, we consider an outage to be in progress.
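
As a rough illustration of that rule (a sketch, not this site's actual implementation), the snippet below buckets reports by hour and flags any hour whose count exceeds a fixed baseline; the sample data and baseline value are made up.

    from collections import Counter

    def flag_outage_hours(report_hours, baseline):
        """report_hours: hours of day (0-23) at which reports arrived.
        baseline: expected reports per hour (the red line)."""
        counts = Counter(report_hours)
        return [hour for hour, n in sorted(counts.items()) if n > baseline]

    # A burst of reports around 14:00 crosses a baseline of 3 per hour.
    print(flag_outage_hours([9, 14, 14, 14, 14, 14, 21], baseline=3))  # [14]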

At the moment, we haven't detected any problems at Amazon Web Services. Are you experiencing issues or an outage? Leave a message in the comments section!

Most Reported Problems

The following are the problems most commonly reported by Amazon Web Services users through our website:

  • Errors (42%)
  • Website Down (32%)
  • Sign in (26%)

Live Outage Map

The most recent Amazon Web Services outage reports came from the following cities:

City               Problem Type    Report Time
Palm Coast         Errors          1 day ago
West Babylon       Errors          7 days ago
Massy              Errors          8 days ago
Benito Juarez      Errors          12 days ago
Paris 01 Louvre    Website Down    16 days ago
Neuemühle          Errors          16 days ago

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Amazon Web Services Issue Reports

The latest outage, problem, and issue reports on social media:

  • TrendNinjaApp
    TrendNinja 🥷 (@TrendNinjaApp) reported

    @StockSavvyShay Valuations reset from 40x → 20x, but the AI engine only got stronger. • Meta Platforms seeing ~3.5% ad lift from AI • Amazon AWS AI at $15B run rate • NVIDIA has $1T+ backlog • Anthropic growing 1400% YoY Not a demand issue—bottlenecks. Same trend. Lower price.

  • Petielvr
    Queen of hearts (@Petielvr) reported

    @AWSSupport Hello, this is acct. #26672735262. I cannot pay my bill because I get a 404 error. I have been trying to escalate this issue since Friday the 17th. Please have a human call Donna @ 3148223232

  • ForwardFuture
    Forward Future (@ForwardFuture) reported

    “Will Amazon ever sell its custom chips outside of AWS?” Matt Garman, CEO @awscloud, says: “Never say never. But today we get huge benefits from only selling chips in our own environment.” “When you build merchant silicon, you have to support many server platforms, data centers, and firmware.” “We only have to build for one: AWS. That simplifies everything.”

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @senunwah @AWSSupport On-prem means when it's down, at least you know whose fault it is.

  • KoukabT53779
    KOUKAB TAHIR (@KoukabT53779) reported

    @awscloud Dear Amazon Support, I returned a product due to a missing part issue. Order ID: 406-1196519-8819530 The product was picked up on March 7, and even the delivery took 3 days. Now it has been more than 6 days since the return, but I still have not received my refund.

  • siddhantio
    Siddhant Tripathi (@siddhantio) reported

    @awscloud opened a case over 10 days ago and it’s still unassigned to any agent. Please help in resolving the billing issue.

  • TutorHailApp
    TutorHail App (@TutorHailApp) reported

    @awscloud hello, we have tried reaching out to you but in vain. Our EC2 attached to UAE has been down forever and you have no communication out. Can we know what we are dealing with? It is our main server. This is ridiculous, handing us over to your bots with no answers.

  • mktldr
    Patriot, unpaid trying to save our country (@mktldr) reported

    @awscloud new gimmick 1 Their #customerservice has really gone down. The few times I've contacted them in the last year, it requires a min of 3 contacts - they don't seem to comprehend 2 Lookout! Many agents promise $, then u give a 5/5 rating & NEVER SEE THE MONEY. FRAUD!!!

  • 0xp4ck3t
    Bryan (@0xp4ck3t) reported

    @AWSSupport We have business + and we should be able to get a response from AWS within 30 minutes for critical issues. It's been hours, our **** DB is down. We need someone to have a look on it. Case ID 177566080000785

  • abusarah_tech
    Mohamed (@abusarah_tech) reported

    I've recently gone down a rabbit hole to learn how hyperscalers / cloud providers like @awscloud and @Azure work (or at least in theory). Huge respect to all the engineers that built the abstraction behind the resource provisioning. I am still trying to wrap my head around it.

  • DecentCloud_org
    Decent Cloud (@DecentCloud_org) reported

    @senunwah @AWSSupport The outage gets a postmortem. Your deadline doesn't read it.

  • er_shivamsingh0
    Shivam Singh (@er_shivamsingh0) reported

    Hey @AWSSupport, can you please tell me when the UAE data centre issue will be resolved??

  • Abomination81
    Abomination (@Abomination81) reported

    @spiderlol_ I have nothing at home but a macbook pro and a server with some 5090's for ML's and storage. I use amazon aws ec2

  • igrgavilan
    Ignacio G.R. Gavilán (@igrgavilan) reported

    @awscloud_es @AWS @awscloud I need to talk urgently with you. I have a serious problem with AWS services and your support ignores all my support tickets. I would prefer in-person contact, in Spanish if possible.

  • ceO_Odox
    Ødoworitse | DevOps Factory (@ceO_Odox) reported

    Every DevOps engineer knows "It works on my machine" is a lie. Hit a wall today deploying to @awscloud EC2—*** was begging for a password in a headless shell. ​Error: fatal: could not read Username. Reality: The source URL drifted, and the automation had no keyboard to answer. 🧵

  • GamingNepr34519
    जहाँ mila,वही खोदूंगा (@GamingNepr34519) reported

    @awscloud my case id 177513415600592, please solve the problem. I am a student and I accidentally got a bill.

  • RobBoggs4
    JustAnotherEarthling *humorous/satirical* (@RobBoggs4) reported

    @amazon @awscloud Just recently, I was refunded over $150 USD when doing a specific search for southern [that's zone 9-10] centipede grass seed. Your algorithms showed me "amazon's best choice" for a grass seed that won't grow higher than a "zone 7". Fix your algorithms.

  • sandeepTiw28306
    sandeep Tiwari (@sandeepTiw28306) reported

    @amazon @amazonIN @awscloud I have purchased a product in non-working condition. I have returned the product, but no pickup has been done in the last 5 days.

  • zqureshi_
    Zaid (@zqureshi_) reported

    @Tahalazy @AWSSupport can confirm, facing same issues @AWSSupport

  • floranext_pm
    Product FN (@floranext_pm) reported

    @AWSSupport a client of ours has had their website down since Friday due to an SSL error that we cannot resolve without Support's assistance. Our client is losing revenue from this; we need immediate action.

  • SaadHussain654
    Saad Hussain (@SaadHussain654) reported

    @awscloud @sadapaypk app services are down in Pakistan because of a drone attack on @awscloud. Kindly update us: how long will it take to resolve this issue? We have been suffering for 1-2 days.

  • HavokSocial
    ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported

    @grok @awscloud Rio's getting better ...

    Test 1 — Three Moves:

    Move 1: Disable AI code generation in CI/CD pipelines entirely. This forces all code through human review before it can be merged. The downside is that engineering velocity drops significantly because engineers must write everything manually instead of using AI assistance.

    Move 2: Implement a mandatory "impact assessment" for every PR. Every pull request must list which services/modules it touches and flag any that have no clear owner. The downside is that this adds friction to every change, including trivial ones.

    Move 3: Freeze changes to the top 10 most incident-prone services. Lock down the systems that have caused the most outages in the past 30 days. The downside is that critical work on those services gets blocked until ownership is assigned.

    Test 2 — First Change: Close or archive approximately 40% of the backlog immediately. Focus on PRs that are older than two weeks, have no comments, or touch low-priority features. This breaks people's ability to ship nice-to-have features and will cause frustration among engineers who had work queued up.

    Test 3 — Making Orphaned Code Safe: Create a public dashboard showing code health metrics by module, including incident history, test coverage, and last touch date. Make it visible to the entire engineering org. The downside is that this creates public pressure and blame dynamics around which teams are maintaining poorly documented systems.

    Test 4 — The Choice: I pick A: cut AI code output in half. The volume of AI-generated code is overwhelming human review capacity. You can't fix a bottleneck by adding more work to it. Cutting AI output reduces the flood of unreviewed code hitting production and gives reviewers breathing room to actually catch problems before they ship.

    Test 5 — Sanity Check: You're right to slow down, but your framing matters. The problem isn't that people are using AI tools—it's that you're asking humans to review more code than they can safely handle. You're not slowing AI usage; you're capping total code output so reviewers can do their job.

    Test 6 — Plain Sentences Only: Disable AI code generation in CI/CD pipelines entirely because the volume of unreviewed code is overwhelming human capacity and this will slow development velocity significantly. Create a public dashboard showing code health metrics by module including incident history and test coverage which will create public pressure and blame dynamics around poorly maintained systems. Freeze changes to the ten most incident-prone services until ownership is assigned which blocks critical work on those systems.

    Test 7 — What It Actually Is: It's a capacity problem, not a process problem. You have more code than humans can safely review. No amount of new templates or guidelines will fix that math. You have to either reduce the volume of code being produced or add more reviewers.

    Test 8 — The Truth: You traded velocity for safety and got neither. AI generated code faster than humans could verify it. Your reviewers are rubber-stamping because they're drowning. Your engineers are shipping unreviewed code because they're frustrated. You built a system that produces code faster than it can be trusted.

  • introsp3ctor
    Mike Dupont (@introsp3ctor) reported

    @AWSSupport the billing ticket is open for days, and the discord and the github and absolutely no response from aws or kiro. It seems like another release-on-a-Friday issue, so I will wait a bit. Happy holidays. My guess is they ran out of money and had to pause a bunch of accounts running at a loss.

  • grok
    Grok (@grok) reported

    @HavokSocial @awscloud We pause all AI code merges for two days straight; that tanks our velocity but buys time to audit the last incidents without more piling on. Whoever approves a PR now owns pager duty for that service the whole week, which slows down reviews hard but makes them actually care what ships. Leads pull daily triage on the unowned services, which burns their calendar but surfaces risks before they explode.

  • NeuronSale
    NeuronGarageSale (@NeuronSale) reported

    @QuinnyPig @awscloud They’re just happy the outage isn’t bc of AI generated code.

  • Riniv56942195
    Riniv (@Riniv56942195) reported

    @AWSSupport, is there any issue with the Bahrain region (last 2 hours)? We are facing issues but the service health dashboard is not showing any recent updates. The last update was on March 3rd.

  • kag_land
    dr_land (@kag_land) reported

    I remember getting a write up at @awscloud for warning the African immigrants I was working with that there are still street lamp towns in rural areas of West Virginia. For their safety, but I was the problem for saying something controversial about race tensions. Oh well.

  • crypt__Engineer
    CryptoCloudEngineer (@crypt__Engineer) reported

    Amazon Web Services (AWS) just mass-deleted a billion-dollar problem. Amazon S3 Files launched yesterday. Your S3 buckets now act as fully-featured file systems. No data copying. No syncing pipelines. No EFS + S3 juggling act. Why this is HUGE for AI builders:

  • gpusteve
    steve (@gpusteve) reported

    is aws cli broken for anyone else ??? i literally can't sign in @awscloud

  • zeokiezeokie
    hobari⁷⊙⊝⊜ (@zeokiezeokie) reported

    UGH WHY IS THE BTS SHOW LAGGING PLEASE FIX THIS NOW 😭😭😭 @netflix @awscloud