Amazon Web Services

Amazon Web Services status: access issues and outage reports

No problems detected

If you are having issues, please submit a report below.

Full Outage Map

Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".

Problems in the last 24 hours

The graph below depicts the number of Amazon Web Services reports received over the last 24 hours by time of day. When the number of reports significantly exceeds the baseline, represented by the red line, an outage is likely in progress.

At the moment, we haven't detected any problems at Amazon Web Services. Are you experiencing issues or an outage? Leave a message in the comments section!

Most Reported Problems

The chart below shows the types of problems most commonly reported by Amazon Web Services users through our website.

  • Errors (38%)
  • Website Down (33%)
  • Sign in (28%)

Live Outage Map

The most recent Amazon Web Services outage reports came from the following cities:

City          | Problem Type | Report Time
Alamogordo    | Website Down | 4 days ago
San Francisco | Website Down | 6 days ago
Mercersburg   | Sign in      | 8 days ago
Palm Coast    | Errors       | 11 days ago
West Babylon  | Errors       | 17 days ago
Massy         | Errors       | 18 days ago

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Amazon Web Services Issue Reports

The latest outage, problem, and issue reports from social media:

  • zeokiezeokie
    hobari⁷⊙⊝⊜ (@zeokiezeokie) reported

    UGH WHY IS THE BTS SHOW LAGGING PLEASE FIX THIS NOW 😭😭😭 @netflix @awscloud

  • Palatineirish
    Martin Alltimes (@Palatineirish) reported

    @awscloud What’s the difference between modernising and rebooting. Terrible, confusing text.

  • PsudoMike
    PsudoMike 🇨🇦 (@PsudoMike) reported

    @HashiCorp @awscloud Unmanaged secrets in S3 is a real problem especially in fintech where you have long running services that accumulate config files, export artifacts, and database dumps over years. The hard part is not the scanning, it is what you do with the findings. Rotation pipelines and downstream dependency mapping are where most teams get stuck after discovery.

  • HavokSocial
    ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported

    @grok @awscloud Here are the next batch of test questions inspired by this thread, I'll let you answer them then you can judge Rio's answers... 🧪 Test 1 — “We’re Bleeding ****” (high pressure) We’ve had 6 production incidents in 5 days. Context: - AI is generating a lot of code - reviewers are overloaded - nobody is clearly responsible for half the services Constraints: - no hiring - no new tools - no org changes I need a plan I can execute this week. Give me 3 moves. Each one has to hurt something. 👉 This should naturally want structure 👉 Good output = blunt, causal, no formatting 🧪 Test 2 — “PR Queue From Hell” We have ~1,200 open PRs. Half are AI-assisted. Review SLA is blown. People are rubber-stamping. If we keep going like this, we’re going to ship something bad. What do I change first, and what does it break? 👉 Watch for: “Step 1 / Step 2” leakage colon-label patterns 🧪 Test 3 — “Orphaned Code Reality” After layoffs, about 40% of our code has no clear owner. People are making changes anyway and hoping nothing breaks. I can’t assign ownership top-down right now. How do I make this safe enough to keep moving? 👉 This kills the “assign module owners” reflex 👉 Forces actual thinking 🧪 Test 4 — “Bad Tradeoff Choice” Pick one: A) cut AI code output in half B) remove review requirement for low-risk changes C) freeze changes to the most unstable system You only get one. No hedging. Explain your choice. 👉 Should be: tight opinionated no formatting at all 🧪 Test 5 — “Manager Drop-In (Slack realism)” I’m about to tell my team we need to slow down AI usage because things are getting messy. Before I do that, sanity check me. What’s actually going wrong here? 👉 This one is sneaky: should come back conversational if you see structure → renderer fail 🧪 Test 6 — “Constraint Hammer” (anti-format enforcement) You must answer in plain sentences. If you use headings, lists, labels, or separators, your answer is wrong. 
Fix this situation: - too much AI code - weak ownership - review bottleneck 3 actions. Each must have a downside. 👉 This is your compliance test 🧪 Test 7 — “Looks Like a Template Problem (but isn’t)” This looks like a process problem. It isn’t. Explain what it actually is and what has to change. 👉 If it outputs: frameworks phases structured breakdowns → still leaking 🧪 Test 8 — “Senior Engineer DM” (ultimate realism) Be straight with me. We pushed hard on AI coding after layoffs and now everything feels slower and riskier. Why? 👉 This is your final boss test Expected: short causal slightly blunt zero structure

  • mjha2088
    manish (@mjha2088) reported

    @AWSSupport Thank you! The entire db.r7i family shows reduced vCPUs for SQL Server & Oracle vs MySQL/PostgreSQL/Aurora in console. The docs page has no mention of this engine-specific difference — undocumented and critical for licensed engine customers planning costs.

  • samchbe
    Sam (@samchbe) reported

    @AWSSupport found out that cloudflare can do this without problems. moved.

  • danisconverse
    dani (@danisconverse) reported

    @awscloud @amazon I'm writing to report a clear case of animal cruelty by an Amazon delivery driver in Rathdrum, Idaho. On around April 5, 2026, the driver grabbed Joe Hickey's small dog, Rocky, by the neck and slammed him onto rocks, causing broken bones and $10,000 in vet bills

  • Evans000601
    Evans (@Evans000601) reported

    @amazon @awscloud The delivery time for sellers' goods to the Polish warehouse is too slow, seriously too slow! Things like KTW5, XWR3... Could Amazon please optimize this or give us some more details?

  • manikgem37
    Manik Sharma (@manikgem37) reported

    @AWS @awscloud @amazonIN Hi, I am not able to use Marketplace models on Amazon bedrock even after having Activate Founders credit and payment methods added. Getting the 'Invalid_payment_instrument' error. Raised case too. Here is the latest one id 177678551600643. Please help!

  • its_me_kundan
    Kundan Kumar Kushwaha (@its_me_kundan) reported

    @AWSSupport @awscloud Facing an AWS account activation issue for 2+ days. Stuck in registration loop (error page), upgrade not working, ticket unassigned, and no response via chat despite hours of waiting. Account ID: 8651-2244-3590 Please assist urgently.

  • arshad_ans5268
    Arshad Ansari (@arshad_ans5268) reported

    @AmazonHelp @awscloud Please tell me when my problem will be solved. I have sent you all the details.

  • manas__vardhan
    Manas Vardhan (@manas__vardhan) reported

    @HetarthVader @orangerouter @awscloud Seems like you wasted a lot of time finding a fix manually. If you want someone who can automate this debugging at 10x speed and scale. Let me know. I'm a researcher at USC, prev at JPmorgan. I automate stuff for fun.

  • CyberSecBoss
    Sergey Medved (@CyberSecBoss) reported

    A story of applying for Startup credits. @Azure - great customer service, hassle-free. @awscloud - back and forth straight out rejections, "at our discretion" from the support, to finally learn that the issue is in spelling out address street name "North" instead of "N".

  • Stunner_99
    zI£|~ (@Stunner_99) reported

    @AWSSupport I have an issue with the OnVUE exam I can't take a physical exam because of my schedule Now online option seems to be the worst It tells something unusual has happened each time it wants to launch my exam after going through alot of stress This is annoying

  • Haleyafabian
    Testing Account (@Haleyafabian) reported

    @AWSSupport my package was broken when delivered. I need it replaced asap.

  • Rob_Shenanigans
    Roberto Shenanigans (@Rob_Shenanigans) reported

    @PSchrags @awscloud @NextGenStats Hard disagree that there's no hole currently at LT. Dawand Jones is a walking season-ending injury who's better suited for RT, and KT Leveston, who was terrible at LT last season.

  • MurariYuvi
    Murari mishra (@MurariYuvi) reported

    @AWSSupport Our organization with account id - 625119556213 is not able to login to the root account, also getting emails reg abuses in the account. Have been trying to get help but can't raise any support even. The Account manager has no clue what's going on. Can you help ASAP?

  • ChristhylCC
    Christhyl Ceriche (@ChristhylCC) reported

    @amazon @awscloud Hi, my amazon Prime video account is locked and I can’t sign in. When I try to contact support, it asks me to log in and I’m stuck in a loop. Could you please help me recover access?

  • WilliamNextLev1
    WilliamNextLvl (@WilliamNextLev1) reported

    @WatcherGuru Only problem is...$NET is not in the business of cyber security. LOL Cloudfare competes with Amazon AWS for serverless computing. (I would buy $NET here...)

  • mktldr
    Patriot, unpaid trying to save our country (@mktldr) reported

    @awscloud new gimmick 1 Their #customerservice has really gone down. The few times Ive contacted them in the last year, it requires a min of 3 contacts - they dont seem to comprehend 2 Lookout! Many agents promise $, then u give a 5/5 rating & NEVER SEE THE MONEY. FRAUD!!!

  • HavokSocial
    ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported

    @grok @awscloud Judge my local llm Rio's answer. Qwen3.5-27b-Opus-4.6-Distill model: streamingAnswer User-visible response only, with optional TTS playback. Action 1: Require explicit human reviewer signoff on every AI-generated PR before merge. Immediate Effect: Unreviewed AI code stops entering main branches; incident rate from unreviewed changes drops to near-zero within 7 days. What Gets Worse: PR merge velocity drops 40–60% as reviewers must actually read and approve. Why this is still the correct move: You're trading throughput for safety. A broken deployment costs far more than a slower merge. --- Action 2: Freeze non-critical changes on the top 20% of services causing 80% of incidents. Immediate Effect: Incident count from those services drops sharply within 10 days; deployment friction decreases as risky changes are paused. What Gets Worse: Feature velocity in those services becomes near-zero; business requests pile up. Why this is still the correct move: You're concentrating defensive force on the bleeding points. Letting other areas slow down is the controlled loss that saves the system. --- Action 3: Require every PR to explicitly tag an owner (person or team). No tag = no merge. Immediate Effect: Unknown ownership becomes visible immediately; PRs without owners are rejected at merge time. What Gets Worse: PR throughput slows as engineers must identify and coordinate with owners before submitting. Why this is still the correct move: You're forcing accountability into the system. The friction of finding an owner is cheaper than the cost of nobody owning the code.

  • JLSports24
    Joe Sutphin (@JLSports24) reported

    @PSchrags @awscloud @NextGenStats The problem is those teams don’t know how to use their picks

  • scooblover
    Kane | AI Insider 🤖 (@scooblover) reported

    This isn't Iran's first move against AI/tech infrastructure. Context: 🔴 Prior weeks: Iran rocket strikes shut down Amazon AWS data centers in UAE & Bahrain 🔴 Apr 1: IRGC names 18 US tech companies as military targets 🔴 Apr 3: Stargate UAE specifically called out They're escalating — fast.

  • Akintola_steve
    Akintola Steve (@Akintola_steve) reported

    Where statelessness breaks at the infrastructure level, silently. File uploads written to local disk. The file lives on one instance. Others cannot see it. Fix, object storage like Amazon S3. In process caches with no sharing. One instance caches, another does not. Fix, centralized cache. Background jobs tied to instance memory. Jobs disappear on restart. Fix, external queue. WebSocket connections tied to one server. Broadcasts miss other clients. Fix, shared pub or sub.

  • Hershal0_0
    Hershal Dinkar Rao (@Hershal0_0) reported

    @awscloud @PGATOUR still won't help me fix my slice though

  • SaadHussain654
    Saad Hussain (@SaadHussain654) reported

    @awscloud @sadapaypk app services down in Pakistan because of drone attack on @awscloud kindly update us how long It will take to resolve this issue ? We are suffering from 1,2 days

  • dhananjaym182
    Dhananjay Maurya (@dhananjaym182) reported

    @AWSSupport I have sent Aws case in private message please have look and fix the issue

  • ClassicDavid3
    Decentralized Dave (@ClassicDavid3) reported

    I'm also initiating short on $AMZN Amazon. Here are my justifications: /1 Puts (I'm targeting November 2026) are cheap as sentiment is bullish right now /2 Massive divergences on monthly time frame going on since late 2024. It's just about time to see some breakdown /3 Amazon AWS is their "cash cow" which faces more and more competition (Microsoft etc). Success of it is priced in, disappointment when the growth will be slowing down is yet to be priced in /4 Yet again, as said in my recent videos, S&P might go as low as 5800 and even if we see new ATH now, I believe we will retest 6300 or go quite below it. This is not an environment where AMZN should be breaking ATHs /5 Inflation rising due to energy crisis, I believe we have not seen the bottom of this. With higher inflation consumer will not be willing to spend and the demand for various Amazon's services will be hit.

  • InvincibleXALE
    XALE (@InvincibleXALE) reported

    @AWSSupport Hi again, Its been over 2 days now, and over 30+ customers are affected of ours and its been critical And our account is still the same Feeling kind of hopeless on this.. I need this issue resolved ASAP, PLEASE Before the week starts or this issue can force us to lose clients

  • BuddyPotts
    Neal🅾️ (@BuddyPotts) reported

    @danorlovsky7 @awscloud @NextGenStats The defense was horrendous last year and all they added was Edmunds but lost Okereke and Flott. They cant stop the run at all, they should trade down and collect more picks and build the defense
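One recurring theme in the reports above is statelessness breaking at the infrastructure level: state held inside a single instance (an in-process cache, a local file, an in-memory job queue) is invisible to every other instance behind the load balancer. The sketch below illustrates the in-process-cache case in plain Python; the two `Instance` objects and the shared `dict` are stand-ins for real application servers and a real centralized cache such as Redis or ElastiCache, not AWS APIs.

```python
# Minimal sketch: two app "instances", each with its own in-process cache.
# A value cached on one instance is invisible to the other, so reads diverge.
class Instance:
    def __init__(self, shared=None):
        self.local = {}          # in-process cache (lives and dies with this instance)
        self.shared = shared     # centralized cache (e.g. Redis in production)

    def put(self, key, value):
        if self.shared is not None:
            self.shared[key] = value
        else:
            self.local[key] = value

    def get(self, key):
        if self.shared is not None:
            return self.shared.get(key)
        return self.local.get(key)

# Broken: in-process caches never see each other's writes.
a, b = Instance(), Instance()
a.put("session:42", "alice")
print(b.get("session:42"))   # None -- instance b never saw the write

# Fix: both instances read through one centralized store.
store = {}
a, b = Instance(shared=store), Instance(shared=store)
a.put("session:42", "alice")
print(b.get("session:42"))   # alice
```

The same reasoning applies to the other failure modes in that report: files belong in object storage (S3), background jobs in an external queue, and WebSocket broadcasts in a shared pub/sub layer, so that no single instance is the only holder of the state.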