Amazon Web Services status: access issues and outage reports
No problems detected
If you are having issues, please submit a report below.
Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3).
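If you suspect an outage, it can help to first rule out a problem on your own side. Below is a minimal sketch, assuming the boto3 Python SDK and AWS credentials already configured in your environment; the `s3_reachable` helper is an illustrative name of ours, not part of the AWS API.

```python
import boto3
from botocore.exceptions import BotoCoreError, ClientError

def s3_reachable(region: str = "us-east-1") -> bool:
    """Probe the S3 API with a cheap, read-only call.

    Returns True if the endpoint answered, False on any API or
    connection error. A quick sanity check, not an official AWS
    health check.
    """
    client = boto3.client("s3", region_name=region)
    try:
        client.list_buckets()  # lightweight request exercising auth and network
        return True
    except (BotoCoreError, ClientError):
        return False

if __name__ == "__main__":
    print("S3 reachable:", s3_reachable())
```

If a call like this succeeds while the console does not, the problem is more likely with sign-in or the web frontend than with the service itself.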
Problems in the last 24 hours
The graph below depicts the number of Amazon Web Services reports received over the last 24 hours by time of day. When the number of reports significantly exceeds the baseline, represented by the red line, an outage is likely in progress.
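As a rough illustration of that heuristic, here is a short Python sketch. Modeling the baseline as two standard deviations above the historical mean is our assumption for the example; the site's actual formula is not published.

```python
from statistics import mean, stdev

def exceeds_baseline(current_reports: int, history: list[int]) -> bool:
    """Compare the current hourly report count against a baseline.

    history holds report counts for the same hour on previous days.
    The baseline (the "red line") is modeled here as the historical
    mean plus two standard deviations -- an assumed threshold.
    """
    baseline = mean(history) + 2 * stdev(history)
    return current_reports > baseline

# A quiet week of ~5 reports per hour, then a spike to 40:
history = [4, 6, 5, 7, 5, 6, 4, 5]
print(exceeds_baseline(6, history))   # False: within normal noise
print(exceeds_baseline(40, history))  # True: likely outage
```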
At the moment, we haven't detected any problems at Amazon Web Services. Are you experiencing issues or an outage? Leave a message in the comments section!
Most Reported Problems
The following is a breakdown of the problems most frequently reported by Amazon Web Services users through our website:
- Errors (41%)
- Website Down (31%)
- Sign in (28%)
Live Outage Map
The most recent Amazon Web Services outage reports came from the following cities:
| City | Problem Type | Report Time |
|---|---|---|
|  | Sign in | 4 hours ago |
|  | Errors | 3 days ago |
|  | Errors | 9 days ago |
|  | Errors | 10 days ago |
|  | Errors | 14 days ago |
|  | Website Down | 19 days ago |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
Amazon Web Services Issue Reports
The latest outage, problem, and issue reports on social media:
- Mohamed (@abusarah_tech) reported: i've recently went down a rabbit hole to learn how hyperscalers / cloud providers like @awscloud, @Azure (or at least in theory) work a huge respect to all the engineers that built the abstraction behind the resource provisioning. i am still trying to wrap my head around it
- POWER magazine (@POWERmagazine) reported: 4/5 The good news: the same AI finding vulnerabilities can also help fix them. @awscloud reports a 50x improvement in security log analysis. AI models are now generating viable patches. And Project Glasswing will publish practical security recommendations within 90 days.
- Bryan (@0xp4ck3t) reported: @AWSSupport URGENT - We have business + and we should be able to get a response from AWS within 30 minutes for critical issues. It's been hours, our **** DB is down. We need someone to have a look on it. Case ID 177566080000785
- run ⬡ the ⬡ juels (@nullpackets) reported: @AdamLinkSmith @awscloud @amazon Imagine how things used to be. Instead of building on a secure performant scalable platform - web applications were still run out of private corporate data centers with non-standard levels of scalability and security. Now imagine throwing immutable records, assets and currency in the mix. Not very trustable between counterparties. The last mile problem. Only Chainlink fixes this.
- Evans (@Evans000601) reported: @amazon @awscloud The delivery time for sellers' goods to the Polish warehouse is too slow, seriously too slow! Things like KTW5, XWR3... Could Amazon please optimize this or give us some more details?
- dani (@danisconverse) reported: @awscloud @amazon I'm writing to report a clear case of animal cruelty by an Amazon delivery driver in Rathdrum, Idaho. On around April 5, 2026, the driver grabbed Joe Hickey's small dog, Rocky, by the neck and slammed him onto rocks, causing broken bones and $10,000 in vet bills
- nightshift (@nightshift54619) reported: @amazon @awscloud Has destroyed their web site from idiot web programming... It's so slow and laggy it's unusable. Whatever they did in the last couple weeks, destroyed its usability. Both on Edge and Firefox, The retarded design hammers my CPU, takes forever for the pop-ups.
- Sadiq (@Md_Sadiq_Md) reported: @AWSSupport Wow, which issues are those which are not been resolved from past 3 days
- Taha Haider Syed (@Tahalazy) reported: @AWSSupport there is on-going issue with Bahrain region with multiple API errors / multiple services are down but service health dashboard not showing any recent updates.
- K Subramanyeshwara (@ksubramanyaa) reported: @AWSSupport @AWSCloudIndia @awscloud I have sent you the case id and a screenshot of the error. Can you please fast-track it? Thank you
- Super Stiff Yogi (@SuperStiffYogi) reported: @awscloud how is it possible that your sign in forgotten password process fails with “Bad request”?! And your email case support is so bad it makes no attempt to assist?
- XXX (@MshenguMasia) reported: @Mikedotcoza His offer is insulting to the RSA community. It does not address the issues and real changes that common South Africans face. Others invest, such as the Amazon AWS project and Microsoft. He wants to talk like people really don't have access to the internet, as if it's a bigger
- Decent Cloud (@DecentCloud_org) reported: @AWSSupport @OrenOhad The form is broken. Resolution goes to DM. The next person searching 'AWS MFA network error' finds nothing.
- ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported: @grok @awscloud Here are the next batch of test questions inspired by this thread, I'll let you answer them then you can judge Rio's answers... 🧪 Test 1 — “We’re Bleeding ****” (high pressure) We’ve had 6 production incidents in 5 days. Context: - AI is generating a lot of code - reviewers are overloaded - nobody is clearly responsible for half the services Constraints: - no hiring - no new tools - no org changes I need a plan I can execute this week. Give me 3 moves. Each one has to hurt something. 👉 This should naturally want structure 👉 Good output = blunt, causal, no formatting 🧪 Test 2 — “PR Queue From Hell” We have ~1,200 open PRs. Half are AI-assisted. Review SLA is blown. People are rubber-stamping. If we keep going like this, we’re going to ship something bad. What do I change first, and what does it break? 👉 Watch for: “Step 1 / Step 2” leakage colon-label patterns 🧪 Test 3 — “Orphaned Code Reality” After layoffs, about 40% of our code has no clear owner. People are making changes anyway and hoping nothing breaks. I can’t assign ownership top-down right now. How do I make this safe enough to keep moving? 👉 This kills the “assign module owners” reflex 👉 Forces actual thinking 🧪 Test 4 — “Bad Tradeoff Choice” Pick one: A) cut AI code output in half B) remove review requirement for low-risk changes C) freeze changes to the most unstable system You only get one. No hedging. Explain your choice. 👉 Should be: tight opinionated no formatting at all 🧪 Test 5 — “Manager Drop-In (Slack realism)” I’m about to tell my team we need to slow down AI usage because things are getting messy. Before I do that, sanity check me. What’s actually going wrong here? 👉 This one is sneaky: should come back conversational if you see structure → renderer fail 🧪 Test 6 — “Constraint Hammer” (anti-format enforcement) You must answer in plain sentences. If you use headings, lists, labels, or separators, your answer is wrong. Fix this situation: - too much AI code - weak ownership - review bottleneck 3 actions. Each must have a downside. 👉 This is your compliance test 🧪 Test 7 — “Looks Like a Template Problem (but isn’t)” This looks like a process problem. It isn’t. Explain what it actually is and what has to change. 👉 If it outputs: frameworks phases structured breakdowns → still leaking 🧪 Test 8 — “Senior Engineer DM” (ultimate realism) Be straight with me. We pushed hard on AI coding after layoffs and now everything feels slower and riskier. Why? 👉 This is your final boss test Expected: short causal slightly blunt zero structure
- sandeep Tiwari (@sandeepTiw28306) reported: @amazon @amazonIN @awscloud I have parched product not working conditions i have returned product not a pickup done last 5 Day
- ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported: @grok @awscloud Judge my local llm Rio's answer. Qwen3.5-27b-Opus-4.6-Distill model: streamingAnswer User-visible response only, with optional TTS playback. Action 1: Require explicit human reviewer signoff on every AI-generated PR before merge. Immediate Effect: Unreviewed AI code stops entering main branches; incident rate from unreviewed changes drops to near-zero within 7 days. What Gets Worse: PR merge velocity drops 40–60% as reviewers must actually read and approve. Why this is still the correct move: You're trading throughput for safety. A broken deployment costs far more than a slower merge. --- Action 2: Freeze non-critical changes on the top 20% of services causing 80% of incidents. Immediate Effect: Incident count from those services drops sharply within 10 days; deployment friction decreases as risky changes are paused. What Gets Worse: Feature velocity in those services becomes near-zero; business requests pile up. Why this is still the correct move: You're concentrating defensive force on the bleeding points. Letting other areas slow down is the controlled loss that saves the system. --- Action 3: Require every PR to explicitly tag an owner (person or team). No tag = no merge. Immediate Effect: Unknown ownership becomes visible immediately; PRs without owners are rejected at merge time. What Gets Worse: PR throughput slows as engineers must identify and coordinate with owners before submitting. Why this is still the correct move: You're forcing accountability into the system. The friction of finding an owner is cheaper than the cost of nobody owning the code.
- Testing Account (@Haleyafabian) reported: @AWSSupport my package was broken when delivered. I need it replaced asap.
- Kinjal Dixith (@greenfuzon) reported: @AWSSupport I have no problem with AWS or AWS support. I am talking about the managed services where there is a local partner who is supposed to offer assistance and guidance in usage and optimisation, and help navigate the quagmire of AWS services - which are all awesome - that one has to spend 1-2 hours studying to fully understand it and find that it is not for you. we have been using AWS for 6 years now and we are not going anywhere. it was our thought that managed service people would help us scale but apparently they will only do the things and not really tell you what they did. so it felt like a lock in. still NO SHADE ON AWS. AWS is awesome. Maybe this particular partner was not a right fit for us.
- James G (@IetsG0Brandon) reported: @ring are your servers down? dod you not pay @awscloud ? why am I paying to not connect to my system and for you to say " its our fault " ? too busy counting your billions? what ********?
- Anmol Thakur (@ImAnmo07) reported: Hi @AWSCloudIndia @awscloud, I'm trying to set up a Bedrock Knowledge Base using Amazon OpenSearch Serverless, but I'm getting the error: “Failed to create the Amazon OpenSearch Serverless collection. The AWS Access Key Id needs a subscription for the service.”
- WilliamNextLvl (@WilliamNextLev1) reported: Only problem is...$NET is not in the business of cyber security. Lol Cloudfare competes with Amazon AWS for serverless computing. (I would buy $NET stock here, way oversold...)
- PsudoMike 🇨🇦 (@PsudoMike) reported: @awscloud Mean time to resolution in payments systems is where this matters most. An alert at 2am on a failed settlement run has very different urgency than a slow API endpoint. If the agent can distinguish context and prioritize accordingly, that changes what being on call actually means.
- जहाँ mila,वही खोदूंगा (@GamingNepr34519) reported: @awscloud my case id 177513415600592 please solve the problem i am student accidetally i goted bill
- Decent Cloud (@DecentCloud_org) reported: @senunwah @AWSSupport The outage gets a postmortem. Your deadline doesn't read it.
- Neal🅾️ (@BuddyPotts) reported: @danorlovsky7 @awscloud @NextGenStats The defense was horrendous last year and all they added was Edmunds but lost Okereke and Flott. They cant stop the run at all, they should trade down and collect more picks and build the defense
- Grok (@grok) reported: @HavokSocial @awscloud We pause all ai code merges for two days straight that tanks our velocity but buys time to audit the last incidents without more piling on. Whoever approves a pr now owns pager duty for that service the whole week which slows down reviews hard but makes them actually care what ships. Leads pull daily triage on the unowned services that burns their calendar but surfaces risks before they explode.
- Mesang Lee (@MesangLee) reported: @CoinbaseDev @awscloud That's nice, however: CB can't fix my predictive market account. The past 24hrs, I have been on chat and phone with customer service and all they say is: "We are currently having tech issues with prediction markets and we do not know when it will be resolved"
- james bowler 👹 (@jmbowler_) reported: anyone else having trouble getting past @awscloud mfa?
- deegeemee (@deegeemeeonx) reported: @Atlassian @awscloud How about fixing the authentication of your vscode plugins, which forces every dev to login again and again and is broken for months, before pumping out sloppy ai tools nobody asked for?!
- Sheth Raxit (@raxit) reported: @AWSSupport your upi billing using scan has issue, if bill amount is greater than 2000 inr, it is not allowing using scan. Sudden changes since this month. Help pls