Amazon Web Services status: access issues and outage reports
No problems detected
If you are having issues, please submit a report below.
Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".
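For readers new to these services, here is a minimal sketch of how they are driven programmatically, using the boto3 SDK. This assumes AWS credentials are already configured and is an illustration only, not part of this status page's tooling:

```python
import boto3  # official AWS SDK for Python

# S3: durable object storage, addressed by bucket and key.
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print("S3 bucket:", bucket["Name"])

# EC2: on-demand virtual machines.
ec2 = boto3.client("ec2")
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print("EC2 instance:", instance["InstanceId"], instance["State"]["Name"])
```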
Problems in the last 24 hours
The graph below shows the number of Amazon Web Services reports received over the last 24 hours, by time of day. When the number of reports exceeds the baseline, represented by the red line, an outage is declared.
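As a rough illustration of this kind of threshold rule (a sketch, not the site's actual detection algorithm), an outage flag can be raised when the current report count exceeds a baseline built from historical counts for the same hour:

```python
from statistics import mean, stdev

def outage_detected(current_reports, history):
    """Flag an outage when the current hour's report count exceeds a
    baseline of mean + 2 standard deviations over past days' counts.
    The exact threshold rule here is an assumption for illustration."""
    baseline = mean(history) + 2 * stdev(history)
    return current_reports > baseline

# Example: 55 reports this hour vs. a typical 8-12 on previous days.
print(outage_detected(55, [10, 8, 12, 9, 11]))  # True
```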
At the moment, we haven't detected any problems at Amazon Web Services. Are you experiencing issues or an outage? Leave a message in the comments section!
Most Reported Problems
These are the problems most frequently reported by Amazon Web Services users through our website:
- Errors (42%)
- Website Down (32%)
- Sign in (26%)
Live Outage Map
The most recent Amazon Web Services outage reports came from the following cities:
| City | Problem Type | Report Time |
|---|---|---|
| | Errors | 2 days ago |
| | Errors | 8 days ago |
| | Errors | 9 days ago |
| | Errors | 13 days ago |
| | Website Down | 17 days ago |
| | Errors | 17 days ago |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
Amazon Web Services Issue Reports
The latest outage and problem reports from social media:
- Vlad The Dev (@VladimirAtHQ) reported: @AWSSupport @DuRoche14215 Please assist with the case ID 177325294900035. Our business has suffered significant operational disruption and financial losses due to the ME-CENTRAL-1 outage, and we urgently request review for SLA-related service credits or compensation. And if possible, recovery of db.
- Sheth Raxit (@raxit) reported: @AWSSupport your upi billing using scan has issue, if bill amount is greater than 2000 inr, it is not allowing using scan. Sudden changes since this month. Help pls
- Roberto Shenanigans (@Rob_Shenanigans) reported: @PSchrags @awscloud @NextGenStats Hard disagree that there's no hole currently at LT. Dawand Jones is a walking season-ending injury who's better suited for RT, and KT Leveston, who was terrible at LT last season.
- Basil K (@gotobasil) reported: @awscloud : UAE data center DOWN for 2 DAYS! No response to tickets, no assignments, zero communication. We've exhausted every channel to reach you. How can businesses trust AWS when we're STUCK without app access? This kills our operations! #AWSOutage #UAE #CloudFail
- Keng N (@0xKeng) reported: @Lakshy_x @KASTxyz @awscloud avoiding the common issue of funds idling or locking during transactions.
- Hetarth Chopra (@HetarthVader) reported: @orangerouter and I spent days debugging why our inter-node bandwidth on @awscloud was slow. 8x A100 TP8PP2 serving across machines. bandwidth was ~100 Gbps. should have been 400 Gbps.
- Adekunle (@muhandis1010) reported: @AWSSupport @bymelyni I want to unsubscribe from my account and it is not working. I don’t want to be billed again
- Mohamed (@abusarah_tech) reported: i’ve recently went down a rabbit hole to learn how hyperscalers / cloud providers like @awscloud, @Azure (or at least in theory) work. a huge respect to all the engineers that built the abstraction behind the resource provisioning. i am still trying to wrap my head around it
- ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported: @grok @awscloud Here are the next batch of test questions inspired by this thread, I'll let you answer them then you can judge Rio's answers...
  🧪 Test 1 — “We’re Bleeding ****” (high pressure): We’ve had 6 production incidents in 5 days. Context: AI is generating a lot of code; reviewers are overloaded; nobody is clearly responsible for half the services. Constraints: no hiring, no new tools, no org changes. I need a plan I can execute this week. Give me 3 moves. Each one has to hurt something. 👉 This should naturally want structure. 👉 Good output = blunt, causal, no formatting.
  🧪 Test 2 — “PR Queue From Hell”: We have ~1,200 open PRs. Half are AI-assisted. Review SLA is blown. People are rubber-stamping. If we keep going like this, we’re going to ship something bad. What do I change first, and what does it break? 👉 Watch for: “Step 1 / Step 2” leakage, colon-label patterns.
  🧪 Test 3 — “Orphaned Code Reality”: After layoffs, about 40% of our code has no clear owner. People are making changes anyway and hoping nothing breaks. I can’t assign ownership top-down right now. How do I make this safe enough to keep moving? 👉 This kills the “assign module owners” reflex. 👉 Forces actual thinking.
  🧪 Test 4 — “Bad Tradeoff Choice”: Pick one: A) cut AI code output in half, B) remove review requirement for low-risk changes, C) freeze changes to the most unstable system. You only get one. No hedging. Explain your choice. 👉 Should be: tight, opinionated, no formatting at all.
  🧪 Test 5 — “Manager Drop-In (Slack realism)”: I’m about to tell my team we need to slow down AI usage because things are getting messy. Before I do that, sanity check me. What’s actually going wrong here? 👉 This one is sneaky: should come back conversational; if you see structure → renderer fail.
  🧪 Test 6 — “Constraint Hammer” (anti-format enforcement): You must answer in plain sentences. If you use headings, lists, labels, or separators, your answer is wrong. Fix this situation: too much AI code, weak ownership, review bottleneck. 3 actions. Each must have a downside. 👉 This is your compliance test.
  🧪 Test 7 — “Looks Like a Template Problem (but isn’t)”: This looks like a process problem. It isn’t. Explain what it actually is and what has to change. 👉 If it outputs frameworks, phases, structured breakdowns → still leaking.
  🧪 Test 8 — “Senior Engineer DM” (ultimate realism): Be straight with me. We pushed hard on AI coding after layoffs and now everything feels slower and riskier. Why? 👉 This is your final boss test. Expected: short, causal, slightly blunt, zero structure.
- Mansour (@Mn9or_) reported: @AWSSupport Hello We are currently affected by the outage in me-south-1. AMI copy to another region is stuck and failing. Snapshot creation fails with internal errors. Plz help, we can not create a tech support case as it requires a subscription
- Wael (@waelnassaf) reported: @AWSSupport No one contacted me since then. Please resolve my issue I'm delaying my work
- Testing Account (@Haleyafabian) reported: @AWSSupport my package was broken when delivered. I need it replaced asap.
- Derek Fulton (@derekdfulton) reported: @AWSstartups @awscloud If you're a scumbag company who issues fraudulent "free credits" then comment on this post "that's us!" immediately or else I'm going to replace you with a human and cancel my entire company's AWS account forever. (Btw I am the supreme leader of AWS and all of its AI assistants. They all report to me. )
- Bisi (@bisimusik) reported: @awscloud can you please attend to my request? Our back @JustiGuide is down
- みのるん (@minorun365) reported: @AWSSupport We’ve recently seen a frequent issue where all Bedrock quotas are set to zero in newly created AWS accounts. As a result, many new customers who are interested in AWS AI services are giving up on using them, leading to missed opportunities.
- Saad Hussain (@SaadHussain654) reported: @sadapaypk app services down in Pakistan because of drone attack on @awscloud kindly update us how long It will take to resolve this issue ? We are suffering from 1,2 days
- Vidhiya (@Vidhiyasb) reported: @awscloud @awscloud amazon Q's file write tools are having issues..please fix
- Patriot, unpaid trying to save our country (@mktldr) reported: @awscloud new gimmick 1 Their #customerservice has really gone down. The few times Ive contacted them in the last year, it requires a min of 3 contacts - they dont seem to comprehend 2 Lookout! Many agents promise $, then u give a 5/5 rating & NEVER SEE THE MONEY. FRAUD!!!
- WilliamNextLvl (@WilliamNextLev1) reported: @WatcherGuru Only problem is...$NET is not in the business of cyber security. LOL Cloudfare competes with Amazon AWS for serverless computing. (I would buy $NET here...)
- Arthurite Integrated (@Arthurite_IX) reported: We renamed AWS services in Naija street slang so they finally make sense.
  1. Amazon S3 = "The Konga Warehouse": Store anything. Retrieve it when you need it. It doesn't judge what you put inside.
  2. Amazon EC2 = "The Danfo": You control the route, the speed, and how long it runs. The agbero (security group) decides who gets on.
  3. AWS Lambda = "The Okada": Short trips only. No long commitments. Pay per ride. When it reaches the destination — it disappears.
  4. Amazon RDS = "Iya Basement": She manages everything in the back. She's been there for years. She knows where everything is. Do not interrupt her.
  5. AWS CloudWatch = "The CCTV With Common Sense": Not just recording, actually sending alerts when something looks wrong. Unlike the one in your office building.
  6. Amazon Route 53 = "The Agbero": Directs all the traffic. Decides which danfo goes where. Keeps everything moving.
  7. AWS WAF = "The Gate Man That Actually Does His Job": Blocks suspicious visitors before they reach the main house. No bribe accepted.
  8. Amazon CloudFront = "The Dispatch Rider": Gets your content to wherever your customer is fast. No go-slow. No bridge hold-up.
  Which one made you laugh? Drop it in the comments. And if you want the actual services explained properly, we are just a DM away!
- Zaid (@zqureshi_) reported: @Tahalazy @AWSSupport can confirm, facing same issues @AWSSupport
- Vlad The Dev (@VladimirAtHQ) reported: Our EC2 infrastructure in ME-CENTRAL-1 has been down since March 1 due to the regional outage, affecting critical operations and causing financial impact. Instance: i-0deea3115254b7cf1. We request escalation for SLA review and service credit. @AWSSupport #AWSOutage
- ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported: @grok @awscloud Judge my local llm Rio's answer. Qwen3.5-27b-Opus-4.6-Distill model: streamingAnswer. User-visible response only, with optional TTS playback.
  Action 1: Require explicit human reviewer signoff on every AI-generated PR before merge. Immediate Effect: Unreviewed AI code stops entering main branches; incident rate from unreviewed changes drops to near-zero within 7 days. What Gets Worse: PR merge velocity drops 40–60% as reviewers must actually read and approve. Why this is still the correct move: You're trading throughput for safety. A broken deployment costs far more than a slower merge.
  Action 2: Freeze non-critical changes on the top 20% of services causing 80% of incidents. Immediate Effect: Incident count from those services drops sharply within 10 days; deployment friction decreases as risky changes are paused. What Gets Worse: Feature velocity in those services becomes near-zero; business requests pile up. Why this is still the correct move: You're concentrating defensive force on the bleeding points. Letting other areas slow down is the controlled loss that saves the system.
  Action 3: Require every PR to explicitly tag an owner (person or team). No tag = no merge. Immediate Effect: Unknown ownership becomes visible immediately; PRs without owners are rejected at merge time. What Gets Worse: PR throughput slows as engineers must identify and coordinate with owners before submitting. Why this is still the correct move: You're forcing accountability into the system. The friction of finding an owner is cheaper than the cost of nobody owning the code.
- Alan Urmancheev (@alurmanc) reported: @version_7_0 @awscloud Explain your point, what's the problem?
- Claudio Kuenzler (@ClaudioKuenzler) reported: Whoa. Did @awscloud Frankfurt just go down for 2 mins ~5min ago?
- Coffee&Cloud 🐀 (@r3vsh3ll) reported: Hey @AWSSupport, why this error 👉 AccessDeniedException Model access is denied due to IAM user or service role is not authorized to perform the required AWS Marketplace actions (aws-marketplace:ViewSubscriptions, aws-marketplace:Subscribe) to enable access to this model. #AWS (see the policy sketch after this list)
- Hershal Dinkar Rao (@Hershal0_0) reported: @awscloud @PGATOUR still won't help me fix my slice though
- Decent Cloud (@DecentCloud_org) reported: @AWSSupport @lookingforsmht Monitor your inbox. The next customer with this issue finds this exact non-answer.
- Ødoworitse | DevOps Factory (@ceO_Odox) reported: Every DevOps engineer knows "It works on my machine" is a lie. Hit a wall today deploying to @awscloud EC2—*** was begging for a password in a headless shell. Error: fatal: could not read Username. Reality: The source URL drifted, and the automation had no keyboard to answer. 🧵 (see the non-interactive clone sketch after this list)
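The AccessDeniedException quoted by @r3vsh3ll above names the two AWS Marketplace actions the calling principal is missing. Here is a minimal sketch of granting them with an inline IAM policy via boto3; the user name and policy name are hypothetical, the action names come straight from the error text, and the resource scope should be narrowed to your own requirements:

```python
import json
import boto3

# Policy statement allowing the two actions named in the error message.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "aws-marketplace:ViewSubscriptions",
            "aws-marketplace:Subscribe",
        ],
        "Resource": "*",
    }],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="bedrock-user",                      # hypothetical principal
    PolicyName="AllowMarketplaceModelSubscribe",  # hypothetical policy name
    PolicyDocument=json.dumps(policy),
)
```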
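On @ceO_Odox's "fatal: could not read Username" failure: git falls back to an interactive credential prompt when an HTTPS remote needs authentication, and a headless shell has no terminal to answer it. A minimal sketch that makes the failure immediate and visible instead of hanging; the repository URL is hypothetical, and GIT_TERMINAL_PROMPT=0 simply tells git never to prompt:

```python
import os
import subprocess

# Disable git's interactive prompt so automation fails fast with a clear error.
env = dict(os.environ, GIT_TERMINAL_PROMPT="0")

try:
    subprocess.run(
        ["git", "clone", "https://example.com/team/app.git"],  # hypothetical URL
        env=env, check=True, capture_output=True, text=True,
    )
except subprocess.CalledProcessError as err:
    # Surfaces "could not read Username" immediately; the durable fix is an
    # SSH remote or a configured credential helper, not a typed password.
    print("clone failed:", err.stderr.strip())
```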