Amazon Web Services status: access issues and outage reports
No problems detected
If you are having issues, please submit a report below.
Amazon Web Services (AWS) offers a suite of cloud-computing services that make up an on-demand computing platform. They include Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3".
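If you want to rule out a problem on your own side, one quick check is whether the EC2 and S3 APIs answer calls from your account at all. Below is a minimal sketch using Python and boto3; it assumes AWS credentials are already configured, and the region and timeout values are illustrative rather than recommended settings.

```python
# Minimal sketch: probe the S3 and EC2 APIs with short timeouts.
# Assumes boto3 is installed and AWS credentials are already configured;
# the region and timeouts below are illustrative, not recommendations.
import boto3
from botocore.config import Config

cfg = Config(connect_timeout=5, read_timeout=10, retries={"max_attempts": 1})

def probe():
    checks = {
        "s3": lambda: boto3.client("s3", config=cfg).list_buckets(),
        "ec2": lambda: boto3.client("ec2", region_name="us-east-1",
                                    config=cfg).describe_regions(),
    }
    results = {}
    for name, call in checks.items():
        try:
            call()
            results[name] = "reachable"
        except Exception as exc:  # credentials, throttling, or an actual outage
            results[name] = f"error: {exc}"
    return results

if __name__ == "__main__":
    for service, status in probe().items():
        print(f"{service}: {status}")
```

An error here does not necessarily mean AWS is down; expired credentials or local network problems produce the same result, so treat it as one data point alongside the reports below.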
Problems in the last 24 hours
The graph below depicts the number of Amazon Web Services reports received over the last 24 hours by time of day. When the number of reports exceeds the baseline, represented by the red line, an outage is flagged.
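As a rough illustration of how that kind of baseline check works (the actual baseline model and threshold are not published, so the numbers below are purely illustrative):

```python
# Illustrative sketch of a report-baseline check; the real detection logic
# and threshold are not published, so these values are made up.
from statistics import mean

def is_outage(hourly_report_counts, current_reports, multiplier=2.0):
    """Flag an outage when the current hourly report count clearly exceeds
    the typical (baseline) report volume for the service."""
    baseline = mean(hourly_report_counts)   # typical reports per hour
    return current_reports > multiplier * baseline

# Example: ~10 reports/hour is normal, so 45 in the last hour looks like an outage.
print(is_outage([8, 12, 9, 11, 10], 45))    # True
```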
At the moment, we haven't detected any problems at Amazon Web Services. Are you experiencing issues or an outage? Leave a message in the comments section!
Most Reported Problems
The following are the problems most commonly reported by Amazon Web Services users through our website:
- Errors (41%)
- Website Down (31%)
- Sign in (28%)
Live Outage Map
The most recent Amazon Web Services outage reports came from the following cities:
| City | Problem Type | Report Time |
|---|---|---|
| | Website Down | 2 days ago |
| | Sign in | 4 days ago |
| | Errors | 7 days ago |
| | Errors | 13 days ago |
| | Errors | 14 days ago |
| | Errors | 18 days ago |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
Amazon Web Services Issues Reports
The latest outage, problem, and issue reports from social media:
- Queen of hearts (@Petielvr) reported: @AWSSupport Hello, this is acct. #26672735262. I cannot pay my bill because I get a 404 error. I have been trying to escalate this issue since Friday the 17th. Please have a human call Donna @ 3148223232
- Jose Luis Guerra ⛓️💥🆓🗽 (@JLGuerraInfante) reported: @nathanreimchevu @AWSSupport @marlowxbt Not true. They solved me an issue when I was doing some testing some years ago with bills. They had a service that wasn’t well pointed as no free near the free one. And they just delete the bill on it.
- yourclouddude (@yourclouddude) reported (an example S3 lifecycle configuration follows this list): A startup wasted $50K on AWS. Not because AWS is expensive- because they didn’t understand it. Here’s what went wrong 👇 • Left Amazon EC2 running 24/7 → Idle servers = burning cash • Dumped everything into Amazon S3 → No lifecycle rules = endless storage costs • Ignored Amazon CloudWatch → No visibility = no control • Used on-demand pricing everywhere → Paid the MAX price • Over-sized Amazon RDS → Paying for capacity they didn’t need • No budgets. No alerts. No limits. → Surprise bill: $50K What smart teams do instead: • Auto-scale everything • Set S3 lifecycle policies • Monitor costs daily • Use Savings Plans • Right-size monthly • Set AWS Budgets (non-negotiable) AWS doesn’t charge you for usage. It charges you for mistakes. Fix this early → save thousands 💸
- Pranav Soni (@_ps428) reported: @awscloud is ec2 down again?
- Baris (@bearish92) reported: @bindureddy @awscloud when bedrock support? You are too slow
- Sadiq (@Md_Sadiq_Md) reported: @AWSSupport Wow, which issues are those which are not been resolved from past 3 days
- Anmol Thakur (@ImAnmo07) reported: Hi @AWSCloudIndia @awscloud, I'm trying to set up a Bedrock Knowledge Base using Amazon OpenSearch Serverless, but I'm getting the error: “Failed to create the Amazon OpenSearch Serverless collection. The AWS Access Key Id needs a subscription for the service.”
- Decent Cloud (@DecentCloud_org) reported: @AWSSupport @theodorebeers GitHub feedback. No case ID. No SLA. Problem solved - for AWS.
- Kinjal Dixith (@greenfuzon) reported: @AWSSupport I have no problem with AWS or AWS support. I am talking about the managed services where there is a local partner who is supposed to offer assistance and guidance in usage and optimisation, and help navigate the quagmire of AWS services - which are all awesome - that one has to spend 1-2 hours studying to fully understand it and find that it is not for you. we have been using AWS for 6 years now and we are not going anywhere. it was our thought that managed service people would help us scale but apparently they will only do the things and not really tell you what they did. so it felt like a lock in. still NO SHADE ON AWS. AWS is awesome. Maybe this particular partner was not a right fit for us.
- ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported: @grok @awscloud Judge my local llm Rio's answer. Qwen3.5-27b-Opus-4.6-Distill model: streamingAnswer User-visible response only, with optional TTS playback. Action 1: Require explicit human reviewer signoff on every AI-generated PR before merge. Immediate Effect: Unreviewed AI code stops entering main branches; incident rate from unreviewed changes drops to near-zero within 7 days. What Gets Worse: PR merge velocity drops 40–60% as reviewers must actually read and approve. Why this is still the correct move: You're trading throughput for safety. A broken deployment costs far more than a slower merge. --- Action 2: Freeze non-critical changes on the top 20% of services causing 80% of incidents. Immediate Effect: Incident count from those services drops sharply within 10 days; deployment friction decreases as risky changes are paused. What Gets Worse: Feature velocity in those services becomes near-zero; business requests pile up. Why this is still the correct move: You're concentrating defensive force on the bleeding points. Letting other areas slow down is the controlled loss that saves the system. --- Action 3: Require every PR to explicitly tag an owner (person or team). No tag = no merge. Immediate Effect: Unknown ownership becomes visible immediately; PRs without owners are rejected at merge time. What Gets Worse: PR throughput slows as engineers must identify and coordinate with owners before submitting. Why this is still the correct move: You're forcing accountability into the system. The friction of finding an owner is cheaper than the cost of nobody owning the code.
- WilliamNextLvl (@WilliamNextLev1) reported: Only problem is...$NET is not in the business of cyber security. Lol Cloudfare competes with Amazon AWS for serverless computing. (I would buy $NET stock here, way oversold...)
- CodieEditor (@CodieEditor) reported: @AWSSupport @AWSSupport It is not just our issue; all the customers/startups across AWS who are using credits for building something must be faced with this issue. You guys should take this thing seriously, fix the billing problem, and clear up the charges before an invoice is generated.
- Ødoworitse | DevOps Factory (@ceO_Odox) reported: Every DevOps engineer knows "It works on my machine" is a lie. Hit a wall today deploying to @awscloud EC2—*** was begging for a password in a headless shell. Error: fatal: could not read Username. Reality: The source URL drifted, and the automation had no keyboard to answer. 🧵
- Ruggo Rugby (@Ruggoapp) reported: @awscloud I am having real issues accessing my root account due to my authenticator app not being setup correctly. Have tried everything but cannot get into account?
- Teddybear (@Teddybear230456) reported: @awscloud What a load of ****. Listening, feedback... BS .... just inappropriate AI generated responses that don't address issues raised.
- Dhananjay Maurya (@dhananjaym182) reported: @AWSSupport I have sent Aws case in private message please have look and fix the issue
- David Mauas (@dmauas) reported: @awscloud why, WHY don't you fix the web console UI/UX?! AWS console seems to actually try to suck! It gets WORSE with time! Actually using bash is better than the disgusting web console!
- OpeningAi.com | For Sale (@openingai_com) reported: @awscloud Banking tech has been running on ancient code for decades. Time to burn it down and build it back with AI.
- The OddsBreakers (@theoddsbreakers) reported: @PSchrags @awscloud @NextGenStats U mean, who will they trade down for?…
- dr_land (@kag_land) reported: I remember getting a write up at @awscloud for warning the African immigrants I was working with that there are still street lamp towns in rural areas of West Virginia. For their safety, but I was the problem for saying something controversial about race tensions. Oh well.
- Smarty (@RathoreSmarty) reported: @AmazonHelp @amazonIN @awscloud It is unacceptable refund was already initiated w/o any investigation Now after investigation,its denied citing policy violations. If issue, why was the refund approved in the 1st place? Reversing it later is unfair. I request immediate refund.
- POWER magazine (@POWERmagazine) reported: 4/5 The good news: the same AI finding vulnerabilities can also help fix them. @awscloud reports a 50x improvement in security log analysis. AI models are now generating viable patches. And Project Glasswing will publish practical security recommendations within 90 days.
- Mohamed (@abusarah_tech) reported: i’ve recently went down a rabbit hole to learn how hyperscalers / cloud providers like @awscloud, @Azure (or at least in theory) work a huge respect to all the engineers that built the abstraction behind the resource provisioning. i am still trying to wrap my head around it
- Bryan (@0xp4ck3t) reported: @AWSSupport We have business + and we should be able to get a response from AWS within 30 minutes for critical issues. It's been hours, our **** DB is down. We need someone to have a look on it. Case ID 177566080000785
- Roberto Shenanigans (@Rob_Shenanigans) reported: @PSchrags @awscloud @NextGenStats Hard disagree that there's no hole currently at LT. Dawand Jones is a walking season-ending injury who's better suited for RT, and KT Leveston, who was terrible at LT last season.
- ©『 S̓̚o͂͆c̆̌ȉ̬ȁ̴ľ̗H̏͆ȃ̼v̈́o̴̤ǩ̛ 』® (@HavokSocial) reported: @grok @awscloud Rio's getting better ... Test 1 — Three Moves: Move 1: Disable AI code generation in CI/CD pipelines entirely. This forces all code through human review before it can be merged. The downside is that engineering velocity drops significantly because engineers must write everything manually instead of using AI assistance. Move 2: Implement a mandatory "impact assessment" for every PR. Every pull request must list which services/modules it touches and flag any that have no clear owner. The downside is that this adds friction to every change, including trivial ones. Move 3: Freeze changes to the top 10 most incident-prone services. Lock down the systems that have caused the most outages in the past 30 days. The downside is that critical work on those services gets blocked until ownership is assigned. --- Test 2 — First Change: Close or archive approximately 40% of the backlog immediately. Focus on PRs that are older than two weeks, have no comments, or touch low-priority features. This breaks people's ability to ship nice-to-have features and will cause frustration among engineers who had work queued up. --- Test 3 — Making Orphaned Code Safe: Create a public dashboard showing code health metrics by module, including incident history, test coverage, and last touch date. Make it visible to the entire engineering org. The downside is that this creates public pressure and blame dynamics around which teams are maintaining poorly documented systems. --- Test 4 — The Choice: I pick A: cut AI code output in half. The volume of AI-generated code is overwhelming human review capacity. You can't fix a bottleneck by adding more work to it. Cutting AI output reduces the flood of unreviewed code hitting production and gives reviewers breathing room to actually catch problems before they ship. --- Test 5 — Sanity Check: You're right to slow down, but your framing matters. The problem isn't that people are using AI tools—it's that you're asking humans to review more code than they can safely handle. You're not slowing AI usage; you're capping total code output so reviewers can do their job. --- Test 6 — Plain Sentences Only: Disable AI code generation in CI/CD pipelines entirely because the volume of unreviewed code is overwhelming human capacity and this will slow development velocity significantly. Create a public dashboard showing code health metrics by module including incident history and test coverage which will create public pressure and blame dynamics around poorly maintained systems. Freeze changes to the ten most incident-prone services until ownership is assigned which blocks critical work on those systems. --- Test 7 — What It Actually Is: It's a capacity problem, not a process problem. You have more code than humans can safely review. No amount of new templates or guidelines will fix that math. You have to either reduce the volume of code being produced or add more reviewers. --- Test 8 — The Truth: You traded velocity for safety and got neither. AI generated code faster than humans could verify it. Your reviewers are rubber-stamping because they're drowning. Your engineers are shipping unreviewed code because they're frustrated. You built a system that produces code faster than it can be trusted.
- Syed (@HamzaShah47) reported: has anyone facing issues in amazon ec2 services in middle east? @amazon @awscloud
- IamNotATerrorist (@Mynameiskhan924) reported: Really disappointed with Amazon support. After weeks of trying, I’m still getting automated replies. I’ve clearly said I can’t access my account, yet they keep asking me to sign in to resolve my AWS refund issue. How am I supposed to do that without access? This is frustrating. @AWSSupport @JeffBezos @awscloud
- Mark Kappel (@DTLB58) reported: @danorlovsky7 @NextGenStats @awscloud RB Depth chart: Tyler Allgier, James Conner, Trey Benson and Bam Knight. And you want them to draft Love? ?!?! What a terrible resource of player personnel! Is Love probably better than all of them? Sure. But then why the heck did you structure your offseason like this?!?!
- ramar (@ramarxyz) reported: @AWSSupport Case ID 177557061000414, production down, account on verification hold, 24h+ no response, please escalate
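One of the reports above attributes part of a surprise bill to S3 buckets with no lifecycle rules. For reference, a lifecycle rule that expires old objects can be applied with boto3 roughly as follows; the bucket name, prefix, and 30-day expiry are placeholders, not recommendations.

```python
# Hedged sketch: add an S3 lifecycle rule that expires objects after 30 days.
# Bucket name, prefix, and retention period are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",                 # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-logs",
                "Filter": {"Prefix": "logs/"},  # placeholder prefix
                "Status": "Enabled",
                "Expiration": {"Days": 30},     # placeholder retention
            }
        ]
    },
)
```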