Cloudflare Outage Map
The map below shows the cities worldwide where Cloudflare users have most recently reported problems and outages. If you are having an issue with Cloudflare, make sure to submit a report below.
The heatmap shows where the most recent user-submitted and social media reports are geographically clustered; the density of these reports is depicted by the color scale.
Cloudflare is a company that provides DDoS mitigation, content delivery network (CDN) services, security and distributed DNS services. Cloudflare's services sit between the visitor and the Cloudflare user's hosting provider, acting as a reverse proxy for websites.
Most Affected Locations
Outage reports and issues in the past 15 days originated from:
| Location | Reports |
|---|---|
| Istanbul, Istanbul | 1 |
| Greater Noida, UP | 2 |
| Paris, Île-de-France | 1 |
| Crisfield, MD | 1 |
| Noida, UP | 2 |
| Augsburg, Bavaria | 1 |
| Bengaluru, KA | 1 |
| Montataire, Hauts-de-France | 1 |
| London, England | 1 |
| Attleborough, England | 1 |
| Colima, COL | 1 |
| Leuven, Flanders | 1 |
| New Delhi, NCT | 2 |
| Mâcon, Bourgogne-Franche-Comté | 1 |
| Amsterdam, NH | 1 |
| Ashburn, VA | 1 |
| Rosario, SF | 1 |
| Merlo, BA | 1 |
| Frankfurt am Main, Hesse | 1 |
| Birmingham, AL | 1 |
| Dayton, OH | 1 |
| Miami, FL | 1 |
| Osnabrück, Lower Saxony | 1 |
| Bulandshahr, UP | 1 |
| A Coruña, Galicia | 1 |
| Easton, PA | 1 |
| Guayaquil, Guayas | 1 |
| El Port de Sagunt, Valencia | 1 |
| Medellín, Antioquia | 2 |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
Cloudflare Issues Reports
Latest outage, problem, and issue reports from social media:
- LastManStanding (@KrisRy14) reported: It’s cloudflare down again, please get ur sh* together
- Fozzy (@fozzydiablo) reported: cloudflare is my number one holding based on the theory all vibe code would be edging. I never knew what this meant but smart people told me about it. So I decided to host my app backend on cloudflare. The tech is amazing but ended up being the complete opposite of what I needed lol
- niknak (@niknak) reported: @twolays @Cloudflare Very bad
- Corey Quinn (@QuinnyPig) reported: @rauchg I like this. I'm really going to have to sit down and describe what I imagine to be the ideal flow for an agent-native cloud platform. It might be CloudFlare. It might be Vercel. It absolutely will not be AWS.
- Ash Nallawalla (@ashnallawalla) reported: @gaganghotra_ Great reminder. Most geo-blocking plugins and CDN rules need a specific exemption for known crawler IP ranges. Cloudflare and Wordfence both support this, but it's rarely enabled by default, so it's worth checking even on sites that seem to be working fine.
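The crawler exemption described in the comment above boils down to an IP-range allowlist check against a CIDR list. A minimal IPv4 sketch of that idea (the CIDR ranges below are documentation placeholders, not real crawler ranges, and the function names are invented for illustration):

```typescript
// Convert a dotted-quad IPv4 address to a 32-bit unsigned integer.
function ipToInt(ip: string): number {
  return ip.split(".").reduce((acc, o) => (acc << 8) | parseInt(o, 10), 0) >>> 0;
}

// Check whether an address falls inside a CIDR block like "198.51.100.0/25".
function inCidr(ip: string, cidr: string): boolean {
  const [base, bitsStr] = cidr.split("/");
  const bits = parseInt(bitsStr, 10);
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

// Placeholder allowlist: substitute the published IP ranges of the
// crawlers you actually want to exempt from geo-blocking rules.
const CRAWLER_RANGES = ["192.0.2.0/24", "198.51.100.0/25"];

function isExemptCrawler(ip: string): boolean {
  return CRAWLER_RANGES.some((r) => inCidr(ip, r));
}

console.log(isExemptCrawler("192.0.2.42"));  // true: inside 192.0.2.0/24
console.log(isExemptCrawler("203.0.113.9")); // false: matches no allowlisted range
```

In practice a geo-blocking rule would run this check before the country filter, so allowlisted crawler traffic is never evaluated against it.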
- DataDan|AI Data Engineering (@ba_niu80557) reported: A function receives a webhook, validates it, queries a database (150ms network round-trip), and returns a response. Total wall-clock time: 170ms. Actual CPU time: 5ms. AWS Lambda bills you for 170ms. Cloudflare Workers bills you for 5ms. Same function. Same result. 34x billing difference, because one platform charges for time your code spent waiting, and the other charges only for time your code spent computing. (Source: Morph, "Cloudflare Workers vs AWS Lambda 2026", April 2026)

  That billing model difference is the most underappreciated shift in backend infrastructure in 2026. And it's quietly reshaping how production systems are built, not just for edge use cases, but for everything. Here's the thesis I keep arriving at after watching teams migrate over the past 18 months: the "deploy to a region, scale with containers" model that dominated backend engineering from 2015-2024 is being replaced by a "deploy everywhere, scale with isolates" model. And most backend engineers haven't noticed, because the migration is happening one function at a time. Baselime reported 80% lower cloud costs after migrating from AWS to Cloudflare. Not 8%. Eighty percent. (Source: Morph, April 2026) The numbers are that dramatic because three structural differences compound.

  Difference 1: Cold starts don't exist anymore. Lambda cold starts: 100ms to 3,000ms depending on runtime, package size, and VPC config. A Java Lambda in a VPC? You might wait 3 full seconds before your code runs a single line. Cloudflare Workers cold starts: under 5 milliseconds. Effectively zero. Because Workers don't spin up containers. They run V8 isolates, the same lightweight sandboxing technology that runs your Chrome browser tabs. For a web API serving human users, a 100ms cold start is noticeable but tolerable. For an AI agent making 200 API calls per session, cold starts compound catastrophically: 200 calls × 500ms average cold start = 100 seconds of dead time per session. The agent is waiting for infrastructure, not computing. (Source: Morph, April 2026) This is why every serious AI agent infrastructure team I know is evaluating edge-first deployment. Not because edge is trendy, but because their agents are burning money and latency on cold starts that V8 isolates eliminate entirely.

  Difference 2: Global distribution is the default, not the exception. You deploy a Lambda function. It runs in us-east-1. A user in Tokyo hits your API. Their request travels 11,000 km to Virginia, your function processes it, and the response travels 11,000 km back. Round trip: 200-400ms of pure network latency before your code does anything. You deploy a Cloudflare Worker. It runs in 330+ cities worldwide. A user in Tokyo hits your API. The request reaches a Cloudflare edge node in Tokyo. Your code runs there. The response returns from Tokyo. Round-trip network latency: effectively zero. This isn't edge computing as a niche optimization. This is "every function is global by default" as the deployment model. You don't choose a region. There is no region. Your code runs wherever the user is. For a traditional CRUD API, this reduces TTFB by 60-80%. (Source: Digital Applied, "Edge Computing: Cloudflare Workers Dev Guide 2026", January 2026) For AI agent endpoints that serve users across time zones (a customer support agent used by a global company, a coding assistant used by distributed teams), the latency reduction is the difference between "feels instant" and "feels slow."

  Difference 3: The ecosystem became a full stack. The reason edge computing stayed niche from 2018-2023 was simple: you could run code at the edge, but your data was still in a region. Every edge function that needed a database round-tripped to us-east-1 anyway, killing the latency advantage. In 2026, Cloudflare solved this by building an entire data layer at the edge:
  → D1: SQLite at the edge. Global read replication. Your queries run where your users are.
  → KV: Key-value storage with edge caching. Sub-millisecond reads globally.
  → R2: Object storage. S3-compatible. Zero egress fees. (This alone saves thousands/month for media-heavy applications.)
  → Durable Objects: Stateful computing at the edge. Strongly consistent, globally coordinated state, the thing that was impossible at the edge until 2024.
  → Queues: Message queuing with guaranteed delivery.
  → AI inference: Run ML models on serverless GPUs at the edge.
  → Vectorize: Vector database for semantic search at the edge.
  (Sources: Cloudflare Workers docs; Calmops, "Edge Computing with Cloudflare Workers", March 2026)

  This changes the calculus completely. In 2022, edge was "run your CDN logic there." In 2026, edge is "run your entire application there": database, storage, queues, AI inference, state management. Full stack. The framework ecosystem caught up too. Hono (under 14KB, zero dependencies, Express-like routing) became the standard routing framework for Workers in 2026. You write code that looks almost identical to Express/Fastify, but it runs globally with zero cold starts.

  What this means for how you should think about your next backend project: the decision tree has changed.
  Is your workload I/O heavy (API calls, database queries, webhook processing)? → Workers bills CPU time only. You pay for 5ms of compute, not 170ms of waiting. The cost difference is 10-34x. This is most web backends.
  Does your application serve users globally? → Workers runs in 330+ cities automatically. No multi-region deployment to manage. No cross-region replication to configure. Global is the default.
  Does your application need zero cold starts? → Workers uses V8 isolates: sub-5ms startup. Lambda uses containers: 100ms-3s startup. If you're serving real-time AI agents, chatbots, or latency-sensitive APIs, cold starts are unacceptable.
  Does your workload need heavy compute (video transcoding, ML training, data processing)? → Lambda. Workers caps at 128MB memory and has CPU time limits. For compute-heavy tasks, Lambda's 10GB memory and 15-minute execution are necessary.
  Are you deeply integrated with the AWS ecosystem (DynamoDB, SQS, S3 triggers, Step Functions)? → Lambda. Workers can't trigger on S3 events or consume DynamoDB streams. Migrating away from Lambda means migrating away from the AWS event-driven ecosystem.

  The honest assessment: 80% of web-facing backend functions are I/O-heavy, globally distributed workloads where Workers is structurally cheaper and faster. 20% are compute-heavy or AWS-locked workloads where Lambda is the right choice. Most teams are running 100% on Lambda because that's what they learned in 2018.

  The AI angle that ties this back to my usual topics: every AI agent infrastructure pattern I've written about (MCP servers, tool endpoints, RAG retrieval APIs, model routing gateways, cost tracking middleware) is an I/O-heavy workload that serves global users and needs zero cold starts. These are exactly the workloads where edge-first architecture delivers the largest improvement over traditional serverless. An MCP server on Lambda: cold start + regional latency + wall-clock billing = slow and expensive. An MCP server on Workers: zero cold start + global distribution + CPU-only billing = fast and cheap. The infrastructure layer beneath AI agents matters as much as the orchestration layer above them. Most agent architecture discussions focus on LangGraph vs CrewAI and ignore the fact that the function layer underneath is adding 100+ seconds of dead time per session to cold starts.

  Three uncomfortable questions for any backend team in 2026:
  1) What percentage of your Lambda invocation time is your code actually computing vs waiting for I/O? If you're not measuring this, you're paying for wait time. For most web APIs, CPU time is 3-10% of wall-clock time. The other 90-97% is network round-trips that Lambda bills you for and Workers doesn't.
  2) Where are your users, and where is your code? If users are global and code is in us-east-1, you're adding 100-300ms of pure network latency to every request. Workers eliminates this by running your code where your users are. Automatically.
  3) When was the last time you evaluated whether your serverless architecture is still the right one? If the answer is "when we set it up in 2020", the infrastructure landscape has fundamentally changed. Edge-first wasn't viable in 2020. It is in 2026. A 2-day migration experiment on a single non-critical endpoint will tell you whether the cost and latency improvements are real for your workload.

  The thesis:
  → 2016-2020: "serverless means Lambda"
  → 2021-2024: "edge is interesting for CDN logic but not real backends"
  → 2026: "edge-first is the default for I/O-heavy global workloads, and traditional serverless is the fallback for compute-heavy regional workloads"

  The inversion already happened. Most backend engineers are still deploying to Lambda because that's what the tutorials taught them in 2018. The teams that re-evaluated are running the same functions at 60-80% lower latency and 80% lower cost. Same code. Different infrastructure. Dramatically different bill. The boring infrastructure migration wins. It always does. Especially when the exciting AI agent is waiting 100 seconds for cold starts nobody measured.
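The billing arithmetic in the post above can be sketched in a few lines. The 170ms wall-clock and 5ms CPU figures come from the post itself; the per-millisecond rate is a made-up placeholder, not real AWS or Cloudflare pricing:

```typescript
// Hypothetical per-millisecond rate, for illustration only.
// Real Lambda and Workers pricing differs and includes request fees.
const RATE_PER_MS = 1e-7;

// Wall-clock billing (Lambda-style): you pay for the full request
// duration, including the time the function spent waiting on I/O.
function wallClockCost(wallMs: number): number {
  return wallMs * RATE_PER_MS;
}

// CPU-time billing (Workers-style): you pay only for the
// milliseconds your code actually spent computing.
function cpuTimeCost(cpuMs: number): number {
  return cpuMs * RATE_PER_MS;
}

// Figures from the post: 170ms wall-clock, of which 5ms is CPU work.
const ratio = wallClockCost(170) / cpuTimeCost(5);
console.log(ratio); // the 34x difference the post describes
```

The ratio is independent of the placeholder rate: for an I/O-bound function, it is simply wall-clock time divided by CPU time, which is why the gap grows with network latency.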
- rep1c.eth (@rep1cxyz) reported: "Stocks always rise after layoffs" - really?
  Cloudflare: beat earnings, cut 1,100 people → stock dropped 18%
  Coinbase: cut 700, called it "AI-native" → surprise loss, stock down 5%
  Upwork: cut 24% of workforce → stock cratered 19%
  I went through the actual numbers. The full breakdown ↓
- kitcatixoxo || Margaritamar (@Caddue) reported: btw the problem seems to be cloudflare and many apps suffer momentarily
- Rebound Capital (@rebound_capital) reported: SaaS Shenanigans #1. Cloudflare ($NET) has been around since 2009, but it still hasn't figured out how to make a GAAP profit. Even with the best products.
  - They're currently running at a -10% operating margin. How do they keep the lights on?
  - Every single quarter, they issue new stock worth about 18% of revenue. They call it stock-based compensation, meant to align employees' interests with the company's. In effect, it dilutes shareholders. Good while the market bids up the stock. Very bad once the gravy train stops: just look at what's happened to ServiceNow.
  How does this go on for so long?
  - Fund managers keep buying Cloudflare stock with their clients' money to fill the gap. We used to call this a 'burn rate.' Now it's just a cycle that never ends. Turns out you don't actually have to make money, as long as there's always someone new willing to buy the stock.
  - Every company wants to be in passive indices, so passive money is forced to buy any shares they sell to the market.
  And you wonder why SaaS is suddenly trading so poorly? How long can a system go on where a 16-year-old company (a leader in its space) still isn't GAAP profitable?
- Andrejco (@builtbyfaithh) reported: Today I finished setting up Cloudflare for a website. It was my first time doing the full process, so I was definitely a bit scared. Not because Cloudflare itself looked impossible, but because DNS is one of those things where one wrong record can break something important:
  • The website
  • Email
  • Webmail
  • cPanel
  • FTP
  So I was extra careful. I reviewed the DNS records Cloudflare automatically imported. Then I compared them with the original records from cPanel. Then I checked them again. And again. I made sure only the main website records were going through the Cloudflare proxy. Mail, cPanel, FTP, webmail, and other service-related records stayed DNS only so they would not break. I also found one small mismatch in the MX record and corrected it to match the original DNS setup. After that, I confirmed the A record IP matched the server IP in cPanel, collected the Cloudflare nameservers, changed them on the domain, and waited for propagation. At the start, it felt scary. By the end, it made a lot more sense.
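The proxy-vs-DNS-only decision described in that report can be expressed as a small rule: mail and server-admin records must bypass the proxy so they resolve straight to the origin, while ordinary web records can be proxied. A minimal sketch of that rule (the record shape and hostname prefixes are illustrative assumptions, not a Cloudflare API):

```typescript
interface DnsRecord {
  type: "A" | "AAAA" | "CNAME" | "MX" | "TXT";
  name: string; // e.g. "www.example.com"
}

// Hostname prefixes for service records that should stay DNS-only,
// matching the report above: mail, webmail, cPanel, FTP.
const DNS_ONLY_PREFIXES = ["mail.", "webmail.", "cpanel.", "ftp."];

// Decide whether a record is safe to route through the proxy.
// MX targets and service hostnames must resolve directly to the
// origin server, or mail/FTP/cPanel traffic breaks.
function shouldProxy(record: DnsRecord): boolean {
  if (record.type === "MX" || record.type === "TXT") return false;
  return !DNS_ONLY_PREFIXES.some((p) => record.name.startsWith(p));
}

console.log(shouldProxy({ type: "A", name: "www.example.com" }));    // true: main website record
console.log(shouldProxy({ type: "A", name: "cpanel.example.com" })); // false: stays DNS-only
console.log(shouldProxy({ type: "MX", name: "example.com" }));       // false: mail must bypass the proxy
```

Running imported records through a check like this before enabling the proxy catches exactly the breakage the report was worried about.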
- pokie-the-goat (@pokiegoat) reported: @bally44025 @olascobimson @chi87675 Are you a child? Make your research. CloudFlare is used by tons of sites like discord, Reddit, twitch etc. and they are all working fine. They told you bank issues yesterday and CloudFlare today and you still believe. Use your head!
- BootStepper (@BootStepper) reported: @tan_stack I am trying to use TanStackAI and `cloudflare/tanstack-ai`, but it's giving type check issues about "gpt-5.4" not being a valid model. Is this something you or @Cloudflare can fix?
- Sr Carlos ²³²U (@CJavierSaldana) reported: For me, the best usage of the Codex /goal command: /goal Fork this MIT-licensed xxxxx/yyyy repo and convert it to a Cloudflare-native version. Be destructive: remove all external service dependencies and follow our cloudflare-clone-security-check.
- SONIKKU🤑 (@Sonikku_Blue_) reported: @TheClone_17 sorry i dont know how to do that, but i read a comment, from someone with the same issue, by downloading smth called Cloudflare Warp?
- PaymentExecutive (@pymtexecutive) reported: 7/ THE ANNOUNCEMENT MOST PAYMENT PROS MISSED
  @Cloudflare launched NET Dollar, a US dollar-backed stablecoin designed specifically for the agentic web. The thesis: traditional payment rails fail for AI-agent micropayments. Card networks and cross-border fees make micropayments economically unviable. NET Dollar enables sub-cent, machine-to-machine payments at internet scale. Cloudflare already processes 1 billion HTTP 402 "Payment Required" responses per day on its network, the infrastructure backbone for the x402 agentic payment protocol. NET Dollar + x402 + Cloudflare's global network = the closest thing to a production-ready agentic payment rail that exists. Tomorrow's CLARITY Act markup makes this infrastructure's legal status clearer.