Cloudflare

Cloudflare status: hosting issues and outage reports

No problems detected

If you are having issues, please submit a report below.

Full Outage Map

Cloudflare is a company that provides DDoS mitigation, content delivery network (CDN) services, security and distributed DNS services. Cloudflare's services sit between the visitor and the Cloudflare user's hosting provider, acting as a reverse proxy for websites.

Problems in the last 24 hours

The graph below depicts the number of Cloudflare reports received over the last 24 hours by time of day. When the number of reports exceeds the baseline, represented by the red line, an outage is likely in progress.

At the moment, we haven't detected any problems at Cloudflare. Are you experiencing issues or an outage? Leave a message in the comments section!

Most Reported Problems

The following are the problems most frequently reported by Cloudflare users through our website:

  • Domains (42%)
  • Cloud Services (33%)
  • Hosting (17%)
  • E-mail (4%)
  • Web Tools (4%)

Live Outage Map

The most recent Cloudflare outage reports came from the following cities:

City           Problem Type   Report Time
Istanbul       Domains        1 day ago
Greater Noida  E-mail         4 days ago
Paris          Domains        5 days ago
Crisfield      Domains        6 days ago
Noida          Hosting        10 days ago
Augsburg       Domains        10 days ago

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Cloudflare Issue Reports

The latest outage, problem and issue reports from social media:

  • im_usamakhalid
    Usama Khalid (@im_usamakhalid) reported

    New on @userscom_com: You can now reply to customer tickets directly from your email. Thanks to @Cloudflare Email Routing! No need to open Userscom anymore. Just hit reply on the email notification and your response gets added to the ticket automatically. Your customers can do the same too 👀 Support should feel like email. Fast, simple, no extra tabs. #buildinpublic #customersupport #saas

  • SatyaNaaksh
    SatyaNaaksh (@SatyaNaaksh) reported

    @atmoio The scariest part of the Cloudflare layoffs wasn’t the layoffs. It was hearing human beings described like software upgrades: "100x productivity", "electric screwdriver", "support roles." Feels like we’re entering an era where the people who sound most "replaceable" are the ones speaking about humans this way.

  • repalash
    Palash Bansal (@repalash) reported

    @CherryJimbo Doesn't work on mobile also. Cloudflare is embracing AI so much that it's going in slop territory. For those saying humans also make mistakes - yes, but most don't approve broken things for **** by lying about it and calling it done and stable.

  • CherryKnightley
    Cherry Knightley (@CherryKnightley) reported

    @konkonxo @K_K__G It’s not usually the advertisers causing problems like this, it’s payment processors like Mastercard or site hosts (think Cloudflare, although historically they’re not shutting down porn sites)

  • rhiyddun
    Rhiyddun (@rhiyddun) reported

    @CopBlaster @RightSideR3bel the IPs point at cloudflare stuff, which I wouldn't expect a regular user website to be. Idk, the webserver was probably easier to breach and redirect than a godaddy login, and far more destructive things than a simple redirect could have happened at domain level. Could be someone using the cPanel exploit... depends on a lot of things I don't know.

  • bally44025
    Baby Ruckus (@bally44025) reported

    @pokiegoat @olascobimson @chi87675 Nahh It's cus cloudflare is down at the moment And that's what powers the website and many other websites

  • ayushagarwal
    Ayush Agarwal (@ayushagarwal) reported

    @dotmanish @dodopayments @Cloudflare yeah I was also shocked - stripe's website is the worst for agents.

  • Techegic
    Techegic (@Techegic) reported

    This isn't isolated. Meta, Microsoft, Amazon — same pattern in recent quarters. Revenue up. Headcount down. AI cited as cause. Cloudflare just said it out loud. 2/6

  • PryvitKyle
    Kyle DH | pryvit.eth (@PryvitKyle) reported

    @CloakdDev IMO it’s early. As you point out most sites and data already make content free. x402 will primarily grow as Cloudflare turn their CAPTCHA systems into a toll booth and Google seems to be attempting to go down the ID route so they can place ads only for humans

  • buildxrajesh
    Build X Rajesh (@buildxrajesh) reported

    Google is slowly locking down parts of the free web while Cloudflare keeps blocking AI crawlers. Meanwhile half the AI agent startups are basically: “bro please let my bot read your website” 😭 The open web AI depended on for years is starting to disappear.

  • TransformLabsHQ
    Transform Labs (@TransformLabsHQ) reported

    Cloudflare just eliminated 1,100 support roles while posting record revenue. This is the moment AI stops being your copilot and starts replacing the headcount. 🧵

  • TheWizardTower
    Merlin (@TheWizardTower) reported

    @Ruff681368 If improper use isn't obvious, it's hiding a bug. The point of an abstraction is to make invalid states unrepresentable, and improper use obvious. This is what Rust/Haskell/Scala do by using Maybe/Optional instead of a universal null type. Even when CloudFlare had the outage because they called .unwrap() on a pointer, the meaning of that function is "You solemnly swear that this is a Some(x), because if it's a None, I'm blowing up your runtime." And, indeed, when that came up, the problem was *immediately* obvious. The go code in question here was code that looked reasonable at first glance, but wasn't. That's a problem!
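The Maybe/Optional discipline the post above describes can be sketched in TypeScript with a discriminated union; the names `Option`, `some`, `none`, `unwrap`, and `unwrapOr` are illustrative, not taken from any specific library or from Cloudflare's actual code:

```typescript
// An Option type: a value is explicitly either present or absent,
// instead of relying on a universal null.
type Option<T> = { kind: "some"; value: T } | { kind: "none" };

const some = <T>(value: T): Option<T> => ({ kind: "some", value });
const none: Option<never> = { kind: "none" };

// unwrap: "you solemnly swear this is a Some" -- it fails loudly on a
// None, so improper use is immediately obvious at the call site rather
// than surfacing later as a silent null dereference.
function unwrap<T>(opt: Option<T>): T {
  if (opt.kind === "none") throw new Error("called unwrap on a None");
  return opt.value;
}

// The safer alternative forces the caller to supply a fallback,
// handling both states explicitly.
function unwrapOr<T>(opt: Option<T>, fallback: T): T {
  return opt.kind === "some" ? opt.value : fallback;
}
```

Calling `unwrap` on a `none` blows up at the call site instead of propagating a null, which is the property the post credits with making the misuse immediately obvious when it happens.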

  • cshekhar
    Shekhar (@cshekhar) reported

    @daytonaio runs containers, which share one Linux kernel across tenants. Vulnerable. The container boundary itself was not breached. The shared kernel underneath was. Patched in 12 hours. Credentials rotated. Signups paused. @Cloudflare also runs containers on their own edge. Shipped a kernel-level filter within hours. Kernel patches in five days. No customer-facing exposure advisory ever published.

  • LukeYoungblood
    LukeYoungblood.eth 🛡️ (@LukeYoungblood) reported

    @curtjg1971 @Cloudflare We put in a limit request Monday but still no resolution. They have really good service otherwise, but I hope the people handling limit increases are eventually going to resolve it.

  • CJavierSaldana
    Sr Carlos ²³²U (@CJavierSaldana) reported

    For me, the best usage of the Codex /goal command: /goal Fork this MIT-licensed xxxxx/yyyy repo and convert it to a Cloudflare-native version. Be destructive: remove all external service dependencies and follow our cloudflare-clone-security-check.

  • jayhemz
    Johnmark Obiefuna (@jayhemz) reported

    Proxying through Cloudflare solves almost all Wordpress security issues.

  • timagixe
    timagixe (@timagixe) reported

    i remember the first time I bought domain on NameCheap the first thing I did in 10 minutes - transferred domain to CloudFlare luckily to me it was .com - so no issues with that

  • TheEightBitLink
    The Eight-Bit Link (@TheEightBitLink) reported

    Message @Cloudflare as a paying customer regarding billing, get told to talk to the community after three weeks and my ticket has been resolved. Great job, guys.

  • kenAI_domains
    KenAi (@kenAI_domains) reported

    @NameBio great support, thanks again for solving my cloudflare problem today, Michael! Greetings from Italy

  • ba_niu80557
    DataDan|AI Data Engineering (@ba_niu80557) reported

    A function receives a webhook, validates it, queries a database (150ms network round-trip), and returns a response. Total wall-clock time: 170ms. Actual CPU time: 5ms. AWS Lambda bills you for 170ms. Cloudflare Workers bills you for 5ms. Same function. Same result. 34x billing difference — because one platform charges for time your code spent waiting, and the other charges only for time your code spent computing. (Source: Morph, "Cloudflare Workers vs AWS Lambda 2026", April 2026)

    That billing model difference is the most underappreciated shift in backend infrastructure in 2026. And it's quietly reshaping how production systems are built — not just for edge use cases, but for everything. Here's the thesis I keep arriving at after watching teams migrate over the past 18 months: The "deploy to a region, scale with containers" model that dominated backend engineering from 2015-2024 is being replaced by a "deploy everywhere, scale with isolates" model. And most backend engineers haven't noticed because the migration is happening one function at a time.

    Baselime reported 80% lower cloud costs after migrating from AWS to Cloudflare. Not 8%. Eighty percent. (Source: Morph, April 2026) The numbers are that dramatic because three structural differences compound:

    Difference 1: Cold starts don't exist anymore. Lambda cold starts: 100ms to 3,000ms depending on runtime, package size, and VPC config. Java Lambda in a VPC? You might wait 3 full seconds before your code runs a single line. Cloudflare Workers cold starts: under 5 milliseconds. Effectively zero. Because Workers don't spin up containers. They run V8 isolates — the same lightweight sandboxing technology that runs your Chrome browser tabs. For a web API serving human users, a 100ms cold start is noticeable but tolerable. For an AI agent making 200 API calls per session, cold starts compound catastrophically: 200 calls × 500ms average cold start = 100 seconds of dead time per session. The agent is waiting for infrastructure, not computing. (Source: Morph, April 2026) This is why every serious AI agent infrastructure team I know is evaluating edge-first deployment. Not because edge is trendy — because their agents are burning money and latency on cold starts that V8 isolates eliminate entirely.

    Difference 2: Global distribution is the default, not the exception. You deploy a Lambda function. It runs in us-east-1. A user in Tokyo hits your API. Their request travels 11,000 km to Virginia, your function processes it, and the response travels 11,000 km back. Round trip: 200-400ms of pure network latency before your code does anything. You deploy a Cloudflare Worker. It runs in 330+ cities worldwide. A user in Tokyo hits your API. The request reaches a Cloudflare edge node in Tokyo. Your code runs there. The response returns from Tokyo. Round-trip network latency: effectively zero. This isn't edge computing as a niche optimization. This is "every function is global by default" as the deployment model. You don't choose a region. There is no region. Your code runs wherever the user is. For a traditional CRUD API, this reduces TTFB by 60-80%. (Source: Digital Applied, "Edge Computing: Cloudflare Workers Dev Guide 2026", January 2026) For AI agent endpoints that serve users across time zones — a customer support agent used by a global company, a coding assistant used by distributed teams — the latency reduction is the difference between "feels instant" and "feels slow."

    Difference 3: The ecosystem became a full stack. The reason edge computing stayed niche from 2018-2023 was simple: you could run code at the edge, but your data was still in a region. Every edge function that needed a database round-tripped to us-east-1 anyway, killing the latency advantage. In 2026, Cloudflare solved this by building an entire data layer at the edge:
    → D1: SQLite at the edge. Global read replication. Your queries run where your users are.
    → KV: Key-value storage with edge caching. Sub-millisecond reads globally.
    → R2: Object storage. S3-compatible. Zero egress fees. (This alone saves thousands/month for media-heavy applications.)
    → Durable Objects: Stateful computing at the edge. Strongly consistent, globally coordinated state — the thing that was impossible at the edge until 2024.
    → Queues: Message queuing with guaranteed delivery.
    → AI inference: Run ML models on serverless GPUs at the edge.
    → Vectorize: Vector database for semantic search at the edge.
    (Sources: Cloudflare Workers docs; Calmops, "Edge Computing with Cloudflare Workers", March 2026)

    This changes the calculus completely. In 2022, edge was "run your CDN logic there." In 2026, edge is "run your entire application there" — database, storage, queues, AI inference, state management. Full stack. The framework ecosystem caught up too. Hono — under 14KB, zero dependencies, Express-like routing — became the standard routing framework for Workers in 2026. You write code that looks almost identical to Express/Fastify, but it runs globally with zero cold starts.

    What this means for how you should think about your next backend project. The decision tree has changed:
    Is your workload I/O-heavy (API calls, database queries, webhook processing)? → Workers bills CPU time only. You pay for 5ms of compute, not 170ms of waiting. The cost difference is 10-34x. This is most web backends.
    Does your application serve users globally? → Workers runs in 330+ cities automatically. No multi-region deployment to manage. No cross-region replication to configure. Global is the default.
    Does your application need zero cold starts? → Workers uses V8 isolates: sub-5ms startup. Lambda uses containers: 100ms-3s startup. If you're serving real-time AI agents, chatbots, or latency-sensitive APIs, cold starts are unacceptable.
    Does your workload need heavy compute (video transcoding, ML training, data processing)? → Lambda. Workers caps at 128MB memory and has CPU time limits. For compute-heavy tasks, Lambda's 10GB memory and 15-minute execution are necessary.
    Are you deeply integrated with the AWS ecosystem (DynamoDB, SQS, S3 triggers, Step Functions)? → Lambda. Workers can't trigger on S3 events or consume DynamoDB streams. Migrating away from Lambda means migrating away from the AWS event-driven ecosystem.

    The honest assessment: 80% of web-facing backend functions are I/O-heavy, globally distributed workloads where Workers is structurally cheaper and faster. 20% are compute-heavy or AWS-locked workloads where Lambda is the right choice. Most teams are running 100% on Lambda because that's what they learned in 2018.

    The AI angle that ties this back to my usual topics: Every AI agent infrastructure pattern I've written about — MCP servers, tool endpoints, RAG retrieval APIs, model routing gateways, cost tracking middleware — is an I/O-heavy workload that serves global users and needs zero cold starts. These are exactly the workloads where edge-first architecture delivers the largest improvement over traditional serverless. An MCP server on Lambda: cold start + regional latency + wall-clock billing = slow and expensive. An MCP server on Workers: zero cold start + global distribution + CPU-only billing = fast and cheap. The infrastructure layer beneath AI agents matters as much as the orchestration layer above them. Most agent architecture discussions focus on LangGraph vs CrewAI and ignore the fact that the function layer underneath is adding 100+ seconds of dead time per session to cold starts.

    Three uncomfortable questions for any backend team in 2026:
    1) What percentage of your Lambda invocation time is your code actually computing vs waiting for I/O? If you're not measuring this, you're paying for wait time. For most web APIs, CPU time is 3-10% of wall-clock time. The other 90-97% is network round-trips that Lambda bills you for and Workers doesn't.
    2) Where are your users, and where is your code? If users are global and code is in us-east-1, you're adding 100-300ms of pure network latency to every request. Workers eliminates this by running your code where your users are. Automatically.
    3) When was the last time you evaluated whether your serverless architecture is still the right one? If the answer is "when we set it up in 2020", the infrastructure landscape has fundamentally changed. Edge-first wasn't viable in 2020. It is in 2026. A 2-day migration experiment on a single non-critical endpoint will tell you whether the cost and latency improvements are real for your workload.

    The thesis:
    → 2016-2020: "serverless means Lambda"
    → 2021-2024: "edge is interesting for CDN logic but not real backends"
    → 2026: "edge-first is the default for I/O-heavy global workloads, and traditional serverless is the fallback for compute-heavy regional workloads"

    The inversion already happened. Most backend engineers are still deploying to Lambda because that's what the tutorials taught them in 2018. The teams that re-evaluated are running the same functions at 60-80% lower latency and 80% lower cost. Same code. Different infrastructure. Dramatically different bill. The boring infrastructure migration wins. It always does. Especially when the exciting AI agent is waiting 100 seconds for cold starts nobody measured.
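The billing arithmetic the post above walks through can be sketched in a few lines of TypeScript; the per-millisecond price is a made-up placeholder for illustration, not either platform's actual rate, and the 170ms/5ms figures are the post's own example numbers:

```typescript
// Sketch of the wall-clock vs CPU-time billing gap described above.
// PRICE_PER_MS is a hypothetical flat price, not a real published rate.
const PRICE_PER_MS = 0.000001;

interface Invocation {
  wallClockMs: number; // total request time, including I/O waits
  cpuMs: number;       // time the code actually spent computing
}

// Wall-clock billing (Lambda-style): time spent waiting on I/O is billed.
const wallClockCost = (inv: Invocation): number => inv.wallClockMs * PRICE_PER_MS;

// CPU-time billing (Workers-style): only compute time is billed.
const cpuTimeCost = (inv: Invocation): number => inv.cpuMs * PRICE_PER_MS;

// The webhook example from the post: 150ms of the wall clock is a
// database round-trip the code spends waiting on.
const webhook: Invocation = { wallClockMs: 170, cpuMs: 5 };
const ratio = wallClockCost(webhook) / cpuTimeCost(webhook); // 34x

// The cold-start compounding example: 200 calls at a 500ms average cold start.
const deadTimeSeconds = (200 * 500) / 1000; // 100 seconds per session
```

The 34x ratio follows directly from billing 170ms versus 5ms at the same per-millisecond price; real pricing differs by platform, tier, and memory configuration, so the ratio is a model of the billing structure, not a quote.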

  • ArsiHoxha_
    Arsi Hoxha (@ArsiHoxha_) reported

    @adahstwt Namecheap for years then switched to Cloudflare and never looked back. no markup, no upsells, no drama 🫶

  • devfredy
    Fredy Sandoval⚡️ (@devfredy) reported

    @asaio87 Will he secure your app behind a Cloudflare bot protection and set a Tailscale network for zero trust configuration, ensuring the database is unreachable for public internet?

  • PikaSim_esim
    PikaSim (@PikaSim_esim) reported

    Had to refund and cancel 50 customers from Malaysia buying big esim plans for Oman. with $7,000 in card testing payments. No idea why Stripe doesn’t catch these They even passed Cloudflare CAPTCHA?

  • builtbyfaithh
    Andrejco (@builtbyfaithh) reported

    Today I finished setting up Cloudflare for a website. It was my first time doing the full process, so I was definitely a bit scared. Not because Cloudflare itself looked impossible. But because DNS is one of those things where one wrong record can break something important:
    • The website
    • Email
    • Webmail
    • cPanel
    • FTP
    So I was extra careful. I reviewed the DNS records Cloudflare automatically imported. Then I compared them with the original records from cPanel. Then I checked them again. And again. I made sure only the main website records were going through the Cloudflare proxy. Mail, cPanel, FTP, webmail, and other service-related records stayed DNS only so they would not break. I also found one small mismatch in the MX record and corrected it to match the original DNS setup. After that, I confirmed the A record IP matched the server IP in cPanel, collected the Cloudflare nameservers, changed them on the domain, and waited for propagation. At the start, it felt scary. By the end, it made a lot more sense.

  • Wallage
    Philip Wallage (@Wallage) reported

    3 weeks ago Cloudflare published a beautiful blog post about redesigning that little widget you click to verify you're not a robot. WCAG 2.2 AAA. Rigorous user research. Eight participants from eight countries, blinded testing. They wrote that "when visual consistency conflicted with readability, readability won. Every time."

    Today they launched their new marketing homepage. No blog post about it. No press release. No tweet from Matthew Prince. For a company that announces a quarterly Forrester report and individual API changelogs, the silence on a full marketing site relaunch is loud. The reaction on X has been brutal. Some of what's being flagged:
    - Login button goes to the sign-up page
    - "View docs" link on the careers page points to R2 storage
    - Multiple users with no colourblindness saying the contrast hurts their eyes
    - Broken scrolling on Safari
    - Doesn't render properly on mobile
    - An em-dash in the hero headline, days after a whole blog post about removing em-dashes for readability

    A Cloudflare engineer replied to the thread: "expect fixes in the coming days." I'm not piling on Cloudflare. Shipping at their scale is hard and they'll fix it. The contrast between the two artefacts is the lesson. The blog post about the human-verification widget is what design teams want to be true about themselves. Process. Research. Accessibility as a value. The marketing homepage is what actually ships under deadline pressure when nobody owns the QA pass. If you look at most e-commerce sites I audit, the same gap exists. The brand book says "accessible, considered, customer-first." The product detail page has 11px grey-on-grey microcopy, a CTA that disappears on hover, and a sticky add-to-cart that covers the price on mobile. The blog post you want to write about your design system matters less than the page where you take money from people. Audit what you actually shipped, not what you meant to ship.

  • bree_sharp
    Bree Sharp | Local SEO Strategist (@bree_sharp) reported

    After moving MWOV from SiteGround to Cloudflare: Ahrefs went from returning 1 URL per crawl to crawling the site normally. The culprit was LiteSpeed's bot management — aggressive enough to rate-limit Ahrefs into giving up after the first URL. GSC showed 59 pages indexed fine. The problem wasn't the site. It was the layer between the site and the crawlers. Sometimes the issue isn't your content or your configuration. It's your infrastructure making decisions you don't know about.

  • XanthousChariot
    _chariot (@XanthousChariot) reported

    @Diblox_ also german law doesnt forbid the kind of illustrations that cloudflare and payment processors take issue with either, but obviously there are always idiot, puritan organizations pushing for stricter censorship, in any country.

  • xojigsx
    xo jig (@xojigsx) reported

    @BorowskiKamil @TheDealMakerGuy @grok Does Cloudflare issue invoices through Polish KSeF?

  • Pollux2789
    Pollux {x} 🐻🪝☀️🇺🇸 (@Pollux2789) reported

    @ArgusForge @Acquired_Savant Cloudflare sucks! @EvernodeXRPL actually fixes this kind of stuff. Check them out. You could maybe build something decentralized cloud instead of centralized clouds $EVR But by telling by all your work overall I’m sure you’ll correct your team and make them better. Keep up the work!! 👊🏻

  • leaohut
    Gerrard (@leaohut) reported

    @JeffScottDev @Cloudflare They fired over a thousand people and according to Reddit most of the support team got obliterated. No one is responding to my tickets either