Is Cloudflare down?
No problems detected
If you are having issues, please submit a report below.
Cloudflare is a company that provides DDoS mitigation, content delivery network (CDN) services, security and distributed DNS services. Cloudflare's services sit between the visitor and the Cloudflare user's hosting provider, acting as a reverse proxy for websites.
Problems in the last 24 hours
The graph below shows the number of Cloudflare reports received over the last 24 hours, by time of day. When the number of reports exceeds the baseline (the red line), an outage is likely in progress.
At the moment, we haven't detected any problems at Cloudflare. Are you experiencing issues or an outage? Leave a message in the comments section!
Most Reported Problems
The following are the most recent problems reported by Cloudflare users through our website.
- Cloud Services (40%)
- Domains (29%)
- Hosting (18%)
- Web Tools (8%)
- E-mail (4%)
Live Outage Map
The most recent Cloudflare outage reports came from the following cities:
| City | Problem Type | Report Time |
|---|---|---|
| | Domains | |
| | Cloud Services | |
| | Cloud Services | |
| | Cloud Services | |
| | Domains | |
| | Cloud Services | |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
Cloudflare Issues Reports
Latest outage, problem and issue reports on social media:
-
Ala
(@Golesurkh) reported
@Shahinlooo This blackout is not about security. It is about hiding repression and controlling the narrative. @Cloudflare @internetfreedom #KingRezaPahlavi #IranRevolution2026 please help us 💔
-
Chris Lema
(@chrislema) reported
@thepearsonified Sorry, let me be clear. I never log into the CMS. Everything I told you happens from Claude. "Change this line" It has MCP access to the CMS. It has Wrangler access to Cloudflare. I get back, "It's live."
-
Mohadese Mokhtari
(@Bahmandokht7) reported
@Shahinlooo This blackout is not about security. It is about hiding repression and controlling the narrative. @Cloudflare @internetfreedom #IranMassacre #DigitalBlackoutIran My people need international help.
-
آن شرلی
(@lara2022lili) reported
@Shahinlooo The Islamic Republic fears transparency. That is why the internet disappears whenever a crisis hits. 90 million Iranians are cut off from the world and cannot receive emergency alerts. Please be our voice and help us. @Cloudflare #IranMassacre #DigitalBlackoutIran. @Cloudflare
-
Estéban 🦋 @soubiran.dev
(@soubiran_) reported
@liran_tal @rohanpdofficial Oh, local development with the Cloudflare stack is another issue. 🫠
-
Mohammad Abdullah
(@mhd_abdullah204) reported
🚨 Broke my production server today by pushing 3 commits in a row to main 😭 I’m using a tiny VPS: 2 vCPU, 4GB RAM, 0 swap. Each push triggered a Next.js build. CPU hit 200%, SSH stopped responding, and Cloudflare started throwing 522 errors 😵💫 Here’s what I learned so other solo devs don’t repeat this 👇
-
Adam Smielewski
(@AdamSmielewski) reported
@robotman321 @endingwithali @MJHallenbeck Why not? It’s still behind Cloudflare Access so you have to pass RBAC to get to the login page
-
Ala
(@Golesurkh) reported
@Shahinlooo The Islamic Republic fears transparency. That is why the internet disappears whenever crisis hits. @Cloudflare #KingRezaPahlavi #IranRevolution2026 please help us 💔 🙏🏼
-
Suni
(@suni_code) reported
Big live streams like the IND vs NZ finals hitting 68 crore viewers are a massive distributed-systems problem. Your stream looks simple, but behind it is a long pipeline where several components can become bottlenecks.

> Live feed ingestion
The video starts at stadium broadcast cameras and is sent to production control rooms via satellite or fiber. From there it enters the streaming pipeline. If the primary feed drops or delays, the entire pipeline downstream is affected. Broadcasters usually maintain redundant feeds to avoid this.

> Encoding clusters
Raw broadcast video is extremely large. Encoding clusters compress it into streaming formats like H.264 or H.265. For large events, multiple parallel encoders run simultaneously. If encoder capacity is insufficient, frames queue up and latency grows.

> Transcoding into multiple bitrates
To support adaptive streaming, the same video must be generated at several qualities (240p, 480p, 720p, 1080p). This transcoding is GPU-intensive. Sudden viewer spikes can overload the transcoding pipeline if scaling isn't fast enough.

> Segment packaging (HLS / DASH)
Streaming protocols split video into small chunks (usually 2–6 seconds). Players download these sequentially. This chunking adds inherent latency because the player must wait until a segment is produced before requesting it.

> Origin servers
The encoded video segments are stored on origin servers. CDNs fetch segments from here. If CDN caches miss frequently, origin servers get flooded with requests and become a bottleneck.

> Multi-CDN distribution
Platforms like Disney+ Hotstar distribute streams through several CDN providers such as Akamai Technologies, Cloudflare, and Amazon Web Services. Traffic is load-balanced across them. If one CDN region saturates, users are redirected to another, which can increase latency.

> Edge caching near ISPs
For cricket-scale traffic, CDN servers are often deployed directly inside ISP networks. This reduces backbone traffic. But if a regional edge cluster becomes overloaded, users may get routed to a farther edge node.

> ISP peering congestion
Even if CDN nodes are healthy, traffic must pass through ISP networks. Congested peering links between CDNs and ISPs can create buffering despite high internet speeds.

> Adaptive bitrate player logic
Your video player constantly checks bandwidth and buffer health. If bandwidth fluctuates, it switches quality levels. This switch may require fetching a new segment and briefly rebuffering.

> Device decoding limits
High-bitrate streams require CPU/GPU decoding. Older phones or browsers sometimes struggle with 1080p streams, leading to frame drops or apparent buffering.

> App-level latency trade-offs
Sports streaming apps intentionally buffer extra video (10–30 seconds). This protects against network instability and prevents constant stuttering. The trade-off is that you're slightly behind the real-time broadcast.

> Why scale still works
At 68 crore viewers, the system works because traffic is distributed across thousands of edge servers, multiple CDNs, and adaptive bitrate streams. Each viewer is essentially pulling small video segments from nearby caches rather than a central server.

Let me know your thoughts, and correct me if I am wrong anywhere.
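The adaptive-bitrate player logic described in the report above can be sketched in a few lines. This is a minimal illustration, not any real player's algorithm; the rendition table, safety margin, and buffer threshold are invented for the example.

```python
# Minimal adaptive-bitrate (ABR) selection sketch. Renditions and
# thresholds are illustrative, not taken from any real player.
RENDITIONS = [  # (label, bitrate in kbps)
    ("240p", 400), ("480p", 1200), ("720p", 2800), ("1080p", 5000),
]

def pick_rendition(measured_kbps, buffer_seconds,
                   safety=0.8, low_buffer=5.0):
    """Choose the highest rendition that fits the measured bandwidth.

    A safety margin guards against bandwidth fluctuation; when the
    buffer runs low, drop to the lowest rendition to avoid a stall.
    """
    if buffer_seconds < low_buffer:
        return RENDITIONS[0][0]
    budget = measured_kbps * safety
    best = RENDITIONS[0][0]
    for label, kbps in RENDITIONS:
        if kbps <= budget:
            best = label
    return best
```

With 4000 kbps measured and a healthy buffer this picks 720p (5000 kbps does not fit the 80% budget); the same bandwidth with a nearly empty buffer drops straight to 240p to avoid rebuffering.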
-
Avi Arna
(@ZigMonVIII) reported
@Hattrick I'm getting Connection timed out The initial connection between Cloudflare's network and the origin web server timed out. As a result, the web page can not be displayed. Ray ID: 9d94a519ab3d94db Error reference number: 522 Cloudflare Location: London Any idea?
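The 522 in the report above means Cloudflare's edge could not complete a TCP connection to the origin server within its timeout. One way to check whether the origin itself accepts connections is to attempt the same TCP handshake directly; the sketch below is illustrative only (host and port are placeholders for your own origin).

```python
# Sketch: the kind of check behind a Cloudflare 522 -- a TCP connection
# to the origin that must complete within a timeout. Host/port are
# placeholders, not any real origin.
import socket

def origin_reachable(host, port=443, timeout=5.0):
    """Return True if a TCP connection to the origin completes in time.

    A 522 means this step timed out between Cloudflare's edge and the
    origin -- often a firewall dropping Cloudflare's IP ranges, an
    overloaded origin, or the origin service being down entirely.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns True from outside your network while Cloudflare still serves 522s, the usual suspect is origin-side filtering that blocks Cloudflare's edge IPs specifically.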
-
garcia rodriguez
(@laukaiu) reported
Stopping the bad guys with Cloudflare: 43,922 malicious requests blocked or challenged in the last month #cloudflare
-
Kenton Varda
(@KentonVarda) reported
@martindonadieu @dok2001 @Cloudflare We update the Workers Runtime every day -- and that update applies to all Workers. But it never ever breaks compatibility. It's like web browsers. Sites that haven't been touched since the 90's still work. That's important. Workers that were deployed 8 years ago still work, and will continue to work as long as I'm in charge.
-
Akhilesh Mishra
(@livingdevops) reported
Most DevOps engineers have heard of the term "reverse proxy," but few understand what it actually means. Let me break this down.

>> A Forward Proxy (Proxy) sits between you and the internet. <<
> You want to visit a website.
> Your request goes through the proxy first.
> The proxy makes requests on your behalf.
> The website sees the proxy's IP, not yours.
> This is what VPNs do.
> This is what corporate networks use to control what employees can access.

The client is hidden. The server doesn't know who the real requester is.

>> A Reverse Proxy flips this completely. <<
> You're trying to access a website.
> You think you're connecting to the actual server.
> But you're hitting a proxy that sits in front of the real servers.
> The proxy receives your request, decides which backend server should handle it, forwards it there, gets the response, and sends it back to you.
> You have no idea how many servers are behind that proxy.

The servers are hidden. This is what Nginx does. This is what load balancers do.

>> Use Cases of Proxy and Reverse Proxy <<

>> Forward Proxy: Bypassing Restrictions
Suppose your government banned your favorite sites, and you still want to access them.
- That's when you use a Forward Proxy (VPN).
- It routes your traffic through another country. Suddenly, you're browsing those sites behind locked doors.

>> Reverse Proxy: Protecting Your Infrastructure
Suppose your website is popular, you keep getting DDoS attacks by hackers, and your servers are melting.
- This is where you use a Reverse Proxy.
- It hides your servers behind Cloudflare or AWS WAF.
- Attackers hit the proxy, not your infrastructure.
- Add firewall rules and rate limiting at the proxy level.
- Bad traffic never reaches your servers.

>> But Modern Reverse Proxies Do Much More <<
- Traditional reverse proxies (Nginx/HAProxy) focused on load balancing.
- Modern reverse proxies (Envoy/Cloudflare) have evolved into Zero Trust enforcement points:
- They continuously verify user identity and device health before granting access.
- They provide granular, encrypted access to specific resources.
- They operate as an identity-aware security mesh, not just traffic routers.
- This is the shift from "hide and distribute" to "verify and enforce."

To conclude: A forward proxy hides you from the internet. A reverse proxy hides the internet from you.
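The request flow described above (client hits the proxy, the proxy picks a hidden backend and relays the response) can be sketched with nothing but the standard library. This is a toy illustration, not production code: the backend addresses are placeholders, and a real proxy adds health checks, header rewriting, connection pooling, and error handling.

```python
# Toy reverse proxy using only the standard library. Backend addresses
# are illustrative placeholders.
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKENDS = ["http://127.0.0.1:9001", "http://127.0.0.1:9002"]
_rr = itertools.cycle(BACKENDS)

def choose_backend():
    """Round-robin selection: each request goes to the next backend."""
    return next(_rr)

class ReverseProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # The client only ever sees the proxy; the backend pool is hidden.
        upstream = choose_backend() + self.path
        with urllib.request.urlopen(upstream, timeout=10) as resp:
            body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

# To run the proxy on port 8080 (with backends listening on 9001/9002):
# HTTPServer(("127.0.0.1", 8080), ReverseProxy).serve_forever()
```

The round-robin selector is the simplest load-balancing policy; Nginx and HAProxy default to the same idea before you configure weights or least-connections.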
-
Chat Data
(@truechatdata) reported
@maxk4tz Good workflow. We do the same: buy where it’s cheapest, then move to Cloudflare for renewals and DNS. Curious: have you ever hit a transfer lock or weird renewal timing issues when moving providers?
-
Khoruh🌹
(@Khoruh) reported
4 different people have personally told me my connection is fine and i play ranked without issue so im runnin TNS tomorrow with that bum *** cloudflare page ready to go
-
Bonk
(@veryNeel) reported
So the "workers" is a v8 scam. Never touching cloudflare.
-
Grok
(@grok) reported
@_joonbug77 @Brainybanter_ @MRefaat_Formal I cross-reference verifiable data from global technical monitors like NetBlocks and Cloudflare, which track network activity across many countries without government affiliation. These confirm connectivity drops in Iran during escalations. State-linked media outlets everywhere—including Iran's—align with official positions by design. Primary metrics and multi-angle checks guide the picture, not any one origin.
-
VitalProcessing
(@StuckProcessing) reported
i bought a domain and now I’m trying to set it up through cloudflare to use their tunneling service for copyparty and I have no idea what I’m doing lol
-
Nasim
(@NacmKa) reported
@Shahinlooo The Islamic Republic fears one thing above all: Iranians speaking to the world Please help @Cloudflare @internetfreedom #IranMassacre #DigitalBlackoutIran
-
🧞♂️Martin Donadieu - oss/acc
(@martindonadieu) reported
@dok2001 @Cloudflare Honestly npm ecosystem did that and that never made great output. How many customers have a 8years old workers ? If that less than 10%, forcing them to update is better use of your time. I know it’s a great pride but 3/5 years support limit is good enough. You could try to auto update them gradually as well i’m sure there safe way to do that
-
Dr Milan Milanović
(@milan_milanovic) reported
How Cloudflare's reliability work caused an outage

On February 20, Cloudflare's automated cleanup task deleted 1,100 live IP prefixes from its network. The task was part of their reliability initiative (Code Orange: Fail Small), designed to replace manual processes with automation.

**The bug was a single missing value.** The client passed pending_delete with no value instead of pending_delete=true, and the server interpreted the empty string as "no filter" and returned every BYOIP prefix. The cleanup task then started deleting all of them.

25% of all BYOIP prefixes were withdrawn before engineers identified and killed the task. Customers running Magic Transit, Spectrum, and CDN services on those ranges went dark. A subset of 1.1.1.1 was hit, too.

Full resolution took over 6 hours because affected prefixes were in different states; some just needed re-advertising, others had their service bindings wiped entirely and required a global config push to every edge machine.

**The recovery system wasn't ready yet.** Cloudflare was already building a snapshot-based rollback system that could have reverted this in minutes. It wasn't in production. And until it ships, you're exposed.

Cleanup is always dangerous. Deleting things that look unused but aren't is one of the oldest failure modes in infrastructure. Staging didn't catch it; the mock data didn't cover the task runner executing changes on its own. And the system that was supposed to make things safer created the very failure it was built to prevent.

Cloudflare still publishes the most detailed public incident write-ups in the industry. Most companies wouldn't.

Image: Cloudflare
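The failure mode described in that write-up, an empty query-string value silently widening a filter into "return everything", is easy to reproduce in miniature. The sketch below is illustrative only: the field names mimic the post-mortem's description, not Cloudflare's actual API.

```python
# Sketch of the "empty value means no filter" pitfall. Field names are
# illustrative, modeled on the incident description, not a real API.
from urllib.parse import parse_qs

PREFIXES = [
    {"cidr": "203.0.113.0/24", "pending_delete": True},
    {"cidr": "198.51.100.0/24", "pending_delete": False},
]

def list_prefixes_buggy(query):
    """Treats a missing *or empty* filter as 'return everything'."""
    flag = parse_qs(query).get("pending_delete", [""])[0]
    if not flag:  # "" falls through here: no filter is applied at all
        return PREFIXES
    return [p for p in PREFIXES if p["pending_delete"] == (flag == "true")]

def list_prefixes_strict(query):
    """Rejects an empty value instead of silently widening the result."""
    params = parse_qs(query, keep_blank_values=True)
    if "pending_delete" in params:
        flag = params["pending_delete"][0]
        if flag not in ("true", "false"):
            raise ValueError("pending_delete must be 'true' or 'false'")
        return [p for p in PREFIXES
                if p["pending_delete"] == (flag == "true")]
    return PREFIXES
```

A client sending `pending_delete` with no value gets every prefix from the buggy handler, exactly the widening the incident describes, while the strict handler refuses the malformed filter outright.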
-
🍉Angel :DTUBBO WON✨♥️
(@bbwaterloop) reported
His ISP better not be his new cloudflare **** THIS MAN FIX HIS INTERNET 😭
-
🧞♂️Martin Donadieu - oss/acc
(@martindonadieu) reported
@KentonVarda @dok2001 @Cloudflare web browsers has the same problem I'm sharing... We can't have good naming we have thousand of duplicated APIs that make a mess to deal with. New devs use old APIs they should not. and the list goes on and on. compat is important. too long one bring more harm than good. You sound like EU bureaucrat, let's protect every building facade because it's nice. Then everyone is stuck with tiny windows because they couln't make 2 layer glass 50 years ago. IF the past becomes more important than the present. Over time you will become a ghost of youself like Europe. Please have a second thought on that. I really love CF
-
Dominik Koch
(@dominikkoch) reported
@ilyesm @Cloudflare It doesn't work and tells me to contact support, but imo Cloudflare support is pretty bad
-
Kris Semelka
(@CoyPinoy) reported
@josevalim If you read the post-mortem for the cloudflare outage it was caused by the configuration reader exploding on an invalid file size assumption on data that had to be rolled out instantaneously and globally. It doesn't sound like it was possible to make the rollout more granular.
-
Grok
(@grok) reported
@ChiragRajoria07 @Oblivious9021 Calm down! As the engineer: 1. Trigger alerts via monitoring (Prometheus/Grafana) to confirm spike. 2. Enable instant rate limiting (per IP/user via Redis/Nginx). 3. Route thru CDN/WAF like Cloudflare or AWS Shield to drop junk traffic. 4. Block suspicious patterns/IPs at edge. 5. Enforce API keys + JWT for all endpoints. 6. Auto-scale backend temporarily + circuit breaker. Post-attack: analyze logs, add bot detection. Done in <5 mins if prepped. What's your stack?
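Step 2 in the checklist above (instant per-client rate limiting) is commonly implemented as a token bucket. The sketch below is a minimal in-memory, single-process illustration of that logic; a real deployment would keep the buckets in Redis or enforce them in Nginx, as the post suggests.

```python
# Minimal per-client token-bucket rate limiter. In-memory and
# single-process, so purely illustrative of the logic a Redis- or
# Nginx-backed limiter would enforce.
import time

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate      # tokens refilled per second
        self.burst = burst    # maximum bucket size (allowed burst)
        self.buckets = {}     # client id -> (tokens, last timestamp)

    def allow(self, client, now=None):
        """Consume one token for this client; return False if empty."""
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(client, (self.burst, now))
        # Refill proportionally to elapsed time, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[client] = (tokens - 1.0, now)
            return True
        self.buckets[client] = (tokens, now)
        return False
```

With `rate=1.0, burst=2`, a client can fire two requests back to back, is refused the third, and regains one request per second afterwards; each client id (e.g. an IP) gets its own bucket.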
-
DR Stats
(@DeReStats) reported
@Hattrick Great news but HT is down at the moment, is it something internal or a Cloudflare error again?
-
José Valim
(@josevalim) reported
@CoyPinoy The recent Cloudflare outage goes directly against your argument. Any Rust code, for example, will have calls to unwrap which, due to bad assumptions, can lead to a system wide crash if your isolation and failure handling is only upstream. This thinking can also lead to a false sense of comfort where developers stop thinking about failures and resilience, thinking the type system will catch all bugs (which is virtually impossible). In my opinion, we should aim for both. 💪