
GitHub Outage Map

The map below shows the cities worldwide where GitHub users have most recently reported problems and outages. If you are having an issue with GitHub, please submit a report below.

[Heatmap: recent GitHub outage reports by location]

The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.

[Color scale: density of GitHub users affected, from Less to More]

GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

Most Affected Locations

Outage reports and issues in the past 15 days originated from:

Location Reports
Yokohama, Kanagawa 1
Gustavo Adolfo Madero, CDMX 1
Nice, Provence-Alpes-Côte d'Azur 1
Brasília, DF 1
Montataire, Hauts-de-France 3
Colima, COL 1
Poblete, Castile-La Mancha 1
Ronda, Andalusia 1
Hernani, Basque Country 1
Tortosa, Catalonia 1
Culiacán, SIN 1
Haarlem, nh 1
Villemomble, Île-de-France 1
Bordeaux, Nouvelle-Aquitaine 1
Ingolstadt, Bavaria 1
Paris, Île-de-France 1
Berlin, Berlin 2
Dortmund, NRW 1
Davenport, IA 1
St Helens, England 1
Nové Strašecí, Central Bohemia 1
West Lake Sammamish, WA 3
Parkersburg, WV 1
Perpignan, Occitanie 1
Piura, Piura 1
Tokyo, Tokyo 1
Brownsville, FL 1
New Delhi, NCT 1
Kannur, KL 1
Newark, NJ 1

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, your city, and your postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

GitHub Issues Reports

Latest outage, problem, and issue reports on social media:

  • GregorMakes
    Gregor (@GregorMakes) reported

    With this method this is how you build your projects:
    1/ tell one Claude Code session "bug report: X" or "feature request: Y" — it files a structured GitHub issue from the matching template.
    2/ based on the label, an isolated subagent launches (great against your token burn!). fix-bug-worker for bugs, implement-issue-worker for features. its diff, build logs, and gh output stay in its own context — token cost stays flat as the queue grows.
    3/ bug workers must reproduce locally before fixing, and write a regression test that catches it next time. then they branch, implement, open a PR, and fix their own red CI until green.
    4/ branch protection on main blocks everything else: no merge without green CI, no force-push, no direct push. even admins.
    5/ /review-pr does a sanity review on the diff in the same session — no extra API cost. you click merge. only human step.
    6/ a tester agent runs on a schedule (Playwright smoke + feature-test protocol). finds regressions, files new bug issues — the loop closes itself.
    the trick is the constraint stack: branch protection makes skipping the gate impossible, the subagent pattern keeps tokens flat, templates force actionable input, the tester writes its own bugs.
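    The label-to-worker dispatch in step 2 of the post above could be sketched as follows. Only the worker names `fix-bug-worker` and `implement-issue-worker` come from the post; the label names and the `dispatch` function are hypothetical illustrations, not the author's actual setup.

    ```python
    # Hypothetical sketch of label-based dispatch to isolated worker agents.
    # Worker names are from the post; labels and structure are assumptions.

    LABEL_TO_WORKER = {
        "bug": "fix-bug-worker",                  # must reproduce locally, add a regression test
        "enhancement": "implement-issue-worker",  # implements the feature, opens a PR
    }

    def dispatch(issue_labels):
        """Return the worker agent to launch for an issue, or None if no label matches."""
        for label in issue_labels:
            worker = LABEL_TO_WORKER.get(label)
            if worker is not None:
                return worker
        return None
    ```

    Each worker would run in its own subagent context so its diff and CI logs never accumulate in the main session, which is the token-flatness point the post is making.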

  • trknhr
    Terry (@trknhr) reported

    This TanStack npm compromise is scary. It does not look like a simple leaked npm token case, but a deeper supply-chain issue involving GitHub Actions, OIDC, and cache behavior. If you use @tanstack/*, check your lockfile and affected versions carefully.
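    A minimal sketch of the lockfile check the post above recommends, assuming an npm `package-lock.json` (lockfileVersion 2/3, whose `"packages"` key maps install paths to entries) already parsed into a dict. It only lists which `@tanstack/*` packages are pinned; comparing against real advisory data is left out because the affected versions are not given here.

    ```python
    # Hypothetical audit helper: list @tanstack/* packages locked in a parsed
    # package-lock.json (v2/v3 format). This does NOT embed real advisory data.

    def tanstack_packages(lockfile: dict) -> dict:
        """Map each locked @tanstack package name to its pinned version."""
        found = {}
        for path, entry in lockfile.get("packages", {}).items():
            # Paths look like "node_modules/@tanstack/query-core" (possibly nested);
            # the package name is everything after the last "node_modules/".
            name = path.split("node_modules/")[-1]
            if name.startswith("@tanstack/"):
                found[name] = entry.get("version")
        return found
    ```

    You would run this over `json.load(open("package-lock.json"))` and compare the resulting versions against the published advisory by hand.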

  • abhijaymrana
    Abhijay Rana (@abhijaymrana) reported

    seeing these comparisons a lot lately, but despite the hype i'm not really sold
    1) swe-bench verified is a broken benchmark
    > super contaminated bc all tasks are just public github patches from issues/prs, so its all in distribution
    > the evals are poor, 59% of hard tasks fail bc of bad tests with false-positives/negatives or are extremely contrived. this makes them either flawed or unrealistic.
    2) these scores are pretty cherry-picked & use a custom harness, not the standard public swe-bench harness. this is why the scores are unreproducible + not on official leaderboards.
    > opus 4.7 with custom harness scored 87.6%, over 14% more than the harness-optimized qwen. *this* is the truly fair comparison
    3) infra costs are still exploding which is already pricing out some “frontier” open source models.
    > does china have enough capex for a frontier buildout? research talent is obviously not the problem but this may be the constraint
    > i wonder how much china govt will subsidize here, bc i'm unsure how these models will make money and break even. china is notoriously cutthroat but even then, some may eventually move to closed-source just to stay operational

  • DarthPedro99
    Pedro Silva (@DarthPedro99) reported

    @tekbog GitLab has always been a worse option... even with GitHub issues, it's still better than default GitLab. I know there's been too many reported GitHub outages, but I haven't been affected by one yet. 🤷

  • killua_9102
    キルア (@killua_9102) reported

    @neogoose_btw not sure what happened but its not happening now - I don't think I also got the logs for the incorrect run because the timestamps were for a later run. I'll post an issue on github if I see it happening again I guess.

  • ugbahisioma
    MONARCH (@ugbahisioma) reported

    @adibhanna This isn’t a language or framework problem. It’s a supply chain issue, CI pipeline was poisoned. If anything it’s a GitHub vulnerability

  • iPariola
    P! (@iPariola) reported

    i created this mess on Github myself and now i have to fix it 😔 need to set up a github rule for PRs

  • Tech_girlll
    Mari (@Tech_girlll) reported

    Unless your product is Google, Netflix, Twitter, WhatsApp, Instagram, GitHub, YouTube level big, you probably have no business with microservices. Most products are adding distributed complexity to problems that do not exist yet.

  • ed_ceds
    Eduardo (@ed_ceds) reported

    Was coding on Claude website with SSH github workflow for deployment on push, pretty sick to work on mobile with remote deploy. Claude was working + pushing on any branch. Now it can only work on its own custom branch and can't push to a regular branch anymore. Someone else with this issue?

  • andreintg
    Andrea Intg. (@andreintg) reported

    The "Made with AI" flag feels so hypocritical to me. Good on Epic for finally talking about the issue. Imagine a flag, before we had AI, to identify game devs who "stole" code from other github repos or assets from turbosquid without telling a soul.

  • cequalll
    C= (@cequalll) reported

    @corbscorner @NewAgeRetroNerd Okay this is making me think. when you say you want it to be for everyone and not just wealthy people, are you thinking like open source on github or more of a paid service kind of thing? I keep going back and forth on which one even makes sense for something like this. Because here's where my head gets stuck. If the bot really does what you say, why would anyone share it at all? Running it quietly on your own money seems like the obvious move. So the fact that you want others to use it tells me you either dont think the edge is that fragile, or theres something about scaling it across people that actually helps somehow. which one is it for you? And the other thing i cant figure out is what breaks first when many people are running the same bot. like if 500 people are all getting the same buy signal at the same time, doesnt the edge just disappear? or do you slow it down somehow?

  • rentierdigital
    Phil | Rentier Digital Automation (@rentierdigital) reported

    cloudflare just rebuilt next.js in five days for $1,100 using claude code. 67,000 lines, 94% api coverage, 7,000+ github stars. one engineer. one week
    this is not a story about open source licensing
    this is a story about what happens when the friction cost of cloning your backend drops from six engineers and a year to a single person and $1,100 in api tokens
    roritharr posted on hacker news six weeks ago about a client's engineer who reverse-engineered a saas backend in a week with claude code. shipped it. functionally identical. the replies were all about lawyers and copyleft. one comment cut through the noise: "if your backend is trivial enough to be implemented by a large language model, what value are you providing?"
    that question stopped being theoretical when vinext shipped
    here is what we have been calling a technical moat for fifteen years: reproduction friction. not code. not lawyers. friction
    when cloning took six engineers and a year, competitors did not bother. when it takes $1,100 and five days, they will
    yes there are bugs in the clone. hacktron found 45 vulnerabilities in vinext. 24 validated
    but that does not save you. it just means your competitor ships with bugs while they eat your lunch
    the moats that survive: switching costs that live in your users' heads, not your code. distribution you acquired before your product existed. network effects that compound faster than code clones
    everything else is rent you have been collecting from friction
    i build and ship daily with Claude Code. SaaS, tools, automations. ⭐ if AI can build it, I've probably broken it first. what works → link in bio

  • hoppingturtles
    Harshil (@hoppingturtles) reported

    The fall of @github is happening in front of our eyes 😔 I've had so many problems with them in the last week, and it's only getting worse

  • techedgedaily
    TechEdgeDaily (@techedgedaily) reported

    @jarredsumner The AI-assisted Rust rewrite of Bun is officially being merged. v1.3.14 is the last version in Zig. Ever. Passes the test suite on Linux x64, arm64, Windows x64 and arm64, macOS x64 and arm64. Closes roughly 200 GitHub issues. No benchmark where it is slower than the Zig implementation. Same codebase, better crash prevention tools. A week ago this was "Claude helped rewrite Bun in Rust and it passes 99.8% of tests." Today it is shipping to production. The rewrite that everyone said was ambitious is now the default. The most significant AI-assisted codebase migration in open source history just went from experiment to release in days.

  • therobertta_
    Robert Ta (@therobertta_) reported

    THE SIGNAL-TO-CONTENT BRIDGE The hardest part is not writing. It is knowing what to write about. Our signals skill identifies patterns: "GitHub activity spiked in auth module. Three Slack conversations about SSO. Two customer support tickets about login." The content skill takes that signal and asks: "Is there a thread here?" If confidence exceeds threshold, it drafts. Content starts with observation, not imagination.
