
GitHub Outage Map

The map below shows the cities worldwide where GitHub users have most recently reported problems and outages. If you are having an issue with GitHub, please submit a report below.

[Live outage heatmap]

The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.

GitHub users affected: Less → More
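
The clustering itself is easy to reproduce. Below is a minimal TypeScript sketch, not this site's actual implementation: reports are binned into fixed-size latitude/longitude cells, and each cell's count is mapped onto the Less-to-More ramp. The Report shape and the 0.5-degree cell size are illustrative assumptions.

    // Minimal sketch (illustrative, not this site's code): bin reports into
    // lat/lon grid cells, then map each cell's count to a 0..1 intensity
    // that drives the "Less -> More" color ramp.
    interface Report {
      lat: number;
      lon: number;
    }

    const CELL_DEG = 0.5; // assumed cell size in degrees

    function binReports(reports: Report[]): Map<string, number> {
      const cells = new Map<string, number>();
      for (const { lat, lon } of reports) {
        const key = `${Math.floor(lat / CELL_DEG)}:${Math.floor(lon / CELL_DEG)}`;
        cells.set(key, (cells.get(key) ?? 0) + 1);
      }
      return cells;
    }

    // Intensity in [0, 1]; a renderer would map this onto the color scale.
    function intensity(count: number, maxCount: number): number {
      return maxCount === 0 ? 0 : Math.min(1, count / maxCount);
    }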

GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.

Most Affected Locations

Outage reports and issues in the past 15 days originated from:

Location | Reports
Bordeaux, Nouvelle-Aquitaine | 1
Ingolstadt, Bavaria | 1
Paris, Île-de-France | 1
Berlin, Berlin | 2
Dortmund, North Rhine-Westphalia | 1
Davenport, IA | 1
St Helens, England | 1
Nové Strašecí, Central Bohemia | 1
West Lake Sammamish, WA | 3
Parkersburg, WV | 1
Perpignan, Occitanie | 1
Piura, Piura | 1
Tokyo, Tokyo | 1
Brownsville, FL | 1
New Delhi, NCT | 1
Kannur, Kerala | 1
Newark, NJ | 1
Raszyn, Mazovia | 1
Trichūr, Kerala | 1
Departamento de Capital, Mendoza | 1
Chão de Cevada, Faro | 1
New York City, NY | 1
León de los Aldama, Guanajuato | 1
Quito, Pichincha | 1
Belfast, Northern Ireland | 1

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, your city, and your postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

GitHub Issue Reports

Latest outage, problem, and issue reports from social media:

  • Penivera001
    Penivera (@Penivera001) reported

    @github Please fix the branch export for codespace I have exceeded my subscription I was closed out before I could commit

  • cvewhen
    Cvewhen? (@cvewhen) reported

    why is github so slow omfg

  • giulio_leone97
    Giulio Leone (@giulio_leone97) reported

    @kzu @github the real problem is that opus 4.7 is stuck to medium reasoning effort . it's dumb !

  • Vvtentt101
    vvtentt (@Vvtentt101) reported

    Broke down the stack behind a $1M Polymarket bot. Core edge: latency arb on 5-15 min crypto contracts:
    >> Binance sees the price move instantly
    >> Polymarket lags ~2.7s behind
    >> Bot enters at 54% while true probability is already 78%
    200-500 trades/day (20-25 points) = that's where the million comes from.
    28 repos, 6 layers: Brain >> Orchestration >> Data >> Intelligence >> Backtest >> Execution.
    Most important and most skipped layer: Backtest. Without it everything else is just a pretty GitHub
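
The edge this report describes reduces to a simple trigger: act when the slow market's quote is stale and far from the fast venue's. A toy TypeScript sketch using only the numbers quoted above (~2.7s lag, 54% vs 78%); it is not the author's bot:

    // Toy illustration of the latency edge described in the report above:
    // a fast venue reprices instantly, a slower market lags ~2.7s, and the
    // bot enters when the implied probabilities diverge enough.
    interface Quote {
      impliedProbability: number; // 0..1, what the market currently prices in
      timestampMs: number;
    }

    const MIN_LAG_MS = 2000; // assumed staleness threshold
    const ENTRY_EDGE = 0.2;  // e.g. fresh 0.78 vs lagged 0.54 -> edge 0.24

    function shouldEnter(fastVenue: Quote, laggedMarket: Quote): boolean {
      const lagMs = fastVenue.timestampMs - laggedMarket.timestampMs;
      const edge = fastVenue.impliedProbability - laggedMarket.impliedProbability;
      return lagMs > MIN_LAG_MS && edge > ENTRY_EDGE;
    }

    // The example from the report: enter at 54% while true probability is 78%.
    console.log(shouldEnter(
      { impliedProbability: 0.78, timestampMs: 10_000 },
      { impliedProbability: 0.54, timestampMs: 7_300 }, // ~2.7s stale
    )); // true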

  • NehraWorkss
    Deep (@NehraWorkss) reported

    If you want to build a startup:
    - Claude = coding. ($20/mo)
    - Supabase = backend. (Free)
    - Vercel = deploying. (Free)
    - Namecheap = domain. ($12/yr)
    - Stripe = payments. (2.9%/transaction)
    - GitHub = version control. (Free)
    - Resend = emails. (Free)
    - Clerk = auth. (Free)
    - Cloudflare = DNS. (Free)
    - PostHog = analytics. (Free)
    - Sentry = error tracking. (Free)
    - Upstash = Redis. (Free)
    - Pinecone = vector DB. (Free)
    Total monthly cost to run a startup: ~$21. What's stopping you?

  • kzu
    Daniel Cazzulino 🇦🇷🗽 (@kzu) reported

    @juliuspiv @github Sure. It's terrible. > Opus 4.5 and 4.6 will also be removed from Pro+. I have Pro+. Seriously considering switching over to Claude Code entirely. Makes no sense.

  • emilsedgh
    emilsedgh (@emilsedgh) reported

    @WHinthorn @film_girl Satya had good early momentum, but it feels like with the troubles around Windows quality, ruined Github reputation, and the fact that they don't have any good AI products [despite the massive early lead they had with OAI partnership and VSCode], Microsoft is in trouble.

  • konigssohne
    NEOAethyr (@konigssohne) reported

    @MadScientist_42 @LundukeJournal I don't care if they do, that's been a thing for years. You fix it yourself, that's linux. Otherwise you're griping to the wrong person when you should be posting on a mailing list or github.

  • ArchiveExplorer
    Archive (@ArchiveExplorer) reported

    Holy ****, Claude literally filed a bug report about itself. Issue #21119. On GitHub. Word for word: "I repeatedly ignored explicit instructions in the project's CLAUDE.md file, defaulting instead to patterns from my training data." The AI wrote its own confession. Nobody asked it to.
    You write "NEVER run tests" in CLAUDE.md. Claude reads it. Claude runs the tests. 11 violations across 20 sessions.
    Anthropic has never publicly explained this. Their own model did. In the bug tracker. About itself.

  • rambling_kiwi
    SWIM (@rambling_kiwi) reported

    @github Wait you guys had a mouse down/click handler on every line? Bahaha the sheer state of react slop in production apps is getting out of hand
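
For context, the pattern being mocked is one listener attached to every rendered line. The usual alternative is event delegation: a single listener on the container that resolves which line was hit. A framework-agnostic TypeScript sketch; the #file-view container and .code-line markup are hypothetical:

    // One delegated listener instead of one handler per line. The selectors
    // and data attribute here are hypothetical, for illustration only.
    const container = document.querySelector<HTMLElement>("#file-view");

    container?.addEventListener("mousedown", (event) => {
      const line = (event.target as HTMLElement).closest<HTMLElement>(".code-line");
      if (!line) return; // the press landed outside any line
      console.log("selected line:", line.dataset.lineNumber);
    });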

  • Purekustora
    Plextora (@Purekustora) reported

    @NotZylice as far as i understand: github discussions page for ideas, github issues page for bugs, and if you REALLY dont want to use either of those platforms then the osu!dev discord

  • nonlinear_james
    Non-Linear (@nonlinear_james) reported

    @davidfowl @ksemenenk0 I don't think it is. If you're claiming that Aspire is almost ready to deploy with ci/cd into Azure (presumably), and you're working on GitHub ci/cd, you need to know that you're walking into a minefield because Github actions for deployment are woefully inadequate for production deploys, and you have massive bugs in Azure products that will make Aspire suck and make Aspire look very bad.
    Instead of deflecting, you should be taking this to your team and using your clout to get Azure people to actually prioritize their bugs and fix this, while also getting whomever is responsible for Github Actions for deploy to these services to fix their massive failures that will result in production downtime for your customers.
    I get you wanted to make a marketing tweet, but better marketing is actually taking responsibility for your company's buggy product and getting it fixed. Meanwhile your two main competitors have none of these issues and their stuff actually works.
    (Cue: go ahead and switch passive aggressive response, to which I say that it will cost MS about $4.5 million / year if I do. And yes, even with that level of clout everyone in Azure has deflected and not got this fixed despite me being on with senior executives in the Azure team whom have admitted I'm correct and these are issues, which is telling.)

  • sergiopreira
    Sergio Pereira (@sergiopreira) reported

    A CLAUDE.md file is the #1 trending repo on GitHub - 68k stars. But this is a v0 workaround. Text files describing the codebase going stale the moment code moves. The actual unsolved problem is 'can AI keep understanding your code as it drifts'.

  • A11anTa
    Allan Ta (@A11anTa) reported

    You wake up to deployed code you never wrote. Here's how to build that. The setup sounds like science fiction until you run it once. An AI agent that takes a GitHub issue, writes the code, runs tests, and pushes to production without touching your keyboard. By morning, your staging environment is live.
    The mechanism is simpler than most think. You need four moving pieces:
    1. A webhook that triggers on issue creation. GitHub fires an event. Your server catches it.
    2. Claude API (or your model of choice) connected to a code repository. The LLM reads the issue, your codebase structure, recent commits for context. It generates a feature branch name and implementation.
    3. Automated testing. The AI writes code that fails tests it hasn't seen. You run a test suite against its output before anything touches main. This is the gate that prevents disasters.
    4. A conditional deployment step. If tests pass and the diff is under a size threshold (I use 500 lines), it auto-commits to a staging branch. You review the morning after, merge to production, or reject it.
    The non-obvious part nobody mentions: the AI is terrible at context-specific business logic but exceptional at scaffolding, boilerplate, and API integrations. Frame your issues as scaffolding problems, not decision-making problems. "Add pagination to the user endpoint" works. "Decide whether we need caching" fails.
    I built this for a SaaS backend in October. Over 6 weeks, the agent wrote ~40 features. 34 required zero changes post-review. The other 6 had logic errors (mixing up user IDs in a JOIN, a regex that didn't catch edge cases). None shipped broken because the test suite caught them.
    The real win isn't sleeping. It's cognitive load. You spend 30 minutes writing crystal-clear issue descriptions, and the system handles the mechanical work. That's leverage.
    What breaks this: vague requirements, missing test coverage on your main branch, and trying to use it for architectural decisions. An AI can't decide if you need microservices. It can implement a new microservice if you describe the boundary.
    The deployment risk sits in false confidence. Your team sees merged PRs and assumes they're vetted. They're not. They're just tested. Code review still matters. I route all auto-deployed code through a human sign-off before it hits production, not staging.
    Catching narratives is literally everything in crypto and AI startups. The narrative here isn't "AI writes your code." It's "How do you stay competitive when shipping velocity 3x'd and you didn't hire three engineers." That's the angle worth selling internally.
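
The four pieces above map onto very little code. A minimal sketch, assuming an Express server and a hypothetical generatePatch() standing in for the Claude API step; this is the shape of the system as described, not the author's implementation:

    // Sketch of the four-piece pipeline described in the report above.
    import express from "express";
    import { execSync } from "node:child_process";

    const MAX_DIFF_LINES = 500; // piece 4: the size threshold from the report

    // Piece 2 (hypothetical): send the issue plus repo context to an LLM and
    // apply its patch on a fresh feature branch; returns branch name and size.
    async function generatePatch(title: string, body: string) {
      // Placeholder for the Claude API call; a real version would also feed
      // in codebase structure and recent commits, as the report describes.
      return { branch: `ai/${Date.now()}`, diffLines: 0 };
    }

    const app = express();
    app.use(express.json());

    // Piece 1: GitHub fires an "issues" webhook; only act on newly opened ones.
    app.post("/webhook", async (req, res) => {
      res.sendStatus(202); // acknowledge fast; do the slow work afterwards
      const { action, issue } = req.body;
      if (action !== "opened") return;

      const patch = await generatePatch(issue.title, issue.body);

      // Piece 3: the gate. Run the existing suite against the AI's branch
      // before anything can touch main.
      try {
        execSync(`git checkout ${patch.branch} && npm test`, { stdio: "inherit" });
      } catch {
        console.error(`tests failed on ${patch.branch}; nothing is pushed`);
        return;
      }

      // Piece 4: conditional deploy. Small, green diffs go to staging only;
      // a human reviews in the morning and merges or rejects.
      if (patch.diffLines <= MAX_DIFF_LINES) {
        execSync(`git push origin ${patch.branch}:staging`);
      }
    });

    app.listen(3000);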

  • joshmanders
    Josh (@joshmanders) reported

    My AI assistant goal is to get my version in @dunnbot to a point that I'm managing it by creating github issues and reviewing PR's. When I approve and all checks are green it'll auto merge and deploy. The real kicker is that I'll be also using Dunnbot to plan and create the issues if they aren't bug reports from the platform support system, basically allowing me to go from coding to product management.
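
The approve-plus-green-checks rule in this report can be expressed directly against the public GitHub REST API. A sketch using Node 18+ fetch; the repo slug, PR number, and token handling are placeholders, and this is not @dunnbot's code:

    // Sketch of "approved review + green checks => merge", via the GitHub
    // REST API. Repo slug and token are placeholders.
    const API = "https://api.github.com";
    const REPO = "owner/repo"; // placeholder
    const headers = {
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
      Accept: "application/vnd.github+json",
    };

    async function autoMergeIfReady(prNumber: number): Promise<void> {
      const pr = await (await fetch(
        `${API}/repos/${REPO}/pulls/${prNumber}`, { headers })).json();

      // "When I approve": at least one APPROVED review on the PR.
      const reviews = await (await fetch(
        `${API}/repos/${REPO}/pulls/${prNumber}/reviews`, { headers })).json();
      const approved = reviews.some((r: { state: string }) => r.state === "APPROVED");

      // "All checks are green": combined status on the PR's head commit.
      // (Check runs would need the /check-runs endpoint as well.)
      const status = await (await fetch(
        `${API}/repos/${REPO}/commits/${pr.head.sha}/status`, { headers })).json();

      if (approved && status.state === "success") {
        await fetch(`${API}/repos/${REPO}/pulls/${prNumber}/merge`, {
          method: "PUT",
          headers,
        });
      }
    }

A deploy triggered from the default branch after the merge completes the loop described above.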
