GitHub Outage Map
The map below depicts the most recent cities worldwide where GitHub users have reported problems and outages. If you are having an issue with GitHub, make sure to submit a report below.
The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. The density of these reports is depicted by the color scale as shown below.
GitHub is a company that provides hosting for software development and version control using Git. It offers the distributed version control and source code management functionality of Git, plus its own features.
Most Affected Locations
Outage reports and issues in the past 15 days originated from:
| Location | Reports |
|---|---|
| Gustavo Adolfo Madero, CDMX | 1 |
| Nice, Provence-Alpes-Côte d'Azur | 1 |
| Brasília, DF | 1 |
| Montataire, Hauts-de-France | 3 |
| Colima, COL | 1 |
| Poblete, Castilla-La Mancha | 1 |
| Ronda, Andalusia | 1 |
| Hernani, Basque Country | 1 |
| Tortosa, Catalonia | 1 |
| Culiacán, SIN | 1 |
| Haarlem, NH | 1 |
| Villemomble, Île-de-France | 1 |
| Bordeaux, Nouvelle-Aquitaine | 1 |
| Ingolstadt, Bavaria | 1 |
| Paris, Île-de-France | 1 |
| Berlin, Berlin | 2 |
| Dortmund, NRW | 1 |
| Davenport, IA | 1 |
| St Helens, England | 1 |
| Nové Strašecí, Central Bohemia | 1 |
| West Lake Sammamish, WA | 3 |
| Parkersburg, WV | 1 |
| Perpignan, Occitanie | 1 |
| Piura, Piura | 1 |
| Tokyo, Tokyo | 1 |
| Brownsville, FL | 1 |
| New Delhi, NCT | 1 |
| Kannur, KL | 1 |
| Newark, NJ | 1 |
| Raszyn, Mazovia | 1 |
Community Discussion
Tips? Frustrations? Share them here. Useful comments include a description of the problem, city and postal code.
Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.
GitHub Issues Reports
Latest outage, problem, and issue reports from social media:
- dagz (@AstrayaNthemoon) reported: @jkpgamer Haha well I was gonna say that I could spin up a GitHub, (Goose + Grok have been dying to let me let them open a GitHub with a key), and then open source the code… it’s like so minimal it’s laughable and then you can put your own location on the map but that’s even better to point to your own server rather than pull the API That’s how we initially had the design built actually but then I wanted it to be something anyone could use
- Valentin Ignatev (@valigo) reported: 2026 GitHub is: >broken PRs >broken syntax highlight >broken code selection >broken CI >most stars are faked >won't get you a job >won't protect you from getting flooded by slop >won't protect you from scams >slower even than FreeDesktop's GitLab There's no point to it anymore
- PsudoMike 🇨🇦 (@PsudoMike) reported: @cursor_ai Having context right where you're working instead of switching tabs to GitHub and back makes a real difference. I still catch logic errors I'd have missed on a quick browser review. Curious how it handles large PRs in a payments service with lots of shared types.
- Daniel Vermillion (@dxverm) reported: I've got 24 MCP servers wired into one session. Almost none of them are loaded right now. The "100 MCPs is too many" debate misses what's actually expensive. The GitHub MCP alone burns roughly 50K tokens of tool schema before a user types a single character. Stack three or four connectors like that and you've spent a third of your working window on documentation for tools the model is statistically unlikely to call this turn. The fix is not fewer servers. It's deferred schemas. The harness publishes a list of tool names in a small system reminder. The full JSONSchema for each tool stays out of the prompt until I run a discovery query — either by exact slug like select:Read,Edit,Grep or by keyword like "notebook jupyter". The schema body — the part that costs — only enters context when I'm about to call that tool. Same pattern works for skills. 500+ on disk, names in the prompt, bodies capped at 200 lines so the per-invocation cost is bounded. A 500-skill library with this discipline is cheaper than a 12-skill library without it. The 200-line ceiling on skill bodies isn't arbitrary — it's the cost-ceiling per invocation, not per session. This changes how you build the stack. Heavy connectors with thirty-plus endpoints — GitHub, Jira, SonarQube — belong on a deferred-load path. Light, every-turn tools like filesystem and memory stay resident. Treat tool definitions like a working set, not a manifest. The model doesn't read all the tools you give it. It reads the ones loaded into the prompt that turn. Tool count is a marketing number. Resident-token weight is the engineering number. Build for the second one.
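The deferred-schema pattern described in the report above can be sketched in a few lines of Python. This is a hypothetical registry, not a real MCP API: the class, method names, and discovery-query syntax (`select:` slugs, keyword fallback) mirror the report's description but are purely illustrative. Only tool names sit in every turn's prompt; the expensive schema bodies load on demand.

```python
# Illustrative sketch of "deferred schemas": tool names stay resident,
# full JSONSchema bodies enter context only on a discovery query.
# (Hypothetical registry; not a real MCP harness API.)

class ToolRegistry:
    def __init__(self, tools):
        # tools: {name: {"description": str, "schema": large JSONSchema dict}}
        self._tools = tools

    def resident_prompt(self):
        """Cheap part: only the tool names go into every turn's context."""
        return "Available tools: " + ", ".join(sorted(self._tools))

    def discover(self, query):
        """Expensive part, deferred: a schema body is loaded only when the
        model asks for it, by exact slug ("select:Read,Grep") or keyword."""
        if query.startswith("select:"):
            names = query[len("select:"):].split(",")
        else:
            names = [n for n, t in self._tools.items()
                     if query.lower() in t["description"].lower()]
        return {n: self._tools[n]["schema"] for n in names if n in self._tools}

registry = ToolRegistry({
    "Read": {"description": "read a file",
             "schema": {"type": "object",
                        "properties": {"path": {"type": "string"}}}},
    "Grep": {"description": "search files by pattern",
             "schema": {"type": "object",
                        "properties": {"pattern": {"type": "string"}}}},
})
print(registry.resident_prompt())           # a few tokens per turn
schemas = registry.discover("select:Read")  # schema body loaded on demand
```

The design point is the one the report makes: resident-token weight, not tool count, is what the working set costs each turn.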
- Yashasvi Kapil (@iemyashasvi) reported: @ChiragAgg5k @github @github is broken beyond repair
- Elena Revicheva (@reviceva) reported: 🤖 Built a prospecting pipeline that finds leads on Hacker News, GitHub, and Product Hunt—then automatically sorts them into HubSpot. New contacts land every Tuesday and Friday, classified by their actual problems. Zero cost, fully automated. #AI #BuildInPublic #AIFounder
- BrainMirror AI (@brainmirrorai) reported: Private Publishing on Replit blocks unauthorized requests at the network level, not inside the app. The problem was always integrations: if you needed GitHub webhooks or Slack callbacks to reach your private app, your only option was to make the whole app public. External Access Tokens remove that tradeoff.
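The gate described in the report above can be sketched as a simple request check: authenticated users pass as before, and unauthenticated callers (webhooks, callbacks) pass only if they present a valid token. Everything here is an assumption for illustration; the header name, token format, and function are hypothetical and are not Replit's actual API.

```python
# Minimal sketch of token-gated access for a private app (hypothetical
# names throughout; this is not Replit's real implementation).
VALID_TOKENS = {"wh-github-01", "wh-slack-02"}  # one token per integration

def allow_request(authenticated_user, headers):
    """Admit logged-in users as usual; admit anonymous callers (webhooks,
    callbacks) only when they carry a recognized external access token."""
    if authenticated_user is not None:
        return True  # normal authenticated access
    token = headers.get("X-External-Access-Token")
    return token in VALID_TOKENS  # webhook/callback path, no public exposure
```

The point of the pattern is that the app never has to go fully public just to receive a webhook: only callers holding an issued token get through the network-level block.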
- Hot Aisle (@HotAisle) reported: i noticed today that our github actions usage over the last month cost us a whopping $0.88 to build our software. so, i decided to move it in-house onto an actions runner. setting this up properly is a pain because you really need to build your own isolation. for safety, your builds should run in ephemeral VMs, similar to how github actions works. i used Codex to build the whole thing for me. it gave me step-by-step instructions for setting up the GH App and private key, wrote the shell scripts, configured the systemd units, then debugged everything over ssh directly on the server. what would have taken me hours or days, along with filling my brain with a bunch of esoteric devops knowledge that i really don't care about, was done in under 30 minutes. now we have two idle runners. one takes a build job, runs it, then dies and gets reaped. the second takes over while the first resets. my mind is blown. if you're not all in on AI, i feel for you.
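The one-job-per-VM lifecycle described in the report above can be sketched abstractly. The function names below are hypothetical stand-ins for whatever provisioning and teardown tooling you use; the real moving part on the GitHub side is registering the self-hosted runner as ephemeral (GitHub's runner supports a `--ephemeral` flag at configuration time) so it deregisters after a single job.

```python
# Sketch of an ephemeral self-hosted runner lifecycle: fresh VM per build,
# destroyed after one job, while a standby runner takes the next job.
# spawn_vm/register_runner/run_one_job/destroy_vm are hypothetical hooks.

def runner_lifecycle(spawn_vm, register_runner, run_one_job, destroy_vm):
    vm = spawn_vm()            # fresh VM per build, for isolation
    register_runner(vm)        # e.g. configure with GitHub's --ephemeral flag
    result = run_one_job(vm)   # an ephemeral runner exits after a single job
    destroy_vm(vm)             # reap the VM; the standby runner takes over
    return result

# Toy demonstration of the flow with stub hooks:
reaped = []
outcome = runner_lifecycle(
    spawn_vm=lambda: "vm-1",
    register_runner=lambda vm: None,
    run_one_job=lambda vm: f"{vm}: build ok",
    destroy_vm=reaped.append,
)
```

With two such runners alternating, one is always registered and idle while the other is being rebuilt, which is the "one dies and gets reaped, the second takes over" behavior the report describes.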
- Aryan Pamwani (@aryanpamwanii) reported: Think about what this actually means — Codex can watch you work in your browser, understand the context of any web app, and fix bugs or write code without you leaving the tab. GitHub Copilot works in your editor. Codex is coming for your entire browser. The IDE just got a lot less important. 👀
- Jeffrey Emanuel (@doodlestein) reported: @dkubb Yeah, I do have a fuzzing skill. If you create an GitHub issue for that then the clankers will do it.
- Fundl (@fundl_us) reported: @apoorvdarshan is building small tools. 10k on LinkedIn. 100 GitHub followers today. The 100 felt better. On the way to the gym: first paying customer. "someone had the problem, found the app, trusted it enough, and paid. that's a better milestone than adding features."
- Detour Ninja (@detour_squirrel) reported: @RealVZer0 @NocontextRvB this is fake. dexploarer's a sandbox on github—open source, runs locally, nobody's asking for your login. sounds like you got duped by someone cosplaying the project. check the actual repo before spreading that.
- Gabe (@gabebusto) reported: bro setting up an agent to do production work is so easy. you just need to create an account somewhere and for your agent to work remotely. cloudflare, hetzner, aws, digital ocean, etc. then pick the agentic tool, and the model, and get an api key or use oauth. then make sure in it's in a sandbox setup with the right permissions and access to your tooling like github, slack, linear, and maybe even some staging and production resources. you really need to be careful though because if agents have any write access to important stuff, it could do something really dumb like delete your database. also for the love of GOD backup your database frequently somewhere the agent can't touch. also prompt injections online can get your agent to leak sensitive env vars so you need to be careful about that. maybe limit network access or inject tokens/sensitive vars once requests leave the sandbox. you probably don't want the agent always on sitting idle, so either figure out how to give it work efficiently to always keep it busy or use some that can pause and resume with ease so you're not billed around the clock for idle resource usage. then you want guardrails in your codebase and deployment pipeline so the agent can't break things and you don't need to feel guilty not reviewing its code. because cmon, nobody wants to do that. you need to make sure your agents have as close to perfect context as possible. so maybe start building a knowledge base, move docs into the repo, or make sure your agent can easily search linear and slack and other places to build context for tasks to work on. and before each task, spend ~10-20+ mins typing things up and giving the agent as much context as possible. oh yeah and your agent ideally should be able to test its changes as completely as possible. so make sure the agent can start up the service(s) it's working on and test them.
maybe you need it to open and run a browser, send screenshots, record a video, and so on of its test so you can easily review it in the PR. you also want a bugbot setup in github (if you're still using github at this point) to help scan each PR for potential issues the agent missed. and the agent should be able to automatically address any bugbot findings, fix them, run more tests, and push those changes, and run in a loop until no more bugs are found by the bugbot. i forgot to mention, you probably don't want your agent's code just yolo shipping into **** with no guards in place _after_ it deploys. allow the agent to setup it's new features and code behind feature gates or experiments and do a gradual rollout in case there are any catastrophic problems. then you'll want automatic rollback if issues are detected. and there's probably stuff i'm forgetting, but you get what i'm saying right? it's really not that hard. then you need constant vigilance of your codebase and create lots of skills to help deslop work the agents are doing, maybe create an anti-entropy agent (_another_ agent!) to hunt for growing complexity and auto-create PRs to try and fight to reduce the size and complexity of the codebase. then you'll inevitably have incidents caused by code written by agents that was never reviewed by humans, and either you or yet-another-agent will take a look at your production systems to help you figure out what's wrong because it's all becoming a bit more foreign to you. and you can just have the agent try to make changes on your behalf to fix things and hope to God that it doesn't make things worse. if all of this isn't exciting enough, you then give each engineer and even non-tech team members their own access to the ai tools and agents and models of their choice which easily costs an extra few hundred dollars per month per employee at best. 
in the worst case, you have someone on the team blow through the team's monthly AI spend by a significant margin by accident using the best models in fast mode because they were too impatient to just use the sota models at normal speed. and spend will likely only go up btw. and if you're not reading between the lines here, product work slows because everyone is playing with agents to learn how to use the agents more efficiently in the hopes that it's a magical bullet that solves all of the woes in software engineering and building production systems. and now you need this magical bullet to work because you're falling behind to teams who maybe aren't distracted spending all this time and money trying to make this all work. but you're definitely going to catch them. once you've figured this out, you'll 10x or 100x your output and leave them in the dust! or... you could just have engineers start coding by hand again before it's too late and becomes a lost art. you can even make modest and tasteful use of ai, but without doing all of the above. i actually miss the days of supermaven and early cursor. they were so simple and actually removed some friction and some of the annoying parts of coding.
- Robert Ta (@therobertta_) reported: HOW WIF ACTUALLY WORKS Three steps at runtime: 1. Your identity provider issues a JWT to the workload (ambient on AWS, GCP, GitHub Actions, Kubernetes) 2. SDK exchanges the JWT for a short-lived Anthropic access token 3. SDK refreshes the token before it expires Your application code has no API key anywhere. The SDK handles the exchange transparently. Same client constructor, same API calls. The credentials are ambient and ephemeral.
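The three-step flow in the report above can be sketched as follows. Everything here is a stand-in: `fetch_ambient_jwt` and `exchange_jwt` are hypothetical stubs for the platform's identity endpoint and the token-exchange call, and the response shape is assumed, not taken from any real SDK. The structural point survives the stubs: application code never holds a long-lived API key, only a cached short-lived token that is refreshed before expiry.

```python
# Sketch of the workload-identity-federation (WIF) flow described above.
# fetch_ambient_jwt / exchange_jwt are hypothetical stubs, not a real SDK.
import time

def fetch_ambient_jwt():
    # Step 1: in a real workload this comes from the platform itself
    # (cloud metadata server, GitHub Actions OIDC endpoint, k8s SA token).
    return "eyJhbGciOi.ambient.jwt"

def exchange_jwt(jwt):
    # Step 2: trade the identity JWT for a short-lived access token.
    # (Assumed response shape; no API key is ever stored anywhere.)
    return {"access_token": "st-" + str(len(jwt)),
            "expires_at": time.time() + 3600}

class TokenManager:
    """Step 3: refresh transparently before expiry, so application code
    only ever asks for a currently-valid token."""
    def __init__(self, skew=300):
        self._skew = skew      # refresh this many seconds before expiry
        self._cached = None

    def get(self):
        expired = (self._cached is None or
                   time.time() >= self._cached["expires_at"] - self._skew)
        if expired:
            self._cached = exchange_jwt(fetch_ambient_jwt())
        return self._cached["access_token"]

tm = TokenManager()
token = tm.get()  # fetched and cached; later calls reuse it until near expiry
```

The credentials stay "ambient and ephemeral" exactly as the report says: the JWT is supplied by the environment, and the exchanged token lives only in memory for a bounded time.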
- Divyansh Chaudhary (@divvbiz) reported: If AI can build what you built in a fraction of the time and cost, your product alone is not a defensible business. This is not a prediction. It's already happening. GitHub Copilot and Cursor have reduced development time by 30 to 50 percent depending on the task. The cost to build an MVP has dropped from hundreds of thousands to tens of thousands, sometimes less. Y Combinator recently noted that several companies in their latest batch were built almost entirely by non-technical founders using AI tools. The barrier to building is approaching zero. Which means the product, by itself, is approaching zero as a competitive advantage. When every competitor can ship faster, iterate cheaper, and access the same models and infrastructure as you, the product becomes table stakes. What cannot be replicated overnight is who you know and how fast you can reach them. Distribution has always mattered. It is about to matter more than it ever has. A founder with trusted relationships inside the right 200 companies will consistently outperform a better product with no market access. Not because the product doesn't matter. Because trust compresses the entire sales cycle in a way no feature set can. McKinsey's research shows that 80 percent of B2B buying decisions involve personal referrals or existing relationships somewhere in the process. That number doesn't go down when AI makes products easier to build. It goes up, because when product differentiation narrows, buyers fall back on the one variable they can still trust. Andreessen Horowitz has said it directly: in the AI era, distribution is the moat. I'd go further. Relationships are the moat within the moat. Because distribution can be bought. Relationships have to be earned. And a genuinely trusted network in a specific market takes years to build and is nearly impossible to copy. The question is not whether AI will commoditise your product. It probably will, or already is.
The question is: if your product became replicable tomorrow, what would still make you the obvious choice? If the answer isn't clear, that's where the real work is.