Slack

Slack Outage Map

The map below shows the cities worldwide where Slack users have most recently reported problems and outages. If you are having an issue with Slack, make sure to submit a report below.

The heatmap above shows where the most recent user-submitted and social media reports are geographically clustered. Report density is indicated by the map's color scale, which ranges from fewer (Less) to more (More) Slack users affected.

Slack is a cloud-based set of proprietary collaboration tools and services. It is intended for teams and workplaces and can be used across multiple devices and platforms.

Most Affected Locations

Outage reports and issues in the past 15 days originated from:

Location                  Reports
Creil, Hauts-de-France    1
New Delhi, NCT            1
Salt Lake City, UT        1
Miami, FL                 1

Community Discussion

Tips? Frustrations? Share them here. Useful comments include a description of the problem, your city, and your postal code.

Beware of "support numbers" or "recovery" accounts that might be posted below. Make sure to report and downvote those comments. Avoid posting your personal information.

Slack Issue Reports

The latest outage, problem, and issue reports from social media:

  • chhopsky christina 死神 (@chhopsky) reported

    @PrinnyForever @SlackHQ i'm struggling to understand what this problem is so you do something like await response = body=body, headers=headers) and then you just? dont. get the response. something eats it, because the server returned a 400?

  • PrinnyForever Prinfection (@PrinnyForever) reported

    @chhopsky @SlackHQ To the best of my recollection, all issues that reach a level of "this is a pain to deal with" have been caused by client frameworks. Server rarely does anything more than make it harder to control the error code (500 codes are almost always automatic from other layers)

  • chhopsky christina 死神 (@chhopsky) reported

    @PrinnyForever @SlackHQ i'm gonna be honest i have no idea what you're talking about. sounds like user error or misunderstanding not using correct http codes is inting. "we just decided we're going to ignore the RFCs and do something that looks compatible but isnt" always ends up ******* people

  • chhopsky christina 死神 (@chhopsky) reported

    @PrinnyForever @SlackHQ i just described a bunch of main scenarios before that could be universally handled based on error code alone people dont do that. thats what im saying. if they do they tend to get fired. i dont know anyone who would hire someone who argued for overriding them

  • PrinnyForever Prinfection (@PrinnyForever) reported

    @chhopsky @SlackHQ Yes, that's something I've experienced lol Granted, your primary intuition of > in order for this to increase problems, a terrible number of things need to have gone horribly wrong already is probably cogent The more I think about it the more it's starting to feel like CSS

  • chhopsky christina 死神 (@chhopsky) reported

    @PrinnyForever @SlackHQ 200 > i trust everything worked 207 > i trust some things worked, data issues 400 > dont retry, request was incorrect 429 > back off using header info 503 > dead letter queue, retry sooner 502 > dead letter queue, retry later

  • PrinnyForever Prinfection (@PrinnyForever) reported

    @chhopsky @SlackHQ if success, process normally, if error, put into error pipeline, good to go. Instead of "where ******** is my request, why is it being handled by this interceptor over here, why did it throw an exception"

  • chhopsky christina 死神 (@chhopsky) reported

    @PrinnyForever @SlackHQ when you talk about being required to parse the body to handle the error, i honestly have no idea what you're talking about. generally that stuff is only useful if i, a human, am inspecting the response to tell how to write my program

  • PrinnyForever Prinfection (@PrinnyForever) reported

    @chhopsky @SlackHQ This third post is specifically with respect to "regardless of how malformed the request is" So there's a bifurcation of the handling into 1) We can parse your request enough to tell you you did something wrong, return 200, fail, errors 2) This is ******* nonsense, 500

  • PrinnyForever Prinfection (@PrinnyForever) reported

    @chhopsky @SlackHQ Sure, and that's all great for expectations and high level, but what if you want to actively interface with the response of the error? Some of those would absolutely go into the diagrams bottom normal path, others frequently would return actionable data

  • PrinnyForever Prinfection (@PrinnyForever) reported

    @chhopsky @SlackHQ Being able to differentiate if the API was responding properly or royally ******* up without having to parse individual error messages has generally been easier to work with and debug as far as integrations is concerned

  • isugimpy Jayme "Danger" Howard (@isugimpy) reported

    @StagedGames @chhopsky @SlackHQ 4xx is not for when you can't establish communication. 4xx is a client-originated problem that causes the request to not complete as expected. Bad payload, lack of auth, incorrect method, rate limiting, etc. Returning a 200 with an error payload is incorrect by the RFC.

  • PrinnyForever Prinfection (@PrinnyForever) reported

    @chhopsky @SlackHQ From anecdote, most error codes that are not high level (like an auth failure or unreachable) usually require parsing the body anyway, at which point the body containing any code usually will contain the same useful information the code should

  • mejumba Panthère Noire (@mejumba) reported

    @CrnaGoraOne @SlackHQ Sorry to hear that. But what's the issue here? Connectivity?
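
One of the reports above lays out a way to handle API responses from the HTTP status code alone (200 process, 207 partial success, 400 do not retry, 429 back off using header info, 502/503 dead-letter and retry). The sketch below is a minimal Python illustration of that idea; the Response class and handle_response function are assumptions made for this example and are not part of Slack's API or any particular client library.

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class Response:
        """Hypothetical response object, used only for this illustration."""
        status: int
        headers: Dict[str, str] = field(default_factory=dict)
        body: str = ""

    def handle_response(resp: Response) -> str:
        """Pick an action from the status code alone, per the scheme quoted above."""
        if resp.status == 200:
            return "process"                  # everything worked
        if resp.status == 207:
            return "process-partial"          # some items worked; inspect per-item results
        if resp.status == 400:
            return "drop"                     # request was malformed; retrying will not help
        if resp.status == 429:
            # Back off using header info (commonly Retry-After, in seconds).
            delay = int(resp.headers.get("Retry-After", "30"))
            return f"retry-after-{delay}s"
        if resp.status == 503:
            return "dead-letter-retry-soon"   # service unavailable; retry sooner
        if resp.status == 502:
            return "dead-letter-retry-later"  # gateway failure; retry later
        return "inspect"                      # anything else needs a log entry or a human

    # Example: a rate-limited response is routed to a delayed retry.
    print(handle_response(Response(status=429, headers={"Retry-After": "10"})))
    # -> retry-after-10s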
