An outage occurs, a change is rolled back, and everything stabilizes. But what happens when the change is attempted a second time?
These second tries often go much more smoothly. While another outage might still occur during this “take two,” the impact is usually far less severe. The engineering team has learned from what went wrong the first time and is ready to stop at the first hint of trouble.
Slack recently experienced a pair of disruptions that appear to illustrate this “take two” scenario: a longer disruption caused by a routine database cluster migration, followed a few weeks later by a much shorter outage that also involved database work, possibly a related change attempted again with smoother results.
And for more insights, check out these links:
- The Internet Report: Pulse Update Blog: https://www.thousandeyes.com/blog/internet-report-pulse-update-slack-x-outage?utm_source=transistor&utm_medium=referral&utm_campaign=internetreportpulseep18
- Explore the Slack and X disruptions in the ThousandEyes platform (NO LOGIN REQUIRED):
Slack: https://afkmcwbeszwdtqqpvouwgjolywiugryx.share.thousandeyes.com/
X: https://adcsnhfupsardmzyocrxqdcvriengkew.share.thousandeyes.com
———
CHAPTERS
00:00 Intro
00:47 The Download
04:06 By the Numbers
05:41 Slack Disruptions
09:25 X Disruptions
20:20 Get in Touch
———
Want to get in touch?
If you have questions, feedback, or guests you would like to see featured on the show, send us a note at [email protected]. Or follow us on X (formerly Twitter): @thousandeyes