For every major incident (Critical/High priority), we need to follow up with a post-mortem: a blameless, detailed description of exactly what went wrong to cause the incident, along with a list of steps to take to prevent a similar incident from occurring in the future. The post-mortem should also cover how the incident response process itself went.
!!! warning "Don't Neglect the Post-Mortem"
    Don't make the mistake of neglecting the post-mortem after an incident. Without a post-mortem you fail to recognize what you're doing right, where you could improve, and, most importantly, how to avoid making the same mistakes next time around. A well-run, blameless post-mortem allows teams to continuously learn, and serves as a way to iteratively improve both your infrastructure and your incident response process.
The first step is to designate a post-mortem owner. The IC does this either at the end of a major incident call or very shortly after, and will notify you directly if you are the owner of the post-mortem. The owner is responsible for populating the post-mortem page, looking up logs, managing the follow-up investigation, and keeping all interested parties in the loop. Please use Slack to coordinate follow-up. A detailed list of the steps is available below.
As owner of a post-mortem, you are responsible for the following:
- Scheduling the post-mortem meeting (on the shared MNX-OOO calendar) and inviting the relevant people (this should be scheduled within 5 business days of the incident).
- Updating the page with all of the necessary content.
- Investigating the incident, pulling in whomever you need from the team to assist in the investigation.
- Creating follow-up tickets or Asana projects (you are only responsible for creating the tickets, not for following them through to resolution).
- Running the post-mortem meeting (these generally run themselves, but you should get people back on topic if the conversation starts to wander).
- In cases where we need a public blog post, creating & reviewing it with appropriate parties.
Once you've been designated as the owner of a post-mortem, you should start updating the page with all the relevant information.
- (If not already done by the On-Call Person) Create a new post-mortem page for the incident.
- Schedule a post-mortem meeting within 5 business days of the incident. Schedule it before filling in the page, just so it's on the calendar.
    - Create the meeting on the "Incident Post-Mortem Meetings" shared calendar.
- Begin populating the page with all of the information you have.
    - The timeline should be the main focus to begin with. It should include important changes in status/impact, as well as key actions taken by responders.
    - Mark the start of the incident in red, and the resolution in green (i.e. when we went into and came out of Critical/High priority).
    - Go through the history in Slack to identify the responders, and add them to the page.
    - Identify the On-Call Person and any other people involved in the incident.
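As a purely illustrative sketch (the times and events below are invented, not from any real incident), a timeline might look something like:

```markdown
| Time (UTC) | Event |
| --- | --- |
| 14:02 | **Incident start:** API error rate exceeds threshold, page sent |
| 14:05 | On-Call Person acknowledges; IC role assumed |
| 14:20 | Database failover initiated by responders |
| 14:31 | **Resolved:** error rate back to baseline, downgraded from High priority |
```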
- Populate the page with more detailed information.
- Perform an analysis of the incident.
    - Capture all available data regarding the incident: what caused it, how many customers were affected, etc.
    - Post any commands or queries you used to look up data in the page, so others can see how the data was gathered.
    - Capture the impact to customers (site down, MySQL issue affecting SELECT queries, etc.).
    - Identify the underlying cause of the incident (what happened, and why it happened).
- Create any follow-up action tickets or Asana projects (or note down topics for discussion if we need to decide on a direction before creating tickets).
    - Go through the history in Slack to identify any TODO items.
    - Label all tickets with their priority level and date tags.
    - Identify any actions which can reduce the chance of the incident recurring. (There may be some trade-offs here, and that's fine; sometimes the ROI isn't worth the effort that would go into it.)
    - Identify any actions which can improve our incident response process.
    - Be careful not to create too many tickets. Generally we only want to create P0/P1 items: things that absolutely should be dealt with.
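A first pass over the Slack history can be done programmatically. The following is a minimal sketch, assuming the incident channel has already been exported in Slack's standard JSON export format (a list of message objects with `text` and `user` fields); the keyword list is an assumption, so adjust it to whatever conventions your team actually uses:

```python
import re

# Candidate markers for follow-up work; extend as needed (an assumption,
# not an official convention).
TODO_PATTERN = re.compile(r"\b(TODO|follow[- ]?up|action item)\b", re.IGNORECASE)

def find_todo_items(messages):
    """Return (user, text) pairs for messages that look like follow-up items.

    `messages` is a list of dicts in Slack's export format, where each
    message has at least a "text" field and usually a "user" field.
    """
    items = []
    for msg in messages:
        text = msg.get("text", "")
        if TODO_PATTERN.search(text):
            items.append((msg.get("user", "unknown"), text))
    return items

# Hand-written sample; a real run would load the exported JSON for the
# incident channel instead.
sample = [
    {"user": "U123", "text": "TODO: add an alert for replica lag"},
    {"user": "U456", "text": "restarting the worker now"},
    {"user": "U789", "text": "follow-up: document the failover runbook"},
]
print(find_todo_items(sample))
```

This only surfaces candidates; you should still read through the channel history yourself, since many action items won't be flagged with a keyword.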
- Write the external message that will be sent to customers. This will be reviewed during the post-mortem meeting before it is sent out.
    - Avoid the word "outage" unless it really was a full outage; use "incident" instead. Customers generally see "outage" and assume everything was down, when in reality it was likely just some alerts delivered outside of SLA.
    - Look at previous post-mortems for examples of the kind of message you should send.
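As a purely illustrative sketch (the dates, times, and impact figures below are placeholders, not drawn from any real incident or template), an external message might read:

```markdown
On January 15th between 14:02 and 14:31 UTC, some customers experienced
delayed alert delivery due to an incident affecting our notification
pipeline. During this window, a portion of alerts were delivered outside
of SLA; no alerts were lost. We have identified the underlying cause and
are taking steps to prevent a recurrence. A full post-mortem will follow.
```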
These meetings should generally last 15-30 minutes, and are intended as a wrap-up of the post-mortem process. We should discuss what happened, what we could have done better, and any follow-up actions we need to take. The goal is to suss out any disagreement on the facts, analysis, or recommended actions, and to build wider awareness of the problems that are causing reliability issues for us.
You should invite the following people to the post-mortem meeting:

- Always
    - The On-Call Person.
    - Other team members involved in the incident.
    - Key engineer(s)/responders involved in the incident.
- Optional
    - Customer liaison (Critical incidents only).
A general agenda for the meeting would be something like:
- Recap the timeline, to make sure everyone agrees and is on the same page.
- Recap important points and any unusual items.
- Discuss how the problem could have been caught.
    - Did it show up in canary?
    - Could it have been caught in tests, or in the load-test environment?
- Discuss customer impact, including any comments from customers.
- Review the action items that have been created; discuss whether they're appropriate and whether more are needed.
Here are some examples of post-mortems from other companies, as a reference: