It reminds me of NTSB reports, particularly around aircraft accidents: even if one person was definitely to blame for the accident (e.g. a pilot performed an incorrect action that led to the loss of a plane), the report will recommend things like better training and testing standards, so that a pilot who crashes through incompetence can get more training, without blaming the pilot specifically.
Swiss Cheese model: to prevent bad things from happening, we need to focus on preventing the situations where those bad things could even arise.
Britain's Marine Accident Investigation Branch (EU rules required members to have such agencies, although I think the UK had several before that anyway) published a memorable report in which, despite this usual practice, it offered zero recommendations.
The accident was basically: some guys took a fishing boat out, did a lot of heroin, got into trouble, and all died. There were no recommendations because heroin is already illegal, and operating a fishing boat while on heroin is also illegal; so, yeah, we already told you this was a terrible idea, and there's nothing left to recommend.
That is the mindset, but you see a general lack of it in the industry... even as the term has become so popularized.
At the end of the day, if your intern can take down your production DB, about five other things went wrong first to put them in a position to do so.
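To make that concrete, here's a minimal sketch of one such layer (my own illustration, not anything from the thread; the function, environments, and approval flow are all hypothetical): a migration runner that fails closed on destructive statements against production.

```python
import re

# Statements we never want to run against prod without a second sign-off.
DESTRUCTIVE = re.compile(r"\b(DROP|TRUNCATE|DELETE)\b", re.IGNORECASE)

def run_migration(sql: str, env: str, approved_by: str | None = None) -> None:
    """Fail closed: destructive SQL on production needs explicit approval."""
    if env == "production" and DESTRUCTIVE.search(sql) and not approved_by:
        raise PermissionError("destructive statement on production requires sign-off")
    print(f"[{env}] executing: {sql}")  # stand-in for the real DB call

run_migration("SELECT count(*) FROM users;", env="production")  # fine
run_migration("DROP TABLE users;", env="staging")               # fine: not prod
# run_migration("DROP TABLE users;", env="production")          # PermissionError
```

The specific guard doesn't matter much; the point is that by the time one command can destroy data, several layers like this one had to be absent.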
Systems are complex, and sometimes the holes in the Swiss cheese line up.
> The “blameless” aspect is crucial: a good postmortem avoids conclusions like “Dan wrote a bug and it brought down our service” and instead says “Dan wrote a bug and it brought down the service: we need to improve our testing and deployment processes to make sure that they catch this category of bugs in the future.”
The offending dog's name is still there...
As it should be. The purpose of post mortems is to prevent future incidents, and obscuring the facts of what happened by removing names detracts substantially from clarity of understanding and, therefore, defeats the point.
There are two important things that make something blameless: phrasing and culture. If you've phrased something in such a way that there's a clear value judgement, your phrasing isn't blameless. And if you're writing in a culture where, no matter how precise the phrasing, the simple existence of a name will make people blame them for what happened, then your culture isn't blameless. Both are required for a blameless post mortem.
Also, think of it this way: no amount of anonymization will prevent the people involved from knowing who did what. If they're privately blaming the person for the incident, it's still not a blameless post mortem.
No amount of verbal wallpaper can fix a broken culture.
The blameless aspect of post mortems is paradoxical. I agree with the sentiment, but at the end of the day, somewhere in there, the blame is placed on one or two human oversights or errors. If the conclusion of the PM is "This error was caused when <This PR> was submitted", then everyone's natural instinct is to go look at who authored the PR.
I guess aiming for blameless is as good as it gets sometimes.
I've always seen it as specifically being blameless of individuals. E.g. it's OK for blameless post mortems to find faults in systems, but ideally not in the people who use them.
I would have added some more things that could have mitigated it, like lowering your sail to half mast after the wind increased, or only using the jib, or even switching to engine power.
In the context of incident prevention, that translates into adapting to what is happening and maintaining the safety profile to prevent the incident.
Half-mast sail: less force on the mast, more time to react to things when going solo.
I do something similar when interviewing, asking candidates to walk through a project they’ve worked on that didn’t go as planned, and what they learned.
Usually it’s work-related, but sometimes personal stories like this sailing one give better insight and show a real understanding of systemic failings, and that they really have the right mindset. Those real-world examples speak volumes.
This is a cool idea! At first I thought it was that they give you notes about what happened, and you have to process the information in real time and suggest practical improvements.
I think this would ultimately be the best approach, as it creates a level playing field for all candidates.