
The Demanding Work of Analyzing Incidents


2022/11/01

A few weeks ago, a coworker of mine was running an incident analysis in Jeli and pointed out that the overall process was a big drag on their energy level, and that it was hard to do even if the final result was useful. They wondered whether this was a sign of learning what is and isn't significant as part of the analysis in order to construct a narrative.

The process we go through is a simplified version of the Howie guide, trading off thoroughness for time, the same way you'd use frozen veggies for a weeknight dinner instead of locally farmed organic produce, even if the latter would be nicer. In this post, I want to specifically address that feeling of tiredness. I had written my coworker a long response, which is now the backbone of this text, but I also shared the ideas with folks in the LFI community, whose points I have added here.

First of all, I agree with my coworker that it's tedious. I also think their guess is a good one: learning what is useful or not takes a bit of time, and it's really hard to do an in-depth analysis of everything because you're looking for unexpected stuff.

You do tend to get a feel for it over time, but the other thing I'd mention is that the technique used in incident analysis (reading, labeling, and tagging data many times over) is something called Qualitative Coding Analysis. In actual papers and theses, you'd also calibrate your coding via inter-rater reliability measures. Essentially, the researchers doing the qualitative analysis look at all the data, wait for patterns to emerge, label them, then ask other scientists to look at the labels and apply them to the source material. If the hit rate is high, confidence in the labels is higher, since different people interpret events and themes in the same way.
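
We don't do this formally, but to give a concrete sense of what an inter-rater reliability check looks like, here's a small hypothetical sketch (the themes and tags are made up) using Cohen's kappa, one common agreement measure that corrects raw agreement for chance:

    # Hypothetical example (names and tags made up): two analysts code the same
    # six incident-transcript snippets with themes, then we check how well their
    # codings agree using Cohen's kappa, one common inter-rater reliability measure.
    from collections import Counter

    rater_a = ["alerting", "handoff", "alerting", "tooling", "handoff", "alerting"]
    rater_b = ["alerting", "handoff", "tooling", "tooling", "handoff", "alerting"]

    def cohens_kappa(a, b):
        n = len(a)
        observed = sum(x == y for x, y in zip(a, b)) / n  # raw agreement rate
        counts_a, counts_b = Counter(a), Counter(b)
        # Chance agreement: probability both raters independently pick the same label.
        expected = sum((counts_a[label] / n) * (counts_b[label] / n)
                       for label in set(a) | set(b))
        return (observed - expected) / (1 - expected)

    print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # 0.75 here: substantial agreement

A kappa close to 1 means the raters read the material the same way; much lower values suggest the labels themselves need to be renegotiated.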

This process ensures their thematic analysis is solid and not biased, meeting the standards of scientific peer review. Academics tend to pick their methodology, methods, interviewing, and tagging mechanisms very carefully because they have to be able to defend their whole research. When we tag our incidents through a tool like Jeli, we do an informal version of this. Our version is less rigorous (and therefore risks more bias and less thoroughness) but can still surface interesting insights in a somewhat short amount of time, just not in a way that would survive peer review.

Still, that [superficial] analysis is demanding. It's part of something called the hermeneutic circle, which Ryan Kitchens described as looping over the same information continually with compounding "lenses". This is cognitively taxing, but useful for gaining insights that wouldn't have been visible from your own initial perspective.

Ryan also pointed out that incident analysts should recognize that, when doing the analysis, they are taking on an additional, distinct burden that no one else in the incident has, and that this affects how much of a toll an incident can take on their energy.

Eric Dobbs, for his part, states:

So many times I feel myself get lost in one forest looking for specific trees, then distracted by all the fascinating flora and fauna—then something snaps me out of it and I can’t remember what tree I was originally looking for. Finding my way back… It’s so exhausting.

All these efforts are done to surface themes. Themes are what let you extract an interesting narrative out of all the noise of things that happen in an incident. I like to compare it to writing someone's biography. Lots of things happen in someone's life, and if you want to make a book about it that's worth reading, you're going to have to pick some elements to focus on, and events to ignore or describe in less detail. That's an editorial decision that can remain truthful or faithful to the experiences you want to convey, while choosing to shine a light on the more significant elements.

This whole analysis serves the objective of learning from incidents. But learning isn't something you control or dictate. People will draw the lessons they'll draw, regardless of what you had planned. All you can hope for is to provide the best environment possible for learning to take place. In environments like tech, a lot hinges on people's mental models. We can't implant or extract mental models, so challenging them through experience or discussion is the next best thing, and exposing how people were making decisions, the various factors and priorities they were juggling, and the challenges they were encountering are all key parts of their experience you want to unveil.

In short:

  • It's normal to find it tiring; this is a bit like doing science (but with a lighter process)
  • It does get easier as you get a better feel for the sort of interesting stuff worth surfacing and discussing
  • Keep in mind that we don't control the learning, and most of it gets done in active discussions and comparisons of people's understandings (mental models)
  • Your task is then more easily defined as finding good jumping-off points and constructing a narrative that lets your coworkers tell their story with high psychological safety (questions, confusion, and people working in opposite directions are all good markers of models not being aligned)
  • The storytelling and discussion will take care of the teaching; if you want to write a review, look at what people told you in one-on-one discussions while preparing questions, what people said in the meeting, and the sort of questions people were asking.
  • If you have enough interesting facets to your incident to run a discussion for an hour (the longest meeting duration we tend to use), you can start letting go and skimming the finer points. The fact that we run these as meetings puts an upper bound on complexity and the number of themes. If you only do a written report, it's tempting to just keep going as deep as possible, but long texts make for shallow readings.

A final note on the editorial stance of the written review that follows your investigation: focus on themes you think were interesting, and be descriptive more than prescriptive. It may make sense to note insights or patterns people highlighted or felt were notable, but don't pretend to have the answers or the essence of what people should remember. I feel I do a better job of writing a report when I consider the task to be an extension of incident review facilitation: set the proper tone and present information so people can draw whatever lessons they can, from what is hopefully a richer set of perspectives with varied points of view.

You're not there to tell them what was important or worth thinking about, but to give the best context for them to figure it out.

Thanks to Chad Todd for reviewing this text.

