written by
Luke Szyrmer

Why Process Behavior Charts help you manage with metrics, with Mark Graban

Metrics culture 42 min read

When managing remote teams, you need data to know what is happening. Yet once you start gathering a lot of data, it's hard to know what is useful and what is noise. Process behavior charts, an old yet very effective tool, help you know when there is an important change. You can separate out signal from noise and know exactly when you can safely ignore what’s happening, and when you need to take action.

About Mark Graban

Mark Graban is a serial author and lean healthcare expert. His most recent book is "Measures of Success: React Less, Lead Better, Improve More." He serves as a consultant to organizations through his company, Constancy, Inc., and also through the firm Value Capture. He is also a Senior Advisor to the technology company KaiNexus. He has focused on healthcare improvement since 2005, after starting his career at General Motors, Dell, and Honeywell. Mark is the host of podcasts including "Lean Blog Interviews," "My Favorite Mistake," and "Habitual Excellence, Presented by Value Capture." Mark has a B.S. in Industrial Engineering from Northwestern University and an M.S. in Mechanical Engineering and an M.B.A. from the Massachusetts Institute of Technology's Leaders for Global Operations Program.

Topics

  • Managing via systems: how do you help people who don't naturally think in terms of systems to accept that, in fact, the system is responsible for productivity?
  • What is a process behavior chart, and why is it useful?
  • When is it worth looking for a root cause of a problem?
  • Arbitrary target vs. law of nature: how do you know you've arrived at a useful metric?
  • Pressure without support results in people distorting the numbers or distorting the system rather than improving it. What kind of support do you need to provide in order to improve the system?
  • 'People don't like to make mistakes. Change the system and the workers are suddenly a lot happier. People don't like to be blamed for what they have no control over.'

Key Takeaway

'Signals' vs. noise, i.e. how to identify signals on the X chart of a PBC (see the code sketch after this list):

  • any data point outside the limits
  • eight consecutive points on the same side of the central line
  • three out of four consecutive data points that are closer to the same limit than they are to the central line
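For readers who want to apply these rules to their own data, here is a minimal Python sketch. The `find_signals` helper and its argument names are illustrative, and the limits are assumed to have already been calculated from a baseline period, as discussed in the transcript below:

```python
import numpy as np

def find_signals(data, centre, lcl, ucl):
    """Return the indices of points that trigger any of the three rules."""
    x = np.asarray(data, dtype=float)
    flagged = set()

    # Rule 1: any single data point outside the lower or upper limit
    flagged |= {i for i, v in enumerate(x) if v < lcl or v > ucl}

    # Rule 2: eight or more consecutive points on the same side of the central line
    run, side = 0, 0
    for i, v in enumerate(x):
        s = 1 if v > centre else (-1 if v < centre else 0)
        run = run + 1 if (s == side and s != 0) else 1
        side = s
        if s != 0 and run >= 8:
            flagged.update(range(i - run + 1, i + 1))

    # Rule 3: three out of four consecutive points closer to the same limit
    # than to the central line (i.e., beyond the midpoint on that side)
    upper_mid, lower_mid = (centre + ucl) / 2, (centre + lcl) / 2
    for i in range(len(x) - 3):
        window = x[i:i + 4]
        if (window > upper_mid).sum() >= 3 or (window < lower_mid).sum() >= 3:
            flagged.update(range(i, i + 4))

    return sorted(flagged)
```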

Transcript

We need to shift performance up, so that range of typical random variation is always within the quote-unquote green band. We're not going to get there if we're constantly reacting, and blaming or punishing or rewarding people based on statistical fluctuation.

Welcome back. This is Luke Szyrmer from the Managing Remote Teams podcast. We are kicking off season two, and this first episode is with Mark Graban. Mark is a serial author and a lean healthcare expert, and we're talking today about his most recent book, Measures of Success, which I found really helpful when running remote teams. He serves as a consultant to organizations through his company, Constancy, and also through the firm Value Capture, and he also works with KaiNexus. He's been focused on healthcare improvement since 2005, after starting out at General Motors, Dell, and Honeywell. Mark also hosts the podcasts Lean Blog Interviews, My Favorite Mistake, and Habitual Excellence.

Mark, how did you get into lean healthcare specifically?

First off, thanks for having me here, Luke. And thank you for the kind words about the book. As you alluded to, my career started in manufacturing; I have a bachelor's in industrial engineering. Sitting in an operations management course in 1994, I was introduced to pieces of the Toyota Production System. It was being framed as production scheduling and inventory management, which was good knowledge, but that was a very incomplete view of what we would call the Toyota Production System, or lean.

As you mentioned, I started my career at General Motors. Those two years were like an accelerated graduate program, where year one was working under a very traditional General Motors plant manager: a regime of fear and blaming and shaming. It was a pretty nasty environment, and I was looking to quit and get out. It didn't take me long to realize that it was pretty toxic, but I was learning a lot.

I did have a couple of good mentors. But then in year two of that General Motors experience, I was under a Toyota-trained plant manager. He was one of the original General Motors people who was sent to the Toyota joint venture plant in California in the eighties. And that was such a world of difference.

When you've got 800 employees on site, the plant manager pretty much sets the leadership behaviors, and it flows downhill, right? So things really started to change dramatically. We had been trying to implement lean tools, but the culture wasn't right for it.

And a new plant manager brought in a hugely different culture. So then, fast-forwarding a bit: I did leave after year two, when I had a chance to go to MIT for grad school. I still thought my career path was going to be manufacturing. As you mentioned, Dell, then a startup software company in Austin, Texas, then Honeywell.

And then in 2005, my wife was taking a new job that meant moving, which put me on the job market. I got a call from somebody at Johnson and Johnson, where they had, basically, a lean healthcare consulting group that worked out in the field with medical laboratories and hospitals. That was my introduction to lean healthcare, because at the time I don't think any hospital would have hired me. But the fact that I was part of this team at Johnson and Johnson, which was a mix of clinicians, former J&J manufacturing people, and outside manufacturing people, thankfully that Johnson and Johnson brand name allowed enough trust for me to come in and start working with healthcare organizations.

And I thought this could be an experiment for a year or two, because I was changing jobs a lot at that point in my career. But as it turned out, it's been a real privilege to work in healthcare, and I'm still at it some 15 or 16 years later.

So with lean, the way that I understand lean and the Toyota Production System, it's very systems-based. When you're first working with people, given that you view the world very much from the point of view of systems, how do you help people who don't naturally think in those terms to accept the fact that often the system of work is responsible for productivity?

And the system is also responsible, in a lot of ways, for safety and quality and patient satisfaction and other measures.

It's funny. I would almost frame it more in terms of how do you help people unlearn things. I'll cite Barry O'Reilly, who's got a book called Unlearn, right? Especially once we get education and then experience. A lot of times I'm working with people in healthcare who have decades of experience as, let's say, a nurse or an executive. Sometimes I meet somebody who apparently got the job pretty young, because they've been CEO of that hospital for 25 years, which I don't know is always a positive. One reason it's not is that there are often just old habits ingrained. I referred to General Motors and this blaming and shaming environment.

I learned in healthcare they very frequently talk about the dysfunctional culture of naming, blaming, and shaming. When something goes wrong, they ask who screwed up, instead of asking a more systems-based question: what went wrong, what allowed that to occur? I think one of the great injustices of healthcare, and this is not just an American problem, is when poorly designed systems, or badly managed systems, or systems that are not being improved with everybody involved, harm patients far too often. And then it also, sadly and I think unjustly, ends up ruining the careers of individuals who get blamed for the systemic error. That's a problem. So how do you help people unlearn some of those habits?

One exercise that I've found very helpful, and I write about it in Measures of Success, is something made very famous by W. Edwards Deming back in the eighties and into the early nineties, called the red bead experiment or the red bead game. And participating in it in a hands-on way, not just reading about it,

I've found is actually very eye-opening for healthcare executives and physician leaders. In a nutshell, it's a silly game where you have a plastic container full of thousands of beads, and 20% of them are red. That's part of the system as it exists. And you have a paddle that you dip into the beads.

Part of the fun of the game is that we have this kind of obnoxious, really precise procedure for how you're supposed to do this: how you dip the paddle into the beads. The paddle has 50 holes. So I'll pose it to the listeners: if 20% of the beads are red, how many red beads out of 50 would you expect to have on the paddle, on average? Ten?

That doesn't mean you get 10 every time. The variation is such that you would typically get anywhere between three and 17 red beads on the paddle. So we have six different people, all dipping their paddles into the box, and as the facilitator, role-playing a manager, you pretend that the number of red beads is somehow an output of how that individual did the job.
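To make those numbers concrete, here is a quick simulation of the game; a sketch in which drawing 50 beads from a box of thousands is approximated as 50 independent draws with a 20% chance of red:

```python
import random

def draw_paddle(p_red=0.20, holes=50):
    """One dip of the paddle: count the red beads among 50 holes."""
    return sum(random.random() < p_red for _ in range(holes))

# Six "willing workers" over four rounds: any ranking of the workers
# reflects the bead mix (the system), not individual skill or effort.
for round_no in range(1, 5):
    print(f"round {round_no}:", [draw_paddle() for _ in range(6)])
```

Run it a few times: the counts cluster around 10 and almost always land inside roughly the 3-to-17 range Mark mentions.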

Then you can start rewarding people and punishing them. You put someone on probation, and guess what? They regress to the mean and do better the next time. And then you pat yourself on the back as a leader: clearly, threatening that person's job improved their performance.

It's all nonsense, right? You think you motivated them, you offered incentives, you set a quota. And again, the performance is absolutely driven by the system, and people playing the game are in on the joke. But people still fall into it. I fire the bottom three performers after a couple of rounds of this, and it's funny, it's hard to tell: are they still role-playing, or are they upset that they've been fired over what is essentially luck? But then we draw connections to the workplace. I facilitated one of these red bead games, and there was a chief medical officer who wasn't playing the game, but he was in the front row.

He was very engaged; he wasn't just hanging out in the back of the room. In the debrief discussion, I was asking people for their reflections and observations, and I could tell he had really thought about this during the game. He stood up and said: this has helped me see that all of our quality and patient safety metrics are essentially red beads.

One month the number is better, the next month the number is worse. There's no cause and effect to ascribe to that variation in performance; it's just fluctuating around an average. And instead of reacting to the most recent data point, whether you think it's moved in a good direction or a bad direction, you need to improve systems.

And that's why the subtitle of my book Measures of Success is "react less, lead better, improve more." Because a lot of that reaction is just performative. It's a waste of time, and it's a distraction from the real work of improving systems, which is by nature less reactive and more systematic. So the red bead game is eye-opening. It sounds silly, it sounds like a dumb exercise, but when you go through it, it is eye-opening.

Yeah. So in terms of a particular tool that you go into in Measures of Success, the process behavior chart: what is it, what does it help you find out, and why is it useful in this current context?

Yeah. So there's the conceptual point first off, which says: don't react to what we could call routine variation in a performance metric. As the chief medical officer was observing, if you look at the rate of, say, patient falls, maybe that number last month was 1.2, and the month before it was 0.9. It changed from 0.9 to 1.2.

You can't determine much from two data points, and that's another trap people fall into: just continually comparing two data points. So yes, it is a true statement that 1.2 is higher than 0.9. It's true, but it might not be meaningful. Especially when, again, leaders say, wait a minute, we need to form a committee,

we need to do some problem solving, we need to look for the root cause. There might not be a root cause, because the same system, the same people doing the same work the same way with similar patients, is not always going to generate the same number as an output. There's always going to be some level of variation.

So a process behavior chart allows us to put a little bit of math to it, so we're not guessing about what is routine variation and which data point or data points might be an outlier, or what you could call unusual variation or special cause variation, where there is a root cause to look for.

The system has changed significantly, so that's the time to react. Anyway, with process behavior charts, the technology behind it is a hundred years old. I don't think that makes it outdated. It's like something science discovers, and that knowledge is good. Like the idea that germs spread and cause disease.

That's an old idea at this point, but it doesn't mean it's outdated. It's still true. There are some who will deny it, but it's accepted as true. This technology, more generally speaking, could be called statistical process control as a methodology. Control chart is another term. This is nowadays probably more often taught in what we would frame as Six Sigma methodology, but it long predates Six Sigma. It is a statistical method.

So, doing this in an audio format: if we were to create a process behavior chart, first off, give me more than two data points. If we've got a monthly metric, give me at least the last 12, or better, the last 24 data points. And then we plot those visually: not a list of numbers, but a chart.

Start off with just a run chart, or what Excel calls a line chart, even if you don't want to use the full-blown process behavior chart methodology. Even just a line chart illustrates so much more. You can start seeing these patterns. Is there fluctuation around an average?

Does there seem to be a linear trend line, a true linear trend in performance? What happens more often is that you look back over 24 months and that metric, whether it's falls or customer acquisition cost for a startup software company, could be fluctuating around an average.

And then at some point it takes a step function upward and is now fluctuating around a new average. The chart helps us visualize that, and then we have a better chance of understanding cause and effect: things that we've done to try to improve the metric, or things that just happened that weren't our actions.

So you plot the data. Friends of mine in NHS England teach this methodology, and they use a hashtag on social media, #PlotTheDots: visualize the data. Humans are better at looking at a run chart than they are at looking at a list of numbers.
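For listeners who want to plot the dots themselves, a minimal sketch; the monthly values here are made up purely for illustration:

```python
import matplotlib.pyplot as plt

# A bare run chart: the metric over time, plus its average as a reference line.
values = [0.9, 1.2, 1.0, 0.8, 1.1, 1.3, 0.9, 1.0, 1.2, 1.1, 0.7, 1.0]
plt.plot(range(1, len(values) + 1), values, marker="o")
plt.axhline(sum(values) / len(values), linestyle="--", label="average")
plt.xlabel("month")
plt.ylabel("metric")
plt.legend()
plt.show()
```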

Yeah.

So then, the other features of a process behavior chart: for a baseline time period, and I'll hold the chart up for the video's sake, in a way that I can still see it and not block my microphone, you can see we've got data there. Then we calculate an average and plot that as a horizontal line. And then the red lines on the chart are calculated; we call these the lower and upper limits, and I'll put this back up in a second.

These are calculated. It helps answer the question: in our established baseline period, how much variation around the average is there? We use that to calculate these lower and upper limits. So then one thing the chart tells us is that, basically, for any future data points within those red lines, within those guardrails, there's no root cause for any single data point that falls within those limits.

So here's an example of a chart that shows that behavior; this is data about how many people visit my blog. Again, it would be a waste of time to try to explain any one of those single data points. It's just not a good use of time.
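For reference, the limits on an XmR-style process behavior chart, the kind used in Measures of Success, are conventionally calculated from the baseline average and the average moving range. A sketch:

```python
import numpy as np

def xmr_limits(baseline):
    """Centre line and limits for the X chart of an XmR chart:
    average +/- 2.66 times the average moving range."""
    x = np.asarray(baseline, dtype=float)
    centre = x.mean()
    mr_bar = np.abs(np.diff(x)).mean()  # average of successive point-to-point differences
    return centre, centre - 2.66 * mr_bar, centre + 2.66 * mr_bar

# Example: limits from 12 months of baseline data
centre, lcl, ucl = xmr_limits([0.9, 1.2, 1.0, 0.8, 1.1, 1.3, 0.9, 1.0, 1.2, 1.1, 0.7, 1.0])
print(f"centre={centre:.2f}, limits=({lcl:.2f}, {ucl:.2f})")
```

These centre and limit values are exactly what the signal-detection sketch in the Key Takeaway section above takes as inputs.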

So then there are three rules that we use when evaluating a process behavior chart. First, we look for any single data point that's outside of the limits; I can show this visually. Then there's the second rule, and there's a good statistical basis for this, which says that if you have eight or more consecutive data points that are all above the baseline average or all below the baseline average, that is very unlikely to be random fluctuation. It would be like flipping a coin and getting heads eight times in a row: it's possible, but it's not going to happen very often.

And then there's a third rule: if you see a cluster of three out of four consecutive data points that are closer to one of the limits than they are to the average, that would be indicative of a system change. So we're filtering out noise in the system. Stop reacting to the noise in the metric, but make sure we do react when we see one of those statistical signals.

Yeah. So it's being able to know when it's random and when it's not.

But from a mental model standpoint, people do not like to think that the outcome of a system, the business measure, is quote-unquote random. I think a lot of times people have this deterministic mindset that says if we do the same work every day, we're going to get the exact same output. And you can test this.

I use an example in the book: one perfect way of illustrating random variation versus systemic change, or unusual variation, is to step on the scale every morning. Especially if your scale has a decimal point after the number, you are not going to weigh the exact same number every single day. Your weight, if it's stable, is going to be fluctuating around an average.

Now, if you go out and have a really extravagant weekend where you eat and drink a lot, then you may see a signal. Or if you've stopped exercising, or you're drinking more because of the pandemic, there could be some system change that would shift the average your weight is fluctuating around.

So think about that example: you could measure your blood pressure, you could do all kinds of measures around your body as a system, and then think about how that translates to a workplace as a system, or a collection of systems.

When do you know it's worth looking for a root cause?

When any of those three statistical signals are detected. So the beauty of the process behavior chart is that if we have some baseline period where the metric is just stable and fluctuating around an average, again, we calculate those lower and upper limits.

And then what the chart does, in a way, is predict future performance. It says future performance is going to be centered around this average, within these limits, unless the system changes.

Yeah.

So the process behavior chart is a way of detecting system change, whether that's in a good direction or bad. It's also a method we can use as we do improvement experiments. We have a hypothesis: if we make this change to the system, performance will improve. The process behavior chart helps us understand if there's a statistically meaningful change that's been invoked in the system. Because the last thing you want to do is make a change, look at one data point, and say that data point is better than average, hooray.

Based off of a two-data-point comparison, it's not necessarily dishonest, it's just maybe misleading, people declare victory. They say, we made this change to the system, the number of patient falls was lower last month, and we throw a pizza party for the team. The punchline to the story comes the next month, when the number flips back up, and leaders sometimes mistakenly think, oh, people must not be following the new standard anymore.

It could be that they were never following the new standard to begin with. Maybe there was no real system change, or the attempted system change wasn't really a leverage point, right? So if we try to make a change to the system and you see the metric is still fluctuating around the old average, we haven't really changed the system.

So it's a really useful tool, first for ongoing monitoring of your metrics to detect changes, but then secondly, to help prove or evaluate cause and effect: the impact of system changes on system outputs.

The main thing I found this tool useful for is basically to minimize spending a lot of time in status meetings, because if there was a change and I knew it wasn't a system change, then I could say so, and therefore there was no point in talking about it.

Yeah. So at KaiNexus, which you mentioned earlier on, there's a software company I've been involved with for the last 10 years as an advisor, and I have an ownership stake in the company. The management team there has learned process behavior charts from me, and they apply them to these different metrics. Again, I'll use customer acquisition cost: you chart that over time and the number is fluctuating around an average, and then there's some change to the system, and maybe that customer acquisition cost dropped. And you see it: okay, now it's fluctuating around a lower average. Do we understand why our customer acquisition cost dropped? The KaiNexus leadership team has gotten really good at this, though there are still times where I have to remind them, because again, habits are really hard to break.

When I hear the conversation of, we're doing the review of the metrics, and why is that metric better than last month? Wait a minute, stop. That's the wrong question. That number is fluctuating around an average. Instead of asking what changed last month, where the honest answer might be nothing. In most organizations you can't say that. The boss asks you, why did that number change? You cook up an answer. Don Wheeler, who's one of my teachers and mentors on this, and he wrote the foreword for the book, calls it writing fiction. It is not a good use of time in the organization for people to be writing fiction to explain routine fluctuation around an average.

Just stop doing that. Now, if you don't like the average that your metric is centered around, and if you don't like that calculated range of the limits, what do you do? Improve the system. The answer to "how do we improve the system" is not going to be found in asking why last month was better.

So we have to change some of the questions that we're asking. And to your point, as we go through these reviews, stop spending time explaining every up and down in the metric, and rededicate that time either to reacting to the one metric that shows a statistical signal, or to looking at the metrics where you need to shift the level of performance, doing your improvement work focused on improving the system, and then looking for the results of that in future data points.

That's a great direction. So how do you know you've arrived at a useful metric if you're looking at a whole spreadsheet full of numbers?

So there's this entirely different but incredibly important discussion of what we measure. My book doesn't really address that.

I try to build on insights from Eric Ries and the idea of vanity metrics; I think that's a useful input. There's a good discussion in Measure What Matters, John Doerr, OKRs, objectives and key results. And in lean, there's a methodology that people will either call strategy deployment or, if they like Japanese terms, hoshin kanri. This whole discussion: what do we measure?

What are the broad categories of measures that matter? I learned in the automotive industry, and this translates well into healthcare, the broad categories: safety, quality, delivery, cost, and morale. For on-time delivery in healthcare, we could refer to waiting times and access to care, so I might say access in healthcare. Pretty much everything falls into those five categories. Then you have financial measures alongside those five categories, or maybe they're your operational measures, where if I'm improving measures in those five areas, I would expect to see that flow through into positive financial metrics.

But I think there are a few things that are true. Measuring more things isn't always better. Again, I'll draw a parallel to health: there are literally hundreds of laboratory tests we could go and have done today to measure different enzyme levels and things related to different organs.

And we're not going to go do those lab tests every week, because we don't need all of those measures all the time. Different people may measure weight or blood sugar or blood pressure, things like that. So I think of the smaller number of key health indicators for the organization.

And I like to remind people, it's a different acronym than OKR, but KPI stands for key performance indicators. Sometimes we have a kajillion process indicators. So we have to think about what matters to the business. What are the key indicators? And we have this hypothesis that says, if we move the needle on these metrics, then the business is doing better as a result.

So we don't want to be looking at vanity metrics. I've read Doerr's book, even though I was having trouble remembering his name. I think that's a perfectly fine methodology, but that book does not answer the question of what you do with the metrics over time. So I've tried to propose to people,

I'm no John Doerr, but Measures of Success, I think, is a perfect follow-on to his book Measure What Matters. So measure what matters, and then my book would be what to do with those measures that matter. The process behavior chart methodology won't tell you what to measure, but it's an amazing methodology for how you track and treat those metrics over time.

And the Doerr book doesn't address that; it has enough that it does address.

Yeah, of course. So I think one of the things that you talk about quite a lot is the importance of support from leadership when monitoring these metrics. There's one passage where you're talking about pressure without support resulting in distorting the numbers, distorting the system, or improving the system.

So how do you provide the kind of support that makes people happy to move in the direction of improving the system, as opposed to one of the others?

So there are different laws, or quote-unquote laws, thrown around.

There's one that says any time a metric becomes a target, it ceases to be a useful metric. I would challenge that. To me, the problem is not setting a target; the problem is the behavior of leaders, what happens when the organization or the team does not hit the target. When the reaction to not hitting the target is naming, blaming, and shaming, threats, punishment, what have you, or even on the more positive side, if we offer incentives and rewards and bonuses, either of those can become very dysfunctional, because oftentimes it's easier to distort the metrics or to distort the system than it is to actually improve the system. So leaders need to make sure they're not trying to drive the organization through fear, and that they're not trying to drive the organization just through incentives and objectives.

We need to understand the systems of work, and leaders need to not just support improvement; I think leaders need to be coaching for improvement in different ways. One other point I'll make, coming back to process behavior charts: those two lines that are the guardrails are, again, calculated.

Those lower and upper limits are independently calculated based on the data. They are different and independent from a goal or a target that we might set. We can combine those two views: have a process behavior chart and the typical range of variation, and compare that to our goal. Because what gets really dysfunctional is a metric where the average is very close to the target.

Because then you're going to have not just the reaction to better or worse than the previous data point; now you have the double reaction of, the metric was green and now it's red, what happened? The answer is nothing. And I hear these rules of thumb, and this will sound like a rant. There is a method that's very frequently taught in lean management methodology that I think is outdated and insufficient. There will be these rules of thumb that say, okay, wait, we don't want to react to every red, but if you have two consecutive reds or three consecutive reds, then you need to start an A3 and you need to do root cause analysis.

I'm like, that's nonsense. The fact that the metric has shifted from red to green, or that you have two or three reds in a row, again, that could be just noise in the system. It's nothing worth investigating or explaining. Now, the flip side of that is, and goals and targets are quite often arbitrary, that's a whole different discussion, but let's say you're not happy because this metric is fluctuating between red and green. What do we need to do? Improve the system. We need to shift performance up, so that range of typical random variation is always within the quote-unquote green band. We're not going to get there if we're constantly reacting, and blaming or punishing or rewarding people based on statistical fluctuation. So that's one of the other organizational habits that we need to change. We need real improvement, not fake improvement.

Yeah, because then the punishments and the rewards are just as random as the underlying process.

Right. A good way to know that performance is system-driven is when you replace individuals, or you replace a leader, and performance is still basically the same. Like in the red bead game: I could fire the bottom three and bring in three new workers.

The system has not changed; adding new people isn't a meaningful change to the system, would maybe be a better way of saying that. Again, these lessons from the red bead game do translate really well, I think, into real workplaces.

What's better: focusing a larger organization on a goal, or on the process? And why?

Yeah, I'm not trying to dance around it. I think the honest answer to your question is both.

We need to look at the process, or the processes, or the system, and the goal: what the output of that system is and how that compares to the goal. And again, we've got to be careful. Don't read too much into one or two data points, or the difference between those data points. You need to look at data over time, again 12, 15, 18, 24 data points, and use that to see what our typical range of performance is.

Stop talking only about average performance; you also need to talk about the variation and the range. How does that compare to the goal, and how do we drive more systemic improvement that's not just reactive? That means studying the work, understanding the processes with the people who do the work, and engaging them.

This process behavior chart methodology fits perfectly well into a lean management philosophy. We're not using charts to replace our knowledge of the system. We're not managing by charts so that we never need to go out into the real workplace. But charts can sometimes point us to when and where we should go investigate, and stop wasting the time typically spent reacting to every up and down in every metric.

So given the importance of time, how do you see this applied in more of a startup environment, where there just isn't that much history? Or is it just not as relevant at that point?

A really good question.

So two thoughts come to mind. While it's ideal if you can do a retrospective, if you have historical data, maybe 24 data points, 12 would be fine. There's diminishing returns, statistically speaking, when you dig into the PhD-level work behind this, and I'm not a PhD, I'm a practitioner, from having more and more data points go into the calculation.

Very diminishing returns after about 25 data points. The minimum number of data points you would need to calculate a usable average and limits would be just four. So another way of making sure we have more data points: for certain metrics we could ask, why is that a quarterly metric?

Why not look at it monthly? If it's a monthly metric, why not look at it weekly? Some of it depends on the cost of compiling the metric; in a lot of cases this is automated. So if we look at a metric weekly instead of monthly, one advantage is that now we have more data points we can use in the calculation, and two, we can detect signals more quickly.

If one bad week is going to lead to a bad month, we'd rather discover that sooner than later. You could do a process behavior chart with four data points, because again, that'll help establish the pattern of, is this noise or is this a signal? And then as we add more data points, we can refine the limits until we hit a point where, okay, now we have, let's say, 20 or 25 data points.

And now we lock in the average and the limits; we don't continually recalculate them once we hit a statistically solid baseline. From that baseline on, we look and see: has the system changed? So for a startup, I use an example in the book with KaiNexus. There are some systems where the metric is not just fluctuating around a horizontal average.

It's actually fluctuating around a linear trend line. So there is a more advanced method where there's different math, but conceptually it's the same. If that number is fluctuating around a linear growth rate, and everyone in a startup wants to start seeing exponential growth, you can use a form of the process behavior chart to see where maybe we are shifting from linear growth into exponential growth.
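One common way to handle a trending metric, sketched here under the assumption that limits are set around a fitted trend line rather than a flat average (the general idea, not necessarily the exact math Mark is referring to):

```python
import numpy as np

def trend_limits(baseline):
    """Fit a straight line to the baseline, then compute an XmR-style
    limit half-width from the moving range of the residuals."""
    x = np.asarray(baseline, dtype=float)
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)  # least-squares trend line
    residuals = x - (slope * t + intercept)
    half_width = 2.66 * np.abs(np.diff(residuals)).mean()
    return slope, intercept, half_width

# A later value at time t signals a change (e.g., linear growth bending
# upward) when it falls outside (slope * t + intercept) +/- half_width.
```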

That's really interesting.

Or the slope of the linear growth has changed, which would be a good thing, even if it's not quote-unquote exponential, mathematically speaking, yet. But in a startup, I would argue, this methodology of process behavior charts is so useful, because as a startup we should be doing a lot of experiments.

We should be testing hypotheses. We should be understanding cause and effect and learning from that. Using process behavior charts deepens our learning, so we're not creating these false or incorrect assumptions of cause and effect. We can, I think, better determine that by using process behavior charts.

So I think, and I've seen it firsthand with KaiNexus, it's super helpful for a high-growth startup.

Really interesting.

Given the last year and a half, have you seen any interesting uses of this in the context of remote or distributed companies and teams? Out of curiosity.

We need to think about cause and effect and make sure we're not misleading ourselves. Sometimes somebody with an agenda will cherry-pick a data point or two to quote-unquote prove the assertion they're making or the hypothesis they're stating.

I'll use one quick example, because this was a real one: a hospital in Ohio where I taught process behavior charts, and they were using them very broadly. In the early days of the pandemic, March 2020, they were measuring the rate of employees out sick every day. They had that on a process behavior chart, and they had started to learn that the number of employees out sick jumping from 10 to 20 might just be noise within the process behavior chart, as the limits had been calculated.

At that point the virus had not reached that part of Ohio, so they were looking for a leading indicator. If you started seeing more employees out sick, and started seeing a signal on that chart, then you might say, oh, okay, now let's get people tested. Has the virus reached this part of Ohio?

Sorry, I'm taking a pause to think; there was a time when the virus had not yet reached everywhere. But they were using process behavior charts to, again, avoid wasting time trying to explain every up and down, and instead look for a signal, where then you've got to go test your hypothesis.

It could be HR data like that, it could be agile metrics that are popular, or it could be business metrics.

One of the insights that I really liked from your book is that this tool helps prevent a situation where people are blamed for things they have no control over.

Can you give an example or a story of where this kind of thing helps?

I've used this methodology, and sometimes it's an uphill battle. I've tried to introduce it, and I've used it with metrics in the business even when other leaders weren't interested in the method, because, one other reflection, process behavior charts are a solution to a problem that most people are either unaware of, or they don't realize is a problem.

They don't realize there's a better way. You can apply process behavior charts, for one, to time series data. If there's been a slight dip downward in sales, don't go fire the VP of sales. Don't make a knee-jerk reaction. Don't make a huge change to the system based off of just the appearance of noise. So be careful about that.

There is also a different use of process behavior charts where it's not time series data. It's an interesting application, and I'll leave it to the statistician Don Wheeler to prove it out statistically, because I've learned this from him and he's a good source. Say you are comparing people or teams or sites at a snapshot in time, a snapshot you could rank. I would hate to think about productivity metrics for developers, but if you had some productivity metric for a hundred developers, you basically use the same methodology. You randomize the order in which people appear; sorting by last name might be a perfectly fine random way. Then there may very well be an average level of performance, and even if every developer's performance in that snapshot is better or worse than average, if it's within those calculated limits, all of those developers' performance is being driven by the system in which they work.
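A sketch of that peer-group comparison (the helper name is illustrative; the 2.66 moving-range formula is the same one used for the time-series charts above):

```python
import random
import numpy as np

def peer_group_limits(values, seed=0):
    """Randomize the order of a snapshot across people, teams, or sites
    (any arbitrary order, like last name, works), then compute
    XmR-style limits across the group."""
    vals = list(values)
    random.Random(seed).shuffle(vals)
    x = np.asarray(vals, dtype=float)
    centre = x.mean()
    mr_bar = np.abs(np.diff(x)).mean()
    return centre, centre - 2.66 * mr_bar, centre + 2.66 * mr_bar

# Anyone whose metric falls inside the limits is performing within the
# same system; only points outside the limits suggest something, or
# someone, operating in a genuinely different system.
```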

Now, people can participate in improving the system. There are organizations that love ranking sites and teams. I've looked at data within the United States Veterans Health Administration healthcare system: you look at sites within a region. And NHS England does this; they love ranking hospitals.

And if you're in the bottom-performing group, you're on the naughty list, and then comes the extra attention and help. If you look at performance of these sites within a VA region, or you look at salespeople, a top performer in one snapshot, one month, one quarter, may well be a bottom performer the next period, because, you know what, it's a real-life version of the red bead game.

If people's performance over time, and their performance within a peer group, is within the same statistical realm, you can't start ascribing individual skill or individual effort to random variation in the system. And again, people don't like to think that the output of their work is, to some extent, random.

Now, if I was a salesperson who was just not making any sales calls and not doing my work, you would probably see me as an outlier on the chart. And if half of your salespeople were not doing any work, they're part of a system: what kind of system is hiring and not managing them, and what have you.

So again, whether it's time series or even comparisons within a group, we're trying to distinguish noise versus signals. Is there a change in the system, or is somebody somehow working in a different system, quote-unquote?

So talk to me about the podcasts. What are the differences between them?

Yeah, thanks. So the lean podcast, which I started in 2006, has got a clunky name, because I started a blog in 2005 called Lean Blog, at leanblog.org, and I started doing a podcast about a year later, when podcasting was all still very new. Originally I just called it the Lean Blog Podcast, which is fine.

It's not the name I'd choose today. I did tweak it a little bit, to Lean Blog Interviews, because it is an interview-style format that's an offshoot of my blog. There we talk about lean in different settings, and I have a lot of guests from manufacturing and a lot from healthcare. To me, software is not a core topic, but I invite people: Eric Ries has been a guest on the podcast a couple of times, and Jim Benson, who's very well known for the Personal Kanban methodology. The common theme is lean leadership, even if people are using slightly different names for it.

And then as a pandemic project, well, a lot of people wrote a book, and maybe I should have written another one, I decided to start a new podcast. I'm using my coffee mug with the logo here: it's called My Favorite Mistake. This is, I think, a broader business podcast where I interview people, entrepreneurs, CEOs of established companies, retired professional athletes, entertainers, people from all different types of work, who start off telling a story about something they consider to be their favorite mistake. The podcast isn't called My Biggest Mistake; that might be sad. Sometimes the favorite mistake is a big one, but a favorite mistake is big enough, meaningful enough to us, that it's something we remember, something we've learned from, something we've reflected on. And people share those stories.

And then we also talk about mistakes in their domain of work, or how we create an environment where it's safe for people to admit mistakes or to point out mistakes without naming, blaming, and shaming being the response. It's not a quote-unquote lean podcast, though some of my guests do come from that world; it's really a broader reflection on realizing we all make mistakes.

The key is recognizing them and learning from them, instead of being stubborn or reactionary or blaming in some way. So that's been fun. It's almost 120 episodes now, and it comes out twice a week, because originally I was going to do a weekly podcast, but I was surprised there were so many people willing to share their mistakes.

And people have been reflective. The guest in episode number one was Kevin Harrington, who was on the first season of the show Shark Tank, for people who watch that program. So that's been a lot of fun. And then the other podcast you mentioned is called Habitual Excellence.

That's a podcast I do on behalf of the firm Value Capture, focused on lean healthcare. That podcast is also interviews, mostly with people in healthcare: healthcare executives, healthcare leaders, advocates for patient safety improvement, people like that. It comes out every two weeks.

So on average, running these different podcasts, I'm putting out two episodes a week. To me, it's a lot of fun. Doing a podcast, as you might be experiencing, Luke, is great for networking. It's great to be able to meet and talk with people, have a conversation, and know, by the way, I'm going to share that with a few other people.

Yeah, great. It's been great fun. The book again is Measures of Success, and it's available in both paperback and Kindle formats. Listeners outside the United States can find it either through Amazon in your country, or the book is also distributed more broadly outside of Amazon.

But the only electronic book format is the Amazon Kindle format.

Okay, great. Thank you very much.
