Key Measurements in Implementing Andon
I recently received a question about measuring andon, which I’ve extracted here: Based on your experience, when a company is trying to implement an Andon system, what are the key measurements you would use to evaluate the effectiveness of the escalation and help system?
Ideas we have are:
- Number of pulls (healthiness of the culture to highlight problems)
- Response time vs planned response (effectiveness of the system design regarding span of coverage in escalation process)
- % of issues resolved at each level of escalation (technical capabilities of each level)
What other criteria have you seen, and what would you advise us to take into strongest consideration to ensure we have designed an effective issue escalation and response process?
My response is as follows.
First, let me state that there is a difference between a measure and an indicator. A metric is something you would be willing to state a goal for. An indicator is something you should just pay attention to so that you understand the current state. It might go up, it might go down, but you’re not necessarily going to see that as anything more than a change. I don’t see that many organizations trying to measure their andon systems, but that doesn’t mean that they shouldn’t, as I was recently visiting one company who had an andon system set up but I saw the andon light going off for what felt like forever, with no response apparently coming. At another company, they told me that their andon lights were “on order” as if they had not taken the big step required of ordering lights.
With that said, I believe the number of pulls is a good indicator but not a good measure. The number of andon pulls might go up, which means either that we have more problems or that we are doing a better job of actually pulling the cord when we're supposed to. Only direct observation of what is really happening in the process will help you know the difference. If andon pulls go down, maybe there are fewer problems, or maybe we're slipping on properly using the system. Therefore, pay attention to it, but don't set goals for it.
Response time is a crucial element of the process and can be controlled by a wide range of decisions and process designs. In general, faster is better, but what you really care about is the reliability of that connection. If the responder shows up sometimes and not others, that turns into distrust in the system. This isn't a bad measure, but I might suggest a better one: the standard deviation of the response time. If the response has low variation, it is reliable, even if it isn't that fast. If it is reliable, it can be trusted and it can be improved. But be careful: the mechanics of actually collecting this data may outweigh the benefits.
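The standard-deviation idea can be sketched in a few lines. This is a minimal illustration, not a prescribed tool: it assumes you can export responder arrival delays from whatever system logs your andon pulls, and the numbers below are made up.

```python
from statistics import mean, stdev

# Hypothetical andon log: minutes from cord pull to responder arrival,
# exported from whatever system records pulls (numbers are invented).
response_minutes = [2.0, 1.5, 2.5, 8.0, 2.0, 1.8, 2.2]

avg = mean(response_minutes)   # average response time
sd = stdev(response_minutes)   # sample standard deviation: the reliability signal

print(f"average response: {avg:.1f} min")
print(f"std deviation:    {sd:.1f} min")

# A low standard deviation means the response is reliable even if it
# isn't fast; one outlier (like the 8.0 above) inflates it and points
# to the kind of inconsistency that erodes trust in the system.
```

Note that a single slow response moves the standard deviation far more than it moves the average, which is exactly why variation is the better lens for trust.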
I don’t like the idea of measuring the resolution rate. Any trend here is likely to be read into too much. So many factors affect the level at which issues are resolved that I believe this would be more of a distraction than a useful measure.
I do think one useful measure is the ratio of andon pulls to actual quality events. Those quality events might be defects in the field, rework, scrap, or whatever best represents poor quality in your operation. Andon is only a filter for the quality problems and contributors to bad quality that it is supposed to catch. If poor quality goes up in general, you should see both andon pulls and defects increase. If the costs of bad quality start going up and your andon signals are not, then something is broken in the filter.
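Tracking that ratio over time might look like the sketch below. This is only an illustration under assumed inputs: the month labels and counts are hypothetical, and "quality events" stands in for whatever defect, rework, or scrap records your operation keeps.

```python
# Hypothetical monthly counts: (month, andon_pulls, quality_events).
# All numbers are invented for illustration.
monthly = [
    ("Jan", 120, 30),
    ("Feb", 110, 28),
    ("Mar", 60, 29),  # pulls fell but quality events did not: the filter may be broken
]

for month, pulls, events in monthly:
    ratio = pulls / events
    print(f"{month}: {pulls} pulls / {events} quality events = {ratio:.1f}")
```

A falling ratio with flat or rising quality events is the pattern to watch for: problems are still occurring, but the andon system is no longer catching them.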
Lastly, no measure or indicator will tell you half as much as being on the floor, in the process, observing how people are using the system. You need to test people’s understanding and use of the processes. You need to see the responders’ methods and capabilities.
What are your thoughts? How do you or have you measured the effectiveness of your andon process?