A Case In Multiple Data Representations (or: How to Beat a Dead Horse)

The primary purpose of using data in decision-making is to ensure that the best outcome is achieved, driven by data and free from personal bias.  However, this is a very utilitarian perspective that doesn’t take into account the human factors involved in process change.  Because of these human factors (biases, fear of change, the need to “save face”, etc.), it can be beneficial to display your data in more than one way.  Presenting the same data through different lenses tells a more complete story, one that both reduces fear of change and defuses arguments against change that are rooted in personal bias.

So how do you tell your story?  Your data and the problem you are working to solve will guide you.  What is most important is that you understand the problem and that you understand a wide variety of tools that can help you translate your story into language that your audience will understand.

I will use one of my organization’s ongoing projects to illustrate what I’m talking about.  And clocks.  I’m going to use a lot of clocks.

Help! I’m trapped in a room full of clocks!

One of our machines is experiencing intermittent faults when removing the packaging from a sleeve of raw materials.  Multiple sleeves full of stacked metal disks (imagine them as long, thin paper rolls of coins, with one end of the sleeve folded over and glued closed) are loaded into a hopper and individually conveyed into a debagging machine.  This machine uses a razor blade to cut open the bags longitudinally just before a dual roller mechanism mechanically pulls the bags away.  This frees the disks to continue their journey into the manufacturing process and on to other machines.

The problem is that sometimes the sleeve isn’t completely removed from the disks, which results in a conveyor jam, followed by a machine fault, followed by excess manual intervention and slowed production.  Early project work discovered some preventive maintenance solutions, but they only reduced the frequency of occurrence; the root cause still remained.

In order to identify possible causes it was necessary to observe the process.  A hypothesis emerged proposing that the positioning of the sleeve’s “tail” (the part that is folded over and glued closed) was a contributing factor.  It made sense to test this: the problem was intermittent, the tail was positioned randomly with each load into the debagger, and it was conceivable that the tail might be difficult for the razor to cut.

The problem with this proposed root cause is that the most likely resolution is expensive (it involves reengineering the debagging machine so that it can sense the sleeve’s tail and automatically reposition the sleeve to the ideal tail position).  Simple observation would not be enough to convince leadership to approve that expense; we needed hard proof that would speak to everyone in the conversation and remove any doubt as to the true root cause.  “5 Whys” and 6M simply wouldn’t be enough.

So began our hypothesis testing.  We established a null hypothesis stating that the position of the sleeve’s tail had no effect on the debagging process, and performed observations to determine whether we could reject it.

We conducted 320 observations over two days, during which 32 fault conditions were observed.  Each fault was recorded along with the position of the sleeve’s tail, referenced to a clock face.  Some sleeves were partially torn or otherwise damaged before being loaded into the debagger; those conditions were recorded categorically.
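
If your observation log lives in a spreadsheet or CSV, the tally behind a table like the one below is only a few lines of Python/pandas.  This is a minimal sketch; the file name and column names (tail_position, damaged, faulted) are assumptions for illustration, not our actual log format.

```python
import pandas as pd

# Hypothetical observation log: one row per sleeve run through the debagger.
# Assumed columns: tail_position (clock-face value), damaged (bool), faulted (bool).
obs = pd.read_csv("debagger_observations.csv")

# Tally faults by tail position, with damaged sleeves counted as their own category
faults = obs[obs["faulted"]].copy()
faults["category"] = faults["tail_position"].astype(str)
faults.loc[faults["damaged"], "category"] = "damaged sleeve"

fault_counts = faults["category"].value_counts().sort_index()
print(fault_counts)
print(f"Observations: {len(obs)}, faults: {len(faults)}")
```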

Faults Recorded in Reference to a Clock Face

An initial look at the data suggests that we can reject our null hypothesis.  A graphical representation provides further support:

Faults Broken Down By Cause

The concentration diagrams above show the location of the sleeve’s tail for each fault condition.  The first diagram maps every fault that was observed.  The second removes the faults associated with damaged or torn sleeves.  The third shows only those faults associated with damaged sleeves (regardless of tail position).
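
If you want to draw a clock-face concentration diagram like these without specialty software, matplotlib’s polar axes get you most of the way there.  A rough sketch, with placeholder tail positions standing in for real observations:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder tail positions (in clock hours) at which faults occurred
fault_positions = np.array([12.0, 12.0, 11.5, 1.0, 12.0, 11.0, 12.5, 1.0])

# Convert clock hours to polar angles: 12 o'clock at the top, increasing clockwise
theta = np.mod(np.pi / 2 - 2 * np.pi * (fault_positions % 12) / 12, 2 * np.pi)

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.scatter(theta, np.ones_like(theta), s=80, alpha=0.4)  # one marker per fault
ax.set_yticks([])                                        # radius carries no meaning
hour_angles = np.mod(np.pi / 2 - 2 * np.pi * np.arange(12) / 12, 2 * np.pi)
ax.set_xticks(hour_angles)
ax.set_xticklabels(["12"] + [str(h) for h in range(1, 12)])
ax.set_title("Sleeve-tail position at each fault")
plt.show()
```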

These diagrams appeal to a concrete/visual/“common sense” audience.  It is easy to see the association between tail position and faults.  It is also easy to temporarily set aside concerns about damaged sleeves (an issue to be addressed further upstream in the value chain).

But what about our audience members who are more data-driven?  How can we tell a story that evokes emotion and initiates action?

Look! A Pareto Chart!

This Pareto chart is good, but it isn’t great.  It tells part of the story but doesn’t really drive the message home.  The image isn’t convincing; it doesn’t say anything that the concentration map doesn’t already provide.  We could drill down deeper into our “long pole” (the 12.0, or 12 o’clock, position) to identify a major cause, but we didn’t collect enough data to support that analysis.
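
If you’d rather build the chart in code than in a stats package, a Pareto chart is just a sorted bar chart plus a cumulative-percentage line.  Here’s a minimal Python sketch; the counts are placeholders, not our observed faults.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder fault tallies by tail position (illustrative values only)
counts = pd.Series({"12.0": 13, "11.5": 6, "1.0": 4, "damaged": 4, "other": 2})

# Pareto chart: bars sorted descending, cumulative percentage on a second axis
counts = counts.sort_values(ascending=False)
cum_pct = counts.cumsum() / counts.sum() * 100

fig, ax_bar = plt.subplots()
ax_bar.bar(counts.index, counts.values)
ax_bar.set_ylabel("Fault count")

ax_line = ax_bar.twinx()
ax_line.plot(counts.index, cum_pct.values, color="tab:red", marker="o")
ax_line.set_ylabel("Cumulative %")
ax_line.set_ylim(0, 110)

ax_bar.set_title("Debagger faults by sleeve-tail position (Pareto)")
plt.show()
```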

So how can we better tell the story?  Let’s look back at what we are trying to do: we are deciding whether we can reject our null hypothesis, which states that there is no correlation between sleeve-tail position and fault occurrence.  Sorting our fault percentage data from lowest to highest:*

Sorted by Fault Count

*Sorting the data in this case does not modify the content; it simply makes it easier to understand visually.  There is no obscuring or misdirection in doing this; the correlation test results are the same with either sort order.

We are now able to draw a simple scatterplot to provide a visual display of the correlation:

Connect the dots (just like being a kid again!)
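
Here’s a rough sketch of that sort-then-plot step in Python, assuming (as a stand-in for our real table) a handful of placeholder fault percentages keyed by tail position:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder fault percentages by tail position (not the actual study values)
fault_pct = pd.Series({"6.0": 0.9, "3.0": 1.6, "9.0": 2.2, "11.0": 6.3, "12.0": 9.4})

# Sort from lowest to highest, then scatter the values against their sorted rank
sorted_pct = fault_pct.sort_values()
rank = range(1, len(sorted_pct) + 1)

plt.scatter(rank, sorted_pct.values)
plt.plot(rank, sorted_pct.values, linestyle="--", alpha=0.5)  # "connect the dots"
plt.xticks(list(rank), sorted_pct.index)
plt.xlabel("Tail position (sorted by fault %)")
plt.ylabel("Fault %")
plt.title("Sorted fault percentages")
plt.show()
```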

This looks like grounds to reject our null hypothesis; tail position appears to have a strong positive correlation with fault occurrence.  However, let’s run a simple correlation test just to make sure:

Correlation Test Results

As the saying goes…  “if the ‘p’ is low, the null must go”!  And with a correlation of 0.933, it looks like we have found a major contributor to our machine faults!
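
If you don’t have a stats package handy, a comparable check is a single call in Python.  This sketch assumes a Pearson correlation between the sorted rank and the fault percentages, using placeholder numbers rather than the values that produced our 0.933:

```python
from scipy import stats

# Placeholder data: sorted rank of each tail position and its fault percentage
# (stand-ins, not the values behind the r = 0.933 reported above)
rank = [1, 2, 3, 4, 5]
fault_pct = [0.9, 1.6, 2.2, 6.3, 9.4]

# Pearson correlation and its p-value; a small p means we reject the null
# hypothesis of "no correlation" -- "if the p is low, the null must go"
r, p = stats.pearsonr(rank, fault_pct)
print(f"Pearson r = {r:.3f}, p-value = {p:.4f}")
```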

To borrow from the disturbingly popular phrase…  We’ve found that there are indeed quite a few ways to skin a cat.  But I’m sure there are more: What other ways would you use to visually describe the data from this hypothesis test?  More importantly: why did I feel the need to harm both a horse and a cat with this post?  Leave a comment below and share your thoughts and ideas!
