Truly understanding data requires more than looking at spreadsheets
By Theodore P.A. Haenlein
Too often we rely on densely populated spreadsheets to tell us what we really need to know about our business. This is especially critical for small businesses, since misreading the signals from data for even one day can prove fatal. So how do we really understand what our data is telling us?
Those who understand the need to track and measure performance over time will be a step ahead of mere single-date spreadsheet analysis. They will often set up a time-based run chart so they can look for performance trends in their key performance indicators. The problem is that the high or low points on a run chart almost demand a reaction from leadership. Often we respond without any knowledge of whether our reaction will be, or was, effective in changing the performance. We learn over time that, most often, there is no cause-and-effect relationship between what we think the data is telling us, how we respond to it, and the changes that may or may not appear in subsequent data points. Yet there is a simple way to become confident and knowledgeable in what the data is really telling you.
Scrap Levels of a Manufacturing Plant
An example might help to better understand the challenge for small business owners. Let’s imagine you are CEO and owner of a manufacturing plant and scrap levels are a critical performance indicator for your business. Too much and you are losing money, too little and you might be sending out poor quality product which will lose you customers and ultimately lose you money or even lose your business.
In reviewing your run chart of scrap level percentages, you see that:
1. In April of 2008, your scrap level is at an all-time low and so you present your employees with an award. They are not sure what they did to achieve that great result but happily accept the award.
2. In July of 2008, your scrap level is at an unacceptable high and you want to take back the award and begin doing a lot of grumbling to your leadership team.
3. In October 2008, your scrap percent is at an all-time high and you decide a team meeting is needed with some thoroughly “tough-love” manager wisdom being dispensed. Your team does not know what to do, so they do nothing but look really concerned and promise to work harder and better.
4. In May of 2009, you notice what appears to be a sustained trend that seems to be much lower than the last quarter of 2008 and you are feeling better and conclude that tough management works. You decide to keep the pressure on the team to keep the scrap levels from rising.
Understanding the Data
Now the fundamental question is whether anything you did in response to the data in the run chart actually caused any improvement or was even appropriate. In order to answer that question, we need to accept that a couple of things are required for actually understanding what the data is telling you.
Data Criteria. First, we must ensure that the data we are collecting meets all three of the following criteria: (a) the thing we are measuring is specific; (b) it is reliable; and (c) it is actually measurable. If your data does not meet all three of these characteristics, you should not go any further in trying to analyze whatever it is you are looking at.
Business Systems. As someone once said, “your business is perfectly designed to achieve the results you are getting.” That truism also means that any aspect of your business can be considered as an entity (system) that has a range of performance results. By understanding what is normal for the system, you can make decisions on responses that have a cause and effect on things that are impacting any non-normal results.
Statistical Process Control
Statistical process control tells us that about 99.7 percent of all normal results for a system fall within three standard deviations above and below the average performance of the data over time. You don’t need to understand statistics – you just need someone to calculate (you can do it in Excel with a simple formula) the values that are three standard deviations above and below the average of the data you are looking at (using at least 30 consecutive data points) and plot them on your run chart. These are called the “Upper Control Limit” (UCL) and “Lower Control Limit” (LCL).
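As a concrete sketch of that calculation – using invented scrap percentages, since the article’s actual data is not given – the limits can be computed in a few lines of Python, with the same arithmetic as the Excel approach:

```python
import statistics

# Hypothetical monthly scrap percentages -- illustrative values only.
# At least 30 consecutive data points are needed, per the text.
scrap_pct = [2.1, 1.8, 2.5, 1.2, 2.9, 2.0, 1.5, 2.7, 3.0, 1.1,
             2.2, 1.9, 2.4, 1.6, 2.8, 2.3, 1.4, 2.6, 2.0, 1.7,
             2.1, 2.5, 1.3, 2.9, 1.8, 2.2, 2.7, 1.6, 2.0, 2.4]

mean = statistics.mean(scrap_pct)
stdev = statistics.stdev(scrap_pct)  # sample standard deviation, like Excel's STDEV.S

ucl = mean + 3 * stdev  # Upper Control Limit
lcl = mean - 3 * stdev  # Lower Control Limit

print(f"Average: {mean:.2f}%  UCL: {ucl:.2f}%  LCL: {lcl:.2f}%")
```

Plotting these two values as horizontal lines on the existing run chart is all that changes visually; the interpretation is what changes dramatically.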
There are a number of easily understood indicators of “non-normal” performance that anyone can learn and look for in your data when displayed in this fashion. You notice a couple of things immediately:
1. The UCL and LCL are substantially above and below any of the actual data points. This is because there is a lot of variation in the process/system that produces scrap. That, in and of itself, is a key insight from this presentation of the data – one the run chart alone did not clearly show.
2. The UCL and LCL reflect the range of normal performance of the manufacturing processes and the scrap they produce. This range runs from just above 3 percent down to just below 1 percent. That means any result lying between the UCL and the LCL is a normal performance result that should be neither celebrated with awards nor punished with tough management.
3. There are no non-normal indicators in your data. That means that your processes are producing the exact results that you should expect them to for how they are designed. Anything within this range of results (UCL to LCL) is completely normal.
4. You can predict with a high degree of confidence that your manufacturing processes will continue to produce between about 0.9 percent and about 3.1 percent scrap. This is called a stable process. This predictability can be factored into your future sales projections, your operational capacity planning, and all your customer orders.
5. Likewise, you can now be confident that there are no non-normal problems to fix in your manufacturing process. Instead, if these boundaries of normal performance are too broad or too high, then you will need a major process transformation – not tough management.
6. Remembering that most employees are good people trying to do their best – it is clear that the performance is controlled by the system you have placed them in.
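A minimal sketch of how such “non-normal” indicators can be checked programmatically. It assumes the simple beyond-3-sigma rule plus one common run rule (eight or more consecutive points on the same side of the average); SPC references vary in exactly which rule sets they apply, and the function name here is my own:

```python
import statistics

def non_normal_points(data):
    """Return indexes of points flagged as non-normal: any point outside
    the 3-sigma control limits, plus any run of 8+ consecutive points
    on one side of the average (one common run rule; rule sets vary)."""
    mean = statistics.mean(data)
    sd = statistics.stdev(data)
    ucl, lcl = mean + 3 * sd, mean - 3 * sd

    # Rule 1: points beyond the control limits.
    flagged = {i for i, x in enumerate(data) if x > ucl or x < lcl}

    # Rule 2: sustained runs on one side of the average.
    run_side, run_start = None, 0
    for i, x in enumerate(data):
        side = "above" if x > mean else "below" if x < mean else None
        if side is not None and side == run_side:
            if i - run_start + 1 >= 8:
                flagged.update(range(run_start, i + 1))
        else:
            run_side, run_start = side, i
    return sorted(flagged)
```

On the scrap data described above, this function would return an empty list – which is exactly the point: nothing in the chart warrants a special reaction.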
You now have a lot more knowledge and understanding from the same data set, simply because you understand that your processes are a system that has a normal range of performance, which no amount of tough management on your people will improve. That is a level of wisdom at least one step higher than what you showed before.
Let’s look at a different example – one where the CEO of a large non-profit organization has an internal organization whose mission is to generate revenue by operating a temporary staffing operation.
In this example, again, understanding whether the process is stable is the critical question that tells you what the process is really doing. This understanding is especially critical when creating annual goals, because otherwise the risk of assigning unattainable goals is extremely high. That risk can lead to the team’s failure to buy in to the goals, which usually leads to failure to achieve them.
A Case Study: Temp Staffing
The actual revenue results from 2015 for the temp staffing internal organization were significantly higher than expected with unusually large temp placement volumes in August, September, and October of that year. From a process stability perspective, these results were shown to be statistically non-normal and the process was not stable during this period.
These stellar results had a severe negative impact on the process discipline of the team. They were forced out of their normal process steps and into three months of surge hiring activity. Fortunately, they had the cross-training to accomplish this. However, the surge activity dried up the November-through-January sales pipeline, producing dismal results from December 2015 through February 2016.
Because of this non-stable stellar performance, the organization’s leadership raised the 2016 revenue goal to $3.14 million based on the non-stable 2015 result of $2.4 million – an increase of 31 percent without any improvement in the process.
Not considering the non-normal nature of the 2015 results led to setting goals that the process was not capable of achieving. In addition, in developing the monthly sub-goals for 2016, the non-normal results for August, September, and October of 2015 became embedded in the new monthly goals – in effect “normalizing” non-normal results.
The impact on the Temp Staffing Sales Team was immediate and negative when they received their new yearly goals. They intuitively understood the flawed nature of this goal setting approach but could not communicate why it was flawed. Their professional experience led them to understand that the process as designed and operated was not going to achieve these goals.
Four years of sales revenue data were available but had not been used in setting the 2016 revenue goals. A stability analysis revealed that the three exceptional months of performance in 2015 were substantially above the 3-standard-deviation UCL and therefore non-normal.
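To illustrate the kind of screening that analysis involves – with invented monthly revenue figures, since the article does not publish the underlying data – non-normal months can be identified and excluded before computing a realistic goal baseline:

```python
import statistics

# Illustrative monthly revenue (in $ thousands): 45 "normal" months
# around $158K, then a three-month non-normal surge. These numbers are
# invented for the sketch; they are not the organization's actual data.
monthly_rev = [150 + (i % 5) * 4 for i in range(45)] + [340, 360, 350]

mean = statistics.mean(monthly_rev)
sd = statistics.stdev(monthly_rev)
ucl, lcl = mean + 3 * sd, mean - 3 * sd

# Months outside the control limits are non-normal and should not be
# baked into next year's goals as if they were repeatable.
outliers = [x for x in monthly_rev if x > ucl or x < lcl]
normal_months = [x for x in monthly_rev if lcl <= x <= ucl]
baseline = statistics.mean(normal_months)

print(f"Non-normal months: {outliers}")
print(f"Goal baseline from normal months: {baseline:.1f}")
```

The design point is the order of operations: test for stability first, then set goals from the normal range – rather than averaging everything, surge included, and treating the result as repeatable.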
Not taking stability and capability characteristics of the process into account when determining yearly goals created real conflicts and stress within the organization. The impact was significant and almost immediate.
Failure to consider the actual stability parameters of the process drove “aberrant behavior” on the part of the performing team as they attempted to achieve results that the process could not produce even under the best of circumstances.
Understanding the process stability would have allowed the CEO and senior leadership to:
1. Recognize non-normal stellar results and give critical recognition and support to the team.
2. Engage with the team in a grounded development of future goals.
3. Use the knowledge of these characteristics to drive deliberate improvement projects rather than forcing the impacted team to attempt to fix it themselves through extraordinary behaviors.
4. Prevent burnout of a critical revenue-generating team.
It is an essential management competency to understand and review the company’s processes as systems with inherent ranges of normal behavior and performance results. Using simple run charts with upper and lower control limits plotted on them lets you understand real performance at a much deeper and broader level than spreadsheets or plain time-based run charts alone. While the term “statistical process control” sounds intimidating, its usage is quite simple and powerfully revealing, and it allows senior leadership to focus on how the process/system is actually performing.
This results in a substantially higher probability that any action you take will address the real cause and effect behind the results you are seeking to improve, change, or even remove. This approach, even if it needs refining through multiple iterations, will be significantly more impactful than lording it over your people through uninformed tough management. Learning and applying a few simple statistical process control techniques now can prevent you from doing something ill-informed and damaging to your company and your people.
Theodore P.A. Haenlein, Lean Six Sigma Master Black Belt, is a Project Director at Cogent Analytics. He has worked with CEOs, owners, and leadership and operational teams across multiple industries to build ironclad processes for industry-leading results. Ted has led many Lean Six Sigma transformations and has mentored and coached at all levels of an organization. He has also developed several customized assessment programs for executives and practitioners that uncover current business issues and lead to effective gap and root cause analysis. Ted is an experienced Master Black Belt with over twenty years of experience in Lean Six Sigma deployment and project work.