Saturday, August 22, 2015

How to Measure Anything (Douglas W. Hubbard)


"Anything can be measured.  If a thing can be observed in any way at all, it lends itself to some type of measurement method.  No matter how "fuzzy" the measurement is, it's still a measurement if it tells you more than you knew before."  So claims author Douglas Hubbard in his book How to Measure Anything.  He proceeds to lay out his method for finding the value of "intangibles" in business settings.

There's a lot of good in this book; it gave me many points to ponder.  It was dry and 'textbookish,' which wasn't always appreciated, and there was a lot of math that I skimmed over.  Still, the concepts presented are valuable.  I attempt to concisely recap the book below.

Rating: A-

------------------

Why are metrics such a hot topic?  Metrics are simply measurements, and "management cares about measurements because measurements inform uncertain decisions."  Hubbard argues that "for any decision or set of decisions, there are a large combination of things to measure and ways to measure them- but perfect certainty is rarely a realistic option . . . therefore, management needs a method to analyze options for reducing uncertainty about decisions."  A well-defined and executed measurement method is what's necessary, where a measurement is defined as "a quantitatively expressed reduction of uncertainty based on one or more observations."

Hubbard believes there are just three reasons why people think that something can't be measured:
1. Concept of measurement (the definition is misunderstood)
2. Object of measurement (the thing being measured is not well defined)
3. Methods of measurement (many procedures of empirical observation are not well known)
He also claims:
1. If it matters at all, it is detectable/observable.
2. If it is detectable, it can be detected as an amount (or range of possible amounts).
3. If it can be detected as a range of possible amounts, it can be measured.
Hubbard proceeds to lay out a 5-step process to measure anything:

1. Define the decision and the variables that matter to it.
A problem well stated is a problem half solved. - Charles Kettering
It's not uncommon to lose sight of why we want metrics in the first place- what are we trying to accomplish with them?  "What problem are you trying to solve with this measurement?"  A common understanding here is key.  Ambiguity will result in disaster.  "It is . . . imperative to state why we want to measure something in order to understand what is really being measured."  To start, "if a measurement matters at all, it is because it must have some conceivable effect on decisions and behavior."  Therefore, one should answer the following clarification questions:
1. What is the decision this measurement is supposed to support?
2. What is the definition of the thing being measured in terms of observable consequences?
3. How, exactly, does this thing matter to the decision being asked?
2. Model the current state of uncertainty about those variables.

Remember, measurements are about reducing uncertainty.  To reduce the uncertainty, you need some idea of the current level of uncertainty.  How much do we know now?  A fourth clarification question (continued from the previous section):
4. How much do you know about it now (what is your current level of uncertainty)?
Here, we need to establish a range for our measurement that is likely to contain the right answer.  It's a starting range, necessary for subsequent analysis.  We often know more than we think- we have an idea of range.  For example: we want to determine how many people actively use our software.  We don't know the exact amount (obviously), but based on downloads we know there are between 10,000 and 20,000 users.  That's a large spread, but it's much better than nothing.  "Initially, measuring uncertainty is just a matter of putting our calibrated ranges or probabilities on unknown variables."  By 'calibrated range,' we mean a 90% "Confidence Interval" (90% CI)- the range that has a 90% chance of containing the correct answer.  Hubbard discusses various ways to train or 'calibrate' employees to produce reasonable probability calibrations, which I won't elaborate upon here.  The idea is that "when you allow yourself to use ranges and probabilities, you don't really have to assume anything you don't know for a fact."
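As a sketch of this step in code (my own illustration, not a procedure spelled out in the book): a calibrated 90% CI can be treated as a normal distribution, using the standard fact that 90% of a normal's probability mass lies within about 1.645 standard deviations of the mean, so the CI width spans roughly 3.29 of them.  The software-users example above then becomes something we can sample:

```python
import random
import statistics

random.seed(42)  # reproducible sketch

# A calibrated 90% CI for active users, from the example above: a 90%
# chance the true count is between 10,000 and 20,000.
lower, upper = 10_000, 20_000

# Treat the range as a normal distribution: 90% of a normal's mass lies
# within +/-1.645 sigma, so the CI width spans ~3.29 standard deviations.
mean = (lower + upper) / 2
sigma = (upper - lower) / 3.29

# Monte Carlo samples of the uncertain user count.
samples = [random.gauss(mean, sigma) for _ in range(100_000)]

print(round(statistics.mean(samples)))  # close to 15,000
in_range = sum(lower <= s <= upper for s in samples) / len(samples)
print(round(in_range, 2))               # close to 0.90
```

Roughly 90% of the simulated values land back inside the stated range, which is exactly what "90% CI" promises.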

3. Compute the value of additional measurements.

We often focus on what's easy to measure vs. what's important to measure.  Remember, "if some information being requested has no bearing on decisions, it has no value."  A final clarification question (continued from previous sections) needs to be asked:
5. What is the value of additional information?
We want to obtain more information to reduce uncertainty (shrinking our 90% range and producing a clearer picture of the way to proceed).  But what information do we collect?  Where do we focus our efforts?  "If you don't compute the value of measurements, you are probably measuring the wrong things, the wrong way . . . understanding the value of information tells us what to measure and about how much effort we should put into measuring it."
There are really only three basic reasons why information ever has value to a business:
1. Information reduces uncertainty about decisions that have economic consequences.
2. Information affects the behavior of others, which has economic consequences.
3. Information sometimes has its own market value.
Hubbard presents his "Expected Value of Information" formula:
Expected Value of Information (EVI) = Reduction in expected opportunity loss (EOL)
In other words,
EVI = EOL (before information) - EOL (after information)
where
EOL = chance of being wrong × cost of being wrong
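In other words, a measurement has value when it reduces the chance of making a costly wrong decision.  A worked example with hypothetical numbers (mine, not the book's):

```python
# Hypothetical decision: approve a $400,000 project that only pays off if
# adoption is high.  All numbers below are illustrative, not from the book.
cost_of_being_wrong = 400_000  # loss if we approve and adoption is low
p_wrong_before = 0.40          # current chance the decision is wrong

# Expected opportunity loss = chance of being wrong x cost of being wrong.
eol_before = p_wrong_before * cost_of_being_wrong  # i.e. $160,000

# Suppose a measurement (say, a pilot study) would cut that chance to 10%.
p_wrong_after = 0.10
eol_after = p_wrong_after * cost_of_being_wrong    # i.e. $40,000

# The expected value of the information is the reduction in EOL, which
# caps what the measurement itself is worth paying for.
evi = eol_before - eol_after
print(round(evi))  # 120000
```

If a proposed measurement costs more than its EVI, it isn't worth doing; if it costs far less, it's a bargain.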

4. Measure the high-value uncertainties in a way that is economically justified.

Once we know the EVI (see the previous step), we have to measure the high-value uncertainties.
In business cases, most of the variables have an "information value" at or near zero.  But usually at least some variables have an information value that is so high that some deliberate measurement effort is easily justified.
To begin, we must recognize that "many measurements start by decomposing an uncertain variable into constituent parts to identify directly observable things that are easier to measure."  This has several advantages, and in some cases, we can stop after decomposition due to the
Decomposition effect: The phenomenon that the decomposition itself often turns out to provide such a sufficient reduction in uncertainty that further observations are not required.
After a problem has been decomposed into its simplest, observable parts, there are several techniques to actually perform a measurement.  Hubbard talks about various sampling methods, Bayesian analysis, the Monte Carlo method, and using secondary research (what others have done before) to get ideas for other techniques.  I'll talk more about some of these shortly.
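Decomposition pairs naturally with the Monte Carlo method: give each constituent part its own calibrated 90% range, sample each part, and read the combined uncertainty straight off the simulated totals.  A sketch with illustrative numbers (the variable names and ranges are my own, not the book's):

```python
import random

random.seed(0)  # reproducible sketch

def sample_from_ci(lower, upper):
    """Draw from a normal distribution whose 90% CI is (lower, upper)."""
    mean = (lower + upper) / 2
    sigma = (upper - lower) / 3.29  # a 90% CI spans ~3.29 standard deviations
    return random.gauss(mean, sigma)

# Hypothetical decomposition of "annual cost of downtime" into parts that
# are easier to put calibrated ranges on (all numbers are illustrative):
trials = 100_000
totals = sorted(
    sample_from_ci(2, 8)             # outages per year
    * sample_from_ci(1, 4)           # hours per outage
    * sample_from_ci(5_000, 20_000)  # cost per hour of downtime
    for _ in range(trials)
)

# Read the combined 90% CI on the total straight off the simulation.
print(round(totals[int(0.05 * trials)]), round(totals[int(0.95 * trials)]))
```

Each part is something someone in the organization can actually put a calibrated range on, which is the whole point of decomposing.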

It's important to remember that measurements, at their core, are observations.  There are several basic methods of observation that will result in meaningful measurements:
- follow its trail (the information has left footprints- you see them, and just have to follow them)
- use direct observation to start looking (the information has left footprints, but you don't see them- you have to find them, then follow them)
- add a tracer so it starts leaving a trail (the information has not left footprints, but could if you put sensors in certain places)
- create conditions to observe (an experiment) (the information has not left footprints, and won't 'in the wild'- so you have to create a controlled environment for observation)

Sampling

As mentioned above, Hubbard discusses several measurement methods.  I'll discuss only sampling in more detail, as that's of most interest to me.  What is sampling?  "In effect, sampling is observing just some of the things in a population to learn something about all of the things in a population."

"We may need only a very small number of samples to draw useful conclusions about the rest of the unsampled population."  For example, the Rule of Five states that "there is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population."  That's pretty cool!
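The rule works because each random draw has a 50% chance of landing above the median, so the median escapes the sample's range only when all five draws land on the same side of it: probability 2 × (1/2)^5 = 1/16 = 6.25%, leaving 93.75%.  A quick simulation, as a sanity check:

```python
import random

random.seed(1)  # reproducible sketch

# Any population works; use one whose median we know exactly.
population = range(100_000)
median = 49_999.5

# The median is inside [min, max] of a 5-sample unless all five draws
# land on the same side of it: probability 2 * (1/2)**5 = 1/16 = 6.25%.
trials = 100_000
hits = sum(
    min(s) < median < max(s)
    for s in (random.sample(population, 5) for _ in range(trials))
)

print(hits / trials)  # close to 0.9375
```

Note the rule says nothing about the shape of the population's distribution, which is what makes it so broadly useful.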

Sampling plays a large part in our lives, whether or not we realize it.  ". . . Everything we know from "experience" is just a sample.  We didn't actually experience everything; we experienced some things and we extrapolated from there.  That is all we get- fleeting glimpses of a mostly unobserved world from which we draw conclusions about all the stuff we didn't see."

Done well, sampling can tell us a lot and significantly reduce our uncertainty.
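As a small sketch of sampling in practice (the data here is made up for illustration): given just nine observations of how many minutes a day users spend in an app, a standard 90% t-interval gives a range for the population mean.  The critical value 1.860 is the usual table value for 8 degrees of freedom:

```python
import math
import statistics

# Nine hypothetical observations: minutes per day users spend in the app.
sample = [12, 7, 22, 15, 9, 18, 11, 25, 14]

n = len(sample)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean

# t critical value for a 90% CI with n - 1 = 8 degrees of freedom.
t_90 = 1.860

lower, upper = mean - t_90 * sem, mean + t_90 * sem
print(round(lower, 1), round(upper, 1))  # prints 11.1 18.5
```

Nine data points already shrink "no idea" down to a range about seven minutes wide, which illustrates the point: early samples buy the biggest reductions in uncertainty.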

Measuring Soft Concepts

How do you measure those intangibles that can be so difficult to quantify?  Things like "quality" and the like?  Here's an interesting concept: "Valuation, by its nature, is a subjective assessment."  Yes- many things we treat as objective (like monetary value, the gold standard, etc.) are based largely on human preference.  "All quality assessment problems . . . are about human preferences.  In that sense, human preferences are the only source of measurement."  So, how do we observe preferences?
Broadly, there are two ways to observe preferences: what people say [stated preferences] and what people do [revealed preferences].
Revealed preferences are often more valuable, and "two good indicators of revealed preferences are things people tend to value a lot: time and money."

5. Make a risk/return decision after the economically justified amount of uncertainty is reduced.

Hopefully, this should be the easy part.  If your measurement reduced uncertainty significantly, the way forward is a risk/return decision based on better information than you had initially.  "Keep the purpose of measurement in mind: uncertainty reduction, not necessarily uncertainty elimination."  The cool thing is that if you started with very little information (and thus had a lot of uncertainty), "you don't need much new data to tell you something you didn't know before [and thus to reduce uncertainty significantly]."

Measurements will not and cannot guarantee the way forward . . . but they can make the way reasonably plain.
