Vityl Dashboard: Creating Metrics for Your Metrics: The Importance of Measuring Data Quality
Successful business value dashboard (BVD) strategies do more than just collect and aggregate your business-wide IT data. BVDs also measure their own effectiveness in gathering that information, ensuring that you’re basing your decisions on high-quality data.
There’s no shortage of famous proclamations in history that turned out to be catastrophically wrong: “I think there is a world market for maybe five computers,” declared Thomas Watson, chairman of IBM, in 1943, according to Scientific American. If IBM’s current Watson had a face, I suspect it would smile at that prediction.
Such retrospectively absurd guesses, like the 1959 announcement that we were on the cusp of “rocket mail,” are common. But these decisions don't happen because the people making them are incompetent; rather, they happen because smart people can make farcically bad judgments when given bad information, according to Forbes.
Which is why bad data is such a pernicious threat to businesses. What if your metrics management strategy actually boiled down to guesswork? Indeed, many businesses can be cavalier about forecasting their futures with collected IT data, but that data is often wrong, no matter how much time, energy, and capital it took to gather.
As such, companies must be able to evaluate the quality of their data with metrics management if they want to guarantee any measure of success. There’s nothing worse than blaming yourself for a failure only to find out that it wasn’t your reasoning that was ungrounded; it was the information you based your decisions on.
BVDs are so valuable, in theory, because they offer deep insight and clarity across the entire span of IT to leaders at any enterprise level. They turn the everyman into an oracle.
But these devices must make sense of tens of thousands of non-linear data points across dozens of disparate areas, from the cost-efficiency ratios of servers and virtual machines, to the ability of processes to deliver reliable service. Even if you’ve got top-of-the-line software on your side, this range of relevant information can be problematic.
A vast number of small factors means more chances for small errors to be multiplied into large ones as data is aggregated, or worse, for unhelpful metrics to be selected in the first place. This kind of “snowball effect” is fairly common, and many dashboard teams try to avoid it by simplifying their metrics, the reasoning being that if data is digestible, it can be controlled.
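A toy sketch can make the snowball effect concrete. This is a hypothetical illustration, not a description of any real dashboard's internals: imagine a derived “end-to-end efficiency” metric computed as the product of 20 per-stage efficiency readings. If every reading is only 1% too optimistic, the errors multiply instead of canceling.

```python
# Hypothetical example: 20 pipeline stages, each with a true
# efficiency of 0.99, and each measured 1% too high.
stages = 20
true_eff = 0.99 ** stages               # true end-to-end efficiency
measured_eff = (0.99 * 1.01) ** stages  # every reading is 1% optimistic

# Relative error of the aggregated metric: the tiny per-stage
# biases compound into a roughly 22% overstatement.
rel_error = measured_eff / true_eff - 1
print(f"per-stage error: 1%, end-to-end error: {rel_error:.1%}")
```

The point is that simplifying the dashboard's display does nothing to stop this compounding; only checking the aggregated metric against reality does.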
But this choice is unwise, if not wasteful. What use are dashboards if they can’t exhaustively account for the complexity of your organization? Dashboard presentations are what should be bite-sized — not your data.
The most successful metrics management strategies assume that data will be wrong, at least some of the time. But by measuring its across-the-board predictions against actual results on a daily, weekly, or even yearly basis, a metrics system gains automatic, reflexive feedback about its own performance. This turns fallibility into a strength, allowing companies to tweak any number of suspect metrics.
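In code, that feedback loop can be as simple as comparing each metric's forecasts to observed outcomes and flagging the ones that drift too far. The function name, numbers, and tolerance below are invented for illustration; they are a minimal sketch of the idea, not any product's actual scoring logic.

```python
from statistics import mean

def score_metric(predictions, actuals, tolerance=0.05):
    """Flag a metric as suspect if its average relative error
    against observed outcomes exceeds the tolerance (here, 5%)."""
    errors = [abs(p - a) / a for p, a in zip(predictions, actuals)]
    avg = mean(errors)
    return {"mean_rel_error": avg, "suspect": avg > tolerance}

# Hypothetical weekly check: forecast CPU demand vs. what actually happened.
forecast = [62, 70, 68, 75]
observed = [60, 71, 80, 90]
report = score_metric(forecast, observed)
```

Run regularly, a check like this is the “metric for your metrics”: when a forecast comes back flagged as suspect, it is the measurement that gets investigated, not just the decision it fed.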
A good analogy for this is car-crash testing, which batters otherwise robust cars in many different and intense ways to expose condition-specific flaws before those weaknesses are carried into the real world. Naturally, those tests (the metrics for your metrics) will change as your objectives do, but by compiling a full spectrum of demonstrated failures, it becomes easier to predict where your data will falter and prevent any serious damage before it’s too late.
The benefits of this testing are clear: in TeamQuest’s case, decades of careful evaluation and refinement have enabled our recently released Vityl Dashboard to report on IT health and risk with a 95% accuracy rate. Still, we continuously probe for errors with automated predictive analytics, putting algorithms to work assessing their own flaws and failures.
In business, revealing those assumptions is the key to true insight. Indeed, Thomas Watson would probably have made a better prediction if he didn’t (understandably) assume that computers would always be the size of rooms. But rarely is bad data obvious in the present moment. For instance, Thomas Watson never actually made such a claim, according to Gizmodo. Always remember to check your data before you base any decisions on it!
(Main image credit: Wikimedia)