November 11, 2014

    By Vernon Johnson

    Here’s the question: how can IT leaders take existing business information and make better-informed, more rapid decisions that let them truly cost- and performance-optimize their entire infrastructure? Because at the end of the day, the use cases of IT are always gonna be different from the use cases of business in the context of analytics.

    Business use cases of analytics are things like: “We want to roll out a new marketing campaign in a new geography, and we want to understand what a reasonable expectation of sales penetration will be, based on past campaign behaviors in similar demographics. We can correlate it with past sales activity and demographics, and forecast that demand for our products or services will change by X amount.” So it’s all business metrics, right? Business people are never going to understand how to optimize the performance and capacity of IT. They aren’t wired to think that way! They don’t - directly - care. IT is a resource; they care about price and performance. Period.
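
    As a toy illustration of that business use case (all names and numbers below are invented, and a real model would also weight demographic similarity, seasonality, and more), one could fit past campaign spend against observed sales lift and extrapolate for a planned campaign:

        import numpy as np

        # Hypothetical history: campaign spend (in $k) vs. observed sales lift (%)
        past_spend = np.array([50.0, 80.0, 120.0, 200.0])
        past_lift = np.array([2.1, 3.4, 4.8, 7.9])

        # Ordinary least-squares fit: lift ~= slope * spend + intercept
        slope, intercept = np.polyfit(past_spend, past_lift, deg=1)

        planned_spend = 150.0  # $k budgeted for the new geography
        forecast_lift = slope * planned_spend + intercept
        print(f"Forecast demand change: ~{forecast_lift:.1f}% at ${planned_spend:.0f}k spend")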

    So, they want IT’s resource costs to be optimized in support of their business… they just aren’t going to do the work. And, of course, they still require IT to ensure a successful end-customer experience, performance-wise! IT are still the people who are going to do it - the people who have to decide how to set up configurations, how much is needed and when, what rules to put in place around configuration automation, decommissioning, and all of that. Performance optimization: how do you optimize throughput and response time, and how do you do it cost-effectively? All of that is still, at the end of the day, somebody in IT making a decision, and that decision has to be informed by a bunch of data.

    I’m suggesting we need to more fully close the loop - it can’t just be about putting IT metrics into general-purpose analytics… we need to define ITOA even more broadly. It’s more than just IT informing the business. It’s the business informing IT. It’s closing that circle. And by closing that circle, we enable a continuous improvement process that spans both business and IT. I’m waving my hands here, but it’s funny, because business analytics tools have been around for a long time.

    General-purpose analytics tools are designed to answer general-purpose business questions, and while they can potentially be integrated with IT metrics to better inform business decisions, they’re not designed around the IT use case: how do I actually discover the root cause of a service-impacting performance or capacity issue? When will it happen, on which resource(s), and how do I best prevent it cost-effectively?
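
    To make the “when, and on which resource” question concrete, here is a deliberately naive sketch: fit a straight-line trend to each resource’s recent utilization and solve for the day it crosses a saturation threshold. Resource names and numbers are hypothetical, and real predictive analytics go far beyond linear extrapolation:

        import numpy as np

        THRESHOLD = 0.85  # utilization beyond which service impact becomes likely
        HORIZON = 30      # only flag breaches predicted within this many days

        # Hypothetical daily average utilization per resource
        history = {
            "db01-cpu":   [0.52, 0.55, 0.59, 0.61, 0.66],
            "san02-iops": [0.70, 0.71, 0.70, 0.72, 0.71],
            "web03-mem":  [0.40, 0.48, 0.55, 0.63, 0.71],
        }

        for resource, util in history.items():
            days = np.arange(len(util))
            slope, intercept = np.polyfit(days, util, deg=1)
            if slope <= 0:
                print(f"{resource}: flat or falling, no predicted breach")
                continue
            days_out = (THRESHOLD - intercept) / slope - days[-1]
            if days_out > HORIZON:
                print(f"{resource}: no breach predicted within {HORIZON} days")
            else:
                print(f"{resource}: predicted to cross {THRESHOLD:.0%} in ~{days_out:.0f} days")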

    Another complication, and one growing quickly, is that IT resources are becoming ever more abstracted - through server, storage, and network virtualization, but also via various cloud approaches… Finally, everything is also getting much more dynamic, with configurations constantly changing. These realities make traditional approaches to IT management - typically based on resource utilization or availability - increasingly obsolete and ever more difficult to scale in scope and reach.

    As one example, we can use our TeamQuest Performance Indicator (TPI) or our TeamQuest Risk Predictor (TRP) as scalable “proxies” for true business service performance, and use them to inform a set of automated IT-facing analytics that walk the tree to figure out which resource either is broken or is gonna break - and when it’s gonna break such that it will impact service. A business analytics tool being fed by IT is never gonna do that! “What kind of proxy is that?”, you ask. Simple, the best one there is: one based on how much work is getting done by your IT resources, and how efficiently and how fast it’s getting done. If your transactions are never waiting for an IT resource, then by definition they cannot perform any better! And if and when they are waiting, you need to know when they will wait, and on what resource. TPI and TRP answer those questions. Quickly. Scalably. Automatically.
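
    TPI and TRP are TeamQuest’s own analytics, so the sketch below is not their algorithm - it only illustrates the queueing intuition behind such a proxy. In a simple M/M/1 model, response time R = S / (1 - U): pure service time S plus queueing delay. When nothing waits, R equals S, and the transaction literally cannot go faster on that resource:

        def performance_proxy(service_time: float, utilization: float) -> dict:
            """Score of 1.0 = zero queueing delay; falls toward 0 as waiting grows."""
            if not 0.0 <= utilization < 1.0:
                raise ValueError("utilization must be in [0, 1)")
            response = service_time / (1.0 - utilization)  # M/M/1 response time
            return {
                "response_time": response,
                "waiting_time": response - service_time,  # time queued, not working
                "proxy_score": service_time / response,   # equals 1 - utilization
            }

        # A CPU serving requests in 10 ms at 75% busy: three quarters of each
        # transaction's time is now queueing rather than useful work.
        print(performance_proxy(service_time=0.010, utilization=0.75))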

    Other examples would be to inform such automated, predictive IT analytics with additional customer-experience metrics, such as end-user response time and transactional KPI counts; with cost metrics such as resource costs (both CapEx and OpEx)… and with DCIM information about power consumption and cost, floorspace usage, cooling utilization and capacity, and so forth… the list is virtually endless. It should be about having as much information as possible available to inform decisions about IT resource performance and capacity, in the hands of the people making those decisions.
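
    As a hedged sketch of what “all of it in one place” might look like, here is a single record combining experience, business-KPI, cost, and DCIM feeds; every field name and figure below is hypothetical:

        from dataclasses import dataclass

        @dataclass
        class ResourceSnapshot:
            name: str
            response_time_ms: float   # end-user experience
            transactions_per_hr: int  # business KPI throughput
            capex_per_hr: float       # amortized hardware cost, $
            opex_per_hr: float        # power, cooling, floorspace, licenses, $
            power_kw: float           # DCIM feed

            def cost_per_1k_transactions(self) -> float:
                return 1000 * (self.capex_per_hr + self.opex_per_hr) / self.transactions_per_hr

        snap = ResourceSnapshot("app-cluster-A", 180.0, 42_000, 3.10, 1.75, 12.4)
        print(f"{snap.name}: ${snap.cost_per_1k_transactions():.2f} per 1k transactions")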

    At the end of the day, the traditional IT disciplines of performance analysis and resource planning have to adapt to this very new, very abstracted, and very dynamic technological reality. We have to make IT management and optimization more business-aware! We have to feed the results of any business analytics process into an IT analytics process to truly cost- and performance-optimize IT resources in support of the business. We call this IT Service Optimization (ITSO). And, while certainly not required, in a perfect world it will go both ways. It’s almost a Six Sigma, Deming-like closed-loop continuous improvement methodology. The more business learns about IT, the better the business decisions that get made, and the better the business forecasts. You can then take those better business metrics and forecasts and feed them into IT resource performance and forecasting decisions, configuration automation rules, and so on. Win-win!
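
    Sketching that hand-off in the simplest possible terms: take the business side’s demand forecast and turn it into a capacity decision on the IT side. This assumes utilization scales linearly with transaction volume, a deliberate simplification:

        def capacity_needed(current_util: float, demand_growth: float,
                            target_util: float = 0.70) -> float:
            """Capacity multiplier needed to keep utilization at the target ceiling."""
            projected_util = current_util * (1.0 + demand_growth)
            return max(1.0, projected_util / target_util)

        # Marketing forecasts a 25% demand bump from the new campaign; the cluster
        # already runs 60% busy. How much capacity keeps us under a 70% ceiling?
        factor = capacity_needed(current_util=0.60, demand_growth=0.25)
        print(f"Provision {factor:.2f}x current capacity before the campaign lands")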

    Category: cloud-computing