Forrester Research analyst James Staten highlighted the fact that businesses today tend to focus too much on adjusting to what they know is changing in their market when they should be investing far more energy into adjusting to what they don’t know. Consequently, business visibility is limited as strategy is guided by a small set of CRM and sales data.
Yet there is a whole world of data that is largely unknown to the business, such as how well various services are performing and why customer experiences are poor. This information is often being tracked by IT in logs and files, but is either unknown or ignored by decision makers.
Staten, therefore, considers that business decisions today are based on a tiny amount of data compared to what is possible. He characterized this as a pea compared to an entire planet in terms of data available.
Forrester Research findings show 2.7 zettabytes of data are currently available (the equivalent of more than 4 billion years of music), and that amount doubles every two years. Ignoring this treasure trove of potential information is an expensive proposition: according to the Data Warehouse Institute, bad decisions made on limited data cost businesses $600 billion per year.
Why isn’t that data leveraged? There are several reasons. In the first place, it’s a mess. While the information may exist in IT, it can’t be imported into data analysis tools or aggregated easily. The data sits in disparate systems that don’t talk to each other, there is no standardization, and it isn’t easily presented or understood because it hasn’t been massaged or cleaned.
Despite these shortcomings and challenges, Forrester studies demonstrate that executives are prioritizing the use of data analytics to improve business decision making over the next 12 months. As a priority, it even comes before improving IT project delivery and budget performance. In other words, executives care more about better decision making than being on budget – and it’s obvious to anyone who has been in IT more than a few weeks that they care an awful lot about budget.
Mobility and security also rank as lower priorities than harnessing analytics to upgrade decision making. The Forrester survey, therefore, suggests that any IT department working overtime to put smartphones in every employee’s hand might be better served by paying more attention to analytics.
Take a look at what existing IT data can do to better inform the business. Current logs and databases are already collecting data that shows when and where customer experience breaks down, where inefficiencies lie in key business processes, when and where performance problems really affect the bottom line, and when legal issues arise (and how to prevent them).
IT metrics can help find root causes that result in customer dissatisfaction. Such data, for example, is recorded about customer interactions via the web, mobile apps and instant message chats. But sometimes the orientation of IT in this regard is in the wrong area (i.e. IT zeroes in on how the Oracle system is performing rather than looking deeper to determine if customers are loyal and will come back). The latter is a far more valuable statement to make to business executives.
Being able to provide that insight takes knowing the data well, being comfortable with it and understanding the context. Armed with that knowledge, IT is better able to find problems that matter and ignore those that don’t.
For example, almost every organization has its own collection of aging servers and applications. This old code may be needed to keep certain systems alive and kicking. However, much of it may not be mission critical. Yet some IT departments get so mired in internal metrics that they waste valuable time slaving to keep these systems up as they fail so often. A broader look by IT at true business metrics would lead to these systems being assigned a lower priority.
The key here is to aggregate IT metrics against business process workflows in order to help the business identify any breakdowns in processes, how to deliver more benefits to the customer, and how to catch issues proactively before they reach the customer. Yes, raw performance is closer to the comfort zone of IT. But IT greatly expands its value by seizing opportunities to improve workflows and heighten the customer experience.
A California healthcare provider, for example, unified business and IT data into a central system for Business Intelligence (BI) purposes. As well as making analysis and reporting easier, this increased emergency room efficiency, streamlined the flow of pharmaceuticals through the facility and helped ensure the right patient received the right treatment at the right time. This achievement required a simple and effective way to organize files, differentiate between data sources and integrate them. But the result was unified data across hundreds of IT systems to improve hospital process efficiency and patient care.
The term analytics is, of course, a generality. The most popular types of IT analytics are:
Management tools are notorious for generating a flood of events that may or may not be useful. The vast majority usually turn out not to be. Correlation tools process these binary events to filter out the false alarms from the stream and yield qualified events that are relevant and should trigger action.
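The correlation idea can be sketched in a few lines. This is a deliberately simplified illustration, not any vendor’s algorithm: the event fields (`source`, `type`) and the recurrence threshold are assumptions made for the example.

```python
from collections import defaultdict

def correlate(events, min_count=3):
    """Group raw events by (source, type); emit one qualified event
    per group that fires at least min_count times, filtering one-off
    noise out of the stream."""
    counts = defaultdict(int)
    for e in events:
        counts[(e["source"], e["type"])] += 1
    return [
        {"source": src, "type": typ, "count": n}
        for (src, typ), n in counts.items()
        if n >= min_count
    ]

raw = (
    [{"source": "db01", "type": "cpu_high"}] * 5   # recurring condition
    + [{"source": "web02", "type": "disk_full"}]   # one-off false alarm
)
qualified = correlate(raw)  # only the recurring db01 event survives
```

Real correlation engines add time windows, severity, and topology awareness, but the principle is the same: collapse a flood of raw events into a short list worth acting on.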
Topology analytics maps relationships among the components involved to narrow the scope of processing and to evaluate upstream and downstream impacts. These relationships are key to a multitude of analytics needs.
This type of analytics maps cause-effect phenomena. Take the example of a mobile app that needs to look at an online catalog. There are four or five systems involved (perhaps more) in making an order. Therefore, it is important to understand how the apps talk to each other and where they connect in the workflow. If the system as a whole is performing poorly, you can trace what is downstream that is impacting the user.
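The downstream trace described above amounts to a graph traversal over the dependency map. Here is a minimal sketch; the service names and dependency graph are invented for illustration.

```python
def downstream(graph, start):
    """Return every service reachable from `start` via its
    dependencies, i.e. everything downstream whose degradation
    could impact the user of `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Hypothetical order workflow: mobile app -> gateway -> services -> DB
deps = {
    "mobile_app": ["api_gateway"],
    "api_gateway": ["catalog", "orders"],
    "catalog": ["database"],
    "orders": ["database"],
}
impacted = downstream(deps, "mobile_app")
# A slow database shows up in this set, even though the user only
# sees "the mobile app is slow".
```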
Statistical pattern analytics infers the existence of relationships where explicit (topological) relationships are either weak or missing. Statistically, this approach compares performance patterns to identify common behaviors and implicit relationships. Simpler forms identify anomalies from established patterns of normal behavior. And normal patterns are deduced from historical behavior.
Statistical analytics uncovers performance patterns. For example, a primary service has a particular performance pattern and the analyst detects a similar graph from amongst dozens of related services to isolate where the problem lies.
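A simple form of this approach can be sketched with a baseline-and-deviation check. The data, the z-score threshold, and the metric (response time) are assumptions for the example, not a description of any specific product’s statistics.

```python
import statistics

def anomalies(history, recent, z_threshold=3.0):
    """Flag recent samples that fall more than z_threshold standard
    deviations from the historical mean — i.e. departures from the
    'normal' pattern deduced from historical behavior."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return [x for x in recent if abs(x - mean) > z_threshold * stdev]

# Historical response times (ms) establish the normal pattern.
baseline = [102, 98, 101, 99, 100, 103, 97, 100]
flagged = anomalies(baseline, [101, 99, 180])  # only 180 is anomalous
```

Production systems use richer models (seasonality, multi-metric correlation), but the core move is the same: learn normal from history, then flag what deviates.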
Textual pattern analytics sifts through a stream of textual data such as log files to find patterns that can be used to identify conditions and behaviors overlooked by more traditional numerical collection technologies.
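A toy version of textual pattern analytics is a regular-expression scan over log lines. The log format and error signature below are hypothetical, chosen only to show the technique.

```python
import re

# Hypothetical error signature: a component reporting timeouts.
PATTERN = re.compile(r"ERROR\s+(\w+):\s+timeout after (\d+)ms")

def scan(lines):
    """Return (component, timeout_ms) pairs for every log line that
    matches the pattern — a condition numeric counters would miss."""
    hits = []
    for line in lines:
        m = PATTERN.search(line)
        if m:
            hits.append((m.group(1), int(m.group(2))))
    return hits

log = [
    "2024-01-05 12:00:01 INFO checkout: order placed",
    "2024-01-05 12:00:02 ERROR payments: timeout after 5000ms",
    "2024-01-05 12:00:07 ERROR payments: timeout after 5200ms",
]
hits = scan(log)  # two payment timeouts surface from the noise
```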
Configuration analytics analyzes configurations for discrepancies from adopted policies. This is useful for identifying systems and services that violate standard configurations. It is also valuable when it comes to changes and policy violations. For example, a policy template can be developed that checks configurations.
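The policy-template idea can be illustrated as a diff between each system’s configuration and the adopted standard. The policy keys and values here are invented examples.

```python
# Hypothetical policy template: the configuration every server
# is expected to comply with.
POLICY = {"ssl_enabled": True, "min_tls": "1.2", "root_login": False}

def violations(config):
    """Return the policy keys (and the offending values) where
    `config` deviates from POLICY."""
    return {
        key: config.get(key)
        for key, expected in POLICY.items()
        if config.get(key) != expected
    }

server = {"ssl_enabled": True, "min_tls": "1.0", "root_login": True}
bad = violations(server)  # flags min_tls and root_login
```

Run against an inventory of systems, a check like this surfaces every server that has drifted from standard configuration.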
Economic modeling analytics examines supply and demand factors to determine optimum utilization of resources.
To be able to implement analytics effectively, IT needs to change. The current silo orientation of IT means that finger pointing is inevitable among storage, server, network and application teams. If a situation is always someone else’s problem, then little gets resolved. Therefore, a unified look at IT is necessary based on a common data set, a common reporting system and unified data access.
A good example of this is a consumer goods maker that set up an internal data portal. Everyone in IT and within the organization has to go to the portal to create and manage their own reporting. Dashboards are pre-built so that executives can choose what they need and create standardized reports. Once created, these reports and templates are available to everyone else in the company.
The first place to start is to unify data. This leads to the centralization of analysis and being able to view issues by role. This also makes it easy for IT to build business-relevant reports.
The next step is to map the business to IT systems and core business processes. Start with key business processes which will demonstrate to the business how IT processes impact revenue. With that established, you can map business efficiency and then hand the business a valuable tool for their own analysis.
While the section above focused on how IT already has much of what the business needs, there is another side to the coin: the business may already have what IT needs. There is plenty of data available in the business that IT ignores for various reasons.
TeamQuest tools can help IT overcome these obstacles and deliver what the business needs and wants, allowing IT to become a trusted strategic partner. A couple of new tools have been designed with this specific purpose.
TeamQuest Performance Indicator is a means of managing the performance that matters. Instead of managing all servers whether mission critical or not, for example, you take a workload and service view so you only pay attention to those server issues that are impacting the important applications and workloads within the enterprise.
TeamQuest Performance Indicator runs analytic queuing models against performance data at the workload, virtual server and physical server levels to produce a metric that answers whether you are queuing on a resource and, if so, in which area (CPU, IO, etc.). Remember that utilization alone doesn’t matter if you are not queuing: a perceived utilization issue that causes no queuing is not actually impacting anyone at the user or business level.
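A back-of-the-envelope queuing formula shows why utilization alone misleads. This is textbook M/M/1 queuing math, not TeamQuest’s actual model: mean response time is R = S / (1 − U), where S is the pure service time and U is utilization, so queuing delay (R − S) stays small until utilization gets high.

```python
def response_time(service_time, utilization):
    """M/M/1 mean response time for a given utilization (0 <= U < 1)."""
    return service_time / (1.0 - utilization)

S = 10.0  # ms of pure service time per request (illustrative value)
for u in (0.30, 0.60, 0.90):
    r = response_time(S, u)
    print(f"U={u:.0%}: response={r:.1f}ms, queuing delay={r - S:.1f}ms")
```

At 30% utilization the queuing delay is about 4 ms (no one notices); at 90% it is 90 ms (everyone notices). A server at moderate utilization with no queuing is simply not a problem worth chasing.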
Instead of emphasizing IT metrics, TeamQuest Performance Indicator manages and views everything on a workload basis. Once queuing is detected, it automatically performs an analysis. Through this tool, IT can automate proactive latency analytics, exception-based analytics, queuing root cause analysis (RCA), and holistic latency RCA on servers, workloads, VMs and storage.
Case in point: A large insurance company making a multi-year transition to a dynamic grid and virtualized infrastructure used TeamQuest Performance Indicator to improve resource utilization by over 400%. As a result, it saved $25 million over three years while maintaining SLAs.
Another example: A large financial services corporation deferred a massive, planned data center build out by 18 months which saved $20 million. Meanwhile the same staff manages five times more servers.
Another new capability with strategic business value is TeamQuest Risk Prediction, which aims to assure future performance. With this tool, you can know how long until response time suffers, when you will hit a bottleneck, and which resource will be involved.
Basically, it is exception-based latency prediction by workload, VM, system and more, combined with auto-predicted queuing analytics. TeamQuest Risk Prediction helps you discover what you don’t know by showing where to focus your attention to avoid future problems.
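To make the "time until bottleneck" idea concrete, here is a deliberately simplified sketch (not the actual TeamQuest Risk Prediction algorithm): fit a straight-line trend to daily utilization samples and extrapolate when it crosses a saturation threshold. The data and the 80% threshold are assumptions for the example.

```python
def days_until_threshold(samples, threshold=0.8):
    """Fit a least-squares line through (day, utilization) points and
    return the number of days from the last sample until the trend
    crosses `threshold` (None if the trend is flat or declining)."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples)) \
        / sum((x - x_mean) ** 2 for x in xs)
    if slope <= 0:
        return None  # no bottleneck on this trend
    intercept = y_mean - slope * x_mean
    cross = (threshold - intercept) / slope  # day the line hits threshold
    return max(0.0, cross - (n - 1))

# CPU utilization creeping up about one point per day from 50%.
history = [0.50, 0.51, 0.52, 0.53, 0.54]
eta = days_until_threshold(history)  # roughly 26 days until 80%
```

Real prediction tools model queuing effects and multiple resources at once, but even this crude trend line captures the shift in posture: from reacting to today’s exception to scheduling a fix before the bottleneck arrives.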
See how you can avoid future problems. Connect with TeamQuest and ask for a personalized demo.