TeamQuest Corporation

Automated Predictive Analytics – Latest Software Release

We are happy to announce significant extensions to our TeamQuest Risk Prediction solution — adding more automation for proactively and predictively optimizing business services, IT performance, and cost.

We just made IT Optimization even easier

Designed to enable organizations to predict and prevent IT performance issues and automatically diagnose their root cause across the IT infrastructure, TeamQuest Risk Prediction now automatically learns the business service demand lifecycle, determining the appropriate baselines for its automated predictive modeling.

Before TeamQuest Risk Prediction, making confident decisions about future performance required an understanding of IT resource performance and cross-functional service capacity requirements that was difficult to gather and too detailed to scale. With its new levels of automation and embedded intelligence, TeamQuest Risk Prediction now fully automates the highly specialized performance and capacity analysis and modeling that previously required teams of cross-discipline experts.

By automatically understanding how business service demand relates to the IT resources needed to deliver acceptable service at acceptable cost, TeamQuest Risk Prediction eliminates costly and risky guesswork and manual effort. Comprehensive policy-based administration lets any IT organization, whatever its staff expertise or process maturity, rapidly deploy, configure, and realize all the benefits of truly proactive and predictive performance analysis and modeling.

Our latest software release also includes:

  1. Simplified capacity modeling of AIX LPARs with shared resources
  2. Analysis of CPU processing capacity and waste
  3. Performance and capacity analysis of VMware datastore clusters
  4. Several new out-of-the-box reports showing predicted and measured performance and capacity


Optimizing for the Software Defined Data Center – Part II

The ability to answer these questions accurately provides the foundation of continuous IT optimization, which lets you:

  1. Reduce initial CapEx and ongoing OpEx, so you make, and keep making, more money!
  2. Optimize resources for systems of customer engagement
  3. Deploy and refresh new applications faster
  4. Respond faster to business spikes
  5. Prevent business-impacting outages and slowdowns

The methodology aligns business and IT analytics for enterprise IT optimization, letting you:

  1. Correlate business and IT performance
  2. Gain insight into how business process changes impact IT
  3. Understand and optimize costs by business unit / process and technology
  4. Gain insight into business performance across the technology stack

Automated, Aligned IT Analytics:

  1. Analyze – optimize decision making, proactively manage true performance, predictively optimize performance and capacity, and continuously cost-optimize IT resource decisions.
  2. Integrate – with the business: business KPIs and financials feed IT decisions, and IT performance and capacity are viewed in business context.
  3. Automate – repeatable, standardized, normalized, flexible, and scalable.

Our approach federates existing data into a purpose-designed, optimized process:

  1. Technology data (server, network, storage, etc.)
  2. Service data (catalog, metrics, tickets, etc.)
  3. Financial data
  4. Business data (analytics, KPIs, plans, TXNs, etc.)

Automating the analytics across all of these data sources creates a flexible and adaptive management platform for dynamic SDDC environments, ultimately transforming raw or commodity data into actionable information for IT. It’s a single pane of glass for all of your IT optimization needs.
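
To make the federation idea concrete, here is a minimal sketch in Python of joining the four data sources into one service-level view. The file and field names are hypothetical examples, not TeamQuest's actual schema.

    # A minimal sketch of federating IT data sources; all file and field
    # names are hypothetical examples, not TeamQuest's schema.
    import pandas as pd

    servers  = pd.read_csv("server_metrics.csv")   # technology data: server, cpu_util
    catalog  = pd.read_csv("service_catalog.csv")  # service data: server, service
    costs    = pd.read_csv("asset_costs.csv")      # financial data: server, monthly_cost
    business = pd.read_csv("business_kpis.csv")    # business data: service, txn_volume

    # Federate: join the sources into a single service-level view
    view = (servers.merge(catalog, on="server")
                   .merge(costs, on="server")
                   .groupby("service")
                   .agg(avg_cpu=("cpu_util", "mean"),
                        total_cost=("monthly_cost", "sum"))
                   .join(business.set_index("service")))

    # Commodity data becomes actionable information: cost per business transaction
    view["cost_per_txn"] = view["total_cost"] / view["txn_volume"]
    print(view.sort_values("cost_per_txn", ascending=False))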

Automated, Predictive Analytics – Embedded health-forecast analysis gives you a continuous rolling prediction, with the complete components of response time (latency), that tells you which services will run out of resources, when, and why.
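
As a rough illustration of the idea, and not the product's actual model, a rolling resource forecast can be as simple as fitting a trend to recent utilization and projecting when it crosses the capacity limit:

    # A minimal sketch of a rolling "runway" forecast, assuming linear
    # demand growth; real predictive models are far richer than this.
    import numpy as np

    def days_until_exhaustion(daily_utilization, capacity=100.0):
        """Fit a linear trend to daily utilization samples and project
        how many days remain until the trend crosses the capacity limit."""
        days = np.arange(len(daily_utilization))
        slope, intercept = np.polyfit(days, daily_utilization, 1)
        if slope <= 0:
            return None  # flat or shrinking demand: no exhaustion predicted
        return (capacity - intercept) / slope - days[-1]

    # Example: CPU utilization creeping up about 0.5 points/day from 60%
    history = 60 + 0.5 * np.arange(30) + np.random.normal(0, 1, 30)
    print(f"Predicted days until CPU exhaustion: {days_until_exhaustion(history):.0f}")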

Automated Financial Optimization – Forecasting total costs for applications is valuable for pre-purchase validation, repurposing, and consolidation efforts. This forecasting is fully automated and integrated with risk and service management data.

IT Power Capacity Optimization – Be proactive about power optimization: find your actual usage versus data center limits, allocate power by application / service, and project savings from optimization. You can analyze server / virtual server performance, power consumption from DCIM tools, service catalog and CMDB data, and asset database records together.
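
The core arithmetic is straightforward; here is a minimal sketch with hypothetical servers, services, and power figures:

    # A minimal sketch of power headroom and per-service allocation;
    # every name and figure below is a hypothetical example.
    DATACENTER_POWER_LIMIT_KW = 500.0

    # Measured draw per server (from DCIM tools) and owning service (from the CMDB)
    server_power_kw = {"web01": 0.45, "web02": 0.47, "db01": 0.90, "batch01": 0.60}
    server_service  = {"web01": "storefront", "web02": "storefront",
                       "db01": "storefront", "batch01": "reporting"}

    # Allocate measured power by service
    by_service = {}
    for server, kw in server_power_kw.items():
        svc = server_service[server]
        by_service[svc] = by_service.get(svc, 0.0) + kw

    total = sum(by_service.values())
    print(f"Total draw: {total:.2f} kW of {DATACENTER_POWER_LIMIT_KW:.0f} kW limit "
          f"({100 * total / DATACENTER_POWER_LIMIT_KW:.1f}%)")
    for svc, kw in sorted(by_service.items()):
        print(f"  {svc}: {kw:.2f} kW")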

Automated Root Cause Analysis – Analyze application response time against your SLAs, spanning servers, virtual servers, network, storage, and more. Using real-time application monitoring data together with service catalog and CMDB data, you’ll always know where you stand versus your SLAs.
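
As a sketch of the basic check involved, assuming a hypothetical 95th-percentile, 2-second SLA (the product's actual analysis spans the full stack):

    # A minimal sketch of an SLA check on response times; the percentile
    # and 2-second target are assumed examples, not product defaults.
    import numpy as np

    def sla_status(response_times_ms, sla_ms=2000, percentile=95):
        """Compare a percentile of measured response times to the SLA target."""
        observed = np.percentile(response_times_ms, percentile)
        return observed, observed <= sla_ms

    samples = np.random.lognormal(mean=7.0, sigma=0.4, size=1000)  # latencies in ms
    p95, ok = sla_status(samples)
    print(f"p95 = {p95:.0f} ms -> " + ("within SLA" if ok else "breach: drill into tiers"))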

Financial Optimization Reporting – Automatically balance current and future costs with performance and capacity, using server / virtual server performance, asset costing (CapEx / OpEx), and service catalog and CMDB data combined with power costing data.

When you take your SDDC into continuous IT optimization, you have the answers to all of these questions.



Optimizing for the Software Defined Data Center – Part I

Today’s infrastructure is increasingly virtualized everything (servers, storage, desktops, networks), and the growing importance of cloud computing has given rise to the term Software Defined Data Center (SDDC). The SDDC has its own challenges: whether or not to include legacy or non-virtual resources, the interoperability of multiple vendors’ converged infrastructure systems, and the still-open question of how to manage the SDDC at all.

Management is often misinterpreted as monitoring. Monitoring is nice, but managing the SDDC requires the ability to make informed decisions based on whatever monitoring may or may not already be in place. In fact, SDDC management places a premium on new management models: real-time data collection, embedded analytics, and the ability to span multiple data sources intelligently. The analytics will serve different goals: efficiency, cost, root cause analysis. Those goals include workload-centric optimization, global cost and energy efficiency, and global availability and risk management, all of which are very tough for traditional management vendors.

You hear a variety of excuses (sorry, explanations) for why IT doesn’t analyze business data…

  1. Existing tools don’t accept the data
  2. ETL into acceptable formats is a lot of work and costs money
  3. Manual efforts don’t scale…Don’t adapt to dynamic, virtual everything realities
  4. IT has no expertise in business data
  5. Data isn’t easily presented or understood
  6. We don’t have the time to deal with it
  7. I don’t know what we can really DO with it
  8. I don’t know where to start
  9. I’m unsure of the real value

So, what is the desired state of managing the SDDC? I described continuous optimization in another post as optimizing the financial impact of the SDDC. Always know when and where performance problems will affect the bottom line. Identify and eliminate cost and performance inefficiencies in the support of business processes. Continuously optimize the customer experience. Understand when, where, and why customer experiences fail so you can resolve, predict, and PREVENT poor customer experiences.

How do you get to the desired state of managing the SDDC? Stay tuned. I’ll answer these questions in another blog post next week.



3 Ways Optimization in a Box Solves YOUR IT Problems

Need to increase IT service efficiency? Responsible for ensuring service levels are met? Hampered by limited personnel, time and expertise?

TeamQuest Director of Global Services Per Bauer explains how customers can manage services in relation to servers, storage, network, power and floor space. Understand costing data, incidents, business transaction volumes, and demand forecasts.

Bauer will help you deliver automated, predictable results at a predictable price.

  1. No upfront capital investment
  2. No need to train and dedicate personnel
  3. No long-term commitment

Watch this short 11-minute video to learn more about Optimization in a Box and immediately improve your ability to optimize business services.



Are Capacity Planning Skills Lost to Retirement?

We’ve all heard a lot about the “aging workforce” as the baby boomer generation begins its retirement journey. With this generation leaving the workforce, are certain skills retiring with it? One of the skills born with the advent of mainframes, and coincidentally with the start of baby boomers’ careers, was capacity planning.

So, are we losing capacity planning skills with retirement? Are the basic tenets of performance analysis and capacity planning being abandoned with the implementation of more adaptive and cheaper computing technologies? Is good enough good enough? Right or wrong, are these new technologies and compute delivery mechanisms being perceived as making capacity planning obsolete?

Applications or services used to consist of a single machine, so capacity planning was fairly straightforward. Now you have multi-tiered applications with heterogeneous system types supporting the various tiers of an application. The explosion of cloud computing (private, public, hybrid) has added even more complexity to the equation. Don’t these complex service delivery inner workings require even more focus on capacity planning than before?

I’m interested in hearing what you are seeing in your organizations. Are new people acquiring an appreciation for the art of capacity planning and have they been trained? Is capacity planning being abandoned, or is it simply evolving?



Using Surveyor to Make Capacity Management More Proactive

Check out the latest customer success story. Here’s a link to the full article.

A major retailer uses TeamQuest Surveyor to analyze growth trends across thousands of systems containing tens of thousands of file systems each week. Each week a few of these file systems require intervention, but finding out which ones used to take IT staff many hours of laborious work: accessing each file system and individually determining its state. Surveyor now provides IT with an automatic report on each file system, so the staff assigned to those file systems can check the report and zero in on the specific ones that demand attention. They spend their time actively inspecting likely candidates for action rather than wasting it looking at file systems that are in good shape. The value of this approach is considerable: every avoided outage saves thousands of dollars, and some save as much as tens of thousands. By focusing attention where it provides the most benefit, the Surveyor file system report allows IT to act more proactively in other areas.
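
To illustrate the kind of triage such a report enables (this is a sketch of the general technique, not Surveyor's actual implementation), ranking file systems by projected days until full makes the likely candidates obvious:

    # A minimal sketch: flag file systems projected to fill soon,
    # assuming linear growth; all paths and figures are hypothetical.
    def days_until_full(used_pct_history, threshold_pct=95.0):
        """Project days until used space crosses the threshold,
        given weekly used-percentage samples and linear growth."""
        growth_per_week = (used_pct_history[-1] - used_pct_history[0]) / (len(used_pct_history) - 1)
        if growth_per_week <= 0:
            return float("inf")  # not growing: no intervention needed
        return 7 * (threshold_pct - used_pct_history[-1]) / growth_per_week

    filesystems = {
        "/var/log": [70, 74, 79, 85],  # weekly used-% samples
        "/home":    [40, 41, 41, 42],
        "/db/data": [88, 89, 91, 92],
    }

    # Report only the file systems likely to fill within 30 days
    for fs, history in sorted(filesystems.items(), key=lambda kv: days_until_full(kv[1])):
        d = days_until_full(history)
        if d <= 30:
            print(f"ATTENTION {fs}: projected full in ~{d:.0f} days")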



Understanding the Real Value of IT and Proving it to the Business

As IT professionals, I’m sure you have felt under-appreciated at times by the business you support. A great way to avoid that lonely feeling is to prove the value of IT to the business. It’s important that budgets aren’t wasted at a time when costs are continually being squeezed, and there is more pressure than ever to demonstrate the value of technology.

How can you prove the value of IT to the business? An important first step is to begin the journey toward chargeback. The ability to measure costs is key, but also being able to measure the business results that come from the use of IT services (private cloud environments, for example) will drive better business conversations with IT management.
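
As a simple illustration of the chargeback arithmetic, assuming metered CPU-hours as the cost driver (all rates and figures below are hypothetical):

    # A minimal chargeback sketch: recover total infrastructure cost in
    # proportion to measured usage; the numbers are hypothetical examples.
    MONTHLY_INFRA_COST = 120_000.0  # total cost to recover this month

    # Metered consumption per business unit (e.g., CPU-hours from monitoring)
    cpu_hours = {"retail": 4200.0, "logistics": 2800.0, "finance": 1000.0}

    rate = MONTHLY_INFRA_COST / sum(cpu_hours.values())  # cost per CPU-hour
    for unit, hours in cpu_hours.items():
        print(f"{unit}: {hours:.0f} CPU-hours x ${rate:.2f} = ${hours * rate:,.2f}")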

“Unless IT can prove IT is providing cost-effective and relevant business support, it will lose the sponsorship of various lines of business and become irrelevant,” says Bob Tarzey of Quocirca.

Focusing on business goals and understanding how the use of IT services contributes to business results provides the best basis for planning future services.

The majority of CIOs believe the IT department can increase the value it delivers to the organization by improving cost measurement, according to a recent survey by the Service Desk Institute.

Some 82% of respondents said that improved reporting of activities and metrics from IT, enabling the business to gain a clear understanding of IT’s output and results, was significant in achieving this goal.

The study showed that 93% of CIOs believe better collaboration with business units to improve understanding of their goals and how they deliver value is necessary for delivering real value. Some 69% want clearer communication from executives about business plans and objectives.

The Demonstrating business value report also found that only a quarter of CIOs (27%) said senior executives view IT as contributing to strategic business goals such as growth or diversification, while 30% view it as a necessary trading expense that needs to be tightly controlled. Some 43% view IT only as a means to increase efficiency.

Half of IT teams either have no formal reporting mechanism for IT performance (17%) or focus on metrics such as downtime (33%) rather than anything more strategic.

Nearly all respondents (98%) said IT can play a bigger role in supporting the goals of the business as a whole.

For more information on the study, read ComputerWeekly’s report on proving the value of IT and visit our website.



How Should the Value of IT be Measured and Optimized?

With the massive uptake of virtualization and, more recently, varying types of cloud computing, the pace of change in today’s IT environments has never been faster. Under the covers, commoditization of underlying hardware and even software platforms promises to drive computing costs ever lower. Offsetting this dynamic is an increased level of complexity, creating new service risks as well as increasing the likelihood of overspending.

Traditionally, IT costing efforts, if done at all, have been performed at a higher or macro level. For example: total capital costs for data center construction along with the associated annual operating costs for things like power, floor space, and cooling; or yearly budgeting for server or storage resources based on forecasted business growth scenarios. In today’s distributed systems world, any type of cost allocation has been, in most cases, coarse at best. Sometimes IT costs are shared equally by all organizations using the total infrastructure, but this approach leads, at best, to political tension, and at worst drives organizational behavior toward acquiring resources outside the influence and control of IT policies and procedures.

Most finance organizations have (somewhere!) some type of asset database that includes information on all data center resources: when they were purchased, the price, some type of amortization schedule, and some level of annual operating expense associated with each asset. Typically this information is owned and controlled by the financial side of the organization. Additionally, there is usually some source of information relating these assets to the business units, services, and/or applications they support.

Most IT operations organizations have multiple tools that monitor and measure the availability and performance of all IT technology resources. Furthermore, they have one or more sets of tools and approaches for measuring their ability to successfully deliver service to their various lines of business as well as to customers.

Most data center management teams have a fairly complete understanding of their data center floor: power capacity, equipment footprint layout, total cooling capacity, and costing information such as cost per square foot.

To date, these three disciplines have traditionally never operated in coordination through anything other than anecdotal, ad hoc, or manual communication. But there is a huge opportunity for added value through close collaboration, because each discipline has a need the others can help meet:

  1. Finance wants to measure the value of IT but has no way of putting a currency value on the business work that IT resources are actually accomplishing.
  2. Data center management wants to optimize the cost of the data center but has no good way of understanding how much work the data center is or could support over time.
  3. IT operations wants to cost-effectively ensure the delivery of acceptable service within its ever-declining budget constraints.

Each of these three main domains has a very large, multibillion-dollar solution ecosystem built around optimizing use cases within that domain individually. For example, there are hundreds of server, storage, and network management and monitoring solutions for performance and availability management of IT resources; many dozens of DCIM solutions for managing the physical data center; and a plethora of solutions for financial and asset management. All of these solutions were designed around use cases solely within their own domain and are therefore capable of accepting only metrics and data sources from within those domains.

Until very recently, software solutions did not exist to allow or facilitate more seamless and productive collaboration across these organizations in pursuit of these goals. Recently, however, new technologies in data access and analytics are lending themselves to productively attacking this challenge of intelligent, proactive collaboration across these disciplines and toolsets. There are two main philosophical approaches, each with its own benefits and downsides: data warehousing (big data) approaches, and federated data access approaches.



IT Service Optimization Summit – Plan to Attend!

April 27 – 30, 2014
Rancho Las Palmas
Palm Springs, CA

May 13 – 14, 2014
Mövenpick Paris Neuilly
Paris, France

IT Service Optimization Summit

Helping you impact IT for your business and your career

Advance your business and your career by attending ITSO Summit, the only event devoted to helping you achieve impressive results by optimizing IT service cost, capacity, and performance. You’ll learn techniques for keeping IT infrastructure running smoothly and efficiently, gain insights from experts, and hear new ideas from industry leaders. Hear how others deal with real-world IT performance challenges and successes, and see innovative new solutions in action. With separate sessions for the C-suite, directors, and techies, ITSO Summit is a great opportunity to lock arms with your entire team and attend the event together.

In sessions, networking events, and discussions with industry analysts and influencers, we’ll be talking about things like:

  1. increasing IT efficiency,
  2. reducing service delivery risks,
  3. solving real performance problems, and
  4. managing the performance of data center services on a daily basis.

Careers have been launched and businesses have saved millions through IT Service Optimization.

Plan to attend in 2014 and learn how!




IBM Enterprise2013 Presentation: Optimizing AIX Enterprise Environments

Carlos Toscano, a TeamQuest Enterprise Performance Specialist, delivered a presentation at the IBM Enterprise2013 conference last month to a standing-room-only audience – twice. I thought we should share his presentation with everyone.

Toscano’s presentation outlined three key points for everyone concerned with optimizing their IBM AIX environments:

Automation:

  1. Automate performance management
  2. Automate detection of capacity constraints based on anticipated growth
  3. Automate correction of overprovisioning (where it has occurred)
  4. Automate custom report generation and distribution

Data Integration:

  1. Integrate multiple performance data sources
  2. Integrate performance metrics and business KPIs
  3. Overlay the environmental data

Predictive Analysis:

  1. Predict future service performance

View the presentation on Slideshare.

How are you addressing the needs of your AIX environment?


