TeamQuest Corporation

IT Optimized…Business Future Assured

IT Optimized…Business Future Assured. Continuously operate at the intersection of IT efficiency and business performance. Prevent future problems by predictively focusing on what really matters.

You CAN rapidly optimize your IT environment by continuously, automatically, and predictively connecting technology and business metrics – past, present, and future – across the entire IT environment. Don’t take our word for it – click here to hear how organizations are doing it.

Here is what you’ll see:

  1. Verizon Wireless ensures a smooth iPhone launch
  2. Safeway presents IT data in business terms
  3. Enterprise drives success with automated forecasting
  4. Pink Elephant’s George Spalding details the benefits of predictive analytics for the Software Defined Data Center


Memo to CFO: Innovate more, brake less

Recently a group of CFOs met in Paris to share their thoughts around the changing demands placed upon them.

One attendee focused on the perception that CFOs are seen as “no” people who anticipate and identify problems and raise red flags. This brought about a comparison between CFOs and high-performance race cars.

A seasoned CFO in attendance said many CFOs simply hit the brakes of their proverbial high-performance race cars when they see a few bumps in the road, slowing what may have been perceived as progress, growth, and success for the company. This is really about risk management. As one CFO pointed out, the brakes are there to allow the race car to achieve and maintain very high speeds. As a CFO, do you take the bumps at full speed, slow down a bit to avoid them, or slow down enough to ease over them so you don’t wreck the expensive car? You get the picture.

I’ve simplified it, but how can a CFO innovate without slowing down progress?

Suggestion: Identify risk (obvious), provide creative alternatives, open up brainstorming, and reach innovative solutions that limit and manage risk while allowing business objectives to be accomplished.

I know. It’s not that simple, but this is a good starting point and well worth the effort. After all, the CFO shares virtually equal responsibility with the CEO for the success of the company, albeit with much less of the glory.

If you’d like to know more of what your CFO peers discussed, check out this 5-page summary (You Don’t Have to Be Superman to be a CFO – But it Would Help).

 



Can the uncontrollable be controlled?

Human beings are distinct from other living beings in one peculiar way: almost every one of us wants everything that anyone else possesses. There are definite exceptions to this characterization, such as people who dedicate their lives to feeding the poor or to helping others at great personal cost. At the same time, almost every human being would like to spend the rest of their life on vacation. So either a person controls the temptation or, in this whirl of activity and productivity, controls others in order to have it all. That is how we ended up with kings and nobles and barons and billionaires and presidents, prime ministers, lawmakers, and so on. Presidents, prime ministers, and lawmakers, of course, claim that they are serving the people.

I wonder how many lawmakers, presidents, ministers, and secretaries would land in a cluster labelled “public servant” if a clustering algorithm were run over data about their involvement in scandalous deals, their voting records, their fund-raising parties, their extramarital affairs, and so on. What if Bayes’ theorem were applied to their voting records and the bills they passed, to estimate the probability that a bill truly benefits the public rather than campaign donors and lobbyists? That would show how many of the self-proclaimed servants truly want to serve and how many want to control others in order to have it all. Consider it a new application for big data analytics in political science.

Cloud computing and virtualization seem to be out of control. The blogosphere, the LinkedIn crowd, and the Twitterati are inundated with advice, warnings, suggestions, guidance, wisdom, and caution regarding the benefits and challenges of cloud computing and virtualization. In this crowd, one person is still relevant, and his name is Dr. Neil J. Gunther.

In 1993, Dr. Gunther proposed a model called the super-serial scalability law (SSL), and in 2006 he proposed a similar model, the Universal Scalability Law (USL); both let you manage scalability by controlling the concurrent load. Cloud computing had not yet become part of the everyday routine, and many businesses were still maturing in their adoption of virtualization in the datacenter. Between 1993 and 2008, businesses mostly ran their applications on big servers (smaller than mainframes) with many CPUs and many applications, controlled by an operating system installed in the server box. Neither SSL nor USL can handle the many-application situation, but both could, and still can, deal very well with many concurrent instances of the same application.
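For readers who have not met it before, the USL expresses relative capacity as C(N) = N / (1 + σ(N − 1) + κN(N − 1)), where σ captures contention (serialization) and κ captures coherency (crosstalk) delay. A minimal sketch, with illustrative coefficient values only:

```python
def usl_scalability(n, sigma, kappa):
    """Universal Scalability Law: relative capacity C(n) = T(n)/T(1).

    n     -- concurrent load (connections, VMs, users, ...)
    sigma -- contention (serialization) coefficient, 0 <= sigma < 1
    kappa -- coherency (crosstalk) coefficient, kappa >= 0
    """
    return n / (1.0 + sigma * (n - 1) + kappa * n * (n - 1))

# Illustrative coefficients only: scalability grows, flattens, then falls off.
for n in (1, 10, 20, 40, 80):
    print(n, round(usl_scalability(n, sigma=0.03, kappa=0.0005), 2))
```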

Concurrent loads vs scalability

A simple best-practice suggestion for virtualized systems is to run multiple VMs of the same configuration, each with a single vCPU, on a host, rather than one VM with several vCPUs, and to run only one application in each VM. In practice, applications developed to run on physical machines with many CPUs are moved to VMs with several vCPUs for lack of a better management idea, and the number of vCPUs usually matches the number of CPUs in the physical system that hosted the application.

Many articles suggest configuring VMs “appropriately,” giving them “enough” resources to run the application “safely,” and thus overprovisioning efficiently so the datacenter runs smoothly. Most of those articles and blogs never explain what an “appropriate” configuration is, how much is “enough,” or what defines “safe” operation of the data center.

If many VMs with the same configuration and a single vCPU are deployed on a single host, and each VM runs a single application, then we have exactly the situation SSL and USL can handle. Safe operation of the datacenter means no incident like the healthcare.gov disaster. The configuration of the VMs will depend on how many VMs share the host’s resources: a high-priority VM will need a higher “share” value if it has to share resources with many lower-priority VMs. That raises the question of how many VMs can run concurrently on a host before the host saturates, and what happens if the host must run oversaturated because another host fails. The graphs in the figure shed some light.

In the figure, the boxes are measured data points from a system running a database. The horizontal axis shows the number of concurrent connections handled by the database, and the vertical axis shows scalability, computed as the ratio of the throughput (transaction rate) at n connections, T(n), to the throughput at one connection, T(1). The solid line is the scalability predicted by SSL from historical or load-test data up to 30 concurrent connections; the dash-dot line is the corresponding USL prediction. Both curves show a distinct point of maximum scalability. The measured data shows that maximum scalability occurs at 43 concurrent connections, meaning throughput degrades beyond 43 connections. The maximum concurrency predicted by SSL is very close to the measured value; the USL prediction is not as close. Either way, a working estimate of how many VMs can run before the host saturates can be obtained with SSL or USL.
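If you want to produce that kind of estimate yourself, here is a minimal sketch: it fits the USL to load-test measurements with SciPy and solves for the analytic peak at N* = sqrt((1 − σ)/κ). The throughput numbers below are synthetic placeholders, not the data behind the figure, and fitting SSL or AISSL would follow the same pattern with a different model function.

```python
import numpy as np
from scipy.optimize import curve_fit

def usl(n, sigma, kappa):
    # Universal Scalability Law: relative capacity C(n) = T(n) / T(1).
    return n / (1.0 + sigma * (n - 1) + kappa * n * (n - 1))

# Placeholder load-test data up to 30 concurrent connections:
# (connections, throughput in transactions/sec). Synthetic, for illustration only.
n_obs = np.array([1, 5, 10, 15, 20, 25, 30])
tput  = np.array([100, 442, 760, 984, 1136, 1238, 1302])

c_obs = tput / tput[0]                      # measured scalability T(n)/T(1)
(sigma, kappa), _ = curve_fit(usl, n_obs, c_obs,
                              p0=(0.01, 0.0001), bounds=([0, 0], [1, 1]))

# The USL peaks at n* = sqrt((1 - sigma) / kappa); throughput degrades beyond it.
n_star = np.sqrt((1 - sigma) / kappa)
print(f"sigma = {sigma:.3f}, kappa = {kappa:.5f}, "
      f"predicted saturation at ~{n_star:.0f} concurrent connections")
```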

Neither SSL nor USL handles the post-saturation phase well: the scalability they predict there is much lower than the measured test data. Yet a system can end up in the oversaturation phase, even if only briefly, when automatic vMotion is allowed in a virtualized computing environment. Do we want control over the oversaturation phase, or do we want to ignore it?

If we want to control the oversaturation phase, we need to do better than SSL or USL. This is where an improvement on SSL comes in: the Asymptotically Improved Super-serial Scalability Law, or AISSL for short. The dotted line passing through the boxes (the measured data points) is the AISSL prediction. If you are an efficiency freak who wants to squeeze the maximum out of a host, you will get into the oversaturation phase more often than the more dovish commanders of the datacenter. With AISSL, you can be a hawkish, efficiency-obsessed datacenter commander with confidence. This is almost as good as it gets.

This brand-new modeling technology will be presented during session 252 of Performance and Capacity 2014 on November 3, 2014 in Atlanta, GA, USA. If you are that efficiency-minded hawk, I would like to see you in Atlanta on November 3rd, and together we will soar high above the doves.

 



IT Delivering on the Big Data Promise

Big data isn’t new. There’s just more of it and it’s getting harder and harder to figure out how best to use it. I’d like to share a few thoughts about big data and what I learned from a talk about how Formula One racing teams use big data.

Let’s put today’s data in perspective. One study estimated that by 2024, the world’s enterprise servers will annually process the digital equivalent of a stack of books extending more than 4.37 light-years to Alpha Centauri, our closest neighboring star system in the Milky Way Galaxy. That’s a lot of data to analyze.

According to Gartner analyst Svetlana Sicular, “Big data is a way to preserve context that is missing in the refined structured data stores — this means a balance between intentionally “dirty” data and data cleaned from unnecessary digital exhaust, sampling or no sampling. A capability to combine multiple data sources creates new expectations for consistent quality; for example, to accurately account for differences in granularity, velocity of changes, lifespan, perishability and dependencies of participating datasets. Convergence of social, mobile, cloud and big data technologies presents new requirements — getting the right information to the consumer quickly, ensuring reliability of external data you don’t have control over, validating the relationships among data elements, looking for data synergies and gaps, creating provenance of the data you provide to others, spotting skewed and biased data.”

While I don’t have THE solution, I do have one suggestion. Find a way to analyze the disparate data coursing through your environment and give it meaning – business context.

For example, focus on the right information by asking what’s important to the business. In “The Data Driven Business of Winning,” Mark Gallagher, Managing Director of CMS Motor Sports Ltd., shared how Formula One teams successfully analyze data to keep drivers safe and win races.

Gallagher explained how a team of data engineers, analyzing reams of information in real time, can help make strategic decisions for the business during the race. “In 2014 Formula One, any one of these data engineers can call a halt to the race if they see a fundamental problem developing with the system like a catastrophic failure around the corner.”

It comes down to the data engineers looking for anomalies. “99% of the information we get, everything is fine,” Gallagher said. “We’re looking for the data that tells us there’s a problem or that tells us there’s an opportunity.”

TeamQuest Director of Market Development Dave Wagner likens this right-data approach to the Moneyball concept he discussed in a recent webinar.

Much like the Formula One example above, Wagner believes IT must understand what’s important to the business in order to be successful and be able to deliver accurate, strategic advice – sometimes in a matter of seconds.

Teams can be successful if they’re able to look at the right data in combination with powerful analytics, according to Wagner. In fact he likens it to the equation:

Good data + powerful analytics = better results

Just ask Formula One about the power of good data and analytics. A Formula One driver’s steering wheel is basically a laptop, providing him with the data needed to make the best decision available. Drivers can scroll through a 10-point menu – while driving – and adjust parameters that affect the performance of the vehicle. This happens because the driver is able to get to the right data when needed to get a desired outcome.

IT collects lots of data, shares the data that matters to its customer (the business), and together they use that data to gain an advantage and succeed in the marketplace.

That’s a nice checkered flag ending and a nice way to end the day.

 



3 Ideas to Show Immediate Value from Capacity Planning Activities

What usually happens when IT talks capacity planning with the business? Eyes roll, interest drops, and the C-suite changes the subject. Part of the problem is a language barrier: think response time versus utilization. Another is that capacity planning has no value in the eyes of its beholders – your business customers. HSBC head of capacity planning Tim Collins suggests a few “hidden value adds” IT can use to show immediate value from capacity planning activities. They’re even framed in a way that catches the attention of your business customers.

Outage Avoidance

Proactive analysis of performance data can help identify issues such as filling file systems, memory leaks, and looping processes. If these are identified far enough in advance (e.g., 14 days or more), the issue can be fixed with a planned change rather than by reactive support.
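As a small illustration of the file-system case, here is a minimal sketch (not a TeamQuest feature) that fits a straight-line trend to daily usage samples and estimates the days left before the volume fills. The sample history, the 430 GB capacity, and the 14-day threshold are hypothetical; a production tool would use more data and a more robust trend model.

```python
import numpy as np

def days_until_full(used_gb_history, capacity_gb):
    """Estimate days until a file system fills, from daily usage samples (GB).

    Fits a straight line to the history; returns None if usage is flat or shrinking.
    """
    days = np.arange(len(used_gb_history))
    slope, _ = np.polyfit(days, used_gb_history, 1)   # growth rate in GB/day
    if slope <= 0:
        return None
    return (capacity_gb - used_gb_history[-1]) / slope

# Hypothetical example: 14 daily samples on a 430 GB file system.
history = [301, 305, 308, 314, 317, 322, 329, 333, 338, 345, 349, 356, 361, 367]
remaining = days_until_full(history, capacity_gb=430)
if remaining is not None and remaining < 14:
    print(f"Raise a planned change: roughly {remaining:.0f} days of space left")
else:
    print("No near-term exhaustion predicted")
```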

Cost Avoidance

If the capacity planning team is involved in solution design and hardware provisioning, they can identify where capacity is already available on provisioned hardware. This can save considerable costs on both hardware and software.

Reduced software costs

The capacity planning team is also ideally positioned to advise when platforms need to be updated and to calculate the optimum layout of virtual servers. This can save proprietary software costs (e.g., RDBMS, WebSphere), reduce or eliminate the need for extended support on existing hardware platforms, and deliver direct savings on COTS package costs by reducing the number of servers and CPUs in the environment. A rough sketch of that kind of consolidation estimate follows.
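The sketch below is one crude way to reason about a consolidated layout: it packs per-VM CPU demands onto hosts with the classic first-fit-decreasing heuristic. The demands and host size are hypothetical, and real placement must also respect memory, I/O, licensing boundaries, and availability constraints.

```python
def first_fit_decreasing(vm_demands, host_capacity):
    """Pack VM CPU demands (cores) onto hosts using first-fit-decreasing.

    Returns a list of hosts, each a list of the demands placed on it.
    Illustrative only -- ignores memory, I/O, licensing, and HA constraints.
    """
    hosts = []
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)
                break
        else:                       # no existing host had room
            hosts.append([demand])
    return hosts

# Hypothetical peak CPU demands (cores) for 10 VMs, packed onto 16-core hosts.
layout = first_fit_decreasing([8, 6, 6, 4, 4, 3, 2, 2, 1, 1], host_capacity=16)
print(f"{len(layout)} hosts needed: {layout}")
```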

Give these three tips a try in your environment and share some of your tips, too.

 



IT Service Optimization Summit Preview – Paris

Tomorrow marks the beginning of our second #ITSO14 event for the year. This time in Paris. Be sure to follow along if you aren’t able to attend. Here’s what you can expect on Tuesday and Wednesday.

13 May – Tuesday
This is a day focused on IT efficiency and cost optimization:

  1. Mark Gallagher of Formula One on technology-driven business performance
  2. Executive peers from various companies on how to leverage IT efficiency and quality of service to drive business growth
  3. A panel discussion on ‘what makes a great CIO – keeping the lights on, or driving business growth?’
  4. Networking opportunities with business and IT executives who face the same challenges you face
  5. A relaxing dinner with great food and interesting conversations

14 May – Wednesday
This is a day of inspiration and break-out sessions:

  1. James Staten of Forrester Research on ‘Just because it’s in the cloud doesn’t make it cheaper’
  2. Industry peers from various companies sharing techniques for keeping IT infrastructure running smoothly and efficiently
  3. Networking opportunities with data centre peers
  4. Break-out sessions, each providing in-depth knowledge on how to manage performance and capacity in daily IT operations.
  5. Panel discussion with industry analyst, TeamQuest executives and data centre peers


ITSO Summit: Day 2

Today’s track is all about response time, throughput, and latency while finding the intersection of IT efficiency and business productivity. Take a look at the agenda.

Our speakers range from LCDR Rorke Denver, Navy SEAL, to Safeway, TeamQuest engineers and Global Services, storage specialists Intellimagic, Volkswagen, and Enterprise Holdings. Yesterday was all about the business. Today stays on that same note, but we will dig deeper into the “why” and “how” of optimizing IT.

Don’t forget to follow along – here’s how to stay in touch.

Here is a video that encapsulates what ITSO Summit is all about. Enjoy!

 



ITSO Summit 2014 – How to Stay in Touch

We wanted to let you know the ways we will be sharing information from #ITSO14 this year. There are several ways to stay up to date with the event, even if you aren’t able to attend.


Twitter: Follow our corporate Twitter account as well as the hashtag #ITSO14 for updates throughout the day – this is where the majority of the information will be shared.

Google+: Another way to put us in your circles and get live updates for #ITSO14 and more.

Facebook: We will be broadcasting information from our Facebook page as well.

Look for the presentations to be shared shortly after the event is over.

We hope you like what you see and hear. This year’s lineup promises not to disappoint: keynotes from Gartner’s Dave Cappuccio and Navy SEAL Rorke Denver, with IDC analyst Mary Johnston Turner and The Virtualization Practice’s Bernd Harzog also presenting on Monday afternoon. We have customers from Fidelity Investments, FIS, Enterprise Holdings, Safeway, and Volkswagen, as well as a myriad of TeamQuest experts sharing a wealth of knowledge and real-life experiences.



Automated Predictive Analytics – Latest Software Release

We are happy to announce significant extensions to our TeamQuest Risk Prediction solution — adding more automation for proactively and predictively optimizing business services, IT performance, and cost.

We just made IT Optimization even easier

TeamQuest Risk Prediction is designed to help organizations predict and prevent current and future IT performance issues and automatically diagnose their root cause across the IT infrastructure. It now automatically understands the business service demand lifecycle, determining the appropriate baselines for its automated predictive modeling.

Before TeamQuest Risk Prediction, making confident decisions about future performance required an understanding not only of IT resource performance but also of cross-functional service capacity requirements, knowledge that was difficult to gather and too detailed to scale. With its new levels of automation and embedded intelligence, TeamQuest Risk Prediction now fully automates the highly specialized performance and capacity analysis and modeling that previously required teams of cross-discipline experts.

By automatically understanding how business service demand relates to the IT resources needed to deliver acceptable service at acceptable cost, TeamQuest Risk Prediction eliminates costly and risky guesswork and manual effort. Comprehensive policy-based administration ensures that any IT organization can rapidly deploy, configure, and realize all the benefits of truly proactive and predictive performance analysis and modeling, whatever its level of staff expertise or organizational process maturity.

Our latest software release also has:

  1. Simplified capacity modeling of AIX LPARs with shared resources
  2. Analysis of CPU processing capacity and waste
  3. Performance and capacity analysis of VMware datastore clusters
  4. Several new out-of-the-box reports showing predicted and measured performance and capacity


Optimizing for the Software Defined Data Center – Part II

Being able to answer those questions accurately provides the foundation for continuous IT optimization:

  1. Reduce initial CapEx and ongoing OpEx, so you will make, and keep making, more money!
  2. Optimize resources for systems of customer engagement
  3. Deploy and refresh new applications faster
  4. Respond faster to business spikes
  5. Prevent business-impacting outages and slowdowns.

The methodology aligns business and IT analytics for enterprise-wide IT optimization:

  1. Correlate business and IT performance
  2. Gain insight into how business process changes impact IT
  3. Understand and optimize costs by business unit, process, and technology
  4. Gain insight into business performance across the technology stack.

Automated, Aligned IT Analytics:

  1. Analyze – optimize decision making, proactively manage true performance, predictively optimize performance and capacity, continuously cost-optimize IT resource decisions.
  2. Integrate – with the business, business KPI and financials feed IT decisions, IT performance and capacity in business context.
  3. Automate – repeatable, standardized, normalized, flexible, scalable

Our approach federates existing data into a purpose-designed, optimized process:

  1. Technology data (server, network, storage, etc.)
  2. Service data (catalog, metrics, tickets, etc.)
  3. Financial data
  4. Business data (analytics, KPIs, plans, TXNs, etc.)

Automating the analytics across all of these data sources creates a flexible, adaptive management platform for dynamic SDDC environments, ultimately transforming raw or commodity data into actionable information for IT. It’s a single pane of glass for all of your IT optimization needs.

Automated, Predictive Analytics – An embedded health-forecast analysis gives you a continuous, rolling prediction, broken down into the complete components of response time (latency), that tells you which services will run out of resources, when, and why.

Automated Financial Optimization – Forecasting total costs for applications is valuable for pre-purchase validation, repurposing, and consolidation efforts. Integrated with risk and service management data and fully automated.

IT Power Capacity Optimization – Be proactive about power optimization: compare your actual usage with data center limits, allocate power by application or service, and project the savings available through optimization. You can analyze server and virtual server performance, power consumption from DCIM tools, the service catalog and CMDB, and the asset database together; a minimal usage-versus-limit sketch follows.
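As an illustration of that usage-versus-limit comparison, the sketch below sums hypothetical per-service power measurements (as a DCIM tool might report them) against an assumed facility limit; the service names, numbers, and limit are made up.

```python
# Hypothetical per-service power draw in kW, as a DCIM tool might report it.
service_power_kw = {"web": 42.0, "database": 55.5, "batch": 23.0, "test/dev": 18.5}

DATA_CENTER_LIMIT_KW = 160.0      # illustrative facility limit, not a real figure

total = sum(service_power_kw.values())
headroom = DATA_CENTER_LIMIT_KW - total

print(f"Actual draw: {total:.1f} kW of {DATA_CENTER_LIMIT_KW:.0f} kW "
      f"({total / DATA_CENTER_LIMIT_KW:.0%} used), headroom {headroom:.1f} kW")
for service, kw in sorted(service_power_kw.items(), key=lambda kv: -kv[1]):
    print(f"  {service:10s} {kw:6.1f} kW  ({kw / total:.0%} of total usage)")
```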

Automated Root Cause Analysis – Analyze application response time against your SLAs, spanning servers, virtual servers, network, storage, and more. Using real-time application monitoring data along with service catalog and CMDB data, you’ll always know where you stand against your SLAs.

Financial Optimization Reporting – Automatically balance current and future costs against performance and capacity, using server and virtual server performance, asset costing (CapEx/OpEx), the service catalog and CMDB, and power costing data.

Here are the answers to those questions when you take your SDDC into continuous IT optimization:


