## What is Nonlinear Regression: Linear Least Square Error Regression (PART I)

This is a post in a series of posts that dives deep into the mathematics involved in capacity planning. We have incredibly smart people figuring out the hardest problems so you don’t have to. This is a closer look at what goes on behind the scenes in a capacity planning tool.

Nonlinear regression is the name commonly given to estimating the mean values of the constant coefficients of a nonlinear function using the mathematical rules of the least square error (LSE) principle. In most cases the mathematical rules are the same as those of LSE estimation of the constant coefficients of a linear function. Let's elaborate on what linear regression is.

We need to estimate the average value of the constant parameter $a$ of the function of $x$ in Eq.(1) from a set of collected measurement pairs $(x_i, \hat{y}_i)$ for a given function $f(x)$:

$$y = a\,f(x) \qquad (1)$$

Mathematically, the collected measurement pairs of $(x_i, \hat{y}_i)$ are written as

$$\{(x_i, \hat{y}_i)\}, \quad i = 1, 2, \ldots, N \qquad (2)$$

Note that $\hat{y}_i$ is symbolically the measurement of $y$ at $x = x_i$. The constant coefficients of expressions used for modeling some practical phenomenon or process are almost always estimated using LSE. This estimation approach is adopted when the model is intuitive and the actual dynamics of the phenomenon are unknown, but certain factors are assumed to influence it. This means that the model of Eq.(1) is an intuitive approximation based on measured data.

A common sense idea is to estimate the constant parameter $a$ such that some expression of the errors between the modeled $y$ and the measurements $\hat{y}_i$ is minimum. So, it is assumed that for each measured $x_i$, the modeled $y = a\,f(x_i)$ will deviate from the measurement $\hat{y}_i$. The errors are the differences between $a\,f(x_i)$ and the measurements $\hat{y}_i$ for $i = 1, 2, \ldots, N$.

The most commonly known difference expression would be

$$e_i = \hat{y}_i - a\,f(x_i) \qquad (3)$$

The total error from Eq.(3) would be

$$E = \sum_{i=1}^{N} e_i = \sum_{i=1}^{N}\left(\hat{y}_i - a\,f(x_i)\right) \qquad (4)$$

Instead of estimating $a$ such that the expression in Eq.(4) is smallest, a slightly different approach is used.

The sum of squared errors (SSE) function is minimized in the process of estimating $a$. The SSE function, $g(a)$, for the function of Eq.(1) and the measured data of Eq.(2) is

$$g(a) = \sum_{i=1}^{N}\left(\hat{y}_i - a\,f(x_i)\right)^2 \qquad (5)$$

The expansion of the right hand side (RHS) of Eq.(5) is

$$g(a) = a^2\sum_{i=1}^{N} f(x_i)^2 \;-\; 2a\sum_{i=1}^{N} \hat{y}_i\,f(x_i) \;+\; \sum_{i=1}^{N} \hat{y}_i^2 \qquad (6)$$

The RHS above implies that $g(a)$ is a 2nd degree polynomial function of $a$. This seems to contradict the definition of $a$ as a constant. In fact, it is no contradiction at all: $a$ is a constant in the model of Eq.(1), but in Eq.(5) it is the only unknown. All the quantities involving sums of $f(x_i)$ and $\hat{y}_i$ and their combinations in Eq.(6) are known, because the $x_i$ and $\hat{y}_i$ are known from Eq.(2). This explains why the left hand side (LHS) of Eq.(5) is written as a function of $a$. A 2nd degree polynomial function is also called a quadratic polynomial. Some terminology and characteristics of polynomial functions are:

1. The degree of a polynomial function is determined by the highest exponent of the variable among its terms. The highest exponent is 2 in Eq.(6), so it is a 2nd degree polynomial.
2. The term with the highest exponent is called the leading term. The first term is the leading term in Eq.(6).
3. The coefficient of the leading term is called the leading coefficient. If the leading coefficient is positive, then the graph of the polynomial is of a bowl shape as in Figure 1; if the leading coefficient is negative, then the graph of the polynomial is an inverted bowl shape as in Figure 2.
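As a quick numerical check of the expansion in Eq.(6), the sketch below compares the direct SSE of Eq.(5) with the quadratic form. The data and the choice $f(x) = x$ are made up purely for illustration:

```python
# Verify that the quadratic expansion of the SSE (Eq.6) matches the
# direct sum of squared errors (Eq.5) for the model y = a*f(x).
# Hypothetical measurements; f(x) = x is chosen only for illustration.
x = [1.0, 2.0, 3.0, 4.0]
y_meas = [2.1, 3.9, 6.2, 7.8]   # measurements y-hat_i

def f(v):
    return v

def sse_direct(a):
    # g(a) from Eq.(5): sum over i of (y-hat_i - a*f(x_i))^2
    return sum((ym - a * f(xi)) ** 2 for xi, ym in zip(x, y_meas))

# Coefficients of the quadratic form in Eq.(6)
A = sum(f(xi) ** 2 for xi in x)                            # leading coefficient
B = -2 * sum(ym * f(xi) for xi, ym in zip(x, y_meas))
C = sum(ym ** 2 for ym in y_meas)

def sse_quadratic(a):
    return A * a ** 2 + B * a + C

for a in (0.5, 1.0, 2.0):
    assert abs(sse_direct(a) - sse_quadratic(a)) < 1e-9

assert A > 0   # positive leading coefficient: bowl shape, as in Figure 1
```

Because the leading coefficient $A = \sum f(x_i)^2$ is a sum of squares, it is always positive, which is why the SSE curve is bowl shaped.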

Notice that the function in Figure 1 has a minimum at its vertex and no maximum is possible. This allows using a simple first order derivative operator, $\frac{d}{dx}$ where $x$ is the independent variable, to find the minimum point. According to the theory of calculus, the solution of

$$\frac{dg(a)}{da} = 0 \qquad (7)$$

will provide the value of $a$ at a point where $g(a)$ of Eq.(5) is either minimum or maximum. From Figure 1 and the sign of the leading coefficient of the RHS of Eq.(6), we know that $g(a)$ in Eq.(5) is going to have a minimum only. Thus we can get the $a$ for which the SSE is minimum. From Eq.(7) and Eq.(5),

$$\frac{dg(a)}{da} = 2a\sum_{i=1}^{N} f(x_i)^2 - 2\sum_{i=1}^{N} \hat{y}_i\,f(x_i) = 0$$

Figure: 1. Quadratic polynomial with positive leading coefficient

From the above expression, the solution is

$$a = \frac{\sum_{i=1}^{N} \hat{y}_i\,f(x_i)}{\sum_{i=1}^{N} f(x_i)^2} \qquad (8)$$

Figure: 2. Quadratic polynomial with negative leading coefficient

So, $g(a)$ of Eq.(5) will have its least value if the estimate of $a$ is computed by Eq.(8) from the measured data set of Eq.(2) for the model of Eq.(1).

We will see how the estimate of $a$ in Eq.(8) is an average in the next post. In the post after that, the LSE estimation of $a$ in Eq.(8) will be extended to functions of multiple independent variables, and then the problem will be solved for polynomial functions of degree 2 or higher.
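As a minimal sketch of the whole derivation (assuming the notation above, with made-up data and $f(x) = x$ for illustration), the closed-form estimate of Eq.(8) can be computed directly and checked against nearby values of $a$:

```python
# Least-square-error estimate of the single parameter a in y = a*f(x), Eq.(8).
# Hypothetical measurements; f(x) = x is chosen only for illustration.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y_meas = [2.2, 3.9, 6.1, 8.0, 9.9]   # roughly y = 2x, with noise

def f(v):
    return v

def sse(a):
    # g(a) from Eq.(5)
    return sum((ym - a * f(xi)) ** 2 for xi, ym in zip(x, y_meas))

# Eq.(8): a = sum(y-hat_i * f(x_i)) / sum(f(x_i)^2)
a_hat = (sum(ym * f(xi) for xi, ym in zip(x, y_meas))
         / sum(f(xi) ** 2 for xi in x))

# The SSE at a_hat is no larger than at nearby values of a
assert sse(a_hat) <= sse(a_hat + 0.01)
assert sse(a_hat) <= sse(a_hat - 0.01)

print(round(a_hat, 3))   # → 1.996
```

No iteration or search is needed: because the SSE is a quadratic with a positive leading coefficient, Eq.(8) lands exactly at the bottom of the bowl.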

## Capacity Management Maturity Model Level 4 – Value

If you’re at this level, you probably…

1. Measure and report on IT in business terms;
2. Have a set of tools and processes enabling you to align IT with the business; and
3. Are one of the very few organizations that have reached this level of maturity.

This is the final endpoint of Capacity Management maturity. At this point, there are
no additional technical steps to take. Rather it is a matter of taking advantage of
the earlier work to fully integrate IT into the business. Everything is looked at from a
business point of view, rather than a technical standpoint. The focus is on actions that
will benefit the company as a whole.

At this level, the organization realizes the full value, not just of implementing Capacity
Management, but of IT as a whole. Since IT’s actions are fully and seamlessly aligned
with those of the business, they are directed toward helping the business achieve its
goals.

IT and business performance are tied together. In addition to IT services, business
processes are measured and audited for efficiency and effectiveness. It is possible
for IT and business units to accurately determine the cost of IT services and weigh
those costs against business benefits and risks in order to make the best decisions
regarding the use of IT.

A central process in organizations on this level is Continual Service Improvement. Even
though the highest level of maturity has been reached, there is a continuous need to
realign IT processes to the changing business needs.

## Capacity Management Maturity Model Level 2 – Proactive

If you’re at this level, you probably…

1. Focus on workload analysis rather than the performance of technical components; and
2. Rely on trending to give you early warnings regarding future incidents.

However, you also probably…

1. Have a hard time predicting the outcome of complex scenarios with enough confidence.

Organizations in lower maturity levels are triggered by events, without putting much
effort into understanding the cyclicality or seasonality of applications. At this level you are taking a more forward-looking view to try to predict and avoid those problems. Tools are in place for quicker diagnosis of performance problems and trending is used to predict performance bottlenecks or incidents in the near future. When detected, work is commissioned to mitigate the risk. This enables you to predict and avoid downtime in many cases.

In order to do trending you need historical information, not just a view of current
performance or during the last few days. The tools used must be able to continuously collect and aggregate data and put it into a historical database. Another key attribute of this level is the focus on workloads instead of just the compound activity of a technical component. Rather than looking at infrastructure components as entities, the activity of a component is broken down by application or service. These streams of activity are then the focus for analysis and reporting.

Combining the trending capabilities with the notion of different workloads makes for a new level of proactivity. This is the first stage where an organization truly automates the prevention and resolution of problems that are known to recur. This is where the IT department starts caring about the availability of applications, not just equipment.

To move to this stage, you need to see the full spectrum of an application, not just one layer or one tier of a multi-tiered application. This requires the use of a standardized toolset that covers the complete technology stack of a server, as
well as the diversity of different platforms when looking across numerous servers.

In order to keep it all together, this stage requires a stricter approach to monitoring. If you are stuck with a continuous flood of unrelated alerts and events from a multitude of sources, it’s hard to prioritize and optimize the effort. You need to identify the key factors beforehand and monitor, store and analyze that subset of information. Of course, one still might need to put out any fires that occur. But if you already know where the fires are likely to occur and which fires will hurt the most, you can prioritize actions to minimize damage.

## Capacity Management Maturity Model Level 1 – Reactive

If you’re at this level, you probably…

• Have rather detailed information on how components are performing.

However, you also probably…

• Still lack an overview;

• Have a hard time planning since you’re busy reacting to events outside your control; and

• Misdirect your efforts sometimes, focusing too much on less important incidents.

Companies operating at this level are more mature than those at the Chaotic level, but tend to have a fragmented view of what is taking place within their environment. Typically, they have an assembly of different tools to monitor the activities of different pieces of hardware, applications or services. These tools are not very well integrated and will only give an isolated view of a particular component. These silos of information do not correlate to one another and do not provide a comprehensive view of the full spread of a service or application offered to customers.

When a situation is discovered, whether through an alert from a monitoring tool or through a customer complaint, procedures are in place as to who is responsible for resolving the issue. The tools sometimes allow the IT department to respond to problems before getting customer complaints, but the response is based on information at the component level. They do not identify what services are being impacted or how it affects the business. IT, therefore, can misdirect its efforts by addressing problems that are not necessarily important to the business.

This reactive approach worked better some years ago, when there was less complexity in the IT environment: only a few tiers; simple applications and services. Since there were only a few easily defined technical silos, it was easier to localize the problem. Now, with cloud and virtualization adding new levels of abstraction to the technology, it is misguided to think of technology in terms of distinct silos or even tiers. Most components are thoroughly interconnected and interdependent. That increase in complexity calls for a more mature approach to capacity management.

Want to see where your organization falls in the maturity model? Take the 15 question quiz now!

## Capacity Management Maturity Model Level 0 – Chaotic

If you’re at this level you probably…

• Have no centralized service desk;
• Don’t predict any kind of incidents (they all come as surprises); and
• React to events in an ad hoc manner.

The most primitive level – Chaotic – is characterized by an overall lack of operations
management discipline. Rather than trying to operate a step ahead and acting before
users are affected, the IT organization only learns of performance problems when users
call in to complain. At the time of such events, performance tools and techniques are
assembled ad hoc. Typically, the people involved in the troubleshooting only have
access to snapshots of the most recent activities and lack information about the period
that led up to the incident.

To make matters worse, the organization often lacks a centralized service desk for
reporting of incidents and end users don’t necessarily know where to turn. Without a
service desk, there’s no focal point for coordinating the problem-solving process with
various technical teams.

Companies operating at this level of maturity do not have a clearly organized approach
to solving performance- and capacity-related problems. They do not have any standard
or uniformity as to what toolsets and processes to use; it’s all run on a “best effort”
basis. If they get something right, it is more by luck than by analysis.

Want to see where your organization falls in the maturity model? Take the 15 question quiz now!

## Risk Assessment – Another Important Aspect of Capacity Management

Managing risk is a constant balancing act for successful businesses. Choices need to be made because business decisions are often constrained by cash flow, credit availability and amount of risk. IT can be just a small part of these decisions. IT needs to be able to provide risk-related information to the business so more informed decisions can be made. In many cases this work will involve the use of predictive modeling tools to perform sensitivity analyses on IT systems.

Why is this work important? IT processes the transactions, records them in a general ledger or similar repository for financial/audit purposes, and then archives them at a later date in compliance with regulations and laws. For example, IT can process a gazillion sales orders, but if the supply chain can’t deliver the parts or the shipping department can’t ship the product in a timely fashion, IT’s work could actually be hurting the business by creating large numbers of unsatisfied customers. When trying to address the problems, sufficient cash or credit may not be available if computer system upgrades are needed at the same time the warehouse or supply chain needs to expand. Business executives will have to balance the costs of increasing business volumes against the inherent risks of not being able to deliver their quality products or services. The executives may choose to expand the warehouse and accept lesser performance on computer systems if that action provides the least risk and best opportunities for the future.

Capacity Management and specifically Predictive Modeling helps your business executives make those tough decisions. Using modeling tools and techniques, capacity managers can predict individual service performance at different transaction volumes. Costs can be applied to those different points in time and supplied to your executives. This work is no different than what the warehouse manager, supply chain director or loss manager is doing in other areas of your business. By providing this information, you become a partner, not just the “IT guy or gal”.

Until the next time

Ron

## Another Way to Deal with Intangibles

I frequently see IT professionals struggle when trying to quantify intangibles.  We deal with accurate, precise data on a daily basis, so it goes against our nature to provide less precise information to our stakeholders.  We tend to lose sight of the fact that we live in a fast-moving, imprecise business world (I know I am guilty at times).  When you really think about it, we often complain that our business leaders cannot precisely quantify their goals.  We forget that one cannot accurately predict what consumers will do next month, much less next year.  Since it usually takes IT staff some time to gather data from real-time sources, business situations can substantially change during the time IT performs the analyses. In many cases that means that the project has to be in place and operating in order to gather accurate data.  In order to accommodate the shorter business turnaround times, IT professionals may need to find quicker and better ways to quantify intangibles.

The American television game show “Wheel of Fortune” provides a good example of how one can deal with intangibles.  The show is based around a series of word games.  The contestants start with a blank set of tiles representing the letters in words, similar to the age-old game of “Hangman”. The contestants each spin a wheel to get the opportunity to ask if a certain letter exists in the word or set of words.  If it does, the contestant can try to solve the puzzle.  Sometimes a contestant can solve the puzzle with just one or two letters exposed; other times all letters must be revealed.

The game is not unlike the position business leaders often find themselves in when trying to make a decision.  They have a lot of unknowns, but each element of information they acquire gets them closer to an informed decision.  Today’s competitiveness means they may need to make those decisions more quickly, before all information is available.

The game concept can be a good example of how IT can deal with intangibles.  Rather than being unwilling to provide imprecise information, sometimes providing management with just a portion of the total is sufficient for them to make an informed decision.  The leaders have more information than when they started, thus can better understand the risks involved in proceeding with the new venture. They will take your information and combine it with that of other departments, giving them a clearer picture of what they are facing, facilitating the decision-making process.  So the next time one of your executives asks for information that is not readily available, don’t say “no.” Consider ways to more quickly provide portions of data, perhaps at lesser precision than normal, and ask if that lesser level of precision satisfies their needs.  Think “Wheel of Fortune.”

Until the next time,

Ron

## Maturity: Responding to business needs appropriately

Maturity, as defined by Wikipedia, is a psychological term used to indicate how a person responds to the circumstances or environment in an appropriate manner. At TeamQuest, we define maturity as the moment when IT views everything from a business point of view, rather than a technical standpoint. The focus is on actions that will benefit the company as a whole.

We’ve entered the nascent stages of understanding how to manage cloud computing and virtual heterogeneous environments. This means management complexity increases since there’s no longer a permanent and exclusive relationship between physical resources and the software that runs on it. IT needs the right people, processes and tools to navigate through the hype to make this work.

Capacity management isn’t the Holy Grail, but it can provide the information you need to make miraculous decisions. The process is simple. Realize where you are now, plan to move ahead, and take the first step toward greater maturity.

Take this quiz to discover where you are now. Once you get your score, review this 8-page paper. Then take the steps necessary to move to the next stage of maturity. The majority of companies are between stages 1 and 2 (i.e., reactive and proactive).

If that’s good enough, then continue on. If you’re looking to accurately weigh costs against benefits and risks, measure process efficiency and effectiveness, and link IT services to business processes, then it’s time to get to work. Get in touch with us or leave a comment.

Of course, if you’re already providing value to the business, let us know how you’re doing it. In fact, tell everyone how you’re doing it. We all want to respond to business needs in an appropriate manner.

## Who do you blame when something goes wrong?

Last month, I watched this video from Kathryn Schulz, who is a “wrongologist”. What she said about how we react to being right or wrong, and the emotions and beliefs that go along with it, caught my attention.

Then I asked myself these questions:

1. How can this relate to IT?
2. What can we learn from being wrong?
3. Who do we blame when something goes wrong?

Kathryn talks about how the aviation industry got it right after many years. Let’s face it, this is an industry where mistakes are not acceptable. Imagine you are about to board a plane, and they tell you the availability of the plane is 99.5% or the capacity of the plane is over by 20%. Would you feel comfortable? The changes and decisions they make on a daily basis have great consequences for all of their passengers.

The aviation industry realized that they had to move away from blaming an individual when something went wrong. Individuals make mistakes. It is inevitable. So they figured out that the answer to something going wrong was not an individual’s fault. Mistakes are great information and an opportunity to learn and improve. So what the aviation industry decided to focus on was their system/process. Where did the system/process fail and why? We might not be able to get perfect people, but we can definitely improve the process, so that mistakes (by people or IT components) are minimized.

How can this relate to IT?

IT obviously does not want to be wrong. But it seems that when something goes wrong in IT, there is finger pointing from Developers to System Administrators, from Database Administrators to Application Developers, and everyone else blaming the Network group!

Processes and best practices like ITSO (IT Service Optimization) can help minimize those mistakes by ensuring that the business requirements are understood for all IT services, ensuring risk levels are considered and prioritized, and planning for future scenarios and how they will affect services, applications and servers.

As the process matures, we provide better value to the business, better alignment and better risk assessment.

What can we learn from being wrong?

Danish scientist and Nobel laureate Niels Bohr defined an expert as “a person that has made every possible mistake within his or her field.” So being wrong should be taken as an opportunity to learn; with time and effort, it will lead us to being right!

In ITSO, there is a lesser-known sixth part of the process: Continual Service Improvement. When you complete the 5th step, go back to the 1st step. Many variables in IT that affect services are in constant flux. There is a continuous need to realign IT processes to the changing business needs. This continuous process is how we gain IT maturity.

Who do we blame when something goes wrong?

We can blame people or even computers/operating systems/applications. But that approach will only lead us to staying in a reactive mode, as people and IT components will eventually be wrong (they are not perfect). For example, an excellent IT operator might be up all night sick, and the next day might not be 100% alert and could make a mistake. A disk that has been working fine for years might all of a sudden fail. There are no guarantees for either of these two scenarios. Following a process (whether it’s a written process or an application transaction process) can minimize and in some cases avoid the mistake completely. A disk can be measured and diagnosed, and we can have a contingency plan in case the disk fails.

IT will make mistakes. Take those mistakes as an opportunity to learn to be right. It is the only path to maturity. Remember that IT maturity is a journey, not a destination. Enjoy the ride!

## TeamQuest Presents IT Service Optimization Award to Verizon Wireless

We are pleased to announce that Verizon Wireless has been awarded the 2011 IT Service Optimization (ITSO) Award. Verizon Wireless recently launched Apple’s iPhone 4 on the nation’s fastest and most advanced 4G network. By using TeamQuest software to optimize their IT services, the launch was a complete success.

“The launch of iPhone 4 for Verizon Wireless showcased how we utilize TeamQuest tools to enhance and optimize systems. Through continuous testing and strategic planning, we accomplished a record number of device sales without incurring any critical performance issues or outages,” said Rich Rodgers, Verizon Wireless Executive Director of IT Systems Engineering, Integration and Finance.

Can you hear me now?

Previous ITSO Award winner Law School Administration Council (LSAC) established an IT Service Optimization framework in tandem with TeamQuest software and has been able to streamline its infrastructure to completely fulfill its service demands. LSAC successfully negotiated its peak activity period with no service shortfalls, while adding new services at the same time.

ITSO Award nominees are judged on a variety of criteria:

1. Adoption: Implementation and use of TeamQuest software and best practices
2. Impact: The benefits obtained from implementing TeamQuest software and best practices
3. Innovation: The way the company uses TeamQuest software and best practices
4. Results: Concrete improvement and measurable change

Verizon Wireless’ implementation of TeamQuest software is a shining example of how to use sound capacity management people, processes and tools to bring value to the organization.

Join us in congratulating Verizon Wireless on their achievement!