## What is Nonlinear Regression: Linear Least Square Error Regression of Multi Variable Function (Part II)

A simple form of multi variable function is a 2-variable function expressed as

$f(x_1,x_2)=a_0+a_1x_1+a_2x_2.$
(1)

The Sum of Square Error function for a set of measurements $\{(\{x_{1j},x_{2j}\},\hat{f}(x_{1j},x_{2j}))|j=1,2,3,...,M\},$ where $\hat{f}$ is the measurement of $f$ and $M$ is the number of measurements, is

$S(a_0,a_1,a_2)=\sum_{j=1}^M[\hat{f}(x_{1j},x_{2j})-a_0-a_1x_{1j}-a_2x_{2j}]^2.$
(2)
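As a concrete illustration, the Sum of Square Error of Eq.(2) can be computed directly from a set of measurements. The following is a minimal sketch in Python with NumPy; the data values are hypothetical, chosen only to show the computation:

```python
import numpy as np

# Hypothetical measurements (illustrative values only, not from the post).
x1 = np.array([0.0, 1.0, 2.0, 3.0])     # measurements of x_1
x2 = np.array([1.0, 0.0, 1.0, 2.0])     # measurements of x_2
f_hat = np.array([1.1, 2.0, 3.9, 5.2])  # measurements of f(x_1, x_2)

def sse(a0, a1, a2):
    """Sum of Square Error of Eq.(2) for the model f = a0 + a1*x1 + a2*x2."""
    residuals = f_hat - a0 - a1 * x1 - a2 * x2
    return float(np.sum(residuals ** 2))
```

Regression then amounts to finding the $(a_0,a_1,a_2)$ that makes `sse` as small as possible, which is what the rest of the post derives.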

The expanded form of the right hand side (RHS) of Eq.(2) is

$\sum_{j=1}^M[\hat{f}(x_{1j},x_{2j})-a_0-a_1x_{1j}-a_2x_{2j}]^2=a_0^2\sum_{j=1}^M1+ a_1^2\sum_{j=1}^Mx_{1j}^2+a_2^2\sum_{j=1}^Mx_{2j}^2-2a_0\sum_{j=1}^M\hat{f}(x_{1j},x_{2j})-2a_1\sum_{j=1}^M\hat{f}(x_{1j},x_{2j})x_{1j}-2a_2\sum_{j=1}^M\hat{f}(x_{1j},x_{2j})x_{2j}+2a_0a_1\sum_{j=1}^Mx_{1j}+2a_0a_2\sum_{j=1}^Mx_{2j}+2a_1a_2\sum_{j=1}^Mx_{1j}x_{2j}+\sum_{j=1}^M[\hat{f}(x_{1j},x_{2j})]^2.$
(3)

To further simplify the example, assume that the constant term in the linear function of Eq.(1) is zero (i.e., $a_0=0$). Then Eq.(3) is rewritten as

$\sum_{j=1}^M[\hat{f}(x_{1j},x_{2j})-a_1x_{1j}-a_2x_{2j}]^2=a_1^2\sum_{j=1}^Mx_{1j}^2+a_2^2\sum_{j=1}^Mx_{2j}^2-2a_1\sum_{j=1}^M\hat{f}(x_{1j},x_{2j})x_{1j}-2a_2\sum_{j=1}^M\hat{f}(x_{1j},x_{2j})x_{2j}+2a_1a_2\sum_{j=1}^Mx_{1j}x_{2j}+\sum_{j=1}^M[\hat{f}(x_{1j},x_{2j})]^2.$
(4)
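The expansion in Eq.(4) is easy to spot-check numerically: for randomly generated data and arbitrary coefficients, the expanded right hand side must equal the directly computed sum of squares. A small verification sketch (the data here is random, used purely to check the algebra):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 50
x1, x2, f_hat = rng.normal(size=(3, M))  # random data, for checking only
a1, a2 = 1.5, -0.7                       # arbitrary coefficients

# Left hand side of Eq.(4): the sum of squared errors computed directly.
lhs = np.sum((f_hat - a1 * x1 - a2 * x2) ** 2)

# Right hand side of Eq.(4): the expanded form, term by term.
rhs = (a1**2 * np.sum(x1**2) + a2**2 * np.sum(x2**2)
       - 2 * a1 * np.sum(f_hat * x1) - 2 * a2 * np.sum(f_hat * x2)
       + 2 * a1 * a2 * np.sum(x1 * x2) + np.sum(f_hat**2))

assert np.isclose(lhs, rhs)
```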

In the previous post, the graph was like a vertical slice of a bowl. Figure 1 shows the inside of the seed holder of an avocado as a bowl shape.

Figure 1: Seed holder surface as an example of a bowl shape.

Figure 2 shows how a vertical slice of a bowl would look in the lengthwise quarter slice of an avocado.

Figure 2: Seed holder surface of the quarter slice as the shape of a vertical slice of a bowl.

In this case, the graph is like a 3-dimensional bowl, as shown in Figure 3. The heatmap surface of Figure 3 is similar to the seed holder surface of the half avocado in Figure 1.

Figure 3: Quadratic polynomial Square Error function with positive leading coefficient for a function of 2 random variables

In Figure 3, the graph of $4x^2+4y^2-8x-8y+2xy+4$ is displayed as an example of a polynomial of two input variables. To compare this with the Sum of Square Error function of a function of two input random variables, replace $x$ by $a_1$ and $y$ by $a_2$. The minimum is a point at the bottom of the bowl of Figure 3, and the base of the bowl is not a perfect circle. The heatmap of the surface in the top part of Figure 3 suggests that the lowest value of the surface is negative. In the lower part of Figure 3, the red contour line at the center is for -0.1. A contour line is a line joining all the points at the same height along the z-axis, assuming that the bottom of the bowl touches the horizontal plane formed by the x-axis and y-axis. Any closed contour smaller than that is too small to draw as an ellipse; in particular, it is not possible to draw a point-size ellipse, which would represent the minimum point of the surface. The smallest ellipse, at level -0.1, indicates that the lowest point lies in the area encircled by that contour line. From the lower part of Figure 3, the center of the contour line at level -0.1 appears to be close to the origin of the horizontal plane formed by the x-axis and y-axis. The values of the other contour lines are listed on the right, above the lower part of Figure 3.

In the previous post, the conditional expression at the minimum for one random variable was

$\frac{dS}{da}=0.$

An extension of the above condition to take into account 2 random variables will be

$\frac{\partial S(a_1,a_2)}{\partial a_i}=0, \text{ for } i=1,2.$
(5)

To test this on the function of Figure 3, write $S(a_1,a_2)$ as

$S(a_1,a_2)=4a_1^2+4a_2^2-8a_1-8a_2+2a_1a_2+4.$
(6)

When the condition of Eq.(5) is applied to the example graph of Figure 3, the following equations are obtained

$\frac{\partial}{\partial a_1}(4a_1^2+4a_2^2-8a_1-8a_2+2a_1a_2+4)=8a_1-8+2a_2=0.$
(7a)

$\frac{\partial}{\partial a_2}(4a_1^2+4a_2^2-8a_1-8a_2+2a_1a_2+4)=8a_2-8+2a_1=0.$
(7b)

As a result, the following system of linear equations is obtained

$4a_1+a_2=4.$
(8a)

$a_1+4a_2=4.$
(8b)

The solution of the system of linear equations of Eq.(8) is $a_1=0.8$ and $a_2=0.8$, i.e., the point $(0.8,0.8)$ on the horizontal plane formed by the x-axis and y-axis in Figure 3. So the precise location of the minimum is the point $(0.8,0.8)$ in the x-y plane of Figure 3, encircled by the smallest red contour at level -0.1 near the origin of the contour plot in the lower part. The precise value of the minimum is found by substituting $a_1=0.8$ and $a_2=0.8$ into the expression of Eq.(6):

$4\times(0.8)^2+4\times(0.8)^2-8\times(0.8)-8\times(0.8)+2\times(0.8)\times(0.8)+4=-2.4.$

The derived minimum value of -2.4 confirms that the lowest value of the surface is indeed negative, as the heatmap of Figure 3 suggested; no contour line can be displayed for any value below -2.4, as such level sets do not exist.
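The worked example above is easy to verify numerically. A minimal sketch that solves the linear system of Eq.(8) and evaluates Eq.(6) at the solution:

```python
import numpy as np

def S(a1, a2):
    # Eq.(6): the example Square Error surface shown in Figure 3.
    return 4*a1**2 + 4*a2**2 - 8*a1 - 8*a2 + 2*a1*a2 + 4

# Eq.(8): 4*a1 + a2 = 4 and a1 + 4*a2 = 4.
A = np.array([[4.0, 1.0],
              [1.0, 4.0]])
b = np.array([4.0, 4.0])

a1, a2 = np.linalg.solve(A, b)   # -> a1 = a2 = 0.8
minimum = S(a1, a2)              # -> -2.4
```

Solving the 2x2 system directly (rather than forming the matrix inverse explicitly) is also how the larger systems later in the post would normally be handled numerically.
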
A generalized multi variable linear function is formally stated as

$f(x_1,x_2,...,x_n)=a_0+a_1x_1+a_2x_2+...+a_nx_n.$
(9)

The coefficients $a_0,a_1,a_2,...,a_n$ need to be estimated using Linear Least Square Error Regression from a set of measurements of the random variables $x_1,x_2,...,x_n$ and the corresponding measurements of $f(x_1,x_2,...,x_n)$. The theory explained in the previous post is extended here to deal with the multi variable function of Eq.(9). As was done in the previous post, let the measurement of $f(x_1,x_2,...,x_n)$ be denoted by $\hat{f}(x_1,x_2,...,x_n)$.

The sum of square of error function is expressed as

$S(a_0,a_1,a_2,...,a_n)=\sum_{j=1}^M[\hat{f}(x_{1j},x_{2j},...,x_{nj})-a_0-a_1x_{1j}-a_2x_{2j}-...-a_nx_{nj}]^2.$
(10)

The case of $n=2$ was explained at the beginning of this post and in Figure 3 to give an understanding of the graph of $S(a_0,a_1,a_2,...,a_n)$ of Eq.(10); for larger $n$, this graph needs more than 3 dimensions and cannot be displayed.
The extension of the conditional expression of Eq.(5) is

$\frac{\partial S(a_0,a_1,a_2,...,a_n)}{\partial a_i}=0,\text{for } i=0,1,2,...,n.$
(11)

Substituting the RHS of Eq.(10) into Eq.(11) and expressing $(x_{1j},x_{2j},x_{3j},...,x_{nj})$ as $\overrightarrow{x}_j$, we obtain

$\frac{\partial S(a_0,a_1,a_2,...,a_n)}{\partial a_i}=\frac{\partial}{\partial a_i}\sum_{j=1}^M[\hat{f}(\overrightarrow{x}_j)-a_0-a_1x_{1j}-a_2x_{2j}-...-a_nx_{nj}]^2=0.$

$\Rightarrow 2\sum_{j=1}^M[\hat{f}(\overrightarrow{x}_j)-a_0-a_1x_{1j}-...-a_nx_{nj}]\frac{\partial}{\partial a_i}[\hat{f}(\overrightarrow{x}_j)-a_0-...-a_ix_{ij}-...-a_nx_{nj}]=0.$

$\Rightarrow 2\sum_{j=1}^M[\hat{f}(\overrightarrow{x}_j)-a_0-a_1x_{1j}-...-a_nx_{nj}][\frac{\partial}{\partial a_i}\hat{f}(\overrightarrow{x}_j)-\frac{\partial}{\partial a_i}a_0-...-\frac{\partial}{\partial a_i}a_ix_{ij}-...-\frac{\partial}{\partial a_i}a_nx_{nj}]=0.$

$\Rightarrow 2\sum_{j=1}^M[\hat{f}(\overrightarrow{x}_j)-a_0-a_1x_{1j}-...-a_ix_{ij}-...-a_nx_{nj}][0-0-...-0-x_{ij}-0-...-0]=0.$

$\Rightarrow 2\sum_{j=1}^M[\hat{f}(\overrightarrow{x}_j)-a_0-a_1x_{1j}-...-a_ix_{ij}-...-a_nx_{nj}](-x_{ij})=0.$

$\Rightarrow -2\sum_{j=1}^M[\hat{f}(\overrightarrow{x}_j)x_{ij}-a_0x_{ij}-a_1x_{1j}x_{ij}-...-a_ix^2_{ij}-...-a_nx_{nj}x_{ij}]=0.$

Dividing both sides of the above expression by -2

$\Rightarrow \sum_{j=1}^M[\hat{f}(\overrightarrow{x}_j)x_{ij}]-\sum_{j=1}^M[a_0x_{ij}]-\sum_{j=1}^M[a_1x_{1j}x_{ij}]-...-\sum_{j=1}^M[a_ix^2_{ij}]-...-\sum_{j=1}^M[a_nx_{nj}x_{ij}]=0.$

$\Rightarrow a_0\sum_{j=1}^Mx_{ij}+a_1\sum_{j=1}^M(x_{1j}x_{ij})+...+a_i\sum_{j=1}^Mx^2_{ij}+...+a_n\sum_{j=1}^M(x_{nj}x_{ij})=\sum_{j=1}^M[\hat{f}(\overrightarrow{x}_j)x_{ij}].$

The above expression holds for $i=1,2,...,n$. For $i=0$, the partial derivative of the bracket with respect to $a_0$ is $-1$ instead of $-x_{ij}$, and the same steps give

$a_0M+a_1\sum_{j=1}^Mx_{1j}+a_2\sum_{j=1}^Mx_{2j}+...+a_n\sum_{j=1}^Mx_{nj}=\sum_{j=1}^M\hat{f}(\overrightarrow{x}_j),$

since $\sum_{j=1}^M1=M$. Collecting the equation for $i=0$ together with those for $i=1,2,...,n$ gives the matrix equation shown below.

$\left(\begin{matrix}M & \sum_{j=1}^Mx_{1j} & \ldots & \sum_{j=1}^Mx_{nj} \\ \sum_{j=1}^Mx_{1j} & \sum_{j=1}^M(x_{1j})^2 & \ldots & \sum_{j=1}^Mx_{1j}x_{nj} \\ \vdots & \vdots & \ddots &\vdots\\\sum_{j=1}^Mx_{nj}&\sum_{j=1}^Mx_{1j}x_{nj} &\ldots & \sum_{j=1}^M(x_{nj})^2\end{matrix}\right)\times\begin{pmatrix}a_0 \\ a_1 \\ \vdots \\ a_n\end{pmatrix}=\begin{pmatrix}\sum_{j=1}^M\hat{f}(\overrightarrow{x}_j) \\ \sum_{j=1}^M[\hat{f}(\overrightarrow{x}_j)x_{1j}] \\ \vdots\\ \sum_{j=1}^M[\hat{f}(\overrightarrow{x}_j)x_{nj}]\end{pmatrix}.$
(12)

The solution of the above matrix equation gives the desired estimates of $a_0,a_1,a_2,...,a_n$. The solution involves finding the inverse of the matrix

$\left(\begin{matrix}M & \sum_{j=1}^Mx_{1j} & \ldots & \sum_{j=1}^Mx_{nj}\\\sum_{j=1}^Mx_{1j} & \sum_{j=1}^M(x_{1j})^2 & \ldots & \sum_{j=1}^Mx_{1j}x_{nj} \\ \vdots & \vdots & \ddots & \vdots \\ \sum_{j=1}^Mx_{nj} & \sum_{j=1}^Mx_{1j}x_{nj} & \ldots & \sum_{j=1}^M(x_{nj})^2\end{matrix}\right).$
(13)

Once the inverse is found, the solution is obtained by multiplying both sides of Eq.(12) by the inverse matrix of Eq.(13):

$\begin{pmatrix}a_0 \\ a_1 \\ \vdots \\ a_n\end{pmatrix}=\left(\begin{matrix}M & \sum_{j=1}^Mx_{1j} & \ldots & \sum_{j=1}^Mx_{nj} \\ \sum_{j=1}^Mx_{1j} & \sum_{j=1}^M(x_{1j})^2 & \ldots & \sum_{j=1}^Mx_{1j}x_{nj} \\ \vdots & \vdots & \ddots &\vdots\\\sum_{j=1}^Mx_{nj}&\sum_{j=1}^Mx_{1j}x_{nj} &\ldots & \sum_{j=1}^M(x_{nj})^2\end{matrix}\right)^{-1}\times\begin{pmatrix}\sum_{j=1}^M\hat{f}(\overrightarrow{x}_j) \\ \sum_{j=1}^M[\hat{f}(\overrightarrow{x}_j)x_{1j}] \\ \vdots\\ \sum_{j=1}^M[\hat{f}(\overrightarrow{x}_j)x_{nj}]\end{pmatrix}.$
(14)
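The whole procedure can be sketched in a few lines of NumPy. Building $D$, a matrix whose rows are $(1, x_{1j}, ..., x_{nj})$, makes the normal-equation matrix simply $D^TD$ (whose first row is $M, \sum x_{1j}, ..., \sum x_{nj}$, i.e., the equation for $i=0$) and the right hand side $D^T\hat{f}$. The function name and the synthetic data below are illustrative assumptions, not from the post:

```python
import numpy as np

def fit_linear(X, f_hat):
    """Estimate (a0, a1, ..., an) of Eq.(9) by least squares.

    X is an (M, n) array whose j-th row holds (x_1j, ..., x_nj);
    f_hat is the length-M vector of measurements of f.
    """
    M = X.shape[0]
    D = np.hstack([np.ones((M, 1)), X])  # rows (1, x_1j, ..., x_nj)
    A = D.T @ D                          # normal-equation matrix of sums of products
    b = D.T @ f_hat                      # right hand side vector
    return np.linalg.solve(A, b)         # solve rather than invert explicitly

# Synthetic check with hypothetical coefficients: noiseless data
# generated from known (a0, a1, a2, a3) should be recovered exactly
# (up to floating point).
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
true_a = np.array([0.5, 1.0, -2.0, 3.0])  # assumed for this test only
f_hat = true_a[0] + X @ true_a[1:]
a = fit_linear(X, f_hat)
```

In practice one would usually call `np.linalg.lstsq(D, f_hat)` instead of forming $D^TD$ explicitly, since the normal equations can be ill-conditioned; the version above follows the derivation in the text.
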

In the next post, we will find out how to apply the technique of linear least square error regression for linear functions of multiple variables to nonlinear functions of a single variable.

## TTS 2013 – Getting the Big Picture with the New TeamQuest Storage Solution

TeamQuest’s Dave Wagner and our storage partner Intellimagic’s Brett Allison gave attendees an in-depth look at an automated, exception-based capacity planning solution that spans servers, storage, and a plethora of business data sources.

Using TeamQuest Surveyor’s automated reporting capabilities in conjunction with the detailed performance and capacity data from Intellimagic, as well as the rich server and network performance and capacity data of the TeamQuest CMIS, we are able to provide comprehensive, federated analytics across servers and storage. Extending Surveyor’s capabilities to encompass the storage domain, while remaining fully integrated with all other data sources and use cases, is a powerful application of TeamQuest technology. The sky is the limit – financial, power consumption, application response, resource utilization, service catalog, trouble tickets…the list is endless. All of these data sources, in one place, correlated and automated in an exception-based format.

The TeamQuest storage solution uses SMI-S compliant collection and storage of storage data. We currently support EMC and IBM in VMware environments. Expansions are planned throughout 2013 to include other storage brands and platforms. We can integrate with other storage tools and data sources as well. Click here for more information.

## TTS 2013 – Recap of Forrester’s James Staten: “Turn Cloud Economics To Your Advantage”

Everyone is talking about creating, using, or exploiting the numerous advantages of cloud computing. There are so many vendors out there saying so many things about why cloud is the perfect fit for you.

Forrester Analyst James Staten delivered a thought provoking presentation on how to turn cloud economics to your advantage without the vendor hype.

IT is being left out of many business discussions because of the requirement of speed to market and the perception that IT is the “Department of No.”

Business-ready, self-service technology includes SaaS, mobile, tablets, and cloud platforms. The market in this area has exploded and grown much more rapidly than anyone anticipated. Cloud is no longer just for test and development – you cannot ignore this trend.

A self-sufficient, tech-savvy workforce is rising as well. It isn’t the “young whipper-snapper” bringing a Mac into the workplace anymore; these changes are now being driven by people with a “V” in their title.

Your business environment will be radically more complex. By the year 2020, the US will be overtaken by China in terms of GDP. North American businesses will have to adapt and recognize the size and importance of this market. New competitors, new products, and new markets require the business to think and act drastically differently.

The initial reaction of IT is, “NO! What about security? You can’t back that up! You can’t…” IT needs to embrace the cloud via 5 Habits of Cloud-enabled IT Leadership.

1. Embrace it where it fits. Understand what services are out there and what their economic models are.

2. Get your hands dirty. If you know your business is using a cloud service and no one in IT has an account or access, you can’t understand it. You need to know what your business is using.

3. Acknowledge and leverage hybrid. Realize you are already hybrid if you have a SaaS application in use within your organization. You have to have performance management and capacity planning work all the time so you can see what is coming down the pipe.

4. Plan for failure, plan for success. You have to understand that things fail in the cloud a lot. Traditional enterprise application assumptions (stable, reliable hardware, static relationships, uninterrupted network, etc.) highlight why traditional applications struggle to run on true clouds.

5. Run the numbers. Cloud is not always cheaper. Be informed of the cost and how the economics of cloud differ.

The basics of cloud economics:
1. Elastic scale delivers just-in-time capacity.
2. Pay-per-use keeps costs low.
3. Self-service fuels productivity.

Stages of cloud economics:
1. Scale-up: elastic and transient applications.
2. Scale-down: application optimization and performance monitoring.
3. Profit Center: new revenue streams and hybrid architecture.

## TTS 2013 – Recap of Monday’s Breakout Sessions

Improving Cloud Efficiency Using Model-based, Transaction-aware Management – Leonid Grinshpan, Oracle.

Mr. Grinshpan showed attendees how to use TeamQuest to accurately plan capacity for a cloud implementation. He showed the flexibility of using TeamQuest Predictor to model various scenarios to ensure SLAs are met. Grinshpan walked through a real-life case study showing how TeamQuest can help in your cloud implementation.

Keep All of Your Services, Apps, & Servers Running at Optimum Levels Using TeamQuest Predictor’s New Automated Analysis Capabilities – Scott Johnson, TeamQuest.

TeamQuest’s Predictor and capacity planning guru, Scott Johnson, detailed the latest automated analysis features that help remove the complexity of capacity planning. By systematically automating prediction scenarios in a routine manner, TeamQuest is able to take the guesswork out of knowing the health of your services. Here’s more information on TeamQuest’s AutoPredict feature.

Optimizing IT Services in a VMware Environment – Evan Anderson, TeamQuest.

VMware is the leader in virtualization technology and Evan showed TTS attendees how to make the most out of their VMware environment. TeamQuest’s VMware solution offers unprecedented storage, system, and network capacity planning all in one view. TeamQuest’s agentless VMware data collection provides you the capability to analyze, report, and capacity plan your VMware environment.

Surveyor Out of the Box Reporting with EyeR – Walter Verhoeven, CREATIVE Associates

CREATIVE Associates is a valuable TeamQuest partner and their latest solution extends the capabilities of TeamQuest Surveyor. These out of the box reports help you save time and money by eliminating the need to develop your own views for Surveyor and providing intelligence for virtualized environments instantly after installation. Contact us for more information.

## TTS 2013 – Product Management Highlights Recent Developments

TeamQuest’s Director of Product Management, Scott Adams, ended the morning session of TTS 2013 by keeping us up to speed with the latest developments of TeamQuest Performance Software in the past several months.  Here’s a rundown:

1. TeamQuest Performance Indicator (TPI) – a unique measurement of system and workload health. From 0 to 100, TPI simplifies and automates predictive analysis of performance. Our website has a collection of information on TPI including a video, release data sheet, and additional web pages explaining TPI.
2. TeamQuest CMIS for Storage – Powered by our partnership with Intellimagic, TeamQuest CMIS for Storage automatically maps how virtual machines are connected to storage devices, identifies performance bottlenecks regardless of where they may be – storage, server, or virtualization layer – and provides capacity planning for your storage environment. Taneja Group wrote a white paper on the storage capabilities combined with systems to create a very powerful solution that is only offered at TeamQuest. We are not aware of any other company doing this in the market.
3. AWS EC2 Support – thinking about moving a workload or application to the cloud and worried about how much it might cost or how it will behave? TeamQuest can now predict exactly how a migration to the Amazon cloud will go. Here’s an earlier blog post on these capabilities.
4. PostgreSQL Database Option is Available – another popular relational database that we support, helping to increase scalability as well as decrease costs.
5. Additional Virtualization Support – added support for KVM, extended HyperV as well as VMware capabilities and measurements.
6. A host of performance, administrative, and product usability improvements.

Scott did go through future plans and activities, but those are reserved for TTS attendees’ eyes only.

## TTS 2013 – Create Value by Elevating Your Process Maturity: Do the Right Things

Director of Global Services, Per Bauer, took the stage this morning to talk through the TeamQuest Capacity Management Maturity Model.

Attendees of TTS understand the value of capacity management. It helps you avoid service interruptions, avoid unneeded purchases, avoid over-provisioning, and increase IT efficiency by mitigating risk.

To understand capacity, you need to understand what exactly you are managing. Over the past several years, we have seen the transformation from 1 server, 1 application to dynamic resource scheduling to the many flavors of cloud computing available today. You need to be able to cope with these changes in people, process, and tools.

Measure–>Analyze–>Plan. You have many options of how you collect data, how you can analyze the data, and what methods to use to plan for capacity.  Understanding the differences and implications of the choices is key to developing the correct strategy for your organization.

There are different levels of capacity management – component, service, and business. Each level builds the foundation for the next helping to provide a comprehensive, business oriented capacity management discipline.

Capacity management also involves an accuracy tradeoff. Some portions of your business may not require a high level of certainty in capacity planning while other critical aspects demand thorough testing and planning to ensure maximum availability.

So, where should you start? ITIL provides a framework for continual service improvement. For some, an ITIL implementation just isn’t a good fit – it’s intimidating and might be too high level.

The TeamQuest Maturity Model is another option to entertain. Each level of maturity is identified by a distinct set of characteristics covering people, process, and tools; and culminating with suggestions for next steps that can be taken to increase your maturity.

If you would like to find out where you are on the journey to capacity management maturity, take this short 15 question survey. Here is the link to the white paper Per wrote on maturity. You can also contact Per directly at per.bauer@teamquest.com.

## Interactive: TeamQuest Technology Summit 2013

TTS is quickly approaching and I wanted to share with everyone how you can stay in touch with what’s going on and be interactive before, during, and after the event. We know everyone can’t make the trip to the heart of Texas hill country, so we’ll be live tweeting and live blogging the event and posting to a variety of other social sites. Make sure to follow us for updates and join the conversation!

A few highlights of who will be speaking:

1. Cameron Haight – Gartner Analyst. Cameron will be discussing how to deal with complexity via design-centered IT.
2. James Staten – Forrester Analyst. James will examine how to take advantage of cloud economics.
3. Per Bauer – TeamQuest Director of Global Services. Per pioneered our Capacity Management Maturity Model and is going to show how to elevate your maturity to the next level.
4. Scott Adams – TeamQuest Director of Product Management. Scott will review the latest features and highlight what is coming in the very near future.

Full agenda can be found here.

Here’s how you can stay connected to what’s happening this year:

1. TeamQuest IT Service Optimization Blog – You’ve already made it here if you are reading this, but we will have recaps of all of the general and breakout sessions.
2. TeamQuest on Twitter – Live commentary on the event.
3. #TTS2013 Hashtag – A collection of all tweets related to the event.
4. TeamQuest on Facebook – Don’t forget to like us on Facebook; you’ll see a lighter side of the event here.

Put it on your calendars, set a reminder, or write yourself a sticky note to check out TTS this coming Sunday through Tuesday, April 21-23. If you like what you see, you can always attend in person next year!

## Announcing Support for KVM and Amazon EC2

Today, our newest release of TeamQuest Performance Software hit the market. We are proud to announce that we are giving IT professionals the ability to monitor and analyze the performance of Red Hat Enterprise Virtualization (KVM) and to capacity plan for migrations to Amazon Elastic Compute Cloud (EC2).

KVM is a widely adopted and well respected virtualization platform. With this release, you can detect and quickly troubleshoot performance issues in your virtual environment – regardless of OS. We now support VMware, Hyper-V, Solaris Zones, Containers and LDOMs, and IBM LPARs and WPARs. The addition of KVM makes the TeamQuest solution the front runner in adapting to the complex, heterogeneous virtual and cloud environments that are being built today.

When people think of IaaS, they usually think of Amazon first. We’ve given you the power to create what-if scenarios to evaluate just how a workload will perform in an Amazon EC2 cloud with TeamQuest Predictor. No more hoping that things won’t break or guessing how much it will cost – we take the guesswork out of the equation. This is yet another valuable addition to the multi-vendor environment coverage that enterprises need to support their business.

We are all about giving you options, and another noteworthy item is our PostgreSQL alternative to Oracle or the TeamQuest proprietary database. Just another way we continue to listen to our customers and build a solution that makes them successful.

## Top 10 Lessons Learned as Director of Capacity Planning – #3

This entry is one in a series of Top 10 lessons learned by Ron Potter in his previous job as the Director of Capacity Planning at a Fortune 100 health insurance provider.

Change the Conversation

I can’t tell you how many times I have sat in front of my CIO or CFO and seen their eyes glaze over as I presented the IT costs needed to support a particular project. That is because business leaders dislike expenses. However they do like investments with a reasonable return. In order to break the “cost” pattern, we need to change the conversation from one of cost to one of investment.

Consider the following two statements:

1. We need to spend $1.5 million to support the additional sales invoice infrastructure.
2. By investing $1.5 million, we can support the new sales invoicing system, and economies of scale as a result will reduce our IT cost per sale by 5%.

Which one would be more appealing to an executive? Probably the second one.

As you can see, by spending more time to frame the information, we can change the conversation to reveal the true business value of a project.

Another way is to give choices. Many capacity planners tell me their business leaders demand sub-second response time so they can improve productivity and thus the bottom line. I have yet to see a study from those business leaders showing that they actually analyzed the productivity gains from different response times and attributed savings to them. It is usually perception driving that conversation. I have pushed back on a few and discovered that the improvements were mainly being requested to satisfy a few high-performing piece-work employees, and that the change would have little if any impact on the normal employee.

So giving choices transforms the decision from technical to business. For example, there could easily be a $5 million per year spread between delivering sub-second response and 5-second response. In the previous case, spending $5 million extra to satisfy a few workers will not recoup the expense any time soon, if ever. The business is better off giving the few high performers a bonus for their productivity and keeping the systems as-is. So by giving choices, you force the analysis and ensure a reasonable option is chosen.

Until the next post…

Ron

## What is Nonlinear Regression: Linear Least Square Error Regression (PART I)

This is a post in a series of posts that dives deep into the mathematics involved in capacity planning. We have incredibly smart people figuring out the hardest problems so you don’t have to. This is a closer look at what goes on behind the scenes in a capacity planning tool.

The method of estimating the mean of the constant coefficients of a nonlinear function using the mathematical rules of the least square error (LSE) principle is often called nonlinear regression. In most cases, the mathematical rules are the same as those of LSE for estimating the constant coefficients of a linear function. Let’s elaborate on what linear regression is.

We need to estimate the average of the constant parameter $a$ of the function of $x$ in Eq.(1) from a set of collected measurements of pairs $(x_j,\hat{f}(x_j))$ for the given function

$f(x)=ax.$
(1)

Mathematically, the collected measurements of pairs of $(x_j,\hat{f}(x_j))$ are written as

$\{(x_j,\hat{f}(x_j))|j=1,2,3,...,M\},$
(2)

where $M$ is the number of measurements.

Note that $\hat{f}(x_j)$ is symbolically the measurement of $f(x_j)$. The constant coefficients of expressions used for modeling some practical phenomena or processes are almost always estimated by using LSE. This type of estimation approach is adopted when the model is intuitive and the actual dynamics of the phenomena are unknown, but certain factors are assumed to be influencing the phenomena. This means that the model of Eq.(1) is an intuitive approximation based on measured data.
A common sense idea is to estimate the constant parameter $a$ such that some expression of the errors between the modeled $f(x_j)$ and the measurements $\hat{f}(x_j)$ is minimum. So, it is assumed that for the measured $x_j$, the modeled $f(x_j)=ax_j$ will deviate from the measurement $\hat{f}(x_j)$. The errors are the differences between $ax_j$ and the measurement $\hat{f}(x_j)$ for $j=1,2,...,M$.

The most commonly known difference expression would be

$e_j=\hat{f}(x_j)-ax_j.$
(3)

The total error from Eq.(3) would be

$E=\sum_{j=1}^Me_j=\sum_{j=1}^M[\hat{f}(x_j)-ax_j].$
(4)

Instead of estimating $a$ such that the expression in Eq.(4) is smallest, a slightly different approach is used.

The sum of square of error (SSE) function is minimized in the process of estimating $a$. The SSE function, $S(a)$, for the function of Eq.(1) and the measured data of Eq.(2) is

$S(a)=\sum_{j=1}^M[\hat{f}(x_j)-ax_j]^2.$
(5)

The expansion of the right hand side (RHS) of the equation of Eq.(5) is

$\sum_{j=1}^M[\hat{f}(x_j)-ax_j]^2=a^2\sum_{j=1}^Mx_j^2-2a\sum_{j=1}^M\hat{f}(x_j)x_j+\sum_{j=1}^M[\hat{f}(x_j)]^2.$
(6)

The RHS of the above implies that $S(a)$ is a 2nd degree polynomial function of $a$. This seems to contradict the definition of $a$ as a constant. In fact, there is no contradiction at all. $a$ is a constant in the model of Eq.(1), but in Eq.(5) $a$ is the only unknown. All the quantities involving sums of $x_j$ and $\hat{f}(x_j)$ and their combinations in Eq.(5) are known, as $x_j$ and $\hat{f}(x_j)$ are known from Eq.(2). This explains why the left hand side (LHS) of the equation in Eq.(5) is written as a function of $a$. A 2nd degree polynomial function is also called a quadratic polynomial. Some terminology and characteristics of polynomial functions are:

1. The degree of a polynomial function is determined by the highest exponent of the variable among the terms of the series summations. The highest exponent is 2 in Eq.(6), so it is a 2nd degree polynomial.
2. The term with highest exponent is called the leading term. The first term is the leading term in Eq.(6).
3. The coefficient of the leading term is called the leading coefficient. If the leading coefficient is positive then the graph of the polynomial is of a bowl shape as in Figure 1, else if the leading coefficient is negative then the graph of the polynomial is an inverted bowl shape as in Figure 2.
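The quadratic form of the SSE can be checked numerically. The sketch below assumes the single-variable model $f(x)=ax$ and uses hypothetical data values (chosen only for illustration); it verifies that the SSE agrees term by term with a quadratic in $a$ whose leading coefficient, $\sum_j x_j^2$, is always positive, which is why the graph is bowl shaped:

```python
import numpy as np

# Hypothetical single-variable measurements (illustrative only).
x = np.array([1.0, 2.0, 3.0, 4.0])        # measurements of x
f_hat = np.array([2.1, 3.9, 6.2, 7.8])    # measurements of f(x)

def S(a):
    """SSE of Eq.(5) for the model f(x) = a*x."""
    return float(np.sum((f_hat - a * x) ** 2))

# Coefficients of the quadratic form of Eq.(6): S(a) = c2*a^2 + c1*a + c0.
c2 = float(np.sum(x**2))            # leading coefficient, always positive
c1 = float(-2 * np.sum(f_hat * x))
c0 = float(np.sum(f_hat**2))

# The direct SSE and the expanded quadratic agree for any a.
a_test = 1.7
assert np.isclose(S(a_test), c2 * a_test**2 + c1 * a_test + c0)
```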

Notice that the function in Figure 1 has a minimum at a single point, and no maximum is possible. This allows using the simple first order derivative operator $\frac{d}{dx}$, where $x$ is the independent variable, to find the minimum point. According to the theory of calculus, the solution of

$\frac{dS(a)}{da}=0$
(7)

will provide the value of $a$ at a point where $S(a)$ of Eq.(5) is either minimum or maximum. From Figure 1 and the sign of the leading coefficient of the RHS of Eq.(6), we know that $S(a)$ in Eq.(5) is going to have a minimum only. Thus we can get the $a$ for which the SSE is minimum. From Eq.(7) and Eq.(5)