Human beings are distinct from the rest of the living world in a peculiar way: almost every human being wants to possess everything that any other human being has. There are definite exceptions to this characterization, such as the people who dedicate their lives to feeding the poor or helping others at the cost of their own lives. At the same time, almost every human being would rather spend the rest of their life on vacation. Either a human being controls the temptation, or, in this whirl of activity for productivity, one must control the others to have it all in one's possession. This has given us kings and nobles and barons and billionaires and presidents, prime ministers, lawmakers, and so on. Presidents, prime ministers, and lawmakers claim that they are serving the people.
I wonder how many of the lawmakers, presidents, ministers, and secretaries would land in a cluster labelled "public servant" if a clustering algorithm were run on all of them, with data about their involvement in scandalous deals, voting records, fund-raising parties, extramarital affairs, and so on. What if Bayes' theorem were applied to their voting records and the bills they passed, to estimate the probability that a bill truly benefited the public rather than campaign donors and lobbyists? That would show how many of the self-proclaimed servants truly want to serve, and how many want to control others to have it all. I have just suggested a new application of big data analytics to political science.
Cloud computing and virtualization seem to be out of control. The blogospheres, LinkedInians, and Twitterati are inundated with advice, warnings, suggestions, guidance, wisdom, and caution regarding the benefits and challenges of cloud computing and virtualization. In this crowd, one person is still relevant, and his name is Dr. Neil J. Gunther.
In 1993, Dr. Gunther proposed a model called the super-serial scalability law (SSL), and in 2006 he proposed a similar model, the Universal Scalability Law (USL), that allows scalability to be managed by controlling the concurrent load. Cloud computing had not yet become part of the everyday routine, and many businesses were still maturing in adopting virtualization in their datacenters. Between 1993 and 2008, businesses mostly ran their applications on big servers (smaller than mainframes) with many CPUs running many applications, controlled by an operating system installed in the server box. Neither SSL nor USL can handle the many-applications situation, but both could, and still can, deal with many concurrent instances of the same application very well.
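For readers who have not met the USL before, it has a simple closed form: relative capacity C(N) = N / (1 + α(N−1) + βN(N−1)), where α captures contention (serialization) and β captures coherency (crosstalk) cost. A minimal sketch, with purely hypothetical coefficients:

```python
# Sketch of Gunther's Universal Scalability Law (USL).
# C(N) = N / (1 + alpha*(N - 1) + beta*N*(N - 1))
# alpha: contention (serialization) penalty, beta: coherency (crosstalk) penalty.
# The coefficient values below are illustrative, not measured from any system.

def usl(n, alpha, beta):
    """Relative capacity C(n) = T(n)/T(1) under the USL."""
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

alpha, beta = 0.05, 0.002        # hypothetical contention/coherency values

# Setting dC/dN = 0 gives the peak concurrency N* = sqrt((1 - alpha) / beta):
n_star = ((1 - alpha) / beta) ** 0.5
print(round(n_star))             # -> 22; beyond this load, throughput degrades
```

The useful property is the last line: once α and β are estimated, the load level at which throughput peaks falls out of a one-line formula.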
[caption id="attachment_1223" align="aligncenter" width="695" caption="Concurrent loads vs scalability"][/caption]
A simple best-practice suggestion for virtualized systems is to run multiple VMs of the same configuration in a host, each with a single vCPU and a single application, instead of one VM with several vCPUs. In practice, applications that were developed for physical machines with many CPUs are moved to VMs with several vCPUs for lack of a better management idea, and usually the number of vCPUs matches the number of CPUs of the physical system that hosted the application.
Many articles suggest "appropriately" configuring the VMs, giving them "enough" resources to run the application "safely," and thus overprovisioning efficiently for smooth operation of the datacenter. Most of those articles and blogs do not explain what an "appropriate" configuration is, how much is "enough," or what defines "safe" operation of the datacenter.
If many VMs with the same configuration and a single vCPU each are deployed on a single host, and each VM runs a single application, then we have exactly the situation that SSL or USL can handle. Safe operation of a datacenter means no incident like the "healthcare.gov" disaster. The configuration of the VMs will depend on the number of VMs sharing the host's resources: a high-priority VM will need a higher "share" value if it has to share resources with many lower-priority VMs. That raises the question: how many VMs can run on a host concurrently before the host saturates? And what if the host must run oversaturated when some other host fails? The graphs in the figure shed some light.
In the figure, the boxes are measured data points from a system running a database. The horizontal axis shows the number of concurrent connections handled by the database, and the vertical axis shows scalability, computed as the ratio of the throughput (transaction rate) at n connections to the throughput at 1 connection, T(n)/T(1). The solid line is the scalability predicted by SSL from historical or load-test data up to 30 concurrent connections; the dash-dot line is the corresponding USL prediction. Both lines show a distinct point of maximum scalability. The measured data shows that maximum scalability occurs at 43 concurrent connections, meaning that throughput degrades beyond 43 connections. The maximum concurrency predicted by SSL is very close to the measured value, but the USL prediction is not as good. This shows that a working estimate of how many VMs can run before the host saturates can be obtained with SSL or USL.
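The workflow behind those prediction lines can be sketched in a few lines: fit the model's coefficients to load-test points, then read off the predicted peak. The sketch below fits the USL by a brute-force least-squares grid search; the "measured" points are synthetic, generated from coefficients I picked, not the data from the figure, so the fit recovering them is only a sanity check of the method.

```python
# Hedged sketch: estimating USL coefficients from load-test scalability points
# by grid-search least squares (pure Python, no fitting library). The data
# points are synthetic, generated from hypothetical coefficients, not the
# measurements shown in the figure.

def usl(n, alpha, beta):
    """Relative capacity C(n) = T(n)/T(1) under the USL."""
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

true_alpha, true_beta = 0.05, 0.0004               # hypothetical "truth"
data = [(n, usl(n, true_alpha, true_beta)) for n in (1, 5, 10, 20, 30)]

best = (float("inf"), 0.0, 0.0)                    # (error, alpha, beta)
for a in range(0, 101):                            # alpha in [0, 0.100]
    for b in range(1, 101):                        # beta  in (0, 0.00100]
        alpha, beta = a / 1000.0, b / 100000.0
        err = sum((usl(n, alpha, beta) - c) ** 2 for n, c in data)
        if err < best[0]:
            best = (err, alpha, beta)

_, alpha, beta = best
n_star = ((1 - alpha) / beta) ** 0.5               # predicted saturation point
print(f"alpha={alpha} beta={beta} peak at ~{round(n_star)} connections")
```

With real load-test data the fit will not be exact, and a proper fitting routine (e.g. nonlinear least squares) would replace the grid search, but the shape of the exercise is the same: measurements up to a modest load, coefficients out, predicted saturation point out.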
Neither SSL nor USL predicts system behavior well in the post-saturation phase: the scalability they predict there is much lower than the measured test data. Yet a system can land in the oversaturation phase, even if not for long, when automatic vMotion is allowed in a virtualized computing environment. Do we want control over the oversaturation phase, or do we want to ignore it?
If we want to control the oversaturation phase, then we need to do better than SSL or USL. Here comes an improvement of SSL, called the Asymptotically Improved Super-serial Scalability Law, or AISSL for short. The dotted line passing through the boxes (the measured data points) is the model predicted by AISSL. If you are an efficiency freak who wants to squeeze the maximum out of a host, you might get into the oversaturation phase more often than the more dovish commanders of datacenters. With AISSL, you can be a hawkish, efficiency-freak datacenter commander with confidence. This is almost as good as it gets.
This brand-new modeling technology will be presented in session 252 of Performance and Capacity 2014 on November 3, 2014, in Atlanta, GA, USA. If you are the hawk who is an efficiency freak, I would like to see you in Atlanta on November 3rd, and together we will soar high above the doves.