How to Control Your Cloud Deployment [Webinar Recap]

    March 19, 2018

    By Per Bauer

    As a follow-up to our primer on capacity management for the cloud, my "How to Control Your Cloud Deployment" webinar focused on the first step toward life in the cloud: transition and migration.

    To kick off our discussion of cloud deployment strategies, here’s a refresher on the definitions of different entities in the cloud space.

    Cloud Types Diagram

    Traditional IT includes on-prem data centers.

    Private cloud is somewhere in between traditional IT and public cloud, providing the self-provisioning and elasticity of the cloud but with infrastructure you operate and optimize.

    Public cloud hands the reins to a cloud provider, providing storage space, optimization, and infrastructure.

    Hybrid cloud is a mix between public and private cloud, declining in use as companies find that it’s hard to move workloads between them.

    Multi-cloud, the growing alternative to hybrid cloud, uses multiple public cloud vendors, either to use each vendor for what it does best or to spread out risk.

    Hybrid IT means managing all of the above in any combination. You need solutions that can federate data from traditional IT and cloud instances and understand the differences between running things in the cloud and on-prem.

    Over the past few years, there’s been huge growth in public cloud adoption, with IaaS (Infrastructure as a Service) representing one of the larger chunks of cloud revenue. IaaS that requires capacity planning has also been growing quickly and is expected to triple over the next few years, according to a 2017 Gartner survey.

    What’s driving people to adopt IaaS with cloud providers?

    Cost reduction is one of the top drivers, with the possibility of a pay-as-you-go model and not having to pay for more on-prem data centers. The agility IaaS provides means you can allocate new resources with short notice. In the past, business users have endured long lead times for provisioning.

    Relying on external experts for infrastructure maintenance frees up your IT professionals to focus on value-adding strategic initiatives, and having someone who specializes in infrastructure—especially for smaller organizations—provides peace of mind and assurance that everything is taken care of.

    Finally, the possibility of scaling up and matching demand easily is compelling. This doesn’t happen automatically with the cloud, but the prospect is a selling point.

    Migration Strategies

    The public cloud is designed for “cloud native” workloads, which means monolith or n-tier applications must be broken into microservices that run independently in order to operate in the cloud. You also need to be able to add or subtract nodes so that resources dynamically match demand, and you need a certain level of automation, since there are more components to operate.
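
    To make the “add or subtract nodes” idea concrete, here’s a minimal Python sketch of the kind of scaling decision a cloud-native application delegates to automation. The target utilization, node limits, and numbers are invented for illustration; in practice you’d lean on your provider’s autoscaling features rather than hand-rolled logic like this.

```python
# A rough sketch of scaling node count so that average CPU utilization
# trends back toward a target. All thresholds here are hypothetical.

def desired_node_count(current_nodes: int, avg_cpu_utilization: float,
                       target_utilization: float = 0.60,
                       min_nodes: int = 2, max_nodes: int = 20) -> int:
    """Return how many nodes we should be running, given current load."""
    if avg_cpu_utilization <= 0:
        return min_nodes
    # Proportional rule: total work stays the same, so nodes scale with
    # the ratio of observed utilization to the target.
    needed = round(current_nodes * avg_cpu_utilization / target_utilization)
    return max(min_nodes, min(max_nodes, needed))

# Six nodes at 90% average CPU -> scale out to nine nodes.
print(desired_node_count(current_nodes=6, avg_cpu_utilization=0.90))
```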

    Ideally, an application would go through all of these phases before migrating to the cloud. You’d optimize your target instances, identifying the optimal capacity unit and price for each component. Then, after evaluating seasonality and variability in demand, you’d determine how much capacity you need to scale up for rapid or gradual increases.
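
    For the “optimal capacity unit and price” step, the idea boils down to finding the cheapest instance type that covers each component’s peak needs. The catalog and prices below are made up for illustration, not any provider’s actual offerings.

```python
# Pick the cheapest instance type that satisfies a component's peak CPU and
# memory requirements. The catalog entries and prices are hypothetical.

catalog = [
    # (name, vCPUs, memory in GB, hourly price in USD)
    ("small",  2,  4, 0.05),
    ("medium", 4,  8, 0.10),
    ("large",  8, 16, 0.20),
]

def cheapest_fit(peak_vcpus: float, peak_mem_gb: float):
    candidates = [c for c in catalog if c[1] >= peak_vcpus and c[2] >= peak_mem_gb]
    if not candidates:
        raise ValueError("No single instance fits; split the component or scale out.")
    return min(candidates, key=lambda c: c[3])

# A component peaking at 3 vCPUs and 6 GB of memory lands on "medium".
print(cheapest_fit(3, 6))
```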

    All these steps need to happen in order to run successfully on the cloud.

    Now, performing all of these before you migrate may prove a challenge, as you may have a restless CIO or business users who have been waiting many months for the changes. Introducing all of these changes at the same time also makes it harder to troubleshoot when there are performance issues. And it’s difficult to mimic the public cloud environment before you move, which complicates your testing process.

    Cloud Migration Diagram

    The alternative method is known as Lift and Shift: you migrate workloads as they are, at least when they can operate in the public cloud as is. This is pretty common, with as much as 80% of workloads moved this way. However, some applications may not work in the public cloud and are better off running in a data center.

    An optimal Lift and Shift process means you don’t have to do all the refactoring work up front, but you do need to assess your needs thoroughly before you shift your data over. In setting up the public cloud, examine your business activity cycles and provision for the busiest period, looking at resource utilization and at any near-term growth plans you’d need to account for before you migrate.
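
    As a back-of-the-envelope version of that assessment, you can size the target from the busiest observed period plus expected growth and a safety margin. The peak, growth rate, and headroom figures below are placeholders, not recommendations.

```python
# Size a lift-and-shift target from observed peak usage, near-term growth,
# and a safety margin. All numbers are placeholders.

observed_peak_vcpus = 40    # busiest point in the business activity cycle
expected_growth = 0.25      # growth anticipated before the next capacity review
headroom = 0.20             # safety margin for unexpected spikes

required_vcpus = observed_peak_vcpus * (1 + expected_growth) * (1 + headroom)
print(f"Provision roughly {required_vcpus:.0f} vCPUs")   # ~60
```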

    On-demand and reserved instances both have their benefits as payment models. Pay-as-you-go means higher rates, but you only pay for what you use. Reserved means lower rates, but you’re locked into a contract. You’ll have to determine your cost break-even point to know which way to go. Reserved is often best for companies that know their resource utilization, busy periods, and short-term growth plans.
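
    Here’s one way to frame that break-even point, with hypothetical rates and usage standing in for your provider’s actual on-demand and reserved pricing.

```python
# Compare on-demand and reserved pricing for a single instance. The rates and
# expected usage below are hypothetical; substitute your provider's numbers.

on_demand_hourly = 0.10      # pay-as-you-go rate
reserved_hourly = 0.065      # effective hourly rate of a one-year commitment
hours_per_year = 8760

# Reserved wins once the instance runs more than this fraction of the year.
break_even_utilization = reserved_hourly / on_demand_hourly
print(f"Break-even at {break_even_utilization:.0%} utilization")

expected_hours = 6000        # hours per year you expect to actually need it
choice = "reserved" if expected_hours / hours_per_year > break_even_utilization else "on-demand"
print(f"At {expected_hours} hours/year, {choice} is cheaper")
```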

    A tool like Vityl Capacity Management can help users analyze their requirements and know how much capacity they need, which reduces the possibility of manual mistakes.

    There are some downsides to the Lift and Shift model. Your TCO calculation is probably not going to match what the cloud provider quoted you, because they assume you’ve already refactored. Running your applications as is means you’ll use more resources than a refactored deployment would.

    And it’s just a first step. Following up by refactoring and optimizing is necessary to run your workloads sustainably in the cloud.

    Considering all of that, here are some things to think about when getting started with your cloud deployment:

    Categorize and rate your workloads

    Determine whether each workload should go into the cloud at all. If so, decide its migration priority by examining its current cost of ownership, the business process it supports, and its demand.
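
    One lightweight way to make that rating explicit is a weighted score per workload. The weights and example workloads below are invented for illustration, not a prescribed model.

```python
# Score each workload on cost of ownership, business criticality, and demand
# variability (each 0-1), then rank. Weights and examples are made up.

def migration_priority(cost_of_ownership, business_criticality, demand_variability):
    return (0.4 * cost_of_ownership
            + 0.3 * business_criticality
            + 0.3 * demand_variability)

workloads = {
    "reporting":   migration_priority(0.8, 0.4, 0.9),  # costly, spiky demand
    "core-ledger": migration_priority(0.5, 1.0, 0.2),  # critical but steady
}

for name, score in sorted(workloads.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f}")
```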

    Cost management challenges

    Instrumentation to curb spending needs to be in place before you move, to make sure you are delivering what’s required at the lowest possible cost.

    Some of the challenges in managing cost are a lack of oversight for spending, a lack of cost allocation that lets business users see what they’re using and pay for it, and service valuation.

    Cost management recommendations

    Tag resources at the business unit, application, and owner levels, and use a global tagging structure to keep track of them. Cost reports should go to the people who incur the costs, to create an incentive to optimize cloud capacity usage.
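
    As a sketch of what that reporting might look like, here’s a grouping of a hypothetical billing export by its business-unit and owner tags. The record format is invented; real providers export detailed billing data you’d process the same way.

```python
# Group a (hypothetical) billing export by business-unit and owner tags so that
# cost reports can be routed to the people who incurred the spend.

from collections import defaultdict

billing_records = [
    {"resource": "vm-101", "business_unit": "marketing", "owner": "alice", "cost": 412.50},
    {"resource": "db-07",  "business_unit": "finance",   "owner": "bob",   "cost": 930.00},
    {"resource": "vm-102", "business_unit": "marketing", "owner": "alice", "cost": 118.25},
]

costs = defaultdict(float)
for rec in billing_records:
    costs[(rec["business_unit"], rec["owner"])] += rec["cost"]

for (unit, owner), total in sorted(costs.items()):
    print(f"{unit} / {owner}: ${total:,.2f}")
```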

    Becoming cloud agnostic?

    You may want to consider being cloud agnostic so you can move workloads across cloud services without changing them. This gives you flexibility, avoids lock-in, and enables a future multi-cloud strategy. But the downside is that it limits you to the least common denominator of cloud services. You’ll also need a CSB (Cloud Service Broker) or CMP (Cloud Management Platform), which locks you in in another way. The benefit may not justify the cost.

    Either way you migrate, you’ll have to identify and address inefficiencies, monitor budgets and spending, forecast future capacity requirements (demand management approach), and predict costs.
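
    For the forecasting piece, even a simple trend line over recent peak usage gives you a starting point. The history below is made up, and a capacity management tool would handle seasonality and confidence intervals far more rigorously.

```python
# Fit a straight line to recent monthly peak usage and project it forward.
# The history is fabricated for illustration.

monthly_peak_vcpus = [30, 32, 35, 36, 40, 42]   # last six months

n = len(monthly_peak_vcpus)
x_mean = (n - 1) / 2
y_mean = sum(monthly_peak_vcpus) / n
slope = sum((x - x_mean) * (y - y_mean)
            for x, y in enumerate(monthly_peak_vcpus)) / sum((x - x_mean) ** 2 for x in range(n))
intercept = y_mean - slope * x_mean

months_ahead = 3
forecast = intercept + slope * (n - 1 + months_ahead)
print(f"Projected peak in {months_ahead} months: ~{forecast:.0f} vCPUs")
```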

    Capacity Management for the cloud is about delivering the right capacity at the lowest possible cost and knowing how to scale up or scale back.

    You can refactor on the front end of your process or lift and shift, making sure you’re using your resources in ways that make the most sense. Some businesses have the lead time to frontload the refactoring process, and some will need to migrate workloads as they are, refactoring and optimizing as they go. In the latter case, they’ll see immediate changes in their bills as they optimize and use less capacity.

    Businesses will need to determine which method makes the most sense and uses their resources best. Regardless of whether you migrate before or after refactoring, the process must happen at some point for a sustainable long-term TCO, which ongoing optimization and planning will further improve post migration. Tagging resources within a global structure will help motivate business users to manage costs well, giving you more control over your cloud deployment overall.

    Still have questions? Head over to the recording and start at 47:50 to see if any participants had the same question.


    Category: cloud-computing