A cloud strategy that isn’t predicated on the ability to scale resource capacity automatically and instantaneously is bound to disappoint.
Taking full advantage of cloud services’ ability to streamline IT operations requires a change in mindset: from traditional data-center provisioning of servers and storage to envisioning processor, memory, bandwidth, and other key resources as commodities purchased on demand. If you find that your company’s cloud operations fail to deliver the efficiency and agility you expected, the reason may well be that your paradigm and personnel aren’t properly implementing a cornerstone of cloud computing: elastic scalability.
Any definition of “cloud services” you’re likely to find includes the adjectives “elastic” and “scalable,” often in conjunction, as in “the cloud is the greatest thing since the silicon chip because it offers elastic scalability.” This raises two questions:
- Is there such a thing as “non-elastic scalability”?
- Aren’t “elastic” and “scalable” synonymous?
The expert consensus is that a service or system can indeed be scalable without being elastic, which means the terms are more than two different ways of describing the same concept.
What “elasticity” and “scalability” have in common: They both relate to an environment’s ability to accommodate changes in workload volumes – to add resources when demand increases, and to remove or reallocate resources when demand decreases. Thoughts on Clouds’ Edwin Schouten distinguishes the two terms by explaining that in traditional IT environments, scalability is built in, but it entails manual reconfiguration (think server reboots – sometimes many reboots), so it isn’t done very often.
In cloud environments, by contrast, capacity is more nebulous: You’re adding and removing processor, memory, and other resources on demand. This is one of the defining characteristics of cloud computing: allocations change dynamically and automatically as workloads change. All this is made possible by two other pillars of the cloud: resource pooling, and on-demand self-service, both of which are key components of the Morpheus cloud application management service.
Schouten lists factors to consider when implementing elastic scalability:
- Monitor apps to ensure a sudden spike in resource use isn’t due to a defect in the app’s code (there’s more on the importance of monitoring below).
- Be sure the new instances you add don’t violate any software licenses.
- Specify the conditions that will cause an app to scale up or down, as well as the individual components that will be allowed to scale.
- Clarify whether the service provider or the customer (you) will be responsible for monitoring and backup.
- Determine whether any manual approvals for scaling will be required, such as to prevent exceeding a monthly bandwidth limit.
- Ensure that the scaling will not violate any contracts, regulations, or compliance requirements.
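The third factor above, specifying the conditions under which an app scales up or down, can be sketched as a simple threshold rule. The metric, thresholds, and bounds below are illustrative assumptions, not values from the article; real policies would live in your provider's auto scaling configuration.

```python
def desired_capacity(cpu_utilization, current, min_instances=2, max_instances=10,
                     scale_up_at=75.0, scale_down_at=25.0):
    """Return the new instance count for a threshold-based scaling rule.

    All thresholds and bounds here are illustrative assumptions.
    """
    if cpu_utilization > scale_up_at:
        desired = current + 1          # demand is rising: add an instance
    elif cpu_utilization < scale_down_at:
        desired = current - 1          # demand has dropped: remove one
    else:
        desired = current              # inside the comfort band: no change
    # Clamp to the group's bounds, the kind of hard limit that keeps
    # scaling from exceeding budgets or license counts
    return max(min_instances, min(desired, max_instances))
```

The same rule drives scale-down as well as scale-up, which is the point of elasticity: capacity is released when demand falls, not just added when it rises.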
Making the business case for elastic scalability
There’s really only one reason any organization adopts cloud computing: to get more work done in less time and at lower cost. So why do so many companies find that their cloud operations actually cost more than the in-house data centers the cloud services are meant to replace? Sam Caldwell explains in an October 6, 2014, post on LinkedIn Pulse that these companies are using the wrong paradigm: they’re managing their cloud operations the same way they managed their on-site hardware running 24/7. Caldwell also points a finger at a lack of cloud specialists in these organizations.
Public cloud services such as AWS scale on demand, which eliminates the lag traditional data centers experience between changes in demand and changes in capacity. Source: LTech
Caldwell lists four reasons why monitoring is “non-negotiable”:
- It’s the only way to know when the system is scaling to meet demand.
- It’s the only way to prevent the system from spinning up 1000 instances when you only need 10.
- It’s one of the best ways to spot inefficient applications.
- It lets you measure business performance against other non-technical key performance indicators.
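Caldwell's first three points can be sketched as a guard that sits between the scaling trigger and the provisioning call: it flags a sudden metric spike (which may signal a code defect rather than real demand) and caps runaway requests. The spike heuristic and ceiling are illustrative assumptions.

```python
def check_scaling_request(requested, ceiling, recent_metrics, spike_factor=3.0):
    """Vet an auto scaling request against basic monitoring checks.

    Returns (approved_count, warnings). Both the ceiling and the
    spike heuristic are illustrative, not from any real provider API.
    """
    warnings = []
    # A sudden jump in resource use may be an application defect, not demand
    if len(recent_metrics) >= 2:
        baseline = sum(recent_metrics[:-1]) / len(recent_metrics[:-1])
        if recent_metrics[-1] > spike_factor * baseline:
            warnings.append("metric spike: check the application before scaling")
    # Never spin up 1000 instances when you only need 10
    if requested > ceiling:
        warnings.append(f"request capped at {ceiling} (asked for {requested})")
        requested = ceiling
    return requested, warnings
```

Without monitoring, neither check is possible, which is why Caldwell calls it non-negotiable: the scaler would happily provision whatever the trigger asks for.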
The benefits of building elasticity into the application
The only way to realize the potential of the cloud’s elasticity is by making your applications cloud-native or cloud-aware, as Tim Pat Dufficy describes in a March 16, 2015, post on the ServerSide blog. So what are we to do with the vast majority of applications, which are neither cloud-native nor cloud-aware? They were designed to run on a single machine; most databases and business process software fall into this category.
Believing that the cloud is the best destination for all your applications is one of the ten cloud myths debunked by Vanessa Clark in an October 1, 2015, article on Memeburn. If you simply “lift and shift” applications from the data center to a virtualized environment, you will likely increase costs without improving performance. There is no shortcut to building both scalability and elasticity into your applications. In fact, TechTarget’s Timothy J. Patterson puts elasticity at the top of his list of the four “must haves” when building scalable AWS applications.
Auto scaling groups and elastic load balancers work together: the group launches new instances and terminates unneeded ones as demand changes, while a load balancer is defined and assigned to each new instance automatically. This gives the application built-in health checks, fault tolerance, and load balancing.
AWS combines auto scaling groups and elastic load balancers to logically group multiple EC2 instances, which facilitates scaling and makes triggers easy to manage. Source: Harish Ganesan, 8K Miles
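The interplay described above can be sketched as a toy, provider-agnostic model: a group that enforces its own size bounds when scaling, and a balancer that automatically includes every current instance in its round-robin rotation. This is a simplified illustration of the concept, not the AWS API; real groups are configured through EC2 Auto Scaling and Elastic Load Balancing.

```python
class ScalingGroup:
    """Toy model of an auto scaling group fronted by a load balancer."""

    def __init__(self, min_size, max_size):
        self.min_size, self.max_size = min_size, max_size
        self._next_id = 0        # monotonically increasing instance ids
        self._rr = 0             # round-robin cursor for the balancer
        self.instances = []
        self.scale_to(min_size)

    def scale_to(self, desired):
        # The group enforces its own bounds, like an ASG's MinSize/MaxSize
        desired = max(self.min_size, min(desired, self.max_size))
        while len(self.instances) < desired:      # launch new instances
            self.instances.append(f"i-{self._next_id}")
            self._next_id += 1
        while len(self.instances) > desired:      # terminate unneeded ones
            self.instances.pop()
        return len(self.instances)

    def route(self):
        # The balancer spreads requests across whatever instances exist now,
        # so newly launched instances receive traffic automatically
        instance = self.instances[self._rr % len(self.instances)]
        self._rr += 1
        return instance
```

Because `route` always consults the current instance list, scaling the group and registering instances with the balancer are a single step, which is the convenience the AWS pairing provides.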
Patterson recommends that application components such as the web layer, database layer, and middleware layer be loosely coupled; the components should also be stateless so they can be terminated easily.
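Patterson's statelessness advice can be sketched as follows: the web layer keeps no state in process memory, pushing session data to an external store instead (a plain dict stands in here for a shared service such as Redis or DynamoDB), so any instance can serve any request and any instance can be terminated without losing data. The handler and store names are hypothetical.

```python
# Stands in for a shared external store; in production this would be a
# service like Redis or DynamoDB, never process memory on one instance.
session_store = {}

def handle_request(instance_id, user, action):
    """A stateless web-layer handler: all state lives in the external store."""
    state = session_store.setdefault(user, {"cart": []})
    if action.startswith("add:"):
        state["cart"].append(action[4:])   # e.g. "add:book" adds "book"
    return f"{instance_id} served {user}; cart={state['cart']}"
```

Because two different instances produce the same result for the same user, the auto scaler is free to terminate either one at any moment, which is exactly what loose coupling and statelessness buy you.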