In distributed systems and cloud computing, elasticity is defined as "the degree to which a system is able to adapt to workload changes by provisioning and de-provisioning resources in an autonomic manner, such that at each point in time the available resources match the current demand as closely as possible".[1][2] Elasticity is a defining characteristic that differentiates cloud computing from previously proposed computing paradigms, such as grid computing. The dynamic adaptation of capacity, e.g., by altering the use of computing resources, to meet a varying workload is called "elastic computing".
In the distributed-systems literature, definitions vary among authors: some consider scalability a sub-part of elasticity, while others treat the two concepts as distinct.
Let us illustrate elasticity through a simple example of a service provider who wants to run a website on an IaaS cloud. At moment t0, the website is unpopular and a single machine (most commonly a virtual machine) is sufficient to serve all web users. At moment t1, the website suddenly becomes popular and a single machine is no longer sufficient to serve all users, so additional machines need to be provisioned from the cloud to keep the website responsive. At time t2, the website becomes unpopular again; the allocated machines are mostly idle, and all but one can be de-provisioned and released back to the cloud.
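A minimal sketch of such threshold-based scaling is shown below in Python; the thresholds, the simulated demand, and the provision/release stubs are illustrative assumptions rather than any particular provider's API.

```python
import random
import time

SCALE_OUT_THRESHOLD = 0.80   # per-VM utilization above which a VM is added
SCALE_IN_THRESHOLD = 0.30    # per-VM utilization below which a VM is removed
MIN_VMS = 1

def provision_vm():
    """Stub standing in for an IaaS 'create instance' call."""
    return object()

def release_vm(vm):
    """Stub standing in for an IaaS 'terminate instance' call."""

def current_demand():
    """Stub: total demand in 'VM-equivalents' (randomly simulated here)."""
    return random.uniform(0.0, 3.0)

vms = [provision_vm()]                     # t0: one machine serves all users

for _ in range(10):                        # a few control iterations
    utilization = current_demand() / len(vms)
    if utilization > SCALE_OUT_THRESHOLD:                  # t1: demand spike
        vms.append(provision_vm())                         # scale out
    elif utilization < SCALE_IN_THRESHOLD and len(vms) > MIN_VMS:
        release_vm(vms.pop())                              # t2: demand drops
    print(f"VMs: {len(vms)}, per-VM load: {utilization:.2f}")
    time.sleep(1)
```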
Elasticity aims at matching the amount of resources allocated to a service with the amount of resources it actually requires, avoiding over- or under-provisioning. Over-provisioning, i.e., allocating more resources than required, should be avoided, as the service provider often has to pay for all resources allocated to the service. For example, an Amazon EC2 M4 extra-large instance costs US$0.239/hour: if a service has two virtual machines allocated when only one is required, the service provider wastes roughly $2,095 every year. Hence, the service provider's expenses are higher than optimal and their profit is reduced.
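The figure can be checked with a couple of lines of Python, assuming an average year of 365.25 days at the on-demand rate quoted above:

```python
HOURLY_RATE_USD = 0.239            # on-demand price quoted above
HOURS_PER_YEAR = 24 * 365.25       # average year, including leap years

wasted = HOURLY_RATE_USD * HOURS_PER_YEAR
print(f"Cost of one idle, over-provisioned VM: ${wasted:,.0f}/year")  # ≈ $2,095
```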
Under-provisioning, i.e., allocating fewer resources than required, must also be avoided, otherwise the service cannot serve its users well. In the above example, an under-provisioned website may appear slow or unreachable; web users eventually give up on accessing it, and the service provider loses customers. In the long term, the provider's income will decrease, which also reduces their profit.
One potential problem is that elasticity takes time. A cloud virtual machine (VM) can be acquired at any time by the user; however, it may take up to several minutes for the acquired VM to be ready to use. The VM startup time depends on factors such as image size, VM type, data center location, and the number of VMs requested,[3] and it varies between cloud providers. This implies that any control mechanism designed for elastic applications must consider in its decision process the time needed for elasticity actions to take effect,[4] such as provisioning another VM for a specific application component.
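One simple way to account for this delay is to scale based on the load expected when the new VM becomes ready, rather than the current load. The sketch below extrapolates demand linearly over an assumed startup time; the startup time, per-VM capacity, and sample numbers are all illustrative assumptions.

```python
import math

VM_STARTUP_SECONDS = 180        # assumed time until a new VM serves traffic
CAPACITY_PER_VM = 100.0         # assumed requests/s a single VM can handle

def predicted_load(history, horizon_s):
    """Linear extrapolation of load from the last two (time, load) samples."""
    (t0, l0), (t1, l1) = history[-2], history[-1]
    slope = (l1 - l0) / (t1 - t0)
    return l1 + slope * horizon_s

def target_vm_count(history, current_vms):
    # Decide now using the load expected when a newly requested VM is ready.
    future = predicted_load(history, VM_STARTUP_SECONDS)
    required = max(1, math.ceil(future / CAPACITY_PER_VM))
    return max(current_vms, required)

history = [(0, 150.0), (60, 210.0)]             # load rising by 1 req/s each second
print(target_vm_count(history, current_vms=3))  # 390 req/s expected -> 4 VMs
```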
Elastic applications can allocate and deallocate resources (such as VMs) on demand for specific application components. This makes cloud resources volatile, and traditional monitoring tools that associate monitoring data with a particular resource (i.e., a VM), such as Ganglia or Nagios, are no longer suitable for monitoring the behavior of elastic applications. For example, during its lifetime, the data storage tier of an elastic application might add and remove data storage VMs due to cost and performance requirements, varying the number of VMs in use. Thus, additional information is needed when monitoring elastic applications, such as a mapping of the logical application structure onto the underlying virtual infrastructure.[5] This in turn raises further problems, such as how to aggregate data from multiple VMs to extract the behavior of the application component running on top of those VMs, since different metrics may need to be aggregated differently (e.g., CPU usage could be averaged, while network transfer might be summed).
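A small sketch of such metric-dependent aggregation, with hypothetical metric names and sample values:

```python
# Per-metric aggregation rules: CPU utilization is averaged across VMs,
# network traffic is summed. Metric names and values are illustrative.
AGGREGATION = {
    "cpu_usage_percent": lambda values: sum(values) / len(values),
    "network_out_bytes": sum,
}

def aggregate_component(per_vm_metrics):
    """Collapse a list of per-VM metric dicts into one component-level dict."""
    component = {}
    for name, rule in AGGREGATION.items():
        values = [m[name] for m in per_vm_metrics if name in m]
        if values:
            component[name] = rule(values)
    return component

# The data storage tier currently runs on three VMs:
samples = [
    {"cpu_usage_percent": 40.0, "network_out_bytes": 1_200_000},
    {"cpu_usage_percent": 55.0, "network_out_bytes": 800_000},
    {"cpu_usage_percent": 70.0, "network_out_bytes": 2_000_000},
]
print(aggregate_component(samples))
# {'cpu_usage_percent': 55.0, 'network_out_bytes': 4000000}
```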
When deploying applications in cloud infrastructures (IaaS/PaaS), the requirements of the stakeholder need to be considered in order to ensure proper elastic behavior. Although traditionally one would try to find the optimal trade-off between cost and quality or performance, the requirements of real-world cloud users regarding elastic behavior are more complex and target multiple dimensions of elasticity (e.g., as expressed in SYBL[6]).
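As an illustration of what multi-dimensional requirements can look like (this is not SYBL syntax, only a hypothetical encoding of the kinds of constraints and strategies such languages let stakeholders state):

```python
# Hypothetical encoding of stakeholder elasticity requirements spanning
# several dimensions (quality, cost, resources); not actual SYBL syntax.
requirements = {
    "constraints": [
        {"metric": "response_time_ms", "operator": "<", "value": 200},   # quality
        {"metric": "cost_per_hour_usd", "operator": "<", "value": 1.0},  # cost
    ],
    "strategies": [
        {"when": "cpu_usage_percent < 40", "action": "scale_in"},        # resources
    ],
}
```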
Cloud applications can be of varying types and complexities, with multiple levels of artifacts deployed in layers. Controlling such structures must take a variety of issues into consideration; rSYBL is one approach in this direction.[7] For multi-level control, control systems need to consider the impact that lower-level control has upon higher-level control and vice versa (e.g., controlling virtual machines, web containers, and web services at the same time), as well as conflicts that may arise between control strategies at different levels.[8] Elasticity strategies on clouds can take advantage of control-theoretic methods; for example, predictive control has been experimented with in cloud scenarios and has shown considerable advantages over reactive methods.[9]
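The conflict-handling aspect can be sketched as a higher-level controller that vetoes lower-level actions violating its own constraints; the cost limit and the conflict rule below are illustrative assumptions.

```python
# A VM-level cost constraint vetoes a container-level scale-out request that
# would exceed the allowed hourly spend; numbers and names are illustrative.
COST_LIMIT_USD_PER_HOUR = 2.0
VM_COST_USD_PER_HOUR = 0.239

def approve(action, state):
    """Reject lower-level control actions that conflict with higher-level constraints."""
    if action == "add_vm":
        projected_cost = (state["vms"] + 1) * VM_COST_USD_PER_HOUR
        if projected_cost > COST_LIMIT_USD_PER_HOUR:
            return False, "rejected: would exceed the cost limit"
    return True, "approved"

print(approve("add_vm", {"vms": 8}))   # (False, 'rejected: would exceed the cost limit')
```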