Wednesday, October 20, 2010

Decision Model for Cloud Computing under SLA Constraints

Today I had a look at the software that comes with the "Decision Model for Cloud Computing under SLA Constraints" paper by A. Andrzejak, D. Kondo and S. Yi. The implementation of the presented model can be found on the project website.

The paper describes its contribution as follows:
Our main contribution is a probabilistic model that can be used to answer the question of how to bid given SLA constraints. A broker can easily apply this model to present automatically to the user a bid (or set of bids) that will meet reliability and performance requirements. This model is particularly suited for Cloud Computing as it is tailored for environments where resource pricing and reliability vary significantly and dynamically, and where the number of resources allocated initially is flexible and is almost unbounded. We demonstrate the utility of this model with simulation experiments driven by real price traces of Amazon Spot Instances, and workloads based on real applications.
So I had a look at the source: the whole simulation process is implemented in a single C file. It took me a while to completely understand the code; especially the implementation of the actual simulation in the functions simulateOptimalCkpt() and simulateHourCkpt() was pretty hard to follow. But now I'm sure this code can be used to develop a broker application (and that it can easily be ported to another language). Note that other checkpointing schemes can easily be added, since the required functions can be found in the source code that comes with the "Reducing Costs of Spot Instances via Checkpointing in the Amazon Elastic Compute Cloud" paper on the same website.
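To give an idea of what such a simulation does: here is a minimal Java sketch of the hourly-checkpointing scheme, under the simplifying assumption of an hour-granularity price trace. The names are mine, not the paper's, and the original C code working on the real Amazon traces is considerably more involved.

import java.util.List;

public class HourlyCkptSim {

    /**
     * Simulates running a job under a given bid against an hour-granularity
     * spot price trace, taking a checkpoint at every hour boundary the
     * instance survives. Returns the wall-clock hours until 'workHours' of
     * useful work are saved, or -1 if the trace ends first.
     */
    static int simulateHourlyCheckpointing(List<Double> hourlyPrices,
                                           double bid, int workHours) {
        int savedWork = 0;   // hours of work already saved in checkpoints
        int elapsed = 0;     // wall-clock hours since submission
        for (double price : hourlyPrices) {
            elapsed++;
            if (price <= bid) {
                // The instance survives this hour; the checkpoint at the
                // hour boundary makes this hour of work permanent.
                savedWork++;
                if (savedWork >= workHours) {
                    return elapsed;
                }
            }
            // Out-of-bid hour: with hour-boundary checkpoints no saved work
            // is lost, only waiting time accumulates.
        }
        return -1; // trace exhausted before the job could finish
    }
}

As far as I understand it, the optimal-checkpointing variant is harder because it has to weigh the overhead of taking a checkpoint against the expected work lost on an out-of-bid event.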

The following quote from the paper describes how the output of the program can be used to make an intelligent bid for the spot price, i.e. one for which the task is done before the given deadline and within the foreseen budget:
To find the optimal instance type and bid price, we compute ET(c_dead) and M(c_B) and check the feasibility (as stated in section 3D) for all relevant combinations of both parameters. As these computations are basically “look-ups” in tables of previously computed distributions, the processing effort is negligible. Among the feasible cases, we select the one with the smallest M(c_B); if no feasible cases exist, the job cannot be performed under the desired constraints.
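As I read it, this selection step boils down to a simple loop over precomputed tables. A sketch of that loop, with the distribution tables, the exact feasibility test of section 3D, and all names abstracted away or assumed:

import java.util.List;

public class BidSelector {

    /** Precomputed simulation results for one (instance type, bid) pair. */
    record DistEntry(String instanceType, double bid,
                     double etAtDeadline,   // ET(c_dead), looked up from the ET table
                     double mAtBudget) {}   // M(c_B), looked up from the M table

    /** Placeholder for the feasibility test of section 3D of the paper. */
    static boolean isFeasible(DistEntry e) {
        // Illustrative threshold only; the paper states the actual test
        // on the deadline and budget constraints.
        return e.etAtDeadline() >= 0.9;
    }

    /** Among all feasible (type, bid) pairs, pick the one with smallest M(c_B). */
    static DistEntry selectBid(List<DistEntry> table) {
        DistEntry best = null;
        for (DistEntry e : table) {
            if (isFeasible(e) && (best == null || e.mAtBudget() < best.mAtBudget())) {
                best = e;
            }
        }
        return best; // null means: job cannot be performed under the constraints
    }
}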
Their simulations led to some interesting findings, which are summarized in section 4F of the paper. Note that constraints on random variables other than ET and M can be introduced as well.

How do I see the broker application at the moment? It takes as input a collection of workloads, together with properties and constraints for each workload. First, the broker determines on what kind of instance a workload should be executed (perhaps, in a first iteration of the software, with simple thresholds on CPU, memory, ... usage). Then a division between reserved and other instances is made (based on the on-demand price, since the spot price fluctuates too much). Finally, the algorithms provided in the paper are used to determine whether the constraints can be met by using spot instances. If so, the bid price and checkpointing scheme for the spot instances are determined; otherwise on-demand instances are used. I'm thinking the broker should be a Java webservice that periodically downloads the spot price history and then reruns the simulations to update its data.
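To make that workflow concrete, a very rough Java skeleton (Java because that's what I'd pick for the webservice anyway); every type name, threshold and stub here is hypothetical, and the table lookup from the paper's simulations is left out:

public class SpotBroker {

    enum Decision { SPOT, ON_DEMAND }

    record Workload(String name, double cpuUsage, double memUsage,
                    int workHours, double deadline, double budget) {}

    record Plan(Decision decision, String instanceType,
                double bid, String checkpointScheme) {}

    /** Stub: cheapest feasible bid for this type per the paper's model, or NaN. */
    double cheapestFeasibleBid(String instanceType, Workload w) {
        return Double.NaN; // to be backed by the precomputed distribution tables
    }

    Plan decide(Workload w) {
        // 1. Choose an instance type; in a first iteration simple
        //    CPU/memory usage thresholds would do.
        String type = (w.cpuUsage() > 0.7 || w.memUsage() > 0.7)
                ? "m1.large" : "m1.small";

        // 2. (Omitted) move the stable base load to reserved instances,
        //    using the on-demand price as the reference point.

        // 3. Use the paper's feasibility check to see whether the
        //    constraints can be met on spot instances at all.
        double bid = cheapestFeasibleBid(type, w);
        if (!Double.isNaN(bid)) {
            return new Plan(Decision.SPOT, type, bid, "hourly checkpoints");
        }
        // 4. Fall back to on-demand instances otherwise.
        return new Plan(Decision.ON_DEMAND, type, 0.0, "none");
    }
}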
