
9th EAI International Conference on Performance Evaluation Methodologies and Tools

December 14–16, 2015 | Berlin, Germany




Martina Maggio (Lund University)

 

Title:

Cloud Control: using control theory to optimize cloud application performance

 

Abstract:

Self-adaptation is a first-class concern for cloud applications, which should be able to withstand diverse runtime changes. Variations happen simultaneously at the cloud infrastructure level and at the user workload level. An example of the first category is hardware failure, while the second comprises flash crowds and shifting workload patterns. Robustly withstanding extreme variability requires costly hardware over-provisioning, which is often not an option, especially for small clouds. This talk presents a potential way to deal with runtime variations and errors, guaranteeing that the cloud application meets its performance targets in terms of response time and throughput. In this work we use models and techniques from control theory to achieve our results. The paradigm is based on optional code that can be dynamically deactivated through decisions made by a controller, and the presented results show fault tolerance and response time predictability with two commonly used applications for cloud computing research: RUBiS and RUBBoS.
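The optional-code paradigm described in the abstract can be pictured as a simple feedback loop. The sketch below is purely illustrative and is not the speakers' implementation: a hypothetical integral controller adjusts the probability (`dimmer`) of executing optional code so that the measured response time tracks a setpoint. The class name, gain, and setpoint values are all assumptions.

```python
import random

class DimmerController:
    """Illustrative integral controller (hypothetical names and gains):
    adjusts the probability of running optional code so that the
    measured response time tracks a target setpoint."""

    def __init__(self, setpoint, gain=0.25):
        self.setpoint = setpoint   # target response time in seconds (assumed)
        self.gain = gain           # integral gain (assumed)
        self.dimmer = 1.0          # probability of serving optional content

    def update(self, measured_rt):
        # If responses are too slow, lower the dimmer; if fast, raise it.
        error = self.setpoint - measured_rt
        self.dimmer = min(1.0, max(0.0, self.dimmer + self.gain * error))
        return self.dimmer

def serve_request(controller, measured_rt):
    """Mandatory content is always produced; optional content is
    produced only with probability equal to the current dimmer."""
    theta = controller.update(measured_rt)
    mandatory = "core content"
    optional = "recommendations" if random.random() < theta else None
    return mandatory, optional
```

Under overload the controller drives the dimmer toward zero, shedding the optional work; when load subsides, the dimmer recovers toward one.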

 

Short bio:

Martina Maggio is an assistant professor at the Department of Automatic Control, Lund University. She was previously a postdoctoral researcher at the same institution and a PhD student at the Dipartimento di Elettronica e Informazione at Politecnico di Milano. During her PhD she was a visiting student at the Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology. Her main research interest is the application of control-theoretical techniques to computer science problems, especially resource management in embedded systems and cloud computing.

 

 



Tobias Scheffer (University of Potsdam)


Title:

Active Model Evaluation


Abstract:

Model evaluation often requires cumbersome human data labeling. For instance, in order to evaluate the ranking function of a web search engine, human labelers go through lists of search results for many queries and judge the relevance of the information items. Active evaluation techniques estimate the performance of a model accurately at minimal labeling cost: they select elements from a pool of unlabeled data and query their labels from a human labeler. For several types of classification, regression, and ranking problems, it is possible to derive a selection criterion that minimizes the expected estimation error.
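The select-then-query scheme in the abstract can be illustrated with a toy importance-sampling estimator. This is a minimal sketch under assumed names, not the talk's actual selection criterion: items are drawn from the unlabeled pool with probability proportional to a heuristic uncertainty score, their labels are queried from an oracle, and the biased draw is corrected with importance weights.

```python
import random

def active_risk_estimate(pool, predict, proba, label_oracle, budget,
                         rng=random.Random(0)):
    """Illustrative active-evaluation sketch (hypothetical names):
    estimate a classifier's error rate by sampling pool items from a
    non-uniform proposal distribution and importance-weighting the
    observed 0/1 losses."""
    # Proposal q over the pool: proportional to predictive uncertainty,
    # with a small floor so every item has nonzero probability.
    scores = [max(1e-6, 1.0 - abs(2 * proba(x) - 1)) for x in pool]
    total = sum(scores)
    q = [s / total for s in scores]

    est, wsum = 0.0, 0.0
    for _ in range(budget):
        # Draw an index i ~ q, query its label, accumulate weighted loss.
        i = rng.choices(range(len(pool)), weights=q)[0]
        loss = 1.0 if predict(pool[i]) != label_oracle(pool[i]) else 0.0
        w = (1.0 / len(pool)) / q[i]   # importance weight p(x) / q(x)
        est += w * loss
        wsum += w
    return est / wsum                  # self-normalized error-rate estimate
```

The self-normalized estimator keeps the result unbiased in the limit while concentrating the labeling budget on items the model is least certain about.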

 

Short bio:

Tobias Scheffer is a Professor of Computer Science at the University of Potsdam. From 2007 to 2008 he was head of the Machine Learning Group at the Max Planck Institute for Informatics in Saarbrücken. Between 2003 and 2006, he was an Assistant Professor at Humboldt-Universität zu Berlin.

In 2003 he was awarded an Emmy Noether Fellowship by the German Science Foundation (DFG), and in 1996 an Ernst von Siemens Fellowship by Siemens AG. He received a Master's degree in Computer Science (Diplominformatiker) in 1995 and a Ph.D. (Dr. rer. nat.) in 1999, both from Technische Universität Berlin.