Experimental Validation in Large-Scale Systems: A Survey of Methodologies
Abstract
The increasing complexity of available infrastructures, whether through specific hardware features (caches, hyperthreading, dual cores, etc.) or through complex architectures (hierarchical, parallel, distributed, etc.), makes it extremely difficult to build analytical models that allow for satisfying predictions. This raises the question of how to validate algorithms when a realistic analytical study is no longer possible. As in many other sciences, one answer is experimental validation. Nevertheless, experimentation in Computer Science is a difficult subject that today still raises more questions than it answers: What can an experiment validate? What is a ``\emph{good experiment}''? How does one build an experimental environment that allows for ``\emph{good experiments}''? In this paper we provide some hints on these questions and show how certain tools can help in performing ``\emph{good experiments}'', mainly in the context of parallel and distributed computing. More precisely, we focus on four main experimental methodologies: real-scale experiments (with an emphasis on PlanetLab and Grid'5000), emulation (with an emphasis on Wrekavoc), simulation (with an emphasis on SimGRID and GridSim), and benchmarking. We compare these tools and methodologies from both a quantitative and a qualitative point of view.
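To make the benchmarking methodology concrete, here is a minimal sketch of the pattern underlying most benchmarks: repeated timed runs preceded by warm-up iterations, summarized by a mean and a standard deviation. This is illustrative Python, not drawn from any of the surveyed tools; the \emph{benchmark} function, its \emph{warmup} and \emph{repetitions} parameters, and the \emph{sorted} workload are all assumptions made for the example.

\begin{verbatim}
import statistics
import time

def benchmark(fn, *args, repetitions=30, warmup=3):
    """Time fn(*args) repeatedly; return (mean, stdev) in seconds."""
    # Warm-up runs absorb one-time effects (cold caches, lazy loading)
    # so they do not pollute the measured samples.
    for _ in range(warmup):
        fn(*args)
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

if __name__ == "__main__":
    data = list(range(100_000, 0, -1))   # reverse-sorted input as workload
    mean, stdev = benchmark(sorted, data)
    print(f"sorted: mean={mean:.6f}s stdev={stdev:.6f}s")
\end{verbatim}

Reporting the standard deviation alongside the mean matters in practice: on the complex infrastructures discussed above, run-to-run variability is often large enough that a single measurement is not meaningful.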