Conference paper, 2022

The HDFS Replica Placement Policies: A Comparative Experimental Investigation

Abstract

The Hadoop Distributed File System (HDFS) is a robust and flexible file system designed for reliably storing large volumes of data in distributed environments. Its storage model relies on data replication, and one of its central features is optimizing the placement of the replicas across the cluster for fault tolerance, availability, and performance. To this end, the Replica Placement Policy selects which nodes will store the data blocks. This work presents an experimental investigation of the different placement strategies available in HDFS. For a broader analysis, we consider the different stages where the placement of replicas is necessary, such as writing files to the system, re-replicating blocks among the nodes, and balancing the replica distribution in the cluster. The evaluation results provide a deeper understanding of the behavior of the policies and highlight the advantages and drawbacks of each placement strategy with respect to data availability, data locality, write and read throughput, and the overall performance of HDFS.
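As an illustration (not part of the paper), the sketch below shows the client-side HDFS operations during which the replica placement policy comes into play: writing a file with a given replication factor, and later raising that factor, which triggers re-replication. The policy itself is a NameNode-side setting (the dfs.block.replicator.classname key in hdfs-site.xml); the cluster URI and file path used here are hypothetical.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PlacementDemo {
    public static void main(String[] args) throws Exception {
        // Assumes fs.defaultFS points at a running HDFS cluster (hypothetical URI).
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020");
        // Replication factor requested for new files (the HDFS default is 3).
        conf.setInt("dfs.replication", 3);

        // The placement policy is chosen by the NameNode, configured in
        // hdfs-site.xml, e.g.:
        //   dfs.block.replicator.classname =
        //     org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault

        try (FileSystem fs = FileSystem.get(conf)) {
            Path file = new Path("/tmp/placement-demo.txt");  // hypothetical path

            // Writing a file: the NameNode's placement policy selects the
            // target DataNodes for each block replica.
            try (FSDataOutputStream out = fs.create(file, true)) {
                out.writeUTF("replica placement demo");
            }

            // Raising the replication factor of an existing file triggers
            // re-replication, again driven by the placement policy.
            fs.setReplication(file, (short) 4);
        }
    }
}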
Full text under embargo until Wednesday, 1 January 2025.

Dates and versions

hal-03855562, version 1 (09-12-2024)


Cite

Rhauani Weber Aita Fazul, Patrícia Pitthan Barcelos. The HDFS Replica Placement Policies: A Comparative Experimental Investigation. 17th IFIP International Conference on Distributed Applications and Interoperable Systems (DAIS), Jun 2022, Lucca, Italy. pp.151-166, ⟨10.1007/978-3-031-16092-9_10⟩. ⟨hal-03855562⟩