Melissa: coordinating large-scale ensemble runs for deep learning and sensitivity analyses
Abstract
Large-scale ensemble runs typically consist of executing thousands of instances of a physical simulation across a range of input parameters. These ensemble runs enable sensitivity analyses, deep surrogate training, reinforcement learning, and data assimilation, but they produce volumes of data that are too large to store. For example, a recent data assimilation ensemble study generated 1.3 PB of data [@yashiro2020]. Such enormous data volumes hinder scientific analysis in two ways. First, I/O is the slowest component of a supercomputer: the mismatch between slow read/write speeds and the rapid rate at which data is generated causes performance to degrade and plateau. Second, supercomputer file systems are not designed to allocate such large volumes of data to a single study. To avoid this I/O limitation, scientists reduce their study size by running lower-resolution simulations or by down-sampling output data in space and time. However, the I/O problem only becomes more pronounced as the compute speed and size of supercomputers continue to advance faster than the I/O speeds of storage disks.
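As a rough illustration of that down-sampling workaround (not of Melissa's own approach), the sketch below thins a hypothetical `(time, x, y)` simulation output with simple strided slicing before it would be written to disk; the array shape, stride values, and function name are invented for the example.

```python
# Minimal sketch, assuming a dense (time, x, y) float32 field held in memory.
import numpy as np

def downsample(field, time_stride=10, space_stride=4):
    """Keep every `time_stride`-th timestep and every `space_stride`-th
    grid point along each spatial axis of a (time, x, y) array."""
    return field[::time_stride, ::space_stride, ::space_stride]

# Hypothetical example: 200 timesteps of a 256x256 field (~50 MB).
full = np.random.default_rng(0).random((200, 256, 256), dtype=np.float32)
reduced = downsample(full)
print(full.nbytes / reduced.nbytes)  # ~160x less data written to storage
```

The trade-off is exactly the one the paragraph above describes: the reduced output fits within I/O and storage budgets, but the discarded spatial and temporal detail is lost to any subsequent analysis.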