Scale-Adaptive Forest Training via an Efficient Feature Sampling Scheme
Abstract
In the context of forest-based segmentation of medical data, modeling the visual appearance around a voxel requires choosing the scale at which contextual information is extracted, a choice of crucial importance for the final segmentation performance. Building on Haar-like visual features, we introduce a simple yet effective modification of the forest training which automatically infers the most informative scale at each stage of the procedure. Instead of the standard uniform sampling during node split optimization, our approach draws candidate features sequentially in a fine-to-coarse fashion. While being very easy to implement, this alternative is free of additional parameters, has the same computational cost as standard training, and shows consistent improvements on three medical segmentation datasets with very different properties.
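The core change described above, drawing candidate features from fine to coarse scales over the candidate budget rather than uniformly, can be illustrated with a minimal sketch. The code below is a hypothetical approximation, not the paper's implementation: the function name, the discrete scale schedule, and the isotropic-offset parameterization of the Haar-like features are all assumptions made for illustration.

```python
import numpy as np

def sample_candidate_features(n_candidates, scales, rng=None):
    """Draw candidate feature scales in a fine-to-coarse order.

    Illustrative sketch: instead of sampling each candidate's scale
    uniformly at random, candidates are drawn sequentially, with early
    candidates restricted to the finest scales and later candidates
    progressively allowed coarser ones. `scales` is assumed to be sorted
    from fine (small context) to coarse (large context), in voxels.
    """
    rng = np.random.default_rng() if rng is None else rng
    candidates = []
    for i in range(n_candidates):
        # Fine-to-coarse schedule: the admissible scale range grows
        # with the candidate index.
        max_scale_idx = int(np.floor(i / n_candidates * len(scales)))
        scale_idx = rng.integers(0, max_scale_idx + 1)
        scale = scales[scale_idx]
        # Random 3D offset within a box of size `scale` around the voxel.
        offset = rng.uniform(-scale, scale, size=3)
        candidates.append({"scale": scale, "offset": offset})
    return candidates

# Example: 100 candidate features over four context scales (in voxels),
# evaluated during a single node split optimization.
features = sample_candidate_features(100, scales=[2, 4, 8, 16])
```

Under this schedule, the first candidates probe fine-grained local context, while later candidates may also probe coarser context, so the node split optimization effectively selects the scale that proves most informative at that stage of training.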