Limitations on Robust Ratings and Predictions
Abstract
Predictions are a well-studied form of ratings. Their objective nature allows a rigorous analysis. A problem is that there are attacks on prediction systems and rating systems, and these attacks decrease the usefulness of the predictions. Attackers may ignore the incentives in the system, so we cannot rely on incentives for protection. The user must block attackers, ideally before they introduce too much misinformation. We formally, axiomatically define robustness as the property that no rater can introduce too much misinformation. We formally prove that notions of robustness come at the expense of other desirable properties, such as lack of bias or effectiveness. We also show that trade-offs between the different properties do exist, allowing a prediction system with limited robustness, limited bias and limited effectiveness.
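One way the informal phrase "no rater can introduce too much misinformation" could be formalized is as a bound on the influence any single rater's report has on the system's output. The sketch below is an illustrative guess under that assumption, not the paper's actual axioms; the symbols (prediction function P, rater set R, reports rho, misinformation measure d, and bound epsilon) are introduced here for illustration only.

% Illustrative sketch only: P, R, rho, d, and \varepsilon are assumptions
% introduced here, not the paper's actual axiomatization of robustness.
\[
  \forall r \in R,\ \forall \rho_r:\quad
  d\bigl(P(\rho_{-r}, \rho_r),\, P(\rho_{-r}, \rho_r^{\mathrm{honest}})\bigr) \le \varepsilon
\]

Here \(\rho_r\) is the (possibly dishonest) report of rater \(r\), \(\rho_{-r}\) collects the reports of all other raters, and \(d\) measures how much the output prediction is distorted; robustness would then mean this distortion stays below \(\varepsilon\) regardless of how any one rater behaves.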
Domains
Computer Science [cs]

Origin: Files produced by the author(s)