Additional Metrics for uncertainty evaluation #551
Labels: Needs decision, Other or internal
Hi all, I recently came across this paper: https://arxiv.org/abs/2305.19187
It introduces two interesting metrics for uncertainty evaluation.
The two business questions addressed are: can the uncertainty flag predictions that are likely wrong, and how much does accuracy improve when the most uncertain predictions are rejected? The two corresponding metrics are quite easy to implement:
1. The AUROC(y_wrong, y_uncertainty), where y_wrong is 1 if the prediction is wrong and y_uncertainty is simply the prediction uncertainty. This directly measures the ability of the uncertainties to rank the wrong predictions as the most uncertain (in expectation).
2. The AUARC (Area Under the Accuracy-Rejection Curve), i.e. the area under the curve of accuracy as a function of the rejection rate, or uncertainty cut-off. A minimal sketch of both metrics is given right after this list.
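Here is a minimal sketch of how the two metrics could be computed, assuming numpy arrays y_wrong / y_correct (0/1) and y_uncertainty (higher means more uncertain); the helper names auroc_wrong and auarc are hypothetical, not existing MAPIE API:

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def auroc_wrong(y_wrong, y_uncertainty):
    """AUROC(y_wrong, y_uncertainty): how well uncertainty ranks wrong predictions first."""
    return roc_auc_score(y_wrong, y_uncertainty)


def auarc(y_correct, y_uncertainty):
    """Area under the accuracy-rejection curve.

    Samples are rejected from most to least uncertain; accuracy is computed
    on the remaining samples at each rejection rate, then averaged.
    """
    order = np.argsort(y_uncertainty)               # most certain samples first
    correct_sorted = np.asarray(y_correct)[order]
    n = len(correct_sorted)
    # accuracies[k-1] = accuracy when keeping only the k most certain samples
    accuracies = np.cumsum(correct_sorted) / np.arange(1, n + 1)
    # averaging over all keep-counts approximates the integral over rejection rates
    return accuracies.mean()
```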
Beyond these two basic metrics, we could push the concept further to:
• Precision-recall curves
• "Mondrianized" metrics, with an additional groups parameter allowing the analysis to be stratified by group (see the sketch after this list)
• Extensive utilities to plot diagnostic curves with plotly (as sklearn does), with additional information (e.g. the curve of a perfect/random model)
I think this would elegantly complement the existing coverage_scores metrics with metrics closer to business considerations. Moreover, these metrics are almost use-case agnostic, since the user can quite easily compute y_wrong as a function of y_true and y_pred, and y_uncertainty as a function of y_pis (e.g. y_uncertainty = y_pis.sum(axis=1) for multiclass classification, which is the size of the prediction set).
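For illustration, deriving these inputs from MAPIE outputs could look like the sketch below; it assumes y_pis is the boolean prediction-set array of shape (n_samples, n_classes, n_alpha) returned for classification, and the function name is hypothetical:

```python
import numpy as np


def wrongness_and_uncertainty(y_true, y_pred, y_pis, alpha_index=0):
    """Turn MAPIE-style outputs into the inputs of the metrics above."""
    y_wrong = (np.asarray(y_pred) != np.asarray(y_true)).astype(int)
    # Prediction-set size as a simple uncertainty proxy,
    # taken for one confidence level (alpha_index).
    y_uncertainty = np.asarray(y_pis)[:, :, alpha_index].sum(axis=1)
    return y_wrong, y_uncertainty
```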
Happy to discuss this further!