
Update BestThresholdMetric #88

Open

LouisCarpentier42 opened this issue Feb 7, 2025 · 1 comment
Labels: evaluation metric (Implement a new evaluation metric)

Comments

@LouisCarpentier42
Collaborator

The `BestThresholdMetric` iterates over all possible thresholds (or a subset thereof) and uses the threshold that corresponds to the best evaluation. Currently, only the selected threshold is saved, in the `threshold_` property. This drops a lot of information: the evaluations at all the other thresholds. To fix this, the class should have two additional properties: `thresholds_`, which stores all the thresholds used for evaluation, and `evaluations_`, which stores the corresponding score for each threshold. This information makes it possible to plot the score as a function of the threshold, and thus to analyse whether the predictions are robust against the chosen threshold or whether the threshold must be set carefully.

LouisCarpentier42 added the evaluation metric label on Feb 7, 2025
@LouisCarpentier42
Collaborator Author

In addition, a new method could be added specifically to the `BestThresholdMetric`, which fits the best threshold on some given samples. The existing `compute_` method can then either use this fitted threshold if it exists, or simply iterate over the thresholds if it does not.
