
Question about evaluating results #31

Open

P260 opened this issue Apr 2, 2021 · 2 comments

Comments


P260 commented Apr 2, 2021

Hi, I had one question: can we use any other metrics to evaluate the results? Why did you choose auc_roc_score()?
I wanted to get a confusion matrix for my dataset using your model. I tried but could not succeed. Could you help me with it?
And I must say, wonderful work you all have done. Hats off!

weihua916 (Collaborator) commented

Hi! ROC-AUC is a natural metric for binary classification. Also, the positive labels are pretty skewed in our datasets, which means a confusion matrix may not be very helpful.
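
That said, if you still want precision, recall, or a confusion matrix for your dataset, you can compute them yourself by thresholding the predicted scores. Here is a minimal sketch with scikit-learn, assuming your labels and predicted probabilities are in (hypothetical) NumPy arrays y_true and y_score:

```python
import numpy as np
from sklearn.metrics import (confusion_matrix, precision_score,
                             recall_score, roc_auc_score)

# Hypothetical arrays: y_true holds the binary labels,
# y_score holds the model's predicted probabilities.
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 0])
y_score = np.array([0.1, 0.4, 0.8, 0.3, 0.6, 0.9, 0.2, 0.7])

# ROC-AUC is computed directly from the scores (no threshold needed).
print("ROC-AUC:  ", roc_auc_score(y_true, y_score))

# Precision, recall, and the confusion matrix need hard 0/1 predictions,
# so pick a threshold (0.5 here; with skewed positives you will likely
# want to tune it on a validation set).
y_pred = (y_score >= 0.5).astype(int)
print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
```

Keep in mind that with heavily skewed positives these thresholded metrics are quite sensitive to the cutoff you pick, so read them with care.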


P260 commented Apr 6, 2021

@weihua916 Thanks for your reply. Yes, I understand, but is there any way I can compute precision and recall metrics using your model on my dataset?
