Hi, I had one query: can we use any other metrics to evaluate the results? Why did you choose auc_roc_score()?
I also wanted to get a confusion matrix for my dataset using your model. I tried, but I could not succeed. Could you help me with it?
And I must say, wonderful work you all have done. Hats off!
Hi! ROC-AUC is a natural metric for binary classification. Also, the positive labels are heavily skewed in our datasets, so a confusion matrix may not be very informative.
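In case it helps, here is a minimal sketch of how you could compute both metrics yourself from the model's predicted scores, assuming scikit-learn; the arrays `y_true` and `y_score` are toy stand-ins for your dataset's labels and the model's outputs, and the 0.5 threshold is an arbitrary choice (ROC-AUC needs no threshold, a confusion matrix does):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Toy skewed labels (8 negatives, 2 positives) and example model scores.
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
y_score = np.array([0.1, 0.2, 0.05, 0.3, 0.15, 0.4, 0.2, 0.6, 0.7, 0.9])

# ROC-AUC works directly on the continuous scores.
auc = roc_auc_score(y_true, y_score)

# A confusion matrix needs hard predictions, so pick a threshold.
y_pred = (y_score >= 0.5).astype(int)
cm = confusion_matrix(y_true, y_pred)  # rows: true class, cols: predicted class

print(auc)
print(cm)
```

Note that with skewed labels the matrix is dominated by the majority class, which is why a threshold-free, rank-based metric like ROC-AUC is often more telling here.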