If you work in data science, machine learning, or medical diagnostics, you’ve probably heard of the ROC curve (Receiver Operating Characteristic curve). It’s a powerful tool for evaluating the performance of a binary classification model. But what if you don’t have access to Python, R, or SPSS? You can plot a ROC curve in Excel.

Add a new column named Threshold. List the predicted probabilities from the highest down to the lowest, then add 0 at the end.

You should now have a table like:

True positive rate (sensitivity), assuming the TP count is in F2 and the FN count is in I2:
=F2/(F2+I2)

True negative count at each threshold, counting actuals of 0 (column A) whose predicted probability (column B) falls below the threshold in E2:
=COUNTIFS($A$2:$A$100,0,$B$2:$B$100,"<"&E2)

To compute the AUC with the trapezoidal rule, assume the Sensitivity (TPR) values are in column J and the FPR values in column K. In column M, average each adjacent pair of TPR values:
=(J2+J3)/2
Column N then multiplies each average height by the corresponding FPR step, and the AUC is the sum of those trapezoid areas:
=SUM(N2:N_last)

AUC ≥ 0.8 is generally considered good; 0.9+ is excellent.

Practical Example & Interpretation: let’s say your AUC = 0.87. This means there’s an 87% chance that the model will rank a randomly chosen positive instance higher than a randomly chosen negative one.
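The spreadsheet steps for building the ROC table can be cross-checked in plain Python. The column roles mirror the formulas above (actuals like column A, predicted probabilities like column B); the example data is an illustrative assumption, not from the original:

```python
# Cross-check of the spreadsheet logic in plain Python.
# labels plays the role of column A (0/1 actuals); scores plays the
# role of column B (predicted probabilities). Data is illustrative.
labels = [1, 1, 0, 1, 0, 0, 1, 0]
scores = [0.95, 0.85, 0.70, 0.60, 0.50, 0.40, 0.30, 0.20]

# Thresholds: each predicted probability from highest to lowest, then 0.
thresholds = sorted(scores, reverse=True) + [0]

P = sum(labels)          # total positives
N = len(labels) - P      # total negatives

points = []              # one (FPR, TPR) pair per threshold
for t in thresholds:
    # score >= threshold counts as a predicted positive, matching the
    # COUNTIFS criterion "<"&E2 that treats score < threshold as negative
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= t)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= t)
    points.append((fp / N, tp / P))  # (FPR, TPR = sensitivity)

print(points[0], points[-1])
```

Plotting these (FPR, TPR) pairs as a scatter chart with straight connecting lines gives the same ROC curve the Excel steps produce.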
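The trapezoid formulas (column M averaging adjacent TPR values, column N multiplying by the FPR step, summed for the AUC) amount to the following one-liner; the (FPR, TPR) points are an illustrative assumption:

```python
# Trapezoidal-rule AUC, mirroring the spreadsheet columns:
# fpr plays the role of column K, tpr of column J.
# Each trapezoid = average height (column M) * FPR width, summed as in
# =SUM(N2:N_last). Points must be sorted by ascending FPR.
fpr = [0.0, 0.0, 0.25, 0.5, 1.0]   # illustrative ROC points
tpr = [0.0, 0.5, 0.75, 1.0, 1.0]

auc = sum((tpr[i] + tpr[i + 1]) / 2 * (fpr[i + 1] - fpr[i])
          for i in range(len(fpr) - 1))
print(auc)
```

For these illustrative points the sum comes to 0.875, which by the ranking interpretation above means an 87.5% chance that a random positive is scored higher than a random negative.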