| Topic | Subtopic | Details |
|---|---|---|
| Performance metrics | Classification | Sensitivity (recall): TP/(TP + FN); specificity (true-negative rate): TN/(TN + FP); accuracy: number of correct predictions/total predictions; AUC: area under the ROC curve, which plots the true-positive rate (sensitivity) against the false-positive rate (1 − specificity) |
| | Segmentation | Dice similarity coefficient: overlap between 2 samples; Pearson correlation coefficient: strength of the linear relationship between 2 variables |
| Limitations and ways to address them | | Requires large datasets: multisite collaboration, open-source datasets; interpretability: saliency maps; overfitting: more training data, regularization, batch normalization |
Note:—FN indicates false negative; FP, false positive; ROC, receiver operating characteristic; TN, true negative; TP, true positive.
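As an illustration (not part of the source article), the classification metrics in the table can be computed directly from confusion-matrix counts; the function name and example counts below are hypothetical:

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                # recall / true-positive rate
    specificity = tn / (tn + fp)                # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # correct predictions / total predictions
    return sensitivity, specificity, accuracy

# Hypothetical confusion-matrix counts: 80 TP, 10 FP, 90 TN, 20 FN
sens, spec, acc = classification_metrics(tp=80, fp=10, tn=90, fn=20)
print(sens, spec, acc)  # 0.8 0.9 0.85
```

AUC, by contrast, is computed from the model's continuous scores rather than from a single confusion matrix: the ROC curve is traced by sweeping the decision threshold, and the area under it is then integrated.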
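For the segmentation row, a minimal sketch of the Dice similarity coefficient for two binary masks (the function name and masks are illustrative, not from the source):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * intersection / total if total else 1.0  # define DSC = 1 for two empty masks

# Two hypothetical flattened binary segmentation masks
a = [1, 1, 0, 1, 0]
b = [1, 0, 0, 1, 1]
print(dice_coefficient(a, b))  # 2·2 / (3 + 3) ≈ 0.667
```

Unlike accuracy, the Dice coefficient ignores true-negative background voxels, which is why it is preferred for segmentation tasks where the structure of interest occupies a small fraction of the image.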