Model performance is typically evaluated using metrics such as:
- Accuracy: The proportion of predictions that are correct.
- Precision and Recall: Precision is the fraction of positive predictions that are actually positive; recall is the fraction of actual positives the model correctly identifies.
- F1 Score: The harmonic mean of precision and recall.
- ROC-AUC: The area under the receiver operating characteristic curve, indicating how well the model distinguishes between classes.
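
As a concrete illustration, the sketch below computes these metrics with scikit-learn for a binary classification problem; the label and score arrays are hypothetical, made up purely for demonstration.

```python
from sklearn.metrics import (
    accuracy_score,
    precision_score,
    recall_score,
    f1_score,
    roc_auc_score,
)

# Hypothetical ground-truth labels, hard predictions, and predicted probabilities
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]
y_scores = [0.2, 0.9, 0.4, 0.1, 0.8, 0.6, 0.7, 0.95]

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 Score :", f1_score(y_true, y_pred))
# ROC-AUC is computed from scores or probabilities, not hard labels
print("ROC-AUC  :", roc_auc_score(y_true, y_scores))
```

Note that ROC-AUC takes the model's scores (or probabilities) rather than thresholded predictions, since it measures ranking quality across all possible thresholds.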