Classification label evaluation
summarize_label_metrics_binary(y_true, y_pred)
Generate a comprehensive report of evaluation metrics for binary classification results.
The output includes accuracy, precision, recall, the F1 score, and confusion matrix elements (true negatives, false positives, false negatives, true positives).
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`y_true` | `ndarray` | True labels. | required |
`y_pred` | `ndarray` | Predicted labels. The array should come from a binary classifier. | required |
Returns:

Type | Description |
---|---|
`Dict[str, Number]` | A dictionary containing the evaluated metrics. |
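A minimal usage sketch is shown below. It assumes `eis_toolkit` is installed and that both inputs are one-dimensional NumPy arrays of 0/1 labels; the exact key names in the returned dictionary are not specified here and may differ from those shown in the comments.

```python
# Usage sketch (assumes eis_toolkit is installed; exact dictionary keys may differ).
import numpy as np

from eis_toolkit.evaluation.classification_label_evaluation import (
    summarize_label_metrics_binary,
)

# Ground-truth labels and predictions from a binary classifier.
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
y_pred = np.array([0, 1, 0, 0, 1, 1, 1, 1])

metrics = summarize_label_metrics_binary(y_true=y_true, y_pred=y_pred)

# Print every reported metric, e.g. accuracy, precision, recall, F1 and
# the confusion matrix counts.
for name, value in metrics.items():
    print(f"{name}: {value}")
```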
Source code in eis_toolkit/evaluation/classification_label_evaluation.py
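The collapsed source listing is not reproduced here. For reference, the sketch below shows one way the reported quantities can be computed with scikit-learn; it illustrates the metric definitions only and is not necessarily the toolkit's implementation (the function name and key names are hypothetical).

```python
# Illustrative only: one way to compute the documented quantities with scikit-learn.
# This is NOT the toolkit's source code; it merely mirrors the documented output.
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    confusion_matrix,
    f1_score,
    precision_score,
    recall_score,
)


def summarize_label_metrics_binary_sketch(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Assemble a dictionary of binary classification metrics (hypothetical sketch)."""
    # Unpack the 2x2 confusion matrix into its four counts.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "Accuracy": accuracy_score(y_true, y_pred),
        "Precision": precision_score(y_true, y_pred),
        "Recall": recall_score(y_true, y_pred),
        "F1_score": f1_score(y_true, y_pred),
        "True_negatives": tn,
        "False_positives": fp,
        "False_negatives": fn,
        "True_positives": tp,
    }
```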