Zero one loss

  • Computes the sum or the average of the 0-1 classification loss over the n_\text{samples} samples.

  • It is computed as:

L_{0-1}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} 1(\hat{y}_i \ne y_i) = 1 - \text{accuracy}(y, \hat{y})
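
As a rough sanity check of the formula, here is a minimal sketch that evaluates the indicator sum by hand with NumPy (the arrays mirror the example below; this is not part of scikit-learn's API):

import numpy as np

y_true = np.array([2, 2, 3, 4])
y_pred = np.array([1, 2, 3, 4])

# Indicator 1(y_pred_i != y_true_i) for every sample.
mismatches = (y_pred != y_true).astype(int)

# The mean of the indicators is the zero-one loss (fraction of
# misclassifications); their sum is the misclassification count.
loss = mismatches.mean()    # 0.25
count = mismatches.sum()    # 1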

[1]:
from sklearn.metrics import zero_one_loss

y_pred = [1, 2, 3, 4]
y_true = [2, 2, 3, 4]

zero_one_loss(
    # -------------------------------------------------------------------------
    # Ground truth (correct) target values.
    y_true=y_true,
    # -------------------------------------------------------------------------
    # Estimated targets as returned by a classifier.
    y_pred=y_pred,
    # -------------------------------------------------------------------------
    # If False, return the number of misclassifications. Otherwise, return the
    # fraction of misclassifications.
    normalize=True,
    # -------------------------------------------------------------------------
    # Sample weights.
    sample_weight=None,
)
[1]:
0.25
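
Since the loss equals 1 - accuracy, the same value can be recovered from accuracy_score; a minimal cross-check reusing the y_true / y_pred defined above:

from sklearn.metrics import accuracy_score

# 1 - accuracy gives the same fraction of misclassifications: 0.25.
1 - accuracy_score(y_true, y_pred)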
[2]:
zero_one_loss(
    y_true,
    y_pred,
    normalize=False,
)
[2]:
1
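
The sample_weight argument, left at None above, weights each sample's contribution to the loss. A minimal sketch with arbitrarily chosen weights, reusing y_true / y_pred from above:

import numpy as np

# Hypothetical weights: the misclassified first sample gets weight 3.
weights = np.array([3.0, 1.0, 1.0, 1.0])

# Weighted fraction of misclassifications: 3 / (3 + 1 + 1 + 1) = 0.5.
zero_one_loss(y_true, y_pred, sample_weight=weights)

# With normalize=False the weighted misclassification count, 3.0,
# is returned instead.
zero_one_loss(y_true, y_pred, normalize=False, sample_weight=weights)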