```python
from sklearn.metrics import precision_score, recall_score

y_true, y_pred = [0, 1, 2, 0, 1, 2], [0, 2, 1, 0, 0, 1]  # toy labels; precision_score expects predicted labels, not scores
print(precision_score(y_true, y_pred, average='micro'))
print(recall_score(y_true, y_pred, average='micro'))
```
The sklearn.metrics module implements loss, score, and utility functions for measuring classification performance. Some metrics require probability estimates of the positive class, confidence values, or binary decision values. Most implementations let each sample contribute a weighted amount to the overall score, via the sample_weight parameter.
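For example, a minimal sketch of per-sample weighting with accuracy_score (the labels and weights here are made up for illustration):

```python
from sklearn.metrics import accuracy_score

y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]
print(accuracy_score(y_true, y_pred))                              # 0.75
# Up-weighting the misclassified third sample lowers the weighted score.
print(accuracy_score(y_true, y_pred, sample_weight=[1, 1, 2, 1]))  # 0.6
```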
Some metrics are restricted to the binary classification case (a short sketch follows the list):
- matthews_corrcoef(y_true, y_pred)
- precision_recall_curve(y_true, probas_pred)
- roc_curve(y_true, y_score[, pos_label, ...])
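A minimal sketch of these binary metrics, assuming made-up labels and scores (y_score would typically come from a classifier's decision_function or predict_proba):

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef, precision_recall_curve, roc_curve

y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])  # e.g. predict_proba()[:, 1]
y_pred = (y_score >= 0.5).astype(int)      # hard decisions for matthews_corrcoef

print(matthews_corrcoef(y_true, y_pred))
precision, recall, pr_thresholds = precision_recall_curve(y_true, y_score)
fpr, tpr, roc_thresholds = roc_curve(y_true, y_score)
```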
Others also work in the multiclass case (see the example after the list):
- confusion_matrix(y_true, y_pred[, labels])
- hinge_loss(y_true, pred_decision[, labels, ...])
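For instance, confusion_matrix in the multiclass case (hinge_loss instead takes decision_function outputs as pred_decision); the labels below are made up:

```python
from sklearn.metrics import confusion_matrix

y_true = [2, 0, 2, 2, 0, 1]
y_pred = [0, 0, 2, 2, 0, 2]
# Row i, column j counts samples of true class i predicted as class j.
print(confusion_matrix(y_true, y_pred, labels=[0, 1, 2]))
```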
Some also handle the multilabel case (an example follows the list):
- accuracy_score(y_true, y_pred[, normalize, ...])
- classification_report(y_true, y_pred[, ...])
- f1_score(y_true, y_pred[, labels, ...])
- fbeta_score(y_true, y_pred, beta[, labels, ...])
- hamming_loss(y_true, y_pred[, classes])
- jaccard_similarity_score(y_true, y_pred[, ...])
- log_loss(y_true, y_pred[, eps, normalize, ...])
- precision_recall_fscore_support(y_true, y_pred)
- precision_score(y_true, y_pred[, labels, ...])
- recall_score(y_true, y_pred[, labels, ...])
- zero_one_loss(y_true, y_pred[, normalize, ...])
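In the multilabel case these metrics accept a binary indicator matrix (one row per sample, one column per label). A minimal sketch with made-up data:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, hamming_loss

y_true = np.array([[1, 0, 1],
                   [0, 1, 0]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 0]])
print(accuracy_score(y_true, y_pred))             # subset accuracy: 0.5
print(hamming_loss(y_true, y_pred))               # fraction of wrong labels: ~0.167
print(f1_score(y_true, y_pred, average='micro'))  # 0.8
```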
Finally, some metrics work with binary and multilabel (but not multiclass) problems (see the sketch after the list):
- average_precision_score(y_true, y_score[, ...])
- roc_auc_score(y_true, y_score[, average, ...])
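Both take continuous scores rather than hard predictions; a sketch with made-up data:

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])
print(average_precision_score(y_true, y_score))
print(roc_auc_score(y_true, y_score))  # 0.75 for these scores
```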
Python code for multiclass evaluation metrics:
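A sketch of what such code might look like, combining the averaged multiclass metrics listed above (the labels are made up for illustration):

```python
from sklearn.metrics import (classification_report, f1_score,
                             precision_score, recall_score)

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]
# 'macro' averages the per-class scores without weighting by class frequency.
print(precision_score(y_true, y_pred, average='macro'))
print(recall_score(y_true, y_pred, average='macro'))
print(f1_score(y_true, y_pred, average='macro'))
print(classification_report(y_true, y_pred))
```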