
How can I use the Matthews correlation coefficient for scoring in GridSearchCV?

  •  0
  • con  · asked 3 years ago

    I'm using GridSearchCV to tune the hyperparameters of my machine-learning model:

    grid_search = GridSearchCV(estimator=xg_clf, scoring='f1', param_grid=param_grid, n_jobs=-1, cv=kfold)
    

    However, my supervisor wants me to use the Matthews correlation coefficient for scoring, and unfortunately it is not one of the available options:

    >>> sorted(sklearn.metrics.SCORERS.keys())
    ['accuracy', 'adjusted_mutual_info_score', 'adjusted_rand_score', 'average_precision', 'balanced_accuracy', 'completeness_score', 'explained_variance', 'f1', 'f1_macro', 'f1_micro', 'f1_samples', 'f1_weighted', 'fowlkes_mallows_score', 'homogeneity_score', 'jaccard', 'jaccard_macro', 'jaccard_micro', 'jaccard_samples', 'jaccard_weighted', 'max_error', 'mutual_info_score', 'neg_brier_score', 'neg_log_loss', 'neg_mean_absolute_error', 'neg_mean_absolute_percentage_error', 'neg_mean_gamma_deviance', 'neg_mean_poisson_deviance', 'neg_mean_squared_error', 'neg_mean_squared_log_error', 'neg_median_absolute_error', 'neg_root_mean_squared_error', 'normalized_mutual_info_score', 'precision', 'precision_macro', 'precision_micro', 'precision_samples', 'precision_weighted', 'r2', 'rand_score', 'recall', 'recall_macro', 'recall_micro', 'recall_samples', 'recall_weighted', 'roc_auc', 'roc_auc_ovo', 'roc_auc_ovo_weighted', 'roc_auc_ovr', 'roc_auc_ovr_weighted', 'top_k_accuracy', 'v_measure_score']
    

    I've read Demonstration of multi-metric evaluation on cross_val_score and GridSearchCV in the documentation, but it doesn't look like a straightforward way to do this.

    How can I use the Matthews correlation coefficient for scoring in GridSearchCV?

    1 Answer  |  3 years ago
        1
  •  0
  •   desertnaut SKZI    3 years ago

    From the help for the make_scorer function in the sklearn.metrics module:

    make_scorer(score_func, *, greater_is_better=True, needs_proba=False, needs_threshold=False, **kwargs)
        Make a scorer from a performance metric or loss function.
        
        This factory function wraps scoring functions for use in
        :class:`~sklearn.model_selection.GridSearchCV` and
        :func:`~sklearn.model_selection.cross_val_score`.
        It takes a score function, such as :func:`~sklearn.metrics.accuracy_score`,
        :func:`~sklearn.metrics.mean_squared_error`,
    

    So the solution is simply to use make_scorer, passing it the corresponding function from sklearn.metrics :

    from sklearn.metrics import make_scorer, matthews_corrcoef

    grid_search = GridSearchCV(estimator=xg_clf, scoring=make_scorer(matthews_corrcoef), param_grid=param_grid, n_jobs=-1, cv=kfold)
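    Here is a minimal end-to-end sketch of the same idea. Since the question's xg_clf, param_grid, and kfold are not shown, this example substitutes a LogisticRegression on toy data and a small parameter grid purely for illustration:

    ```python
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import make_scorer, matthews_corrcoef
    from sklearn.model_selection import GridSearchCV

    # Toy binary-classification data; LogisticRegression stands in for the
    # XGBoost classifier used in the question.
    X, y = make_classification(n_samples=200, n_features=10, random_state=0)

    # Wrap the metric function so GridSearchCV can use it for scoring.
    mcc_scorer = make_scorer(matthews_corrcoef)

    grid = GridSearchCV(
        estimator=LogisticRegression(max_iter=1000),
        param_grid={"C": [0.1, 1.0, 10.0]},  # illustrative grid
        scoring=mcc_scorer,
        cv=3,
    )
    grid.fit(X, y)

    print(grid.best_params_)  # parameters with the highest mean MCC
    print(grid.best_score_)   # mean cross-validated MCC of the best model
    ```

    Because MCC is a score where higher is better, the default greater_is_better=True of make_scorer is correct here, so no extra arguments are needed.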