
Evaluate classification prediction results.

Usage

X.evaluate(
  data = NULL,
  scores.col = NA,
  labels.col = NA,
  eval.metric = "prc",
  plot = FALSE,
  black = FALSE,
  kappa.cutoff = 0.5,
  kappa.weight = "unweighted",
  F1.cutoff = 0.5
)

Arguments

data

Data frame with columns for model scores and labels.

scores.col

Character string: name of the column in 'data' that contains the prediction scores (probabilities) to be evaluated.

labels.col

Character string: name of the column in 'data' that contains the class labels (1 for complex-forming, 0 for others).

eval.metric

Character string: method used to evaluate the predictor. 'roc' for the area under the receiver operating characteristic curve, 'prc' for the area under the precision-recall curve, 'kappa' for Cohen's kappa, and 'F1' for the F1 score.

plot

Logical: should the plot be generated and returned as part of the output?

black

Logical: should the plotted curve be drawn in black rather than with a color gradient? Default is FALSE.

kappa.cutoff

Numeric: score cutoff used to binarize predictions when evaluating with Cohen's kappa. Default is 0.5.

kappa.weight

Character string: same as the weight argument of irr::kappa2. Default is "unweighted".

F1.cutoff

Numeric: score cutoff used to binarize predictions when evaluating with the F1 score. Default is 0.5.
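
The cutoff arguments only matter for the threshold-based metrics ('kappa' and 'F1'): scores above the cutoff are typically treated as positive predictions before the metric is computed. A minimal sketch of that idea, assuming the standard irr::kappa2 interface (illustrative only, not the package's internal code):

# Illustrative only: how a score cutoff is typically applied before
# computing a threshold-based metric such as Cohen's kappa.
scores    <- c(0.91, 0.22, 0.68, 0.35)
labels    <- c(1, 0, 1, 1)
predicted <- as.integer(scores >= 0.5)   # corresponds to kappa.cutoff / F1.cutoff

# Cohen's kappa on the binarized predictions; 'weight' corresponds to
# the kappa.weight argument above.
irr::kappa2(data.frame(labels, predicted), weight = "unweighted")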

Value

A list with three elements: $eval.metric, the numeric value of the chosen evaluation metric; $curvedata, the data needed to draw the curve corresponding to that metric; and $plot, the plot of that curve.

Examples

# 'data' is assumed to hold a score column and a 0/1 label column;
# the column names used here are illustrative.
evaluation <- X.evaluate(data, scores.col = "scores", labels.col = "labels")
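
# A fuller, self-contained sketch with explicit arguments and result
# inspection (values and column names are illustrative, not taken from
# the package):
toy <- data.frame(
  scores = c(0.92, 0.15, 0.73, 0.41, 0.88, 0.07),
  labels = c(1, 0, 1, 0, 1, 0)
)

res <- X.evaluate(toy, scores.col = "scores", labels.col = "labels",
                  eval.metric = "roc", plot = TRUE)

res$eval.metric   # numeric value of the chosen metric
res$curvedata     # data for drawing the corresponding curve
res$plot          # the plot itself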