graph4nlp.evaluation

Evaluation Metrics

class graph4nlp.evaluation.BLEU(n_grams, verbose=0)

The BLEU evaluation metric class.

Parameters
  • n_grams (list[int]) – The n-gram orders to evaluate. A BLEU_n score is returned for each value in n_grams, in the given order.

  • verbose (int, default = 0) – The verbosity level. If set to 0, no logs are printed.

Methods

calculate_scores(ground_truth, predict)

The BLEU calculation function. It will compute the BLEU scores.

calculate_scores(ground_truth, predict)

The BLEU calculation function. It will compute the BLEU scores.

Parameters
  • ground_truth (list[string]) – The ground truth (correct) target values. It is a list of strings.

  • predict (list[string]) – The predicted target values. It is a list of strings.

Returns

  • score (list[float]) – A list containing the BLEU_n score for each requested value in n_grams.

  • scores (list[list[float]]) – The per-sample scores for each requested BLEU_n metric.
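
Example

A minimal usage sketch, assuming BLEU is importable directly from the graph4nlp.evaluation path documented above; the reference and predicted sentences are illustrative only.

    from graph4nlp.evaluation import BLEU

    ground_truth = ["the cat sits on the mat", "a dog plays in the yard"]
    predict = ["the cat sat on the mat", "a dog is playing in the yard"]

    # Request BLEU-1 through BLEU-4; verbose=0 suppresses logging.
    bleu = BLEU(n_grams=[1, 2, 3, 4], verbose=0)
    score, scores = bleu.calculate_scores(ground_truth=ground_truth, predict=predict)

    print(score)   # one BLEU_n value per entry in n_grams
    print(scores)  # per-sample scores for each BLEU_n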

class graph4nlp.evaluation.CIDEr(df)

The CIDEr evaluation metric class.

Parameters

df (string) – The document frequency setting used by the CIDEr scorer.

Methods

calculate_scores(ground_truth, predict)

The CIDEr calculation function. It will compute the CIDEr scores.

calculate_scores(ground_truth, predict)

The CIDEr calculation function. It will compute the CIDEr scores.

Parameters
  • ground_truth (list[string]) – The ground truth (correct) target values. It is a list of strings.

  • predict (list[string]) – The predicted target values. It is a list of strings.

Returns

  • score (float) – The CIDEr value.

  • scores (list[float]) – The per-sample scores for the CIDEr metric.
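
Example

A minimal usage sketch, assuming the import path documented above; the value "corpus" for df is an assumption, since the documentation only states that df is the document frequency setting.

    from graph4nlp.evaluation import CIDEr

    ground_truth = ["the cat sits on the mat"]
    predict = ["the cat sat on the mat"]

    # df="corpus" is assumed here; use the document frequency setting your setup expects.
    cider = CIDEr(df="corpus")
    score, scores = cider.calculate_scores(ground_truth=ground_truth, predict=predict)

    print(score)   # the single CIDEr value (float)
    print(scores)  # per-sample CIDEr scores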

class graph4nlp.evaluation.METEOR

The METEOR evaluation metric class.

Methods

calculate_scores(ground_truth, predict)

The METEOR calculation function. It will compute the METEOR scores.

calculate_scores(ground_truth, predict)

The METEOR calculation function. It will compute the METEOR scores.

Parameters
  • ground_truth (list[string]) – The ground truth (correct) target values. It is a list of strings.

  • predict (list[string]) – The predicted target values. It is a list of strings.

Returns

  • score (float) – The METEOR value.

  • scores (list[float]) – The per-sample scores for the METEOR metric.
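
Example

A minimal usage sketch, assuming the import path documented above; per the class signature, METEOR takes no constructor arguments.

    from graph4nlp.evaluation import METEOR

    ground_truth = ["the cat sits on the mat"]
    predict = ["the cat sat on the mat"]

    meteor = METEOR()
    score, scores = meteor.calculate_scores(ground_truth=ground_truth, predict=predict)

    print(score)   # the single METEOR value (float)
    print(scores)  # per-sample METEOR scores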

class graph4nlp.evaluation.ROUGE

The ROUGE evaluation metric class.

Methods

calculate_scores(ground_truth, predict)

The ROUGE calculation function. It will compute the ROUGE scores.

calculate_scores(ground_truth, predict)

The ROUGE calculation function. It will compute the ROUGE scores.

Parameters
  • ground_truth (list[string]) – The ground truth (correct) target values. It is a list of strings.

  • predict (list[string]) – The predicted target values. It is a list of strings.

Returns

  • score (float) – The ROUGE value.

  • scores (list[float]) – The per-sample scores for the ROUGE metric.
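
Example

A minimal usage sketch, assuming the import path documented above and the same calling convention as the other metrics; per the class signature, ROUGE takes no constructor arguments.

    from graph4nlp.evaluation import ROUGE

    ground_truth = ["the cat sits on the mat"]
    predict = ["the cat sat on the mat"]

    rouge = ROUGE()
    score, scores = rouge.calculate_scores(ground_truth=ground_truth, predict=predict)

    print(score)   # the single ROUGE value (float)
    print(scores)  # per-sample ROUGE scores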