# Chapter 7. Evaluations and Loss components

## Evaluations

This part covers the main evaluation metrics for various tasks. These metrics derive from the same base class and share the same interface for easy use.

1. For classification problems, we implement the precision, recall, F1, and accuracy metrics.

```python
import torch
from graph4nlp.pytorch.modules.evaluation.accuracy import Accuracy

ground_truth = torch.Tensor([0, 1, 3, 1, 1]).long()
predict = torch.Tensor([0, 1, 3, 3, 1]).long()

metric = Accuracy(metrics=["precision", "recall", "F1", "accuracy"])

precision, recall, f1, accuracy = metric.calculate_scores(
    ground_truth=ground_truth, predict=predict, average="micro")
```

2. For generation tasks, we implement 6 metrics: 1) BLEU (BLEU@1-4), 2) BLEUTranslation (SacreBLEU), 3) CIDEr, 4) METEOR, 5) ROUGE (ROUGE-L), and 6) SummarizationRouge (implemented with pyrouge).

```python
from graph4nlp.pytorch.modules.evaluation import BLEU

bleu_metrics = BLEU(n_grams=[1, 2, 3, 4])

prediction = ["I am a PHD student.", "I am interested in Graph Neural Network."]
ground_truth = ["I am a student.", "She is interested in Math."]

scores = bleu_metrics.calculate_scores(ground_truth=ground_truth, predict=prediction)
```


## Loss components

We have implemented several specific loss functions for various tasks.

### Sequence generation loss

We wrap the cross-entropy loss and the (optional) coverage loss to compute the final loss for various sequence generation tasks (e.g., graph2seq, seq2seq).

```python
from graph4nlp.pytorch.modules.loss.seq_generation_loss import SeqGenerationLoss
from graph4nlp.pytorch.models.graph2seq import Graph2Seq

loss_function = SeqGenerationLoss(ignore_index=0, use_coverage=True)

# graph2seq_model is a Graph2Seq instance built beforehand; running it on the
# input graph and target sequence yields logits, attention weights, and coverage vectors.
logits, enc_attn_weights, coverage_vectors = graph2seq_model(graph, tgt)
graph2seq_loss = loss_function(logits, tgt, enc_attn_weights=enc_attn_weights,
                               coverage_vectors=coverage_vectors)
```


### Knowledge Graph Loss

In state-of-the-art KGE models, loss functions are designed following pointwise, pairwise, and multi-class approaches. See *Loss Functions in Knowledge Graph Embedding Models* for details.

#### Pointwise Loss Functions

1. MSELoss creates a criterion that measures the mean squared error (squared L2 norm) between each element of the input $$x$$ and target $$y$$. It is a wrapper of nn.MSELoss in PyTorch.

2. SoftMarginLoss creates a criterion that optimizes a two-class classification logistic loss between input tensor $$x$$ and target tensor $$y$$ (containing 1 or -1). It is a wrapper of nn.SoftMarginLoss in PyTorch.

The numbers of positive and negative samples should be roughly balanced; otherwise it is easy to overfit.

$\text{loss}(x, y) = \sum_i \frac{\log(1 + \exp(-y[i]*x[i]))}{\text{x.nelement}()}$
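To make the formula concrete, here is a small sanity check (using PyTorch's nn.SoftMarginLoss directly, outside graph4nlp, with made-up values) that compares the built-in criterion against a manual computation of the sum above:

```python
import torch
import torch.nn as nn

# Illustrative inputs: x holds raw scores, y holds targets in {+1, -1}.
x = torch.tensor([0.5, -1.2, 2.0])
y = torch.tensor([1.0, -1.0, 1.0])

loss = nn.SoftMarginLoss()(x, y)

# Manual computation of the formula: sum_i log(1 + exp(-y[i] * x[i])) / x.nelement()
manual = torch.log(1 + torch.exp(-y * x)).sum() / x.nelement()
```

The two values agree, since nn.SoftMarginLoss with the default mean reduction implements exactly this formula.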

#### Pairwise Loss Functions

1. SoftplusLoss follows the paper *OpenKE: An Open Toolkit for Knowledge Embedding*.

```python
class SoftplusLoss(nn.Module):
    def __init__(self, adv_temperature=None):
        super(SoftplusLoss, self).__init__()
        self.criterion = nn.Softplus()
        if adv_temperature is not None:
            # Temperature for self-adversarial negative sampling.
            self.adv_temperature = nn.Parameter(torch.Tensor([adv_temperature]))
            self.adv_temperature.requires_grad = False
            self.adv_flag = True
        else:
            self.adv_flag = False

    def get_weights(self, n_score):
        # Self-adversarial weights over the negative scores.
        return F.softmax(n_score * self.adv_temperature, dim=-1).detach()

    def forward(self, p_score, n_score):
        if self.adv_flag:
            return (self.criterion(-p_score).mean() + (self.get_weights(n_score) * self.criterion(n_score)).sum(
                dim=-1).mean()) / 2
        else:
            return (self.criterion(-p_score).mean() + self.criterion(n_score).mean()) / 2
```

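For intuition, the non-adversarial branch of the forward pass can be reproduced directly with torch.nn.functional.softplus. The scores below are made-up values: p_score holds scores of positive triples and n_score scores of negative triples (higher means more plausible):

```python
import torch
import torch.nn.functional as F

# Hypothetical scores, for illustration only.
p_score = torch.tensor([3.1, 2.4])   # positive triples
n_score = torch.tensor([0.2, -0.5])  # negative triples

# Non-adversarial branch: softplus(-p) penalizes low positive scores,
# softplus(n) penalizes high negative scores.
loss = (F.softplus(-p_score).mean() + F.softplus(n_score).mean()) / 2
```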

2. SigmoidLoss follows the paper *OpenKE: An Open Toolkit for Knowledge Embedding*.

```python
class SigmoidLoss(nn.Module):
    def __init__(self, adv_temperature=None):
        super(SigmoidLoss, self).__init__()
        self.criterion = nn.LogSigmoid()
        if adv_temperature is not None:
            # Temperature for self-adversarial negative sampling.
            self.adv_temperature = nn.Parameter(torch.Tensor([adv_temperature]))
            self.adv_temperature.requires_grad = False
            self.adv_flag = True
        else:
            self.adv_flag = False

    def get_weights(self, n_score):
        # Self-adversarial weights over the negative scores.
        return F.softmax(n_score * self.adv_temperature, dim=-1).detach()

    def forward(self, p_score, n_score):
        if self.adv_flag:
            return -(self.criterion(p_score).mean() + (self.get_weights(n_score) * self.criterion(-n_score)).sum(
                dim=-1).mean()) / 2
        else:
            return -(self.criterion(p_score).mean() + self.criterion(-n_score).mean()) / 2
```

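As with SoftplusLoss, the non-adversarial branch can be reproduced with torch.nn.functional.logsigmoid on made-up scores (illustration only, not part of graph4nlp):

```python
import torch
import torch.nn.functional as F

# Hypothetical scores, for illustration only.
p_score = torch.tensor([3.1, 2.4])   # positive triples
n_score = torch.tensor([0.2, -0.5])  # negative triples

# Non-adversarial branch: maximize log sigmoid(p_score) and log sigmoid(-n_score),
# i.e. push positive scores up and negative scores down.
loss = -(F.logsigmoid(p_score).mean() + F.logsigmoid(-n_score).mean()) / 2
```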

The implementations of SoftplusLoss and SigmoidLoss refer to OpenKE.

#### Multi-Class Loss Functions

1. Binary Cross Entropy Loss creates a criterion that measures the binary cross entropy between the target and the output. Note that the targets $$y$$ should be numbers between 0 and 1. It is a wrapper of nn.BCELoss in PyTorch.

Here is a simple usage example:

```python
import torch
import torch.nn as nn
from graph4nlp.pytorch.modules.loss.kg_loss import KGLoss

loss_function = KGLoss(loss_type="BCELoss")
m = nn.Sigmoid()
input = torch.randn(3, requires_grad=True)  # raw scores
target = torch.empty(3).random_(2)          # binary targets in {0, 1}
output = loss_function(m(input), target)
```


### General Loss

It includes the most commonly used loss functions:

1. NLL loss. It is a wrapper of nn.NLLLoss in PyTorch.

2. BCE loss. It is a wrapper of nn.BCELoss in PyTorch.

3. BCEWithLogits loss. It is a wrapper of nn.BCEWithLogitsLoss in PyTorch.

4. MultiLabelMargin loss. It is a wrapper of nn.MultiLabelMarginLoss in PyTorch.

5. SoftMargin loss. It is a wrapper of nn.SoftMarginLoss in PyTorch.

6. CrossEntropy loss. It is a wrapper of nn.CrossEntropyLoss in PyTorch.

Here is a simple usage example:

```python
import torch
from graph4nlp.pytorch.modules.loss.general_loss import GeneralLoss

loss_function = GeneralLoss(loss_type="CrossEntropy")
input = torch.randn(3, 5)                             # unnormalized scores over 5 classes
target = torch.empty(3, dtype=torch.long).random_(5)  # class indices
output = loss_function(input, target)
```