graph4nlp.prediction

Classification

Graph Classification

class graph4nlp.prediction.classification.graph_classification.FeedForwardNN(input_size, num_class, hidden_size, activation=None, graph_pool_type='max_pool', **kwargs)

FeedForwardNN class for the graph classification task.

Parameters
input_size: int

The dimension of input graph embeddings.

num_class: int

The number of classes for classification.

hidden_size: list of int

Hidden size per NN layer.

activation: nn.Module, optional

The activation function, default: nn.ReLU().

Methods

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Set the extra representation of the module

float()

Casts all floating point parameters and buffers to float datatype.

forward(graph)

Compute the logits tensor for graph classification.

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_extra_state()

Returns any extra state to include in the module’s state_dict.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_full_backward_hook(hook)

Registers a backward hook on the module.

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

share_memory()

See torch.Tensor.share_memory_()

state_dict([destination, prefix, keep_vars])

Returns a dictionary containing a whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

__call__

forward(graph)

Compute the logits tensor for graph classification.

Parameters
graph: GraphData

The graph data containing graph embeddings.

Returns
list of GraphData

The output graph data containing the logits tensor for graph classification.
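
A minimal usage sketch (assumptions: the import path is abbreviated as in this page's class names, and graph is a batched GraphData whose "node_emb" node feature was filled by an upstream graph encoder):

    import torch.nn as nn
    from graph4nlp.prediction.classification.graph_classification import FeedForwardNN

    clf = FeedForwardNN(
        input_size=128,        # dimension of the input graph embeddings
        num_class=5,           # number of classes for classification
        hidden_size=[64, 32],  # hidden size per NN layer
        activation=nn.ReLU(),  # the default activation
        graph_pool_type="max_pool",
    )
    output = clf(graph)        # graph data carrying the classification logits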

class graph4nlp.prediction.classification.graph_classification.AvgPooling

Apply average pooling over the nodes in the graph.

\[r^{(i)} = \frac{1}{N_i}\sum_{k=1}^{N_i} x^{(i)}_k\]

Methods

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Set the extra representation of the module

float()

Casts all floating point parameters and buffers to float datatype.

forward(graph, feat)

Compute average pooling.

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_extra_state()

Returns any extra state to include in the module’s state_dict.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_full_backward_hook(hook)

Registers a backward hook on the module.

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

share_memory()

See torch.Tensor.share_memory_()

state_dict([destination, prefix, keep_vars])

Returns a dictionary containing a whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

__call__

forward(graph, feat)

Compute average pooling.

Parameters
graph: GraphData

The graph data.

feat: str

The feature field name.

Returns
torch.Tensor

The output feature.
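
The formula above is a per-graph mean over node features. A self-contained sketch of the equivalent computation in plain PyTorch (the two-graph batching layout is an assumption for illustration):

    import torch

    feat = torch.randn(5, 16)  # stacked node features of two graphs
    graph_sizes = [3, 2]       # N_1 = 3 nodes, N_2 = 2 nodes

    # r^(i) = (1 / N_i) * sum_k x_k^(i), one mean per graph
    pooled = torch.stack([c.mean(dim=0) for c in feat.split(graph_sizes)])
    assert pooled.shape == (2, 16)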

Knowledge Graph Completion

class graph4nlp.prediction.classification.kg_completion.ComplEx(input_dropout=0.0, loss_name='BCELoss')

Specific class for the knowledge graph completion task.

ComplEx from the paper Complex Embeddings for Simple Link Prediction.

Parameters
input_dropout: float

Dropout for node_emb and rel_emb. Default: 0.0

loss_name: str

The loss type selected for the KG completion task. Default: ‘BCELoss’

Methods

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Set the extra representation of the module

float()

Casts all floating point parameters and buffers to float datatype.

forward(input_graph, e1_embedded_real, …)

Forward functions to compute the logits tensor for kg completion.

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_extra_state()

Returns any extra state to include in the module’s state_dict.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_full_backward_hook(hook)

Registers a backward hook on the module.

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

share_memory()

See torch.Tensor.share_memory_()

state_dict([destination, prefix, keep_vars])

Returns a dictionary containing a whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

__call__

forward(input_graph: graph4nlp.pytorch.data.data.GraphData, e1_embedded_real, rel_embedded_real, e1_embedded_img, rel_embedded_img, all_node_emb_real, all_node_emb_img, multi_label=None)

Forward functions to compute the logits tensor for kg completion.

Parameters
input_graph: GraphData

The tensors stored in the node feature fields named “node_emb” and “rel_emb” in the input_graph are used for knowledge graph completion.

e1_embedded_real: tensor [B, H]

The selected entity_1 real embeddings of a batch. B: batch size; H: length of the node embeddings (entity embeddings).

rel_embedded_real: tensor [B, H]

The selected relation real embeddings of a batch. B: batch size; H: length of the edge embeddings (relation embeddings).

e1_embedded_img: tensor [B, H]

The selected entity_1 imaginary embeddings of a batch. B: batch size; H: length of the node embeddings (entity embeddings).

rel_embedded_img: tensor [B, H]

The selected relation imaginary embeddings of a batch. B: batch size; H: length of the edge embeddings (relation embeddings).

all_node_emb_real: torch.nn.modules.sparse.Embedding [N, H]

All node real embeddings. N: number of nodes in the whole KG graph; H: length of the node real embeddings (entity embeddings).

all_node_emb_img: torch.nn.modules.sparse.Embedding [N, H]

All node imaginary embeddings. N: number of nodes in the whole KG graph; H: length of the node imaginary embeddings (entity embeddings).

multi_label: tensor [B, N]

multi_label is a binary matrix in which each element is 1 for a true label and 0 for a false label (or 1 for a true label and -1 for a false label). multi_label[i] is the multi-label vector for a given head-rel pair. B: batch size; N: number of nodes in the whole KG graph.

Returns
output_graph: GraphData

The computed logit tensor for each node in the graph is stored in the node feature field named “node_logits”. The logit tensor shape is [num_class].
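
For reference, the ComplEx score against all candidate entities decomposes into four real-valued matrix products. A sketch with plain tensors (not the module itself; shapes follow the parameter list above):

    import torch

    B, N, H = 4, 100, 32
    e1_real, e1_img = torch.randn(B, H), torch.randn(B, H)
    rel_real, rel_img = torch.randn(B, H), torch.randn(B, H)
    all_real, all_img = torch.randn(N, H), torch.randn(N, H)  # Embedding weights

    # Re(<e1, rel, conj(e2)>) expanded into real/imaginary parts
    logits = ((e1_real * rel_real) @ all_real.t()
              + (e1_img * rel_real) @ all_img.t()
              + (e1_real * rel_img) @ all_img.t()
              - (e1_img * rel_img) @ all_real.t())  # [B, N]
    pred = torch.sigmoid(logits)                    # scores fed to BCELoss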

class graph4nlp.prediction.classification.kg_completion.ComplExLayer(input_dropout=0.0, loss_name='BCELoss')

Specific class for the knowledge graph completion task.

ComplEx from the paper Complex Embeddings for Simple Link Prediction.

Parameters
input_dropout: float

Dropout for node_emb and rel_emb. Default: 0.0

loss_name: str

The loss type selected for the KG completion task.

Methods

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Set the extra representation of the module

float()

Casts all floating point parameters and buffers to float datatype.

forward(e1_embedded_real, e1_embedded_img, …)

Parameters

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_extra_state()

Returns any extra state to include in the module’s state_dict.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_full_backward_hook(hook)

Registers a backward hook on the module.

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

share_memory()

See torch.Tensor.share_memory_()

state_dict([destination, prefix, keep_vars])

Returns a dictionary containing a whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

__call__

forward(e1_embedded_real, e1_embedded_img, rel_embedded_real, rel_embedded_img, all_node_emb_real, all_node_emb_img, multi_label=None)

Parameters
e1_embedded_real: tensor [B, H]

The selected entity_1 real embeddings of a batch. B: batch size; H: length of the node embeddings (entity embeddings).

e1_embedded_img: tensor [B, H]

The selected entity_1 imaginary embeddings of a batch. B: batch size; H: length of the node embeddings (entity embeddings).

rel_embedded_real: tensor [B, H]

The selected relation real embeddings of a batch. B: batch size; H: length of the edge embeddings (relation embeddings).

rel_embedded_img: tensor [B, H]

The selected relation imaginary embeddings of a batch. B: batch size; H: length of the edge embeddings (relation embeddings).

all_node_emb_real: torch.nn.modules.sparse.Embedding [N, H]

All node real embeddings. N: number of nodes in the whole KG graph; H: length of the node real embeddings (entity embeddings).

all_node_emb_img: torch.nn.modules.sparse.Embedding [N, H]

All node imaginary embeddings. N: number of nodes in the whole KG graph; H: length of the node imaginary embeddings (entity embeddings).

multi_label: tensor [B, N]

multi_label is a binary matrix in which each element is 1 for a true label and 0 for a false label (or 1 for a true label and -1 for a false label). multi_label[i] is the multi-label vector for a given head-rel pair. B: batch size; N: number of nodes in the whole KG graph.

Returns
pred: tensor [B, N]

The predicted score logits for all nodes.

pred_pos: tensor [B_p]

The prediction scores of positive examples.

pred_neg: tensor [B_n]

The prediction scores of negative examples. B_p + B_n == B * N.
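
A sketch of how the three return values relate, assuming the 1/0 labelling convention for multi_label:

    import torch

    B, N = 4, 100
    pred = torch.rand(B, N)                    # score logits for all nodes
    multi_label = torch.randint(0, 2, (B, N))  # 1 = true label, 0 = false label

    pred_pos = pred[multi_label == 1]          # shape [B_p]
    pred_neg = pred[multi_label == 0]          # shape [B_n]; B_p + B_n == B * N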

class graph4nlp.prediction.classification.kg_completion.DistMult(input_dropout=0.0, loss_name='BCELoss')

Specific class for the knowledge graph completion task.

DistMult from the paper Embedding entities and relations for learning and inference in knowledge bases.

Parameters
input_dropout: float

Dropout for node_emb and rel_emb. Default: 0.0

loss_name: str

The loss type selected for the KG completion task. Default: ‘BCELoss’

Methods

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Set the extra representation of the module

float()

Casts all floating point parameters and buffers to float datatype.

forward(input_graph, e1_emb, rel_emb, …[, …])

Forward functions to compute the logits tensor for kg completion.

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_extra_state()

Returns any extra state to include in the module’s state_dict.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_full_backward_hook(hook)

Registers a backward hook on the module.

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

share_memory()

See torch.Tensor.share_memory_()

state_dict([destination, prefix, keep_vars])

Returns a dictionary containing a whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

__call__

forward(input_graph: graph4nlp.pytorch.data.data.GraphData, e1_emb, rel_emb, all_node_emb, multi_label=None)

Forward functions to compute the logits tensor for kg completion.

Parameters
input_graph: GraphData

The tensors stored in the node feature fields named “node_emb” and “rel_emb” in the input_graph are used for knowledge graph completion.

e1_emb: tensor [B, H]

The selected entity_1 embeddings of a batch. B: batch size; H: length of the node embeddings (entity embeddings).

rel_emb: tensor [B, H]

The selected relation embeddings of a batch. B: batch size; H: length of the edge embeddings (relation embeddings).

all_node_emb: torch.nn.modules.sparse.Embedding [N, H]

All node embeddings. N: number of nodes in the whole KG graph; H: length of the node embeddings (entity embeddings).

multi_label: tensor [B, N]

multi_label is a binary matrix in which each element is 1 for a true label and 0 for a false label (or 1 for a true label and -1 for a false label). multi_label[i] is the multi-label vector for a given head-rel pair. B: batch size; N: number of nodes in the whole KG graph.

Returns
output_graph: GraphData

The computed logit tensor for each node in the graph is stored in the node feature field named “node_logits”. The logit tensor shape is [num_class].
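
The DistMult score against all candidate entities reduces to a single matrix product. A sketch with plain tensors (not the module itself; shapes follow the parameter list above):

    import torch

    B, N, H = 4, 100, 32
    e1_emb = torch.randn(B, H)        # selected head-entity embeddings
    rel_emb = torch.randn(B, H)       # selected relation embeddings
    all_node_emb = torch.randn(N, H)  # all entity embeddings (Embedding weight)

    # f(s, r, o) = e_s^T R_r e_o with diagonal R_r, scored for every candidate o
    logits = (e1_emb * rel_emb) @ all_node_emb.t()  # [B, N]
    pred = torch.sigmoid(logits)                    # scores fed to BCELoss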

class graph4nlp.prediction.classification.kg_completion.DistMultLayer(input_dropout=0.0, loss_name='BCELoss')

Specific class for the knowledge graph completion task.

DistMult from the paper Embedding entities and relations for learning and inference in knowledge bases.

\[f(s, r, o) = e_s^T R_r e_o\]
Parameters
input_dropout: float

Dropout for node_emb and rel_emb. Default: 0.0

loss_name: str

The loss type selected for the KG completion task.

Methods

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Set the extra representation of the module

float()

Casts all floating point parameters and buffers to float datatype.

forward(e1_emb, rel_emb, all_node_emb[, …])

Parameters

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_extra_state()

Returns any extra state to include in the module’s state_dict.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_full_backward_hook(hook)

Registers a backward hook on the module.

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

share_memory()

See torch.Tensor.share_memory_()

state_dict([destination, prefix, keep_vars])

Returns a dictionary containing a whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

__call__

forward(e1_emb, rel_emb, all_node_emb, multi_label=None)

Parameters
e1_emb: tensor [B, H]

The selected entity_1 embeddings of a batch. B: batch size; H: length of the node embeddings (entity embeddings).

rel_emb: tensor [B, H]

The selected relation embeddings of a batch. B: batch size; H: length of the edge embeddings (relation embeddings).

all_node_emb: torch.nn.modules.sparse.Embedding [N, H]

All node embeddings. N: number of nodes in the whole KG graph; H: length of the node embeddings (entity embeddings).

multi_label: tensor [B, N]

multi_label is a binary matrix in which each element is 1 for a true label and 0 for a false label (or 1 for a true label and -1 for a false label). multi_label[i] is the multi-label vector for a given head-rel pair. B: batch size; N: number of nodes in the whole KG graph.

Returns
pred: tensor [B, N]

The predicted score logits for all nodes.

pred_pos: tensor [B_p]

The prediction scores of positive examples.

pred_neg: tensor [B_n]

The prediction scores of negative examples. B_p + B_n == B * N.

Node Classification

class graph4nlp.prediction.classification.node_classification.BiLSTMFeedForwardNN(input_size, num_class, hidden_size=None, dropout=0)

Specific class for the node classification task.

Attributes
input_size: int

The length of the input node embeddings.

num_class: int

The number of node categories for classification.

hidden_size: int

The hidden size of the linear layer.

Methods

forward(node_emb)

Generate the node classification logits.

forward(input_graph)

Forward functions to compute the logits tensor for node classification.

Parameters
input_graph: GraphData

The tensors stored in the node feature field named “node_emb” in the input_graph are used for classification. The GraphData is batched and needs to be unbatched into individual sentences.

Returns
output_graph: GraphData

The computed logit tensor for each node in the graph is stored in the node feature field named “node_logits”. The logit tensor shape is [num_class].

class graph4nlp.prediction.classification.node_classification.BiLSTMFeedForwardNNLayer(input_size, output_size, hidden_size=None, dropout=0)

Specific class for the node classification layer.

Parameters
input_size: int

The length of the input node embeddings.

output_size: int

The number of node categories for classification.

hidden_size: int

The hidden size of the linear layer.

Methods

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Set the extra representation of the module

float()

Casts all floating point parameters and buffers to float datatype.

forward(node_emb[, node_idx])

Forward functions for classification task.

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_extra_state()

Returns any extra state to include in the module’s state_dict.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_full_backward_hook(hook)

Registers a backward hook on the module.

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

share_memory()

See torch.Tensor.share_memory_()

state_dict([destination, prefix, keep_vars])

Returns a dictionary containing a whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

__call__

init_hidden

forward(node_emb, node_idx=None)

Forward functions for classification task.

Parameters
node_emb: padded tensor [B, N, H]

B: batch size; N: max number of nodes; H: length of the node embeddings.

node_idx: a list of indices of the nodes that need classification.

Default: None. Example: [1, 3, 5]

Returns
logit tensor: [B, N, num_class]

The predicted score logits for all nodes.
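
A minimal usage sketch (hypothetical sizes; the import path is abbreviated as in this page's class names):

    import torch
    from graph4nlp.prediction.classification.node_classification import (
        BiLSTMFeedForwardNNLayer,
    )

    layer = BiLSTMFeedForwardNNLayer(input_size=64, output_size=5, hidden_size=32)

    node_emb = torch.randn(2, 10, 64)  # [B, N, H] padded node embeddings
    logits = layer(node_emb)           # expected shape [B, N, num_class]
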
class graph4nlp.prediction.classification.node_classification.FeedForwardNN(input_size, num_class, hidden_size, activation=None)

Specific class for the node classification task.

Parameters
input_size: int

The length of the input node embeddings.

num_class: int

The number of node categories for classification.

hidden_size: list of int

Example for a two-layer FeedForwardNN: [50, 20]

activation: the activation function class for each fully connected layer

Default: nn.ReLU(). Examples: nn.ReLU(), nn.Sigmoid().

Methods

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Set the extra representation of the module

float()

Casts all floating point parameters and buffers to float datatype.

forward(input_graph)

Forward functions to compute the logits tensor for node classification.

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_extra_state()

Returns any extra state to include in the module’s state_dict.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_full_backward_hook(hook)

Registers a backward hook on the module.

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

share_memory()

See torch.Tensor.share_memory_()

state_dict([destination, prefix, keep_vars])

Returns a dictionary containing a whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

__call__

forward(input_graph)

Forward functions to compute the logits tensor for node classification.

Parameters
input_graph: GraphData

The tensors stored in the node feature field named “node_emb” in the input_graph are used for classification.

Returns
output_graph: GraphData

The computed logit tensor for each node in the graph is stored in the node feature field named “node_logits”. The logit tensor shape is [num_class].
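
A usage sketch mirroring the two-layer example above (assumptions: input_graph is a GraphData whose "node_emb" node feature is already populated, and the logits are read back from the "node_logits" field named in the Returns note):

    import torch.nn as nn
    from graph4nlp.prediction.classification.node_classification import FeedForwardNN

    clf = FeedForwardNN(
        input_size=128,
        num_class=7,
        hidden_size=[50, 20],  # two fully connected layers
        activation=nn.ReLU(),
    )
    output_graph = clf(input_graph)
    node_logits = output_graph.node_features["node_logits"]  # [num_nodes, num_class]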

class graph4nlp.prediction.classification.node_classification.FeedForwardNNLayer(input_size, num_class, hidden_size, activation=None)

Specific class for the node classification task.

Parameters
input_size: int

The length of the input node embeddings.

num_class: int

The number of node categories for classification.

hidden_size: list of int

Example for a two-layer FeedForwardNN: [50, 20]

activation: the activation function class for each fully connected layer

Default: nn.ReLU(). Examples: nn.ReLU(), nn.Sigmoid().

Methods

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Set the extra representation of the module

float()

Casts all floating point parameters and buffers to float datatype.

forward(node_emb[, node_idx])

Forward functions to compute the logits tensor for node classification.

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_extra_state()

Returns any extra state to include in the module’s state_dict.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_full_backward_hook(hook)

Registers a backward hook on the module.

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

share_memory()

See torch.Tensor.share_memory_()

state_dict([destination, prefix, keep_vars])

Returns a dictionary containing a whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

__call__

forward(node_emb, node_idx=None)

Forward functions to compute the logits tensor for node classification.

Parameters
node_emb: tensor [N, H]

N: number of nodes; H: length of the node embeddings.

node_idx: a list of indices of the nodes that need classification.

Default: None

Returns
logit tensor: [N, num_class]

The predicted score logits for all nodes.

Generation

class graph4nlp.prediction.generation.StdRNNDecoder(max_decoder_step, input_size, hidden_size, word_emb, vocab: graph4nlp.pytorch.modules.utils.vocab_utils.Vocab, rnn_type='lstm', graph_pooling_strategy=None, use_attention=True, attention_type='uniform', rnn_emb_input_size=None, attention_function='mlp', node_type_num=None, fuse_strategy='average', use_copy=False, use_coverage=False, coverage_strategy='sum', tgt_emb_as_output_layer=False, dropout=0.3)

The standard RNN decoder for sequence decoding.

Parameters
  • max_decoder_step (int) – The maximal decoding step.

  • input_size (int) – The dimension for standard rnn decoder’s input.

  • hidden_size (int) – The dimension for standard rnn decoder’s hidden representation during calculation.

  • word_emb (torch.nn.Embedding) – The target’s embedding matrix.

  • vocab (Any) – The target’s vocabulary.

  • rnn_type (str, option=["lstm", "gru"], default="lstm") – The rnn’s type. We support lstm and gru here.

  • use_attention (bool, default=True) – Whether to use attention during decoding.

  • attention_type (str, option=["uniform", "sep_diff_encoder_type", "sep_diff_node_type"], default="uniform") – The attention strategy choice. “uniform”: uniform attention; we attend over the nodes uniformly. “sep_diff_encoder_type”: separate attention; we attend over the graph encoder’s and the rnn encoder’s results separately. “sep_diff_node_type”: separate attention; we attend over each node type separately.

  • attention_function (str, option=["general", "mlp"], default="mlp") – The attention function to use.

  • node_type_num (int, default=None) – Required when attention_type is “sep_diff_node_type”; it indicates the number of node types.

  • fuse_strategy (str, option=["average", "concatenate"], default="average") – The strategy to fuse attention results generated by separate attention. “average”: take an average over all results. “concatenate”: concatenate all results into one.

  • use_copy (bool, default=False) – Whether to use the copy mechanism (see pointer networks). Note that attention must be enabled.

  • use_coverage (bool, default=False) – Whether to use the coverage mechanism. Note that attention must be enabled.

  • coverage_strategy (str, option=["sum", "max"], default="sum") – The coverage strategy used when calculating the coverage vector.

  • tgt_emb_as_output_layer (bool, default=False) – When set to True, the weight of the output projection layer (which projects the RNN-encoded representation to the target sequence) is shared with the target vocabulary’s embedding.

  • dropout (float, default=0.3) – The dropout ratio.

Methods

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

decode_step(decoder_input, input_feed, …)

One step of decoding. Takes the current decoder input, the rnn state, the graph node embeddings from the encoder, and the graph node mask.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Set the extra representation of the module

extract_params(batch_graph)

Extract parameters from batch_graph for _run_forward_pass() function.

float()

Casts all floating point parameters and buffers to float datatype.

forward(batch_graph[, tgt_seq, oov_dict, …])

The forward function of StdRNNDecoder.

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_decoder_init_state(rnn_type, batch_size)

The initial state for RNN decoder.

get_extra_state()

Returns any extra state to include in the module’s state_dict.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_full_backward_hook(hook)

Registers a backward hook on the module.

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

share_memory()

See torch.Tensor.share_memory_()

state_dict([destination, prefix, keep_vars])

Returns a dictionary containing a whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

__call__

coverage_function

graph_pooling

decode_step(decoder_input, input_feed, rnn_state, encoder_out, dec_input_mask, rnn_emb=None, enc_attn_weights_average=None, src_seq=None, oov_dict=None)

One step of decoding.

Parameters
  • decoder_input (torch.Tensor) – The input for the current decoding step.

  • rnn_state (torch.Tensor) – The rnn state.

  • encoder_out (torch.Tensor) – The graph node embedding for decoding.

  • dec_input_mask (torch.Tensor) – The mask of graph nodes. Note: -1 is the dummy node; each integer larger than -1 is one class for separate attention.

  • rnn_emb (torch.Tensor) – The graph node embedding from the RNN encoder.

  • enc_attn_weights_average (list) – The list of encoder attention weights; used for coverage.

  • src_seq (torch.Tensor) – The source sequence; used for copy.

  • oov_dict (Vocab) – The vocabulary containing out-of-vocabulary words.

extract_params(batch_graph)

Extract parameters from batch_graph for _run_forward_pass() function.

Parameters

batch_graph (GraphData) –

Returns
params: dict

forward(batch_graph, tgt_seq=None, oov_dict=None, teacher_forcing_rate=1.0)

The forward function of StdRNNDecoder.

Parameters
  • batch_graph (GraphData) – The graph input

  • tgt_seq (torch.Tensor) – shape=[B, T] The target sequence’s index.

  • oov_dict (VocabModel, default=None) – The vocabulary for copy mechanism.

  • teacher_forcing_rate (float, default=1.0) – The teacher forcing rate.

Returns
logits: torch.Tensor

shape=[B, tgt_len, vocab_size] The probability distribution over the predicted target sequence, produced by a softmax function.

enc_attn_weights_average: torch.Tensor

The averaged attention scores, used for calculating the coverage loss.

coverage_vectors: torch.Tensor

The coverage vectors, used for calculating the coverage loss.
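
A training-time sketch (hypothetical sizes; vocab, batch_graph, and tgt_seq are assumed to come from the surrounding pipeline, and the import path is abbreviated as in this page's class names):

    import torch.nn as nn
    from graph4nlp.prediction.generation import StdRNNDecoder

    word_emb = nn.Embedding(30000, 300)  # target embedding matrix (assumed size)
    decoder = StdRNNDecoder(
        max_decoder_step=50,
        input_size=300,
        hidden_size=512,
        word_emb=word_emb,
        vocab=vocab,              # target vocabulary (assumed)
        rnn_type="lstm",
        use_attention=True,
        attention_type="uniform",
    )
    logits, enc_attn_weights_average, coverage_vectors = decoder(
        batch_graph,              # batched GraphData from an encoder (assumed)
        tgt_seq=tgt_seq,          # [B, T] target indices (assumed)
        teacher_forcing_rate=1.0,
    )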

get_decoder_init_state(rnn_type, batch_size, content=None)

The initial state for the RNN decoder.

Parameters
  • rnn_type (str, option=["LSTM", "GRU"]) – The rnn type.

  • batch_size (int) – The batch size of the initial state.

  • content (torch.Tensor, default=None) – The initialization of the initial state.

Returns
initial_state: Any

class graph4nlp.prediction.generation.StdTreeDecoder(attn_type, embeddings, enc_hidden_size, dec_emb_size, dec_hidden_size, output_size, criterion, teacher_force_ratio, use_sibling=True, use_attention=True, use_copy=False, fuse_strategy='average', num_layers=1, dropout_for_decoder=0.1, rnn_type='lstm', max_dec_seq_length=512, max_dec_tree_depth=256, tgt_vocab=None, graph_pooling_strategy='max')

StdTreeDecoder is a tree decoder implementation, used for decoding tree objects.

Attributes
attn_type: str

Describes which attention mechanism is used; can be uniform, separate_on_encoder_type, or separate_on_node_type.

embeddings: torch.nn.Module

Embedding layer; input is a tensor of word indices, output is the word embedding tensor.

enc_hidden_size: int

Size of the encoder hidden state.

dec_emb_size: int

Output size of the decoder word embedding layer.

dec_hidden_size: int

Size of the decoder hidden state (i.e., the lstm or gru hidden size of the specified rnn unit).

output_size: int

Size of the output vocabulary.

teacher_force_ratio: float

The probability of using teacher forcing during training.

use_sibling: boolean

Whether to feed the sibling state at each decoding step.

use_copy: boolean

Whether to use the copy mechanism in decoding.

fuse_strategy: str, option=[None, “average”, “concatenate”], default=None

The strategy to fuse attention results generated by separate attention. None: if we do uniform attention, we set it to None. “average”: take an average over all results. “concatenate”: concatenate all results into one.

num_layers: int, optional

Number of layers in the decoder rnn unit.

dropout_for_decoder: float

Dropout ratio for the decoder (covering both the word-embedding dropout and the attention-layer dropout).

tgt_vocab: object

The vocab object used in the decoder, containing all the word<->id pairs that appear in the output sentences.

graph_pooling_strategy: str

The graph pooling strategy used to generate the graph embedding from node embeddings.

rnn_type: str, optional

The rnn unit used, option=[“lstm”, “gru”], default=”lstm”.

max_dec_seq_length: int, optional

The upper limit on decoding steps.

max_dec_tree_depth: int, optional

The upper limit on the decoding tree depth.

Methods

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

decode_step(tgt_batch_size, …[, …])

The decoding function in tree decoder.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Set the extra representation of the module

float()

Casts all floating point parameters and buffers to float datatype.

forward(g[, tgt_tree_batch, oov_dict])

The forward calculation method.

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_decoder_init_state(**kwargs)

The initial state for decoding.

get_extra_state()

Returns any extra state to include in the module’s state_dict.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_full_backward_hook(hook)

Registers a backward hook on the module.

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

share_memory()

See torch.Tensor.share_memory_()

state_dict([destination, prefix, keep_vars])

Returns a dictionary containing a whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

__call__

decode_step(tgt_batch_size, dec_single_input, dec_single_state, memory, parent_state, input_mask=None, memory_mask=None, memory_candidate=None, sibling_state=None, oov_dict=None, enc_batch=None)

The decoding function in tree decoder.

Parameters
tgt_batch_size: int

The batch size.

dec_single_input: torch.Tensor

The word id matrix for the decoder input: [B, N].

dec_single_state: torch.Tensor

The rnn decoding hidden state: [B, N, D].

memory: torch.Tensor

The encoder output node embedding.

parent_state: torch.Tensor

The parent embedding used in the parent feeding mechanism.

input_mask: torch.Tensor, optional

The input mask, by default None.

memory_mask: torch.Tensor, optional

The mask for the encoder output, by default None.

memory_candidate: torch.Tensor, optional

The encoder output used for the separate attention mechanism, by default None.

sibling_state: torch.Tensor, optional

The sibling state for the sibling feeding mechanism, by default None.

oov_dict: object, optional

The out-of-vocabulary object for the copy mechanism, by default None.

enc_batch: torch.Tensor

The input batch: the source-sentence word index tensor of the batch.

forward(g, tgt_tree_batch=None, oov_dict=None)

The forward calculation method.
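
A construction sketch (hypothetical sizes; output_size, the criterion, tgt_vocab, g, and tgt_tree_batch are assumptions, not prescribed values):

    import torch.nn as nn
    from graph4nlp.prediction.generation import StdTreeDecoder

    output_size = 5000  # output vocabulary size (assumed)
    decoder = StdTreeDecoder(
        attn_type="uniform",
        embeddings=nn.Embedding(output_size, 300),
        enc_hidden_size=512,
        dec_emb_size=300,
        dec_hidden_size=512,
        output_size=output_size,
        criterion=nn.NLLLoss(reduction="sum"),  # an assumed loss choice
        teacher_force_ratio=1.0,
        tgt_vocab=tgt_vocab,                    # decoder-side vocab (assumed)
    )
    decoder(g, tgt_tree_batch=tgt_tree_batch)   # forward pass (inputs assumed)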

class graph4nlp.prediction.generation.DecoderStrategy(beam_size, vocab, decoder: graph4nlp.pytorch.modules.prediction.generation.base.DecoderBase, rnn_type, use_copy=False, use_coverage=False, max_decoder_step=50)

The strategy for sequence decoding. Currently, only beam search is supported.

Parameters
  • beam_size (int) – The beam size for beam search.

  • batch_graph (GraphData) – The input graph

  • decoder (DecoderBase) – The decoder instance.

  • rnn_type (str, option=["lstm", "gru"]) – The type of RNN.

  • use_copy (bool, default=False) – Whether to use the copy mechanism (see pointer networks). Note that attention must be enabled.

  • use_coverage (bool, default=False) – Whether to use the coverage mechanism. Note that attention must be enabled.

  • max_decoder_step (int, default=50) – The maximal decoding step.

Methods

add_module(name, module)

Adds a child module to the current module.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Returns an iterator over module buffers.

children()

Returns an iterator over immediate children modules.

cpu()

Moves all model parameters and buffers to the CPU.

cuda([device])

Moves all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Sets the module in evaluation mode.

extra_repr()

Set the extra representation of the module

float()

Casts all floating point parameters and buffers to float datatype.

forward(*input)

Defines the computation performed at every call.

generate(batch_graph[, oov_dict, topk])

Generate sequences using beam search.

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

get_extra_state()

Returns any extra state to include in the module’s state_dict.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

half()

Casts all floating point parameters and buffers to half datatype.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

modules()

Returns an iterator over all modules in the network.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Returns an iterator over module parameters.

register_backward_hook(hook)

Registers a backward hook on the module.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_forward_hook(hook)

Registers a forward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_full_backward_hook(hook)

Registers a backward hook on the module.

register_parameter(name, param)

Adds a parameter to the module.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

This function is called from load_state_dict() to handle any extra state found within the state_dict.

share_memory()

See torch.Tensor.share_memory_()

state_dict([destination, prefix, keep_vars])

Returns a dictionary containing a whole state of the module.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

train([mode])

Sets the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Moves all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

__call__

beam_search_for_tree_decoding

generate(batch_graph, oov_dict=None, topk=1)

Generate sequences using beam search.

Parameters
  • batch_graph (GraphData) –

  • oov_dict (VocabModel, default=None) – The vocabulary for copy mechanism.

  • topk (int, default=1) –

Returns
prediction: list
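
A beam-search inference sketch (assumptions: decoder is a trained DecoderBase such as StdRNNDecoder, vocab is its vocabulary, and batch_graph is a batched GraphData):

    from graph4nlp.prediction.generation import DecoderStrategy

    strategy = DecoderStrategy(
        beam_size=4,
        vocab=vocab,        # the decoder's vocabulary (assumed)
        decoder=decoder,    # a trained decoder instance (assumed)
        rnn_type="lstm",
        max_decoder_step=50,
    )
    prediction = strategy.generate(batch_graph, topk=1)  # list of decoded sequences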