Text Features
Text Features Preprocessing
Text features are an extension of sequence features. Text inputs are processed by a tokenizer which maps the raw text input into a sequence of tokens. An integer id is assigned to each unique token. Using this mapping, each text string is converted first to a sequence of tokens, and next to a sequence of integers.
The list of tokens and their integer representations (vocabulary) is stored in the metadata of the model. In the case of a text output feature, this same mapping is used to post-process predictions to text.
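The token-to-integer mapping described above can be sketched in a few lines of Python. This is an illustrative sketch, not Ludwig's actual implementation; `tokenize`, `build_vocab` and `encode` are hypothetical helper names, and the regex approximates a `space_punct`-style tokenizer.

```python
import re
from collections import Counter

def tokenize(text, lowercase=False):
    # space_punct-style tokenization: words and punctuation become separate tokens
    if lowercase:
        text = text.lower()
    return re.findall(r"\w+|[^\w\s]", text)

def build_vocab(texts, most_common=20000):
    counts = Counter(tok for t in texts for tok in tokenize(t))
    # IDs 0 and 1 are reserved for the padding and unknown symbols
    vocab = {"<PAD>": 0, "<UNK>": 1}
    for tok, _ in counts.most_common(most_common):
        vocab[tok] = len(vocab)
    return vocab

def encode(text, vocab, max_sequence_length=256):
    # tokens beyond most_common are absent from the vocab and map to <UNK>
    ids = [vocab.get(tok, vocab["<UNK>"]) for tok in tokenize(text)]
    ids = ids[:max_sequence_length]                       # truncate long texts
    return ids + [vocab["<PAD>"]] * (max_sequence_length - len(ids))  # pad right
```

For example, `encode("hello world!", vocab, max_sequence_length=4)` truncates or right-pads every text to exactly four integer IDs.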
The parameters for text preprocessing are as follows:
- `tokenizer` (default `space_punct`): defines how to map from the raw string content of the dataset column to a sequence of elements. For all available options see Tokenizers.
- `vocab_file` (default `null`): filepath string to a UTF-8 encoded file containing the sequence's vocabulary. On each line the first string until `\t` or `\n` is considered a word.
- `max_sequence_length` (default `256`): the maximum length (number of tokens) of the text. Texts longer than this value are truncated, while shorter texts are padded.
- `most_common` (default `20000`): the maximum number of most common tokens in the vocabulary. If the data contains more than this amount, the most infrequent tokens are treated as unknown.
- `padding_symbol` (default `<PAD>`): the string used as a padding symbol. This special token is mapped to the integer ID 0 in the vocabulary.
- `unknown_symbol` (default `<UNK>`): the string used as an unknown placeholder. This special token is mapped to the integer ID 1 in the vocabulary.
- `padding` (default `right`): the direction of the padding. `right` and `left` are the available options.
- `lowercase` (default `false`): if true, converts the string to lowercase before tokenizing.
- `missing_value_strategy` (default `fill_with_const`): what strategy to follow when there's a missing value in the dataset. The value should be one of `fill_with_const` (replaces the missing value with the value specified by the `fill_value` parameter), `fill_with_mode` (replaces missing values with the most frequent value in the column), `bfill` (replaces missing values with the next valid value), `ffill` (replaces missing values with the previous valid value) or `drop_row`.
- `fill_value` (default `""`): the value to replace missing values with when `missing_value_strategy` is `fill_with_const`.
Configuration example:
```yaml
name: text_column_name
type: text
preprocessing:
    tokenizer: space_punct
    vocab_file: null
    max_sequence_length: 256
    most_common: 20000
    padding_symbol: <PAD>
    unknown_symbol: <UNK>
    padding: right
    lowercase: false
    missing_value_strategy: fill_with_const
    fill_value: ""
```
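The missing-value strategies listed above can be illustrated with a small pure-Python sketch. This is not Ludwig's implementation; `fill_missing` is a hypothetical helper, and `None` marks a missing value in the column.

```python
from collections import Counter

def fill_missing(column, strategy="fill_with_const", fill_value=""):
    if strategy == "fill_with_const":
        return [fill_value if v is None else v for v in column]
    if strategy == "fill_with_mode":
        # most frequent non-missing value in the column
        mode = Counter(v for v in column if v is not None).most_common(1)[0][0]
        return [mode if v is None else v for v in column]
    if strategy == "ffill":
        # carry the previous valid value forward
        out, last = [], None
        for v in column:
            last = v if v is not None else last
            out.append(last)
        return out
    if strategy == "bfill":
        # next valid value = ffill applied to the reversed column
        return fill_missing(column[::-1], "ffill")[::-1]
    if strategy == "drop_row":
        return [v for v in column if v is not None]
    raise ValueError(strategy)
```

For example, `fill_missing(["a", None, "b", None], "ffill")` yields `["a", "a", "b", "b"]`, while `"bfill"` on the same column leaves the trailing missing value unfilled because there is no next valid value.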
Preprocessing parameters can also be defined once and applied to all text input features using the Type-Global Preprocessing section.
Note
If a text feature's encoder specifies a Hugging Face model, then the tokenizer for that model will be used automatically.
Text Input Features and Encoders
The encoder parameters specified at the feature level are:
- `tied` (default `null`): name of another input feature to tie the weights of the encoder with. It needs to be the name of a feature of the same type and with the same encoder parameters.
Example text feature entry in the input features list:
```yaml
name: text_column_name
type: text
tied: null
encoder: 
    type: bert
    trainable: true
```
The available encoder parameters:
- `type` (default `parallel_cnn`): encoder to use for the input text feature. The available encoders include those used for Sequence Features as well as pre-trained text encoders from the Hugging Face transformers library: `albert`, `auto_transformer`, `bert`, `camembert`, `ctrl`, `distilbert`, `electra`, `flaubert`, `gpt`, `gpt2`, `longformer`, `roberta`, `t5`, `mt5`, `transformer_xl`, `xlm`, `xlmroberta`, `xlnet`.
Encoder type and encoder parameters can also be defined once and applied to all text input features using the Type-Global Encoder section.
Embed Encoder
The embed encoder simply maps each token in the input sequence to an embedding, creating a `b x s x h` tensor where `b` is the batch size, `s` is the length of the sequence and `h` is the embedding size.
The tensor is reduced along the `s` dimension to obtain a single vector of size `h` for each element of the batch.
If you want to output the full `b x s x h` tensor, you can specify `reduce_output: null`.
```
       +------+
       |Emb 12|
       +------+
+--+   |Emb 7 |
|12|   +------+
|7 |   |Emb 43|   +-----------+
|43|   +------+   |Aggregation|
|65+--->Emb 65+--->Reduce     +-->
|23|   +------+   |Operation  |
|4 |   |Emb 23|   +-----------+
|1 |   +------+
+--+   |Emb 4 |
       +------+
       |Emb 1 |
       +------+
```
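The embed encoder's math can be sketched at the shape level with NumPy. This is an illustrative sketch of the operation, not Ludwig's code; the sizes are arbitrary example values.

```python
import numpy as np

# b = batch size, s = sequence length, h = embedding size
b, s, h, vocab_size = 2, 5, 8, 100
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(vocab_size, h))       # the embedding matrix
token_ids = rng.integers(0, vocab_size, size=(b, s))

hidden = embeddings[token_ids]    # embedding lookup -> b x s x h
reduced = hidden.sum(axis=1)      # reduce_output: sum  -> b x h
full = hidden                     # reduce_output: null -> b x s x h

assert hidden.shape == (2, 5, 8)
assert reduced.shape == (2, 8)
```

Swapping `sum(axis=1)` for `mean(axis=1)`, `max(axis=1)`, or the last slice `hidden[:, -1, :]` mirrors the other `reduce_output` options.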
These are the parameters available for the embed encoder:

- `representation` (default `dense`): the possible values are `dense` and `sparse`. `dense` means the embeddings are initialized randomly, `sparse` means they are initialized to be one-hot encodings.
- `embedding_size` (default `256`): the maximum embedding size; the actual size will be `min(vocabulary_size, embedding_size)` for `dense` representations and exactly `vocabulary_size` for the `sparse` encoding, where `vocabulary_size` is the number of unique strings appearing in the training set input column plus the number of special tokens (`<UNK>`, `<PAD>`, `<SOS>`, `<EOS>`).
- `embeddings_trainable` (default `true`): if `true` embeddings are trained during the training process, if `false` embeddings are fixed. This may be useful when loading pretrained embeddings to avoid fine-tuning them. This parameter has effect only when `representation` is `dense`; `sparse` one-hot encodings are not trainable.
- `pretrained_embeddings` (default `null`): by default `dense` embeddings are initialized randomly, but this parameter allows specifying a path to a file containing embeddings in the GloVe format. When the file is loaded, only the embeddings with labels present in the vocabulary are kept; the others are discarded. If the vocabulary contains strings that have no match in the embeddings file, their embeddings are initialized with the average of all other embeddings plus some random noise to make them different from each other. This parameter has effect only if `representation` is `dense`.
- `embeddings_on_cpu` (default `false`): by default embedding matrices are stored on GPU memory if a GPU is used, as this allows for faster access, but in some cases the embedding matrix may be too large. This parameter forces the placement of the embedding matrix in regular memory so the CPU is used for embedding lookup, slightly slowing down the process as a result of data transfer between CPU and GPU memory.
- `dropout` (default `0`): dropout rate.
- `weights_initializer` (default `glorot_uniform`): initializer for the weight matrix. Options are: `constant`, `identity`, `zeros`, `ones`, `orthogonal`, `normal`, `uniform`, `truncated_normal`, `variance_scaling`, `glorot_normal`, `glorot_uniform`, `xavier_normal`, `xavier_uniform`, `he_normal`, `he_uniform`, `lecun_normal`, `lecun_uniform`. Alternatively it is possible to specify a dictionary with a key `type` that identifies the type of initializer and other keys for its parameters, e.g. `{type: normal, mean: 0, stddev: 0}`. For the parameters of each initializer, refer to torch.nn.init.
- `reduce_output` (default `sum`): defines how to reduce the output tensor along the `s` sequence length dimension if the rank of the tensor is greater than 2. Available values are: `sum`, `mean` or `avg`, `max`, `concat` (concatenates along the sequence dimension), `last` (selects the last vector of the sequence dimension) and `null` (which does not reduce and returns the full tensor).
Example text feature entry in the input features list using an embed encoder:
```yaml
name: text_column_name
type: text
encoder: 
    type: embed
    representation: dense
    embedding_size: 256
    embeddings_trainable: true
    dropout: 0
    reduce_output: sum
```
Parallel CNN Encoder
The parallel cnn encoder is inspired by Yoon Kim's Convolutional Neural Networks for Sentence Classification.
It works by first mapping the input token sequence `b x s` (where `b` is the batch size and `s` is the length of the sequence) into a sequence of embeddings, then it passes the embeddings through a number of parallel 1d convolutional layers with different filter sizes (by default 4 layers with filter sizes 2, 3, 4 and 5), followed by max pooling and concatenation.
The single vector concatenating the outputs of the parallel convolutional layers is then passed through a stack of fully connected layers and returned as a `b x h` tensor where `h` is the output size of the last fully connected layer.
If you want to output the full `b x s x h` tensor, you can specify `reduce_output: null`.
```
                    +-------+
                 +-->1D Conv+--->Pool+--+
       +------+  |  |Width 2|  +----+   |
       |Emb 12|  |  +-------+           |
       +------+  |                      |
+--+   |Emb 7 |  |  +-------+  +----+   |
|12|   +------+  +-->1D Conv+--->Pool+--+
|7 |   |Emb 43|  |  |Width 3|  +----+   |           +---------+
|43|   +------+  |  +-------+           |  +------+ |Fully    |
|65+--->Emb 65+--+                      +-->Concat+->Connected+-->
|23|   +------+  |  +-------+  +----+   |  +------+ |Layers   |
|4 |   |Emb 23|  +-->1D Conv+--->Pool+--+           +---------+
|1 |   +------+  |  |Width 4|  +----+   |
+--+   |Emb 4 |  |  +-------+           |
       +------+  |                      |
       |Emb 1 |  |  +-------+  +----+   |
       +------+  +-->1D Conv+--->Pool+--+
                    |Width 5|  +----+
                    +-------+
```
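The branch structure above can be sketched at the shape level with NumPy. This is an illustrative sketch, not Ludwig's code: `conv1d_valid` is a hypothetical helper implementing a valid-padding 1d convolution with relu, and the sizes are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(0)
b, s, h, num_filters = 2, 16, 32, 64

x = rng.normal(size=(b, s, h))  # embedded input sequence, b x s x h

def conv1d_valid(x, filters):
    # filters: width x h x num_filters; output: b x (s - width + 1) x num_filters
    width = filters.shape[0]
    windows = np.stack(
        [x[:, i:i + width, :] for i in range(x.shape[1] - width + 1)], axis=1
    )
    return np.maximum(np.einsum("bswh,whf->bsf", windows, filters), 0)  # relu

branches = []
for width in (2, 3, 4, 5):                 # the default parallel filter sizes
    filters = rng.normal(size=(width, h, num_filters))
    conv = conv1d_valid(x, filters)
    branches.append(conv.max(axis=1))      # max pool over time -> b x num_filters

concat = np.concatenate(branches, axis=1)  # b x (4 * num_filters)
assert concat.shape == (2, 256)
```

Because each branch is max-pooled over the time dimension before concatenation, the result is length-independent: the same `b x 256` vector shape is produced for any input sequence length.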
These are the available parameters for a parallel cnn encoder:

- `representation` (default `dense`): the possible values are `dense` and `sparse`. `dense` means the embeddings are initialized randomly, `sparse` means they are initialized to be one-hot encodings.
- `embedding_size` (default `256`): the maximum embedding size; the actual size will be `min(vocabulary_size, embedding_size)` for `dense` representations and exactly `vocabulary_size` for the `sparse` encoding, where `vocabulary_size` is the number of unique strings appearing in the training set input column plus the number of special tokens (`<UNK>`, `<PAD>`, `<SOS>`, `<EOS>`).
- `embeddings_trainable` (default `true`): if `true` embeddings are trained during the training process, if `false` embeddings are fixed. This may be useful when loading pretrained embeddings to avoid fine-tuning them. This parameter has effect only when `representation` is `dense`, as `sparse` one-hot encodings are not trainable.
- `pretrained_embeddings` (default `null`): by default `dense` embeddings are initialized randomly, but this parameter allows specifying a path to a file containing embeddings in the GloVe format. When the file is loaded, only the embeddings with labels present in the vocabulary are kept; the others are discarded. If the vocabulary contains strings that have no match in the embeddings file, their embeddings are initialized with the average of all other embeddings plus some random noise to make them different from each other. This parameter has effect only if `representation` is `dense`.
- `embeddings_on_cpu` (default `false`): by default embedding matrices are stored on GPU memory if a GPU is used, as this allows for faster access, but in some cases the embedding matrix may be too large. This parameter forces the placement of the embedding matrix in regular memory so the CPU is used for embedding lookup, slightly slowing down the process as a result of data transfer between CPU and GPU memory.
- `conv_layers` (default `null`): a list of dictionaries containing the parameters of all the convolutional layers. The length of the list determines the number of parallel convolutional layers and the content of each dictionary determines the parameters for a specific layer. The available parameters for each layer are: `activation`, `dropout`, `norm`, `norm_params`, `num_filters`, `filter_size`, `strides`, `padding`, `dilation_rate`, `use_bias`, `pool_function`, `pool_padding`, `pool_size`, `pool_strides`, `bias_initializer`, `weights_initializer`. If any of those values is missing from the dictionary, the default one specified as a parameter of the encoder will be used instead. If both `conv_layers` and `num_conv_layers` are `null`, a default list will be assigned to `conv_layers` with the value `[{filter_size: 2}, {filter_size: 3}, {filter_size: 4}, {filter_size: 5}]`.
- `num_conv_layers` (default `null`): if `conv_layers` is `null`, this is the number of parallel convolutional layers.
- `filter_size` (default `3`): if a `filter_size` is not already specified in `conv_layers`, this is the default `filter_size` that will be used for each layer. It indicates the width of the 1d convolutional filter.
- `num_filters` (default `256`): if a `num_filters` is not already specified in `conv_layers`, this is the default `num_filters` that will be used for each layer. It indicates the number of filters, and by consequence the output channels of the 1d convolution.
- `pool_function` (default `max`): pooling function: `max` will select the maximum value. Any of `average`, `avg` or `mean` will compute the mean value.
- `pool_size` (default `null`): if a `pool_size` is not already specified in `conv_layers`, this is the default `pool_size` that will be used for each layer. It indicates the size of the max pooling that will be performed along the `s` sequence dimension after the convolution operation.
- `fc_layers` (default `null`): a list of dictionaries containing the parameters of all the fully connected layers. The length of the list determines the number of stacked fully connected layers and the content of each dictionary determines the parameters for a specific layer. The available parameters for each layer are: `activation`, `dropout`, `norm`, `norm_params`, `output_size`, `use_bias`, `bias_initializer` and `weights_initializer`. If any of those values is missing from the dictionary, the default one specified as a parameter of the encoder will be used instead. If both `fc_layers` and `num_fc_layers` are `null`, a default list will be assigned to `fc_layers` with the value `[{output_size: 512}, {output_size: 256}]` (only applies if `reduce_output` is not `null`).
- `num_fc_layers` (default `null`): if `fc_layers` is `null`, this is the number of stacked fully connected layers (only applies if `reduce_output` is not `null`).
- `output_size` (default `256`): if an `output_size` is not already specified in `fc_layers`, this is the default `output_size` that will be used for each layer. It indicates the size of the output of a fully connected layer.
- `use_bias` (default `true`): boolean, whether the layer uses a bias vector.
- `weights_initializer` (default `glorot_uniform`): initializer for the weights matrix. Options are: `constant`, `identity`, `zeros`, `ones`, `orthogonal`, `normal`, `uniform`, `truncated_normal`, `variance_scaling`, `glorot_normal`, `glorot_uniform`, `xavier_normal`, `xavier_uniform`, `he_normal`, `he_uniform`, `lecun_normal`, `lecun_uniform`. Alternatively it is possible to specify a dictionary with a key `type` that identifies the type of initializer and other keys for its parameters, e.g. `{type: normal, mean: 0, stddev: 0}`. For the parameters of each initializer, refer to torch.nn.init.
- `bias_initializer` (default `zeros`): initializer for the bias vector. The options are the same as for `weights_initializer`.
- `norm` (default `null`): if a `norm` is not already specified in `fc_layers`, this is the default `norm` that will be used for each layer. It indicates how the output should be normalized and may be one of `null`, `batch` or `layer`.
- `norm_params` (default `null`): parameters used if `norm` is either `batch` or `layer`. For information on parameters used with `batch` see the Torch documentation on batch normalization, or for `layer` see the Torch documentation on layer normalization.
- `activation` (default `relu`): if an `activation` is not already specified in `fc_layers`, this is the default `activation` that will be used for each layer. It indicates the activation function applied to the output.
- `dropout` (default `0`): dropout rate.
- `reduce_output` (default `sum`): defines how to reduce the output tensor along the `s` sequence length dimension if the rank of the tensor is greater than 2. Available values are: `sum`, `mean` or `avg`, `max`, `concat` (concatenates along the sequence dimension), `last` (selects the last vector of the sequence dimension) and `null` (which does not reduce and returns the full tensor).
Example text feature entry in the input features list using a parallel cnn encoder:
```yaml
name: text_column_name
type: text
encoder: 
    type: parallel_cnn
    representation: dense
    embedding_size: 256
    embeddings_trainable: true
    filter_size: 3
    num_filters: 256
    pool_function: max
    output_size: 256
    use_bias: true
    weights_initializer: glorot_uniform
    bias_initializer: zeros
    activation: relu
    dropout: 0.0
    reduce_output: sum
```
Stacked CNN Encoder
The stacked cnn encoder is inspired by Xiang Zhang et al.'s Character-level Convolutional Networks for Text Classification.
It works by first mapping the input token sequence `b x s` (where `b` is the batch size and `s` is the length of the sequence) into a sequence of embeddings, then it passes the embeddings through a stack of 1d convolutional layers with different filter sizes (by default 6 layers with filter sizes 7, 7, 3, 3, 3 and 3), followed by an optional final pool and a flatten operation.
The single flattened vector is then passed through a stack of fully connected layers and returned as a `b x h` tensor where `h` is the output size of the last fully connected layer.
If you want to output the full `b x s x h` tensor, you can specify the `pool_size` of all your `conv_layers` to be `null` and `reduce_output: null`; if `pool_size` has a value different from `null` and `reduce_output: null`, the returned tensor will be of shape `b x s' x h`, where `s'` is the width of the output of the last convolutional layer.
```
       +------+
       |Emb 12|
       +------+
+--+   |Emb 7 |
|12|   +------+
|7 |   |Emb 43|   +----------------+  +---------+
|43|   +------+   |1D Conv         |  |Fully    |
|65+--->Emb 65+--->Layers          +-->Connected+-->
|23|   +------+   |Different Widths|  |Layers   |
|4 |   |Emb 23|   +----------------+  +---------+
|1 |   +------+
+--+   |Emb 4 |
       +------+
       |Emb 1 |
       +------+
```
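A small sketch helps show how the sequence length `s'` shrinks through the default conv stack. This is illustrative, not Ludwig's code: it assumes `same` conv padding (which preserves length) and pooling with stride equal to `pool_size`, so each pooled layer divides the length by its `pool_size`.

```python
def output_length(s, conv_layers):
    # 'same' padding: the convolution itself does not change the length
    for layer in conv_layers:
        if layer["pool_size"] is not None:
            s = s // layer["pool_size"]  # pooling scales the length down
    return s

# the default conv stack of the stacked cnn encoder
default_stack = [
    {"filter_size": 7, "pool_size": 3},
    {"filter_size": 7, "pool_size": 3},
    {"filter_size": 3, "pool_size": None},
    {"filter_size": 3, "pool_size": None},
    {"filter_size": 3, "pool_size": None},
    {"filter_size": 3, "pool_size": 3},
]

# 256 -> 85 -> 28 -> 28 -> 28 -> 28 -> 9
assert output_length(256, default_stack) == 9
```

So with `max_sequence_length: 256` and the defaults, the `b x s' x h` tensor produced before the flatten has `s' = 9` under these assumptions.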
These are the parameters available for the stacked cnn encoder:

- `representation` (default `dense`): the possible values are `dense` and `sparse`. `dense` means the embeddings are initialized randomly, `sparse` means they are initialized to be one-hot encodings.
- `embedding_size` (default `256`): the maximum embedding size; the actual size will be `min(vocabulary_size, embedding_size)` for `dense` representations and exactly `vocabulary_size` for the `sparse` encoding, where `vocabulary_size` is the number of unique strings appearing in the training set input column plus the number of special tokens (`<UNK>`, `<PAD>`, `<SOS>`, `<EOS>`).
- `embeddings_trainable` (default `true`): if `true` embeddings are trained during the training process, if `false` embeddings are fixed. This may be useful when loading pretrained embeddings to avoid fine-tuning them. This parameter has effect only when `representation` is `dense`, as `sparse` one-hot encodings are not trainable.
- `pretrained_embeddings` (default `null`): by default `dense` embeddings are initialized randomly, but this parameter allows specifying a path to a file containing embeddings in the GloVe format. When the file is loaded, only the embeddings with labels present in the vocabulary are kept; the others are discarded. If the vocabulary contains strings that have no match in the embeddings file, their embeddings are initialized with the average of all other embeddings plus some random noise to make them different from each other. This parameter has effect only if `representation` is `dense`.
- `embeddings_on_cpu` (default `false`): by default embedding matrices are stored on GPU memory if a GPU is used, as this allows for faster access, but in some cases the embedding matrix may be too large. This parameter forces the placement of the embedding matrix in regular memory so the CPU is used for embedding lookup, slightly slowing down the process as a result of data transfer between CPU and GPU memory.
- `conv_layers` (default `null`): a list of dictionaries containing the parameters of all the convolutional layers. The length of the list determines the number of stacked convolutional layers and the content of each dictionary determines the parameters for a specific layer. The available parameters for each layer are: `activation`, `dropout`, `norm`, `norm_params`, `num_filters`, `filter_size`, `strides`, `padding`, `dilation_rate`, `use_bias`, `pool_function`, `pool_padding`, `pool_size`, `pool_strides`, `bias_initializer`, `weights_initializer`. If any of those values is missing from the dictionary, the default one specified as a parameter of the encoder will be used instead. If both `conv_layers` and `num_conv_layers` are `null`, a default list will be assigned to `conv_layers` with the value `[{filter_size: 7, pool_size: 3}, {filter_size: 7, pool_size: 3}, {filter_size: 3, pool_size: null}, {filter_size: 3, pool_size: null}, {filter_size: 3, pool_size: null}, {filter_size: 3, pool_size: 3}]`.
- `num_conv_layers` (default `null`): if `conv_layers` is `null`, this is the number of stacked convolutional layers.
- `filter_size` (default `3`): if a `filter_size` is not already specified in `conv_layers`, this is the default `filter_size` that will be used for each layer. It indicates the width of the 1d convolutional filter.
- `num_filters` (default `256`): if a `num_filters` is not already specified in `conv_layers`, this is the default `num_filters` that will be used for each layer. It indicates the number of filters, and by consequence the output channels of the 1d convolution.
- `strides` (default `1`): stride length of the convolution.
- `padding` (default `same`): one of `valid` or `same`.
- `dilation_rate` (default `1`): dilation rate to use for dilated convolution.
- `pool_function` (default `max`): pooling function: `max` will select the maximum value. Any of `average`, `avg` or `mean` will compute the mean value.
- `pool_size` (default `null`): if a `pool_size` is not already specified in `conv_layers`, this is the default `pool_size` that will be used for each layer. It indicates the size of the max pooling that will be performed along the `s` sequence dimension after the convolution operation.
- `pool_strides` (default `null`): factor to scale down.
- `pool_padding` (default `same`): one of `valid` or `same`.
- `fc_layers` (default `null`): a list of dictionaries containing the parameters of all the fully connected layers. The length of the list determines the number of stacked fully connected layers and the content of each dictionary determines the parameters for a specific layer. The available parameters for each layer are: `activation`, `dropout`, `norm`, `norm_params`, `output_size`, `use_bias`, `bias_initializer` and `weights_initializer`. If any of those values is missing from the dictionary, the default one specified as a parameter of the encoder will be used instead. If both `fc_layers` and `num_fc_layers` are `null`, a default list will be assigned to `fc_layers` with the value `[{output_size: 512}, {output_size: 256}]` (only applies if `reduce_output` is not `null`).
- `num_fc_layers` (default `null`): if `fc_layers` is `null`, this is the number of stacked fully connected layers (only applies if `reduce_output` is not `null`).
- `output_size` (default `256`): if an `output_size` is not already specified in `fc_layers`, this is the default `output_size` that will be used for each layer. It indicates the size of the output of a fully connected layer.
- `use_bias` (default `true`): boolean, whether the layer uses a bias vector.
- `weights_initializer` (default `glorot_uniform`): initializer for the weight matrix. Options are: `constant`, `identity`, `zeros`, `ones`, `orthogonal`, `normal`, `uniform`, `truncated_normal`, `variance_scaling`, `glorot_normal`, `glorot_uniform`, `xavier_normal`, `xavier_uniform`, `he_normal`, `he_uniform`, `lecun_normal`, `lecun_uniform`. Alternatively it is possible to specify a dictionary with a key `type` that identifies the type of initializer and other keys for its parameters, e.g. `{type: normal, mean: 0, stddev: 0}`. For the parameters of each initializer, refer to torch.nn.init.
- `bias_initializer` (default `zeros`): initializer for the bias vector. The options are the same as for `weights_initializer`.
- `norm` (default `null`): if a `norm` is not already specified in `fc_layers`, this is the default `norm` that will be used for each layer. It indicates how the output should be normalized and may be one of `null`, `batch` or `layer`.
- `norm_params` (default `null`): parameters used if `norm` is either `batch` or `layer`. For information on parameters used with `batch` see the Torch documentation on batch normalization, or for `layer` see the Torch documentation on layer normalization.
- `activation` (default `relu`): if an `activation` is not already specified in `fc_layers`, this is the default `activation` that will be used for each layer. It indicates the activation function applied to the output.
- `dropout` (default `0`): dropout rate.
- `reduce_output` (default `max`): defines how to reduce the output tensor of the convolutional layers along the `s` sequence length dimension if the rank of the tensor is greater than 2. Available values are: `sum`, `mean` or `avg`, `max`, `concat` (concatenates along the sequence dimension), `last` (returns the last vector of the sequence dimension) and `null` (which does not reduce and returns the full tensor).
Example text feature entry in the input features list using a stacked cnn encoder:
```yaml
name: text_column_name
type: text
encoder: 
    type: stacked_cnn
    representation: dense
    embedding_size: 256
    embeddings_trainable: true
    filter_size: 3
    num_filters: 256
    strides: 1
    padding: same
    dilation_rate: 1
    pool_function: max
    pool_padding: same
    output_size: 256
    use_bias: true
    weights_initializer: glorot_uniform
    bias_initializer: zeros
    activation: relu
    dropout: 0
    reduce_output: max
```
Stacked Parallel CNN Encoder
The stacked parallel cnn encoder is a combination of the Parallel CNN and the Stacked CNN encoders, where each layer of the stack is composed of parallel convolutional layers.
It works by first mapping the input token sequence `b x s` (where `b` is the batch size and `s` is the length of the sequence) into a sequence of embeddings, then it passes the embeddings through a stack of several parallel 1d convolutional layers with different filter sizes, followed by an optional final pool and a flatten operation.
The single flattened vector is then passed through a stack of fully connected layers and returned as a `b x h` tensor where `h` is the output size of the last fully connected layer.
If you want to output the full `b x s x h` tensor, you can specify `reduce_output: null`.
```
                   +-------+                       +-------+
                +->1D Conv+-+                   +->1D Conv+-+
       +------+ |  |Width 2| |                  |  |Width 2| |
       |Emb 12| |  +-------+ |                  |  +-------+ |
       +------+ |            |                  |            |
+--+   |Emb 7 | |  +-------+ |                  |  +-------+ |
|12|   +------+ +->1D Conv+-+                   +->1D Conv+-+
|7 |   |Emb 43| |  |Width 3| |                  |  |Width 3| |                       +---------+
|43|   +------+ |  +-------+ | +------+  +---+  |  +-------+ | +------+  +----+     |Fully    |
|65+--->Emb 65+-+            +->Concat+-->...+--+            +->Concat+-->Pool+---->Connected+-->
|23|   +------+ |  +-------+ | +------+  +---+  |  +-------+ | +------+  +----+     |Layers   |
|4 |   |Emb 23| +->1D Conv+-+                   +->1D Conv+-+                       +---------+
|1 |   +------+ |  |Width 4| |                  |  |Width 4| |
+--+   |Emb 4 | |  +-------+ |                  |  +-------+ |
       +------+ |            |                  |            |
       |Emb 1 | |  +-------+ |                  |  +-------+ |
       +------+ +->1D Conv+-+                   +->1D Conv+-+
                   |Width 5|                       |Width 5|
                   +-------+                       +-------+
```
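Per-layer dictionaries in `stacked_layers` fall back to the encoder-level defaults for any parameter they omit. The merge can be sketched in a couple of lines of Python (an illustrative sketch, not Ludwig's code; `encoder_defaults` holds a few example encoder-level values):

```python
# encoder-level defaults used when a layer dictionary omits a parameter
encoder_defaults = {"num_filters": 256, "filter_size": 3, "pool_size": None}

# two stacked blocks, each made of four parallel conv layers
stacked_layers = [
    [{"filter_size": 2}, {"filter_size": 3}, {"filter_size": 4}, {"filter_size": 5}],
    [{"filter_size": 2}, {"filter_size": 3}, {"filter_size": 4}, {"filter_size": 5}],
]

# the layer's own keys override the encoder-level defaults
resolved = [
    [{**encoder_defaults, **layer} for layer in parallel_block]
    for parallel_block in stacked_layers
]

assert resolved[0][0] == {"num_filters": 256, "filter_size": 2, "pool_size": None}
```

Each resolved layer keeps its own `filter_size` while inheriting `num_filters` and `pool_size` from the encoder, which is the fallback behavior described for `stacked_layers` below.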
These are the available parameters for the stacked parallel cnn encoder:

- `representation` (default `dense`): the possible values are `dense` and `sparse`. `dense` means the embeddings are initialized randomly, `sparse` means they are initialized to be one-hot encodings.
- `embedding_size` (default `256`): the maximum embedding size; the actual size will be `min(vocabulary_size, embedding_size)` for `dense` representations and exactly `vocabulary_size` for the `sparse` encoding, where `vocabulary_size` is the number of unique strings appearing in the training set input column plus the number of special tokens (`<UNK>`, `<PAD>`, `<SOS>`, `<EOS>`).
- `embeddings_trainable` (default `true`): if `true` embeddings are trained during the training process, if `false` embeddings are fixed. This may be useful when loading pretrained embeddings to avoid fine-tuning them. This parameter has effect only when `representation` is `dense`, as `sparse` one-hot encodings are not trainable.
- `pretrained_embeddings` (default `null`): by default `dense` embeddings are initialized randomly, but this parameter allows specifying a path to a file containing embeddings in the GloVe format. When the file is loaded, only the embeddings with labels present in the vocabulary are kept; the others are discarded. If the vocabulary contains strings that have no match in the embeddings file, their embeddings are initialized with the average of all other embeddings plus some random noise to make them different from each other. This parameter has effect only if `representation` is `dense`.
- `embeddings_on_cpu` (default `false`): by default embedding matrices are stored on GPU memory if a GPU is used, as this allows for faster access, but in some cases the embedding matrix may be too large. This parameter forces the placement of the embedding matrix in regular memory so the CPU is used for embedding lookup, slightly slowing down the process as a result of data transfer between CPU and GPU memory.
- `stacked_layers` (default `null`): a nested list of lists of dictionaries containing the parameters of the stack of parallel convolutional layers. The length of the list determines the number of stacked parallel convolutional layers, the length of the sub-lists determines the number of parallel conv layers, and the content of each dictionary determines the parameters for a specific layer. The available parameters for each layer are: `activation`, `dropout`, `norm`, `norm_params`, `num_filters`, `filter_size`, `strides`, `padding`, `dilation_rate`, `use_bias`, `pool_function`, `pool_padding`, `pool_size`, `pool_strides`, `bias_initializer`, `weights_initializer`. If any of those values is missing from the dictionary, the default one specified as a parameter of the encoder will be used instead. If both `stacked_layers` and `num_stacked_layers` are `null`, a default list will be assigned to `stacked_layers` with the value `[[{filter_size: 2}, {filter_size: 3}, {filter_size: 4}, {filter_size: 5}], [{filter_size: 2}, {filter_size: 3}, {filter_size: 4}, {filter_size: 5}], [{filter_size: 2}, {filter_size: 3}, {filter_size: 4}, {filter_size: 5}]]`.
- `num_stacked_layers` (default `null`): if `stacked_layers` is `null`, this is the number of elements in the stack of parallel convolutional layers.
- `filter_size` (default `3`): if a `filter_size` is not already specified in `stacked_layers`, this is the default `filter_size` that will be used for each layer. It indicates the width of the 1d convolutional filter.
- `num_filters` (default `256`): if a `num_filters` is not already specified in `stacked_layers`, this is the default `num_filters` that will be used for each layer. It indicates the number of filters, and by consequence the output channels of the 1d convolution.
- `pool_function` (default `max`): pooling function: `max` will select the maximum value. Any of `average`, `avg` or `mean` will compute the mean value.
- `pool_size` (default `null`): if a `pool_size` is not already specified in `stacked_layers`, this is the default `pool_size` that will be used for each layer. It indicates the size of the max pooling that will be performed along the `s` sequence dimension after the convolution operation.
- `fc_layers` (default `null`): a list of dictionaries containing the parameters of all the fully connected layers. The length of the list determines the number of stacked fully connected layers and the content of each dictionary determines the parameters for a specific layer. The available parameters for each layer are: `activation`, `dropout`, `norm`, `norm_params`, `output_size`, `use_bias`, `bias_initializer` and `weights_initializer`. If any of those values is missing from the dictionary, the default one specified as a parameter of the encoder will be used instead. If both `fc_layers` and `num_fc_layers` are `null`, a default list will be assigned to `fc_layers` with the value `[{output_size: 512}, {output_size: 256}]` (only applies if `reduce_output` is not `null`).
- `num_fc_layers` (default `null`): if `fc_layers` is `null`, this is the number of stacked fully connected layers (only applies if `reduce_output` is not `null`).
- `output_size` (default `256`): if an `output_size` is not already specified in `fc_layers`, this is the default `output_size` that will be used for each layer. It indicates the size of the output of a fully connected layer.
- `use_bias` (default `true`): boolean, whether the layer uses a bias vector.
- `weights_initializer` (default `glorot_uniform`): initializer for the weights matrix. Options are: `constant`, `identity`, `zeros`, `ones`, `orthogonal`, `normal`, `uniform`, `truncated_normal`, `variance_scaling`, `glorot_normal`, `glorot_uniform`, `xavier_normal`, `xavier_uniform`, `he_normal`, `he_uniform`, `lecun_normal`, `lecun_uniform`. Alternatively it is possible to specify a dictionary with a key `type` that identifies the type of initializer and other keys for its parameters, e.g. `{type: normal, mean: 0, stddev: 0}`. For the parameters of each initializer, refer to torch.nn.init.
- `bias_initializer` (default `zeros`): initializer for the bias vector. The options are the same as for `weights_initializer`.
- `norm` (default `null`): if a `norm` is not already specified in `fc_layers`, this is the default `norm` that will be used for each layer. It indicates how the output should be normalized and may be one of `null`, `batch` or `layer`.
- `norm_params` (default `null`): parameters used if `norm` is either `batch` or `layer`. For information on parameters used with `batch` see the Torch documentation on batch normalization, or for `layer` see the Torch documentation on layer normalization.
see the Torch documentation on layer normalization.activation
(defaultrelu
): if anactivation
is not already specified infc_layers
this is the defaultactivation
that will be used for each layer. It indicates the activation function applied to the output.dropout
(default0
): dropout ratereduce_output
(defaultsum
): defines how to reduce the output tensor along thes
sequence length dimension if the rank of the tensor is greater than 2. Available values are:sum
,mean
oravg
,max
,concat
(concatenates along the first dimension),last
(returns the last vector of the first dimension) andnull
(which does not reduce and returns the full tensor).
Example text feature entry in the input features list using a stacked parallel cnn encoder:
name: text_column_name
type: text
encoder:
type: stacked_parallel_cnn
representation: dense
embedding_size: 256
embeddings_trainable: true
filter_size: 3
num_filters: 256
pool_function: max
output_size: 256
use_bias: true
weights_initializer: glorot_uniform
bias_initializer: zeros
activation: relu
dropout: 0
reduce_output: max
RNN Encoder¶
The rnn encoder works by first mapping the input token sequence b x s
(where b
is the batch size and s
is the
length of the sequence) into a sequence of embeddings, then it passes the embedding through a stack of recurrent layers
(by default 1 layer), followed by a reduce operation that by default only returns the last output, but can perform other
reduce functions.
If you want to output the full b x s x h
where h
is the size of the output of the last rnn layer, you can specify
reduce_output: null
.
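For instance, a minimal encoder sketch (using the same hypothetical column name as the examples below) that keeps the full tensor rather than reducing it:

```yaml
name: text_column_name
type: text
encoder:
    type: rnn
    # return the full b x s x h tensor instead of only the last output
    reduce_output: null
```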
+------+
|Emb 12|
+------+
+--+ |Emb 7 |
|12| +------+
|7 | |Emb 43| +---------+
|43| +------+ +----------+ |Fully |
|65+--->Emb 65+--->RNN Layers+-->Connected+-->
|23| +------+ +----------+ |Layers |
|4 | |Emb 23| +---------+
|1 | +------+
+--+ |Emb 4 |
+------+
|Emb 1 |
+------+
These are the available parameters for the rnn encoder:
representation
(defaultdense
): the possible values aredense
andsparse
.dense
means the embeddings are initialized randomly,sparse
means they are initialized to be one-hot encodings.embedding_size
(default256
): the maximum embedding size, the actual size will bemin(vocabulary_size, embedding_size)
fordense
representations and exactlyvocabulary_size
for thesparse
encoding, wherevocabulary_size
is the number of unique strings appearing in the training set input column plus the number of special tokens (<UNK>
,<PAD>
,<SOS>
,<EOS>
).embeddings_trainable
(defaulttrue
): Iftrue
embeddings are trained during the training process, iffalse
embeddings are fixed. This may be useful when loading pretrained embeddings, to avoid fine-tuning them. This parameter has effect only whenrepresentation
isdense
assparse
one-hot encodings are not trainable.pretrained_embeddings
(defaultnull
): by defaultdense
embeddings are initialized randomly, but this parameter allows specifying a path to a file containing embeddings in the GloVe format. When the embeddings file is loaded, only the embeddings with labels present in the vocabulary are kept; the others are discarded. If the vocabulary contains strings that have no match in the embeddings file, their embeddings are initialized with the average of all other embeddings plus some random noise to make them different from each other. This parameter has effect only ifrepresentation
isdense
.embeddings_on_cpu
(defaultfalse
): by default embedding matrices are stored on GPU memory if a GPU is used, as it allows for faster access, but in some cases the embedding matrix may be too large. This parameter forces the placement of the embedding matrix in regular memory and the CPU is used for embedding lookup, slightly slowing down the process as a result of data transfer between CPU and GPU memory.num_layers
(default1
): the number of stacked recurrent layers.state_size
(default256
): the size of the state of the rnn.cell_type
(defaultrnn
): the type of recurrent cell to use. Available values are:rnn
,lstm
,gru
. For reference about the differences between the cells please refer to torch.nn Recurrent Layers.bidirectional
(defaultfalse
): iftrue
two recurrent networks will perform encoding in the forward and backward direction and their outputs will be concatenated.activation
(defaulttanh
): activation function to use.recurrent_activation
(defaultsigmoid
): activation function to use in the recurrent stepunit_forget_bias
(defaulttrue
): Iftrue
, add 1 to the bias of the forget gate at initializationrecurrent_initializer
(defaultorthogonal
): initializer for recurrent matrix weightsdropout
(default0.0
): dropout raterecurrent_dropout
(default0.0
): dropout rate for recurrent statefc_layers
(defaultnull
): a list of dictionaries containing the parameters of all the fully connected layers. The length of the list determines the number of stacked fully connected layers and the content of each dictionary determines the parameters for a specific layer. The available parameters for each layer are:activation
,dropout
,norm
,norm_params
,output_size
,use_bias
,bias_initializer
andweights_initializer
. If any of those values is missing from the dictionary, the default one specified as a parameter of the encoder will be used instead. If bothfc_layers
andnum_fc_layers
arenull
, a default list will be assigned tofc_layers
with the value[{output_size: 512}, {output_size: 256}]
(only applies ifreduce_output
is notnull
).num_fc_layers
(defaultnull
): iffc_layers
isnull
, this is the number of stacked fully connected layers (only applies ifreduce_output
is notnull
).output_size
(default256
): if anoutput_size
is not already specified infc_layers
this is the defaultoutput_size
that will be used for each layer. It indicates the size of the output of a fully connected layer.use_bias
(defaulttrue
): boolean, whether the layer uses a bias vector.weights_initializer
(defaultglorot_uniform
): initializer for the weight matrix. Options are:constant
,identity
,zeros
,ones
,orthogonal
,normal
,uniform
,truncated_normal
,variance_scaling
,glorot_normal
,glorot_uniform
,xavier_normal
,xavier_uniform
,he_normal
,he_uniform
,lecun_normal
,lecun_uniform
. Alternatively it is possible to specify a dictionary with a keytype
that identifies the type of initializer and other keys for its parameters, e.g.{type: normal, mean: 0, stddev: 0}
. To know the parameters of each initializer, please refer to torch.nn.init.bias_initializer
(defaultzeros
): initializer for the bias vector. Options are:constant
,identity
,zeros
,ones
,orthogonal
,normal
,uniform
,truncated_normal
,variance_scaling
,glorot_normal
,glorot_uniform
,xavier_normal
,xavier_uniform
,he_normal
,he_uniform
,lecun_normal
,lecun_uniform
. Alternatively it is possible to specify a dictionary with a keytype
that identifies the type of initializer and other keys for its parameters, e.g.{type: normal, mean: 0, stddev: 0}
. To know the parameters of each initializer, please refer to torch.nn.init.norm
(defaultnull
): if anorm
is not already specified infc_layers
this is the defaultnorm
that will be used for each layer. It indicates how the output should be normalized and may be one ofnull
,batch
orlayer
.norm_params
(defaultnull
): parameters used ifnorm
is eitherbatch
orlayer
. For information on parameters used withbatch
see the Torch documentation on batch normalization or forlayer
see the Torch documentation on layer normalization.fc_activation
(defaultrelu
): if anactivation
is not already specified infc_layers
this is the defaultactivation
that will be used for each layer. It indicates the activation function applied to the output.fc_dropout
(default0
): dropout ratereduce_output
(defaultlast
): defines how to reduce the output tensor along thes
sequence length dimension if the rank of the tensor is greater than 2. Available values are:sum
,mean
oravg
,max
,concat
(concatenates along the sequence dimension),last
(returns the last vector of the sequence dimension) andnull
(which does not reduce and returns the full tensor).
Example text feature entry in the input features list using an rnn encoder:
name: text_column_name
type: text
encoder:
type: rnn
representation: dense
embedding_size: 256
embeddings_trainable: true
num_layers: 1
state_size: 256
cell_type: rnn
bidirectional: false
activation: tanh
recurrent_activation: sigmoid
unit_forget_bias: true
recurrent_initializer: orthogonal
dropout: 0.0
recurrent_dropout: 0.0
output_size: 256
use_bias: true
weights_initializer: glorot_uniform
bias_initializer: zeros
fc_activation: relu
fc_dropout: 0
reduce_output: last
CNN RNN Encoder¶
The cnnrnn
encoder works by first mapping the input token sequence b x s
(where b
is the batch size and s
is
the length of the sequence) into a sequence of embeddings, then it passes the embedding through a stack of convolutional
layers (by default 2), that is followed by a stack of recurrent layers (by default 1), followed by a reduce operation
that by default only returns the last output, but can perform other reduce functions.
If you want to output the full b x s x h
where h
is the size of the output of the last rnn layer, you can specify
reduce_output: null
.
+------+
|Emb 12|
+------+
+--+ |Emb 7 |
|12| +------+
|7 | |Emb 43| +---------+
|43| +------+ +----------+ +----------+ |Fully |
|65+--->Emb 65+-->CNN Layers+-->RNN Layers+-->Connected+-->
|23| +------+ +----------+ +----------+ |Layers |
|4 | |Emb 23| +---------+
|1 | +------+
+--+ |Emb 4 |
+------+
|Emb 1 |
+------+
These are the available parameters of the cnn rnn encoder:
representation
(defaultdense
): the possible values aredense
andsparse
.dense
means the embeddings are initialized randomly,sparse
means they are initialized to be one-hot encodings.embedding_size
(default256
): the maximum embedding size, the actual size will bemin(vocabulary_size, embedding_size)
fordense
representations and exactlyvocabulary_size
for thesparse
encoding, wherevocabulary_size
is the number of unique strings appearing in the training set input column plus the number of special tokens (<UNK>
,<PAD>
,<SOS>
,<EOS>
).embeddings_trainable
(defaulttrue
): Iftrue
embeddings are trained during the training process, iffalse
embeddings are fixed. This may be useful when loading pretrained embeddings, to avoid fine-tuning them. This parameter has effect only whenrepresentation
isdense
assparse
one-hot encodings are not trainable.pretrained_embeddings
(defaultnull
): by defaultdense
embeddings are initialized randomly, but this parameter allows specifying a path to a file containing embeddings in the GloVe format. When the embeddings file is loaded, only the embeddings with labels present in the vocabulary are kept; the others are discarded. If the vocabulary contains strings that have no match in the embeddings file, their embeddings are initialized with the average of all other embeddings plus some random noise to make them different from each other. This parameter has effect only ifrepresentation
isdense
.embeddings_on_cpu
(defaultfalse
): by default embedding matrices are stored on GPU memory if a GPU is used, as it allows for faster access, but in some cases the embedding matrix may be too large. This parameter forces the placement of the embedding matrix in regular memory and the CPU is used for embedding lookup, slightly slowing down the process as a result of data transfer between CPU and GPU memory.conv_layers
(defaultnull
): a list of dictionaries containing the parameters of all the convolutional layers. The length of the list determines the number of stacked convolutional layers and the content of each dictionary determines the parameters for a specific layer. The available parameters for each layer are:activation
,dropout
,norm
,norm_params
,num_filters
,filter_size
,strides
,padding
,dilation_rate
,use_bias
,pool_function
,pool_padding
,pool_size
,pool_strides
,bias_initializer
,weights_initializer
. If any of those values is missing from the dictionary, the default one specified as a parameter of the encoder will be used instead. If bothconv_layers
andnum_conv_layers
arenull
, a default list will be assigned toconv_layers
with the value[{filter_size: 7, pool_size: 3}, {filter_size: 7, pool_size: 3}, {filter_size: 3, pool_size: null}, {filter_size: 3, pool_size: null}, {filter_size: 3, pool_size: null}, {filter_size: 3, pool_size: 3}]
.num_conv_layers
(default1
): the number of stacked convolutional layers.num_filters
(default256
): if anum_filters
is not already specified inconv_layers
this is the defaultnum_filters
that will be used for each layer. It indicates the number of filters and, consequently, the number of output channels of the 1d convolution.filter_size
(default5
): if afilter_size
is not already specified inconv_layers
this is the defaultfilter_size
that will be used for each layer. It indicates how wide the 1d convolutional filter is.strides
(default1
): stride length of the convolutionpadding
(defaultsame
): one ofvalid
orsame
.dilation_rate
(default1
): dilation rate to use for dilated convolutionconv_activation
(defaultrelu
): activation for the convolution layerconv_dropout
(default0.0
): dropout rate for the convolution layerpool_function
(defaultmax
): pooling function:max
will select the maximum value. Any ofaverage
,avg
ormean
will compute the mean value.pool_size
(default2): if apool_size
is not already specified inconv_layers
this is the defaultpool_size
that will be used for each layer. It indicates the size of the max pooling that will be performed along thes
sequence dimension after the convolution operation.pool_strides
(defaultnull
): the stride of the pooling operation (the factor by which the sequence length is scaled down)pool_padding
(defaultsame
): one ofvalid
orsame
num_rec_layers
(default1
): the number of recurrent layersstate_size
(default256
): the size of the state of the rnn.cell_type
(defaultrnn
): the type of recurrent cell to use. Available values are:rnn
,lstm
,gru
. For reference about the differences between the cells please refer to torch.nn Recurrent Layers.bidirectional
(defaultfalse
): iftrue
two recurrent networks will perform encoding in the forward and backward direction and their outputs will be concatenated.activation
(defaulttanh
): activation function to userecurrent_activation
(defaultsigmoid
): activation function to use in the recurrent stepunit_forget_bias
(defaulttrue
): Iftrue
, add 1 to the bias of the forget gate at initializationrecurrent_initializer
(defaultorthogonal
): initializer for recurrent matrix weightsdropout
(default0.0
): dropout raterecurrent_dropout
(default0.0
): dropout rate for recurrent statefc_layers
(defaultnull
): a list of dictionaries containing the parameters of all the fully connected layers. The length of the list determines the number of stacked fully connected layers and the content of each dictionary determines the parameters for a specific layer. The available parameters for each layer are:activation
,dropout
,norm
,norm_params
,output_size
,use_bias
,bias_initializer
andweights_initializer
. If any of those values is missing from the dictionary, the default one specified as a parameter of the encoder will be used instead. If bothfc_layers
andnum_fc_layers
arenull
, a default list will be assigned tofc_layers
with the value[{output_size: 512}, {output_size: 256}]
(only applies ifreduce_output
is notnull
).num_fc_layers
(defaultnull
): iffc_layers
isnull
, this is the number of stacked fully connected layers (only applies ifreduce_output
is notnull
).output_size
(default256
): if anoutput_size
is not already specified infc_layers
this is the defaultoutput_size
that will be used for each layer. It indicates the size of the output of a fully connected layer.use_bias
(defaulttrue
): boolean, whether the layer uses a bias vector.weights_initializer
(defaultglorot_uniform
): initializer for the weights matrix. Options are:constant
,identity
,zeros
,ones
,orthogonal
,normal
,uniform
,truncated_normal
,variance_scaling
,glorot_normal
,glorot_uniform
,xavier_normal
,xavier_uniform
,he_normal
,he_uniform
,lecun_normal
,lecun_uniform
. Alternatively it is possible to specify a dictionary with a keytype
that identifies the type of initializer and other keys for its parameters, e.g.{type: normal, mean: 0, stddev: 0}
. To know the parameters of each initializer, please refer to torch.nn.init.bias_initializer
(defaultzeros
): initializer for the bias vector. Options are:constant
,identity
,zeros
,ones
,orthogonal
,normal
,uniform
,truncated_normal
,variance_scaling
,glorot_normal
,glorot_uniform
,xavier_normal
,xavier_uniform
,he_normal
,he_uniform
,lecun_normal
,lecun_uniform
. Alternatively it is possible to specify a dictionary with a keytype
that identifies the type of initializer and other keys for its parameters, e.g.{type: normal, mean: 0, stddev: 0}
. To know the parameters of each initializer, please refer to torch.nn.init.norm
(defaultnull
): if anorm
is not already specified infc_layers
this is the defaultnorm
that will be used for each layer. It indicates the norm of the output and it can benull
,batch
orlayer
.norm_params
(defaultnull
): parameters used ifnorm
is eitherbatch
orlayer
. For information on parameters used withbatch
see the Torch documentation on batch normalization or forlayer
see the Torch documentation on layer normalization.fc_activation
(defaultrelu
): if anactivation
is not already specified infc_layers
this is the defaultactivation
that will be used for each layer. It indicates the activation function applied to the output.fc_dropout
(default0
): dropout ratereduce_output
(defaultlast
): defines how to reduce the output tensor along thes
sequence length dimension if the rank of the tensor is greater than 2. Available values are:sum
,mean
oravg
,max
,concat
(concatenates along the sequence dimension),last
(returns the last vector of the sequence dimension) andnull
(which does not reduce and returns the full tensor).
Example text feature entry in the input features list using a cnnrnn encoder:
name: text_column_name
type: text
encoder:
type: cnnrnn
representation: dense
embedding_size: 256
embeddings_trainable: true
num_conv_layers: 1
num_filters: 256
filter_size: 5
strides: 1
padding: same
dilation_rate: 1
conv_activation: relu
conv_dropout: 0.0
pool_function: max
pool_size: 2
pool_padding: same
num_rec_layers: 1
state_size: 256
cell_type: rnn
bidirectional: false
activation: tanh
recurrent_activation: sigmoid
unit_forget_bias: true
recurrent_initializer: orthogonal
dropout: 0.0
recurrent_dropout: 0.0
output_size: 256
use_bias: true
weights_initializer: glorot_uniform
bias_initializer: zeros
fc_activation: relu
fc_dropout: 0
reduce_output: last
Transformer Encoder¶
The transformer
encoder implements a stack of transformer blocks, replicating the architecture introduced in the
Attention is all you need paper, and adds an optional stack of fully connected
layers at the end.
+------+
|Emb 12|
+------+
+--+ |Emb 7 |
|12| +------+
|7 | |Emb 43| +-------------+ +---------+
|43| +------+ | | |Fully |
|65+---+Emb 65+---> Transformer +--->Connected+-->
|23| +------+ | Blocks | |Layers |
|4 | |Emb 23| +-------------+ +---------+
|1 | +------+
+--+ |Emb 4 |
+------+
|Emb 1 |
+------+
representation
(defaultdense
): the possible values aredense
andsparse
.dense
means the embeddings are initialized randomly,sparse
means they are initialized to be one-hot encodings.embedding_size
(default256
): the maximum embedding size, the actual size will bemin(vocabulary_size, embedding_size)
fordense
representations and exactlyvocabulary_size
for thesparse
encoding, wherevocabulary_size
is the number of unique strings appearing in the training set input column plus the number of special tokens (<UNK>
,<PAD>
,<SOS>
,<EOS>
).embeddings_trainable
(defaulttrue
): Iftrue
embeddings are trained during the training process, iffalse
embeddings are fixed. This may be useful when loading pretrained embeddings, to avoid fine-tuning them. This parameter has effect only whenrepresentation
isdense
assparse
one-hot encodings are not trainable.pretrained_embeddings
(defaultnull
): by defaultdense
embeddings are initialized randomly, but this parameter allows specifying a path to a file containing embeddings in the GloVe format. When the embeddings file is loaded, only the embeddings with labels present in the vocabulary are kept; the others are discarded. If the vocabulary contains strings that have no match in the embeddings file, their embeddings are initialized with the average of all other embeddings plus some random noise to make them different from each other. This parameter has effect only ifrepresentation
isdense
.embeddings_on_cpu
(defaultfalse
): by default embedding matrices are stored on GPU memory if a GPU is used, as it allows for faster access, but in some cases the embedding matrix may be too large. This parameter forces the placement of the embedding matrix in regular memory and the CPU is used for embedding lookup, slightly slowing down the process as a result of data transfer between CPU and GPU memory.num_layers
(default1
): number of transformer blocks.hidden_size
(default256
): the size of the hidden representation within the transformer block. It is usually the same as theembedding_size
, but if the two values are different, a projection layer will be added before the first transformer block.num_heads
(default8
): number of attention heads in each transformer block.transformer_output_size
(default256
): Size of the fully connected layer after self attention in the transformer block. This is usually the same ashidden_size
andembedding_size
.dropout
(default0.1
): dropout rate for the transformer blockfc_layers
(defaultnull
): a list of dictionaries containing the parameters of all the fully connected layers. The length of the list determines the number of stacked fully connected layers and the content of each dictionary determines the parameters for a specific layer. The available parameters for each layer are:activation
,dropout
,norm
,norm_params
,output_size
,use_bias
,bias_initializer
andweights_initializer
. If any of those values is missing from the dictionary, the default one specified as a parameter of the encoder will be used instead. If bothfc_layers
andnum_fc_layers
arenull
, a default list will be assigned tofc_layers
with the value[{output_size: 512}, {output_size: 256}]
(only applies ifreduce_output
is notnull
).num_fc_layers
(default0
): This is the number of stacked fully connected layers (only applies ifreduce_output
is notnull
).output_size
(default256
): if anoutput_size
is not already specified infc_layers
this is the defaultoutput_size
that will be used for each layer. It indicates the size of the output of a fully connected layer.use_bias
(defaulttrue
): boolean, whether the layer uses a bias vector.weights_initializer
(defaultglorot_uniform
): initializer for the weights matrix. Options are:constant
,identity
,zeros
,ones
,orthogonal
,normal
,uniform
,truncated_normal
,variance_scaling
,glorot_normal
,glorot_uniform
,xavier_normal
,xavier_uniform
,he_normal
,he_uniform
,lecun_normal
,lecun_uniform
. Alternatively it is possible to specify a dictionary with a keytype
that identifies the type of initializer and other keys for its parameters, e.g.{type: normal, mean: 0, stddev: 0}
. To know the parameters of each initializer, please refer to torch.nn.init.bias_initializer
(defaultzeros
): initializer for the bias vector. Options are:constant
,identity
,zeros
,ones
,orthogonal
,normal
,uniform
,truncated_normal
,variance_scaling
,glorot_normal
,glorot_uniform
,xavier_normal
,xavier_uniform
,he_normal
,he_uniform
,lecun_normal
,lecun_uniform
. Alternatively it is possible to specify a dictionary with a keytype
that identifies the type of initializer and other keys for its parameters, e.g.{type: normal, mean: 0, stddev: 0}
. To know the parameters of each initializer, please refer to torch.nn.init.norm
(defaultnull
): if anorm
is not already specified infc_layers
this is the defaultnorm
that will be used for each layer. It indicates the norm of the output and it can benull
,batch
orlayer
.norm_params
(defaultnull
): parameters used ifnorm
is eitherbatch
orlayer
. For information on parameters used withbatch
see the Torch documentation on batch normalization or forlayer
see the Torch documentation on layer normalization.fc_activation
(defaultrelu
): if anactivation
is not already specified infc_layers
this is the defaultactivation
that will be used for each layer. It indicates the activation function applied to the output.fc_dropout
(default0
): dropout ratereduce_output
(defaultlast
): defines how to reduce the output tensor along thes
sequence length dimension if the rank of the tensor is greater than 2. Available values are:sum
,mean
oravg
,max
,concat
(concatenates along the sequence dimension),last
(returns the last vector of the sequence dimension) andnull
(which does not reduce and returns the full tensor).
Example text feature entry in the input features list using a transformer encoder:
name: text_column_name
type: text
encoder:
type: transformer
representation: dense
embedding_size: 256
embeddings_trainable: true
num_layers: 1
hidden_size: 256
num_heads: 8
transformer_output_size: 256
dropout: 0.1
num_fc_layers: 0
output_size: 256
use_bias: true
weights_initializer: glorot_uniform
bias_initializer: zeros
fc_activation: relu
fc_dropout: 0
reduce_output: last
Huggingface encoders¶
All huggingface-based text encoders are configured with the following parameters:
pretrained_model_name_or_path
(default is the huggingface default model path for the specified encoder, e.g.bert-base-uncased
for BERT). This can be either the name of a model or a path where it was downloaded. For details on the variants available refer to the Hugging Face documentation.reduce_output
(defaultcls_pooled
): defines how to reduce the output tensor along thes
sequence length dimension if the rank of the tensor is greater than 2. Available values are:cls_pooled
,sum
,mean
oravg
,max
,concat
(concatenates along the first dimension),last
(returns the last vector of the first dimension) andnull
(which does not reduce and returns the full tensor).trainable
(defaultfalse
): iftrue
the weights of the encoder will be trained, otherwise they will be kept frozen.
Note
Any hyperparameter of any huggingface encoder can be overridden. Check the huggingface documentation for which parameters are used for which models.
name: text_column_name
type: text
encoder: bert
trainable: true
num_attention_heads: 16 # Instead of 12
ALBERT Encoder¶
The albert
encoder loads a pretrained ALBERT (default albert-base-v2
) model
using the Hugging Face transformers package. ALBERT is similar to BERT, with significantly lower memory usage and
somewhat faster training time.
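A minimal config sketch for this encoder, following the same entry format as the examples above (the column name is hypothetical):

```yaml
name: text_column_name
type: text
encoder:
    type: albert
    # the default pretrained model; any ALBERT variant path can be used
    pretrained_model_name_or_path: albert-base-v2
    trainable: false
```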
AutoTransformer¶
The auto_transformer
encoder automatically instantiates the model architecture for the specified
pretrained_model_name_or_path
. Unlike the other HF encoders, auto_transformer
does not provide a default value for
pretrained_model_name_or_path
, which is its only mandatory parameter. See the Hugging Face
AutoModels documentation for more details.
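Since there is no default, the model path must always be given explicitly. A sketch using bert-base-uncased as the model (the column name is hypothetical):

```yaml
name: text_column_name
type: text
encoder:
    type: auto_transformer
    # mandatory: auto_transformer has no default model
    pretrained_model_name_or_path: bert-base-uncased
```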
BERT Encoder¶
The bert
encoder loads a pretrained BERT (default bert-base-uncased
) model using
the Hugging Face transformers package.
CamemBERT Encoder¶
The camembert
encoder loads a pretrained CamemBERT
(default jplu/tf-camembert-base
) model using the Hugging Face transformers package. CamemBERT is pre-trained on a
large French language web-crawled text corpus.
CTRL Encoder¶
The ctrl
encoder loads a pretrained CTRL (default ctrl
) model using the Hugging
Face transformers package. CTRL is a conditional transformer language model trained to condition on control codes that
govern style, content, and task-specific behavior.
DistilBERT Encoder¶
The distilbert
encoder loads a pretrained DistilBERT
(default distilbert-base-uncased
) model using the Hugging Face transformers package. DistilBERT is a compressed version of BERT that is
60% smaller and faster than BERT.
ELECTRA Encoder¶
The electra
encoder loads a pretrained ELECTRA model using the Hugging
Face transformers package.
FlauBERT Encoder¶
The flaubert
encoder loads a pretrained FlauBERT
(default jplu/tf-flaubert-base-uncased
) model using the Hugging Face transformers package. FlauBERT has an architecture
similar to BERT and is pre-trained on a large French language corpus.
GPT Encoder¶
The gpt
encoder loads a pretrained
GPT
(default openai-gpt
) model using the Hugging Face transformers package.
GPT-2 Encoder¶
The gpt2
encoder loads a pretrained
GPT-2
(default gpt2
) model using the Hugging Face transformers package.
Longformer Encoder¶
The longformer
encoder loads a pretrained Longformer
(default allenai/longformer-base-4096
) model using the Hugging Face transformers package. Longformer is a good choice
for longer text, as it supports sequences up to 4096 tokens long.
RoBERTa Encoder¶
The roberta
encoder loads a pretrained RoBERTa (default roberta-base
) model
using the Hugging Face transformers package. RoBERTa is a replication of BERT pretraining that may match or exceed
the performance of BERT.
Transformer XL Encoder¶
The transformer_xl
encoder loads a pretrained Transformer-XL
(default transfo-xl-wt103
) model using the Hugging Face transformers package. Transformer-XL adds a novel positional encoding scheme
which improves understanding and generation of long-form text up to thousands of tokens.
T5 Encoder¶
The t5
encoder loads a pretrained T5 (default t5-small
) model using the
Hugging Face transformers package. T5 (Text-to-Text Transfer Transformer) is pre-trained on a huge text dataset crawled
from the web and shows good transfer performance on multiple tasks.
MT5 Encoder¶
The mt5
encoder loads a pretrained MT5 (default google/mt5-base
) model using the
Hugging Face transformers package. MT5 is a multilingual variant of T5 trained on a dataset of 101 languages.
XLM Encoder¶
The xlm
encoder loads a pretrained XLM (default xlm-mlm-en-2048
) model using the
Hugging Face transformers package. XLM is pre-trained with cross-lingual language modeling objectives.
XLM-RoBERTa Encoder¶
The xlmroberta
encoder loads a pretrained XLM-RoBERTa
(default jplu/tf-xlm-roberta-base
) model using the Hugging Face transformers package. XLM-RoBERTa is a multi-language
model similar to BERT, trained on 100 languages.
XLNet Encoder¶
The xlnet
encoder loads a pretrained XLNet (default xlnet-base-cased
) model
using the Hugging Face transformers package. XLNet outperforms BERT on a variety of benchmarks.
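Putting this together, a text input feature selecting one of the encoders above might be configured as follows. This is a sketch: the pretrained_model_name_or_path parameter name for overriding the default pretrained model is an assumption based on the Hugging Face-backed encoders and may differ between releases.

```yaml
name: text_column_name
type: text
encoder:
  type: roberta
  pretrained_model_name_or_path: roberta-base
```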
Text Output Features and Decoders¶
Text output features are a special case of Sequence Features, so all options of sequence features are available for text features as well.
Text output features can be used for either tagging (classifying each token of an input sequence) or text
generation (generating text by repeatedly sampling from the model). There are two decoders available for these tasks
named tagger
and generator
respectively.
The following are the available parameters of a text output feature:
reduce_input
(defaultsum
): defines how to reduce an input that is not a vector, but a matrix or a higher order tensor, on the first dimension (second if you count the batch dimension). Available values are:sum
,mean
oravg
,max
,concat
(concatenates along the sequence dimension),last
(returns the last vector of the sequence dimension).dependencies
(default[]
): the output features this one is dependent on. For a detailed explanation refer to Output Feature Dependencies.reduce_dependencies
(defaultsum
): defines how to reduce the output of a dependent feature that is not a vector, but a matrix or a higher order tensor, on the first dimension (second if you count the batch dimension). Available values are:sum
,mean
oravg
,max
,concat
(concatenates along the sequence dimension),last
(returns the last vector of the sequence dimension).loss
(default{type: softmax_cross_entropy, class_similarities_temperature: 0, class_weights: 1, confidence_penalty: 0, robust_lambda: 0}
): is a dictionary containing a losstype
. The only available losstype
for text features issoftmax_cross_entropy
. For more details on losses and their options, see also Category Output Features and Decoders.
Decoder type and decoder parameters can also be defined once and applied to all text output features using the Type-Global Decoder section. Loss and loss related parameters can also be defined once in the same way.
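For example, applying the same decoder type and loss to all text output features might look like the sketch below, assuming a top-level defaults section keyed by feature type (the exact key may vary by release):

```yaml
defaults:
  text:
    decoder:
      type: generator
    loss:
      type: softmax_cross_entropy
```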
Tagger Decoder¶
In the case of tagger
the decoder is a (potentially empty) stack of fully connected layers, followed by a projection
into a tensor of size b x s x c
, where b
is the batch size, s
is the length of the sequence and c
is the number
of classes, followed by a softmax_cross_entropy.
This decoder requires its input to be shaped as b x s x h
, where h
is a hidden dimension, which is the output of a
sequence, text or time series input feature without reduced outputs or the output of a sequence-based combiner.
If a b x h
input is provided instead, an error will be raised during model building.
Combiner
Output
+---+ +----------+ +-------+
|emb| +---------+ |Projection| |Softmax|
+---+ |Fully | +----------+ +-------+
|...+--->Connected+--->... +--->... |
+---+ |Layers | +----------+ +-------+
|emb| +---------+ |Projection| |Softmax|
+---+ +----------+ +-------+
These are the available parameters of a tagger decoder:
fc_layers
(defaultnull
): a list of dictionaries containing the parameters of all the fully connected layers. The length of the list determines the number of stacked fully connected layers and the content of each dictionary determines the parameters for a specific layer. The available parameters for each layer are:activation
,dropout
,norm
,norm_params
,output_size
,use_bias
,bias_initializer
andweights_initializer
. If any of those values is missing from the dictionary, the default one specified as a parameter of the decoder will be used instead.num_fc_layers
(default 0): the number of stacked fully connected layers that the input to the feature passes through. Their output is projected in the feature's output space.output_size
(default256
): if anoutput_size
is not already specified infc_layers
this is the defaultoutput_size
that will be used for each layer. It indicates the size of the output of a fully connected layer.use_bias
(defaulttrue
): boolean, whether the layer uses a bias vector.weights_initializer
(defaultglorot_uniform
): initializer for the weights matrix. Options are:constant
,identity
,zeros
,ones
,orthogonal
,normal
,uniform
,truncated_normal
,variance_scaling
,glorot_normal
,glorot_uniform
,xavier_normal
,xavier_uniform
,he_normal
,he_uniform
,lecun_normal
,lecun_uniform
. Alternatively it is possible to specify a dictionary with a keytype
that identifies the type of initializer and other keys for its parameters, e.g.{type: normal, mean: 0, stddev: 0}
. To know the parameters of each initializer, please refer to torch.nn.init.bias_initializer
(defaultzeros
): initializer for the bias vector. Options are:constant
,identity
,zeros
,ones
,orthogonal
,normal
,uniform
,truncated_normal
,variance_scaling
,glorot_normal
,glorot_uniform
,xavier_normal
,xavier_uniform
,he_normal
,he_uniform
,lecun_normal
,lecun_uniform
. Alternatively it is possible to specify a dictionary with a keytype
that identifies the type of initializer and other keys for its parameters, e.g.{type: normal, mean: 0, stddev: 0}
. To know the parameters of each initializer, please refer to torch.nn.init.norm
(defaultnull
): if anorm
is not already specified infc_layers
this is the defaultnorm
that will be used for each layer. It indicates how the output should be normalized and may be one ofnull
,batch
orlayer
.norm_params
(defaultnull
): parameters used ifnorm
is eitherbatch
orlayer
. For information on parameters used withbatch
see the Torch documentation on batch normalization or forlayer
see the Torch documentation on layer normalization.activation
(defaultrelu
): if anactivation
is not already specified infc_layers
this is the defaultactivation
that will be used for each layer. It indicates the activation function applied to the output.dropout
(default0
): dropout rate.attention
(defaultfalse
): Iftrue
, applies a multi-head self attention layer before prediction.attention_embedding_size
(default256
): the embedding size of the multi-head self attention layer.attention_num_heads
(default8
): number of attention heads in the multi-head self attention layer.
Example text feature entry using a tagger decoder (with default parameters) in the output features list:
name: text_column_name
type: text
reduce_input: null
dependencies: []
reduce_dependencies: sum
loss:
type: softmax_cross_entropy
confidence_penalty: 0
robust_lambda: 0
class_weights: 1
class_similarities_temperature: 0
decoder:
type: tagger
num_fc_layers: 0
output_size: 256
use_bias: true
weights_initializer: glorot_uniform
bias_initializer: zeros
activation: relu
dropout: 0
attention: false
attention_embedding_size: 256
attention_num_heads: 8
Generator Decoder¶
In the case of generator
the decoder is a (potentially empty) stack of fully connected layers, followed by an RNN that
generates outputs feeding on its own previous predictions and generates a tensor of size b x s' x c
, where b
is the
batch size, s'
is the length of the generated sequence and c
is the number of classes, followed by a
softmax_cross_entropy.
During training, teacher forcing is adopted, meaning the list of targets is provided as both input and output (shifted
by 1), while at evaluation time greedy decoding (generating one token at a time and feeding it as input to the next
step) is performed via beam search, using a beam width of 1 by default.
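The evaluation-time loop described above can be sketched in plain Python. This is a toy illustration of greedy decoding (a beam of 1), not Ludwig's implementation; the next_token_logits step function is a hypothetical stand-in for the RNN cell:

```python
def greedy_decode(next_token_logits, go_token, end_token, max_length):
    """Generate tokens one at a time, feeding each prediction back as input."""
    sequence = [go_token]
    for _ in range(max_length):
        logits = next_token_logits(sequence)
        # Argmax over class scores: with a beam of 1 this is greedy decoding.
        token = max(range(len(logits)), key=logits.__getitem__)
        if token == end_token:
            break
        sequence.append(token)
    return sequence[1:]  # drop the GO token

# Toy step function over 4 classes: always prefers the token after the last
# one emitted, so decoding produces 1, 2 and then the end token 3.
def toy_step(seq):
    scores = [0.0, 0.0, 0.0, 0.0]
    scores[min(seq[-1] + 1, 3)] = 1.0
    return scores
```

Increasing the beam width replaces the single argmax with keeping the top-k partial sequences at each step, which is what the beam_width parameter of the generator decoder controls.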
In general a generator expects a b x h
shaped input tensor, where h
is a hidden dimension.
The h
vectors are (after an optional stack of fully connected layers) fed into the RNN generator.
One exception is when the generator uses attention, as in that case the expected size of the input tensor is
b x s x h
, which is the output of a sequence, text or time series input feature without reduced outputs or the output
of a sequence-based combiner.
If a b x h
input is provided to a generator decoder using an RNN with attention instead, an error will be raised
during model building.
Output Output
1 +-+ ... +--+ END
^ | ^ | ^
+--------+ +---------+ | | | | |
|Combiner| |Fully | +---+--+ | +---+---+ | +---+--+
|Output +--->Connected+---+RNN +--->RNN... +--->RNN |
| | |Layers | +---^--+ | +---^---+ | +---^--+
+--------+ +---------+ | | | | |
GO +-----+ +-----+
These are the available parameters of a generator decoder:
fc_layers
(defaultnull
): a list of dictionaries containing the parameters of all the fully connected layers. The length of the list determines the number of stacked fully connected layers and the content of each dictionary determines the parameters for a specific layer. The available parameters for each layer are:activation
,dropout
,norm
,norm_params
,output_size
,use_bias
,bias_initializer
andweights_initializer
. If any of those values is missing from the dictionary, the default one specified as a parameter of the decoder will be used instead.num_fc_layers
(default 0): the number of stacked fully connected layers that the input to the feature passes through. Their output is projected in the feature's output space.output_size
(default256
): if anoutput_size
is not already specified infc_layers
this is the defaultoutput_size
that will be used for each layer. It indicates the size of the output of a fully connected layer.use_bias
(defaulttrue
): boolean, whether the layer uses a bias vector.weights_initializer
(defaultglorot_uniform
): initializer for the weight matrix. Options are:constant
,identity
,zeros
,ones
,orthogonal
,normal
,uniform
,truncated_normal
,variance_scaling
,glorot_normal
,glorot_uniform
,xavier_normal
,xavier_uniform
,he_normal
,he_uniform
,lecun_normal
,lecun_uniform
. Alternatively it is possible to specify a dictionary with a keytype
that identifies the type of initializer and other keys for its parameters, e.g.{type: normal, mean: 0, stddev: 0}
. To know the parameters of each initializer, please refer to torch.nn.init.bias_initializer
(defaultzeros
): initializer for the bias vector. Options are:constant
,identity
,zeros
,ones
,orthogonal
,normal
,uniform
,truncated_normal
,variance_scaling
,glorot_normal
,glorot_uniform
,xavier_normal
,xavier_uniform
,he_normal
,he_uniform
,lecun_normal
,lecun_uniform
. Alternatively it is possible to specify a dictionary with a keytype
that identifies the type of initializer and other keys for its parameters, e.g.{type: normal, mean: 0, stddev: 0}
. To know the parameters of each initializer, please refer to torch.nn.init.norm
(defaultnull
): if anorm
is not already specified infc_layers
this is the defaultnorm
that will be used for each layer. It indicates how the output should be normalized and may be one ofnull
,batch
orlayer
.norm_params
(defaultnull
): parameters used ifnorm
is eitherbatch
orlayer
. For information on parameters used withbatch
see Torch documentation on batch normalization or forlayer
see Torch documentation on layer normalization.activation
(defaultrelu
): if anactivation
is not already specified infc_layers
this is the defaultactivation
that will be used for each layer. It indicates the activation function applied to the output.dropout
(default0
): dropout rate.cell_type
(defaultrnn
): the type of recurrent cell to use. Available values are:rnn
,lstm
,gru
. For reference about the differences between the cells please refer to torch.nn Recurrent Layers.state_size
(default256
): the size of the state of the rnn.embedding_size
(default256
): The size of the embeddings of the inputs of the generator.beam_width
(default1
): sampling from the RNN generator is performed using beam search. By default, with a beam width of one, a greedy sequence is generated by always picking the most probable next token, but the beam size can be increased. This usually leads to better predictions at the expense of more computation and slower generation.tied
(defaultnull
): ifnull
the embeddings of the targets are initialized randomly. Iftied
names an input feature, the embeddings of that input feature will be used as embeddings of the target. Thevocabulary_size
of that input feature has to be the same as the output feature and it has to have an embedding matrix (binary and number features will not have one, for instance). In this case theembedding_size
will be the same as thestate_size
. This is useful for implementing autoencoders where the encoding and decoding part of the model share parameters.max_sequence_length
(default256
): The maximum sequence length.
Example text feature entry using a generator decoder in the output features list:
name: text_column_name
type: text
reduce_input: sum
dependencies: []
reduce_dependencies: sum
loss:
type: softmax_cross_entropy
confidence_penalty: 0
robust_lambda: 0
class_weights: 1
class_similarities_temperature: 0
decoder:
type: generator
num_fc_layers: 0
output_size: 256
use_bias: true
bias_initializer: zeros
weights_initializer: glorot_uniform
activation: relu
dropout: 0
cell_type: rnn
state_size: 256
embedding_size: 256
beam_width: 1
max_sequence_length: 256
Text Features Metrics¶
The metrics available for text features are the same as for Sequence Features:
sequence_accuracy
The rate at which the model predicted the correct sequence.token_accuracy
The number of tokens correctly predicted divided by the total number of tokens in all sequences.last_accuracy
Accuracy considering only the last element of the sequence. Useful to ensure special end-of-sequence tokens are generated or tagged.edit_distance
Levenshtein distance: the minimum number of single-token edits (insertions, deletions or substitutions) required to change the predicted sequence into the ground truth.perplexity
Perplexity is the inverse of the predicted probability of the ground truth sequence, normalized by the number of tokens. The lower the perplexity, the higher the probability of predicting the true sequence.loss
The value of the loss function.
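Two of these metrics can be made concrete with short plain-Python sketches (illustrations of the definitions above, not Ludwig's internal implementations):

```python
import math

def edit_distance(pred, truth):
    """Levenshtein distance between two token sequences via dynamic programming."""
    m, n = len(pred), len(truth)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all remaining predicted tokens
    for j in range(n + 1):
        dp[0][j] = j  # insert all remaining ground-truth tokens
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == truth[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[m][n]

def perplexity(token_probs):
    """Inverse of the sequence probability, normalized by the number of tokens."""
    log_prob = sum(math.log(p) for p in token_probs)
    return math.exp(-log_prob / len(token_probs))
```

A model that assigns probability 0.5 to every ground-truth token has a perplexity of 2, and a perfect model (probability 1 for every token) has a perplexity of 1, the minimum.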
You can set any of the above as validation_metric
in the training
section of the configuration if validation_field
names a sequence feature.
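For example, to select models by token-level accuracy on a text output feature (a sketch assuming the configuration keys named above):

```yaml
training:
  validation_field: text_column_name
  validation_metric: token_accuracy
```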