## H3 Features
H3 is a hierarchical hexagonal indexing system for representing geospatial data. For more details about it refer to https://eng.uber.com/h3/.
### H3 Features Preprocessing
Ludwig will parse the H3 64-bit encoded format automatically. The parameters for preprocessing are:
- `missing_value_strategy` (default `fill_with_const`): what strategy to follow when there's a missing value in an H3 column. The value should be one of `fill_with_const` (replaces the missing value with a specific value specified with the `fill_value` parameter), `fill_with_mode` (replaces the missing values with the most frequent value in the column), `fill_with_mean` (replaces the missing values with the mean of the values in the column), `backfill` (replaces the missing values with the next valid value).
- `fill_value` (default `576495936675512319`): the value to replace the missing values with in case the `missing_value_strategy` is `fill_with_const`. This is a 64-bit integer compatible with the H3 bit layout. The default value encodes mode 1, edge 0, resolution 0, base_cell 0 (see the decoding sketch after the example below).
Example of a preprocessing specification:
```yaml
name: h3_feature_name
type: h3
preprocessing:
  missing_value_strategy: fill_with_const
  fill_value: 576495936675512319
```
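The default `fill_value` can be decomposed by hand following the public H3 bit layout. Below is a minimal Python sketch of that decomposition (it assumes the standard H3 cell index layout and is not Ludwig's internal parsing code):

```python
# Minimal sketch: split a 64-bit H3 index into its components following
# the public H3 bit layout (1 reserved bit, 4-bit mode, 3-bit edge,
# 4-bit resolution, 7-bit base cell, fifteen 3-bit children cells).
def h3_components(h3_index: int) -> dict:
    return {
        "mode": (h3_index >> 59) & 0b1111,
        "edge": (h3_index >> 56) & 0b111,
        "resolution": (h3_index >> 52) & 0b1111,
        "base_cell": (h3_index >> 45) & 0b1111111,
        # children cells, most significant first; 7 marks an unused digit
        "cells": [(h3_index >> (42 - 3 * i)) & 0b111 for i in range(15)],
    }

print(h3_components(576495936675512319))
# -> mode 1, edge 0, resolution 0, base_cell 0, all cells unused (7)
```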
### H3 Input Features and Encoders
Input H3 features are transformed into int-valued tensors of size `N x 19` (where `N` is the size of the dataset and the 19 dimensions contain mode, edge, resolution, base cell and the 15 children cells) and added to HDF5 with a key that reflects the name of the column in the dataset.
Currently there are three encoders supported for H3 features: the embed encoder, the weighted sum embed encoder and the RNN encoder, which can be chosen by setting the `encoder` parameter to `embed`, `weighted_sum` or `rnn` in the input feature dictionary in the configuration (`embed` is the default one).
#### Embed Encoder
This encoder encodes each component of the H3 representation (mode, edge, resolution, base cell and children cells) with embeddings.
Children cells with value `0` will be masked out.
After the embedding, all embeddings are summed and optionally passed through a stack of fully connected layers.
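For intuition, here is a toy NumPy sketch of the aggregation step (names are illustrative, not Ludwig's internals): each of the 19 components contributes one embedding vector, embeddings of zero-valued children cells are masked out, and the rest are summed:

```python
import numpy as np

rng = np.random.default_rng(0)
embedding_size = 10

# One (already looked-up) embedding per component: mode, edge, resolution,
# base cell, and 15 children cells -> 19 rows in total.
component_embeddings = rng.normal(size=(19, embedding_size))

# Children cells with value 0 are masked out before the sum.
children = np.array([3, 5, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
mask = np.concatenate([np.ones(4), (children != 0).astype(float)])

encoded = (component_embeddings * mask[:, None]).sum(axis=0)
print(encoded.shape)  # (10,) -- optionally fed to the fully connected stack
```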
It takes the following optional parameters:
- `embedding_size` (default `10`): it is the maximum embedding size adopted.
- `embeddings_on_cpu` (default `false`): by default embedding matrices are stored on GPU memory if a GPU is used, as it allows for faster access, but in some cases the embedding matrix may be really big and this parameter forces the placement of the embedding matrix in regular memory, using the CPU to resolve the embeddings, slightly slowing down the process as a result of data transfer between CPU and GPU memory.
- `fc_layers` (default `null`): it is a list of dictionaries containing the parameters of all the fully connected layers. The length of the list determines the number of stacked fully connected layers and the content of each dictionary determines the parameters for a specific layer. The available parameters for each layer are: `fc_size`, `norm`, `activation` and `regularize`. If any of those values is missing from the dictionary, the default one specified as a parameter of the encoder will be used instead (see the sketch after this list).
- `num_fc_layers` (default `0`): this is the number of stacked fully connected layers.
- `fc_size` (default `10`): if a `fc_size` is not already specified in `fc_layers` this is the default `fc_size` that will be used for each layer. It indicates the size of the output of a fully connected layer.
- `use_bias` (default `true`): boolean, whether the layer uses a bias vector.
- `weights_initializer` (default `'glorot_uniform'`): initializer for the weights matrix. Options are: `constant`, `identity`, `zeros`, `ones`, `orthogonal`, `normal`, `uniform`, `truncated_normal`, `variance_scaling`, `glorot_normal`, `glorot_uniform`, `xavier_normal`, `xavier_uniform`, `he_normal`, `he_uniform`, `lecun_normal`, `lecun_uniform`. Alternatively it is possible to specify a dictionary with a key `type` that identifies the type of initializer and other keys for its parameters, e.g. `{type: normal, mean: 0, stddev: 0}`. To know the parameters of each initializer, please refer to TensorFlow's documentation.
- `bias_initializer` (default `'zeros'`): initializer for the bias vector. Options are: `constant`, `identity`, `zeros`, `ones`, `orthogonal`, `normal`, `uniform`, `truncated_normal`, `variance_scaling`, `glorot_normal`, `glorot_uniform`, `xavier_normal`, `xavier_uniform`, `he_normal`, `he_uniform`, `lecun_normal`, `lecun_uniform`. Alternatively it is possible to specify a dictionary with a key `type` that identifies the type of initializer and other keys for its parameters, e.g. `{type: normal, mean: 0, stddev: 0}`. To know the parameters of each initializer, please refer to TensorFlow's documentation.
- `weights_regularizer` (default `null`): regularizer function applied to the weights matrix. Valid values are `l1`, `l2` or `l1_l2`.
- `bias_regularizer` (default `null`): regularizer function applied to the bias vector. Valid values are `l1`, `l2` or `l1_l2`.
- `activity_regularizer` (default `null`): regularizer function applied to the output of the layer. Valid values are `l1`, `l2` or `l1_l2`.
- `norm` (default `null`): if a `norm` is not already specified in `fc_layers` this is the default `norm` that will be used for each layer. It indicates the norm of the output and it can be `null`, `batch` or `layer`.
- `norm_params` (default `null`): parameters used if `norm` is either `batch` or `layer`. For information on parameters used with `batch` see TensorFlow's documentation on batch normalization or for `layer` see TensorFlow's documentation on layer normalization.
- `activation` (default `relu`): if an `activation` is not already specified in `fc_layers` this is the default `activation` that will be used for each layer. It indicates the activation function applied to the output.
- `dropout` (default `0`): dropout rate.
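As an illustration of how `fc_layers` interacts with the encoder-level defaults, a hypothetical two-layer stack could be written like this (shown as a Python literal; the same structure applies in the YAML configuration):

```python
# Hypothetical fc_layers value: two stacked fully connected layers.
# Any key omitted from a layer dictionary falls back to the encoder-level
# default (fc_size, norm, activation, regularize).
fc_layers = [
    {"fc_size": 64, "activation": "relu"},
    {"fc_size": 32, "norm": "batch"},  # activation falls back to the default
]
```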
Example H3 feature entry in the input features list using an embed encoder:
```yaml
name: h3_column_name
type: h3
encoder: embed
embedding_size: 10
embeddings_on_cpu: false
fc_layers: null
num_fc_layers: 0
fc_size: 10
use_bias: true
weights_initializer: glorot_uniform
bias_initializer: zeros
weights_regularizer: null
bias_regularizer: null
activity_regularizer: null
norm: null
norm_params: null
activation: relu
dropout: 0
```
#### Weighted Sum Embed Encoder
This encoder encodes each component of the H3 representation (mode, edge, resolution, base cell and children cells) with embeddings.
Children cells with value `0` will be masked out.
After the embedding, all embeddings are summed with a weighted sum (with learned weights) and optionally passed through a stack of fully connected layers.
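A toy NumPy sketch of this aggregation (illustrative, not Ludwig's implementation): one learned scalar weight per component scales its embedding before the sum, with `should_softmax` optionally normalizing the weights first:

```python
import numpy as np

rng = np.random.default_rng(0)
component_embeddings = rng.normal(size=(19, 10))  # one embedding per component
weights = rng.normal(size=(19, 1))                # learned per-component weights

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

should_softmax = True  # pass the weights through a softmax before use
w = softmax(weights) if should_softmax else weights

encoded = (component_embeddings * w).sum(axis=0)
print(encoded.shape)  # (10,) -- optionally fed to the fully connected stack
```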
It takes the following optional parameters:
- `embedding_size` (default `10`): it is the maximum embedding size adopted.
- `embeddings_on_cpu` (default `false`): by default embedding matrices are stored on GPU memory if a GPU is used, as it allows for faster access, but in some cases the embedding matrix may be really big and this parameter forces the placement of the embedding matrix in regular memory, using the CPU to resolve the embeddings, slightly slowing down the process as a result of data transfer between CPU and GPU memory.
- `should_softmax` (default `false`): determines if the weights of the weighted sum should be passed through a softmax layer before being used.
- `fc_layers` (default `null`): it is a list of dictionaries containing the parameters of all the fully connected layers. The length of the list determines the number of stacked fully connected layers and the content of each dictionary determines the parameters for a specific layer. The available parameters for each layer are: `fc_size`, `norm`, `activation` and `regularize`. If any of those values is missing from the dictionary, the default one specified as a parameter of the encoder will be used instead.
- `num_fc_layers` (default `0`): this is the number of stacked fully connected layers.
- `fc_size` (default `10`): if a `fc_size` is not already specified in `fc_layers` this is the default `fc_size` that will be used for each layer. It indicates the size of the output of a fully connected layer.
- `use_bias` (default `true`): boolean, whether the layer uses a bias vector.
- `weights_initializer` (default `'glorot_uniform'`): initializer for the weights matrix. Options are: `constant`, `identity`, `zeros`, `ones`, `orthogonal`, `normal`, `uniform`, `truncated_normal`, `variance_scaling`, `glorot_normal`, `glorot_uniform`, `xavier_normal`, `xavier_uniform`, `he_normal`, `he_uniform`, `lecun_normal`, `lecun_uniform`. Alternatively it is possible to specify a dictionary with a key `type` that identifies the type of initializer and other keys for its parameters, e.g. `{type: normal, mean: 0, stddev: 0}`. To know the parameters of each initializer, please refer to TensorFlow's documentation.
- `bias_initializer` (default `'zeros'`): initializer for the bias vector. Options are: `constant`, `identity`, `zeros`, `ones`, `orthogonal`, `normal`, `uniform`, `truncated_normal`, `variance_scaling`, `glorot_normal`, `glorot_uniform`, `xavier_normal`, `xavier_uniform`, `he_normal`, `he_uniform`, `lecun_normal`, `lecun_uniform`. Alternatively it is possible to specify a dictionary with a key `type` that identifies the type of initializer and other keys for its parameters, e.g. `{type: normal, mean: 0, stddev: 0}`. To know the parameters of each initializer, please refer to TensorFlow's documentation.
- `weights_regularizer` (default `null`): regularizer function applied to the weights matrix. Valid values are `l1`, `l2` or `l1_l2`.
- `bias_regularizer` (default `null`): regularizer function applied to the bias vector. Valid values are `l1`, `l2` or `l1_l2`.
- `activity_regularizer` (default `null`): regularizer function applied to the output of the layer. Valid values are `l1`, `l2` or `l1_l2`.
- `norm` (default `null`): if a `norm` is not already specified in `fc_layers` this is the default `norm` that will be used for each layer. It indicates the norm of the output and it can be `null`, `batch` or `layer`.
- `norm_params` (default `null`): parameters used if `norm` is either `batch` or `layer`. For information on parameters used with `batch` see TensorFlow's documentation on batch normalization or for `layer` see TensorFlow's documentation on layer normalization.
- `activation` (default `relu`): if an `activation` is not already specified in `fc_layers` this is the default `activation` that will be used for each layer. It indicates the activation function applied to the output.
- `dropout` (default `0`): dropout rate.
- `reduce_output` (default `sum`): defines how to reduce the output tensor along the sequence length dimension if the rank of the tensor is greater than 2. Available values are: `sum`, `mean` or `avg`, `max`, `concat` (concatenates along the first dimension), `last` (returns the last vector of the first dimension) and `null` (which does not reduce and returns the full tensor). See the sketch after this list.
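To make the `reduce_output` options concrete, here is a toy NumPy illustration of each reduction applied to a rank-3 tensor of shape `(batch, sequence, hidden)` (the shapes are an assumption for illustration; `null` would return `t` unchanged):

```python
import numpy as np

t = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)  # (batch, sequence, hidden)

reduced = {
    "sum": t.sum(axis=1),        # (2, 4)
    "mean": t.mean(axis=1),      # (2, 4), "avg" is an alias
    "max": t.max(axis=1),        # (2, 4)
    "concat": t.reshape(2, -1),  # (2, 12): sequence vectors concatenated
    "last": t[:, -1, :],         # (2, 4): last vector along the sequence
}
for name, result in reduced.items():
    print(name, result.shape)
```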
Example H3 feature entry in the input features list using a weighted sum embed encoder:
```yaml
name: h3_column_name
type: h3
encoder: weighted_sum
embedding_size: 10
embeddings_on_cpu: false
should_softmax: false
fc_layers: null
num_fc_layers: 0
fc_size: 10
use_bias: true
weights_initializer: glorot_uniform
bias_initializer: zeros
weights_regularizer: null
bias_regularizer: null
activity_regularizer: null
norm: null
norm_params: null
activation: relu
dropout: 0
reduce_output: sum
```
#### RNN Encoder
This encoder encodes each component of the H3 representation (mode, edge, resolution, base cell and children cells) with embeddings.
Children cells with value `0` will be masked out.
After the embedding, all embeddings are passed through an RNN encoder.
The intuition behind this is that, starting from the base cell, the sequence of children cells can be seen as a path in the tree of all H3 hexes, which motivates encoding it with a recurrent model.
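A toy sketch of this idea with a vanilla RNN cell in NumPy (illustrative, not Ludwig's implementation): the embedded children cells are consumed in order, and with `reduce_output: last` the final hidden state summarizes the whole path:

```python
import numpy as np

rng = np.random.default_rng(0)
embedding_size, state_size = 10, 16

# Embedded children cells: the path from the base cell down the H3 tree.
cell_sequence = rng.normal(size=(15, embedding_size))

W_x = rng.normal(size=(embedding_size, state_size)) * 0.1  # input weights
W_h = rng.normal(size=(state_size, state_size)) * 0.1      # recurrent weights
b = np.zeros(state_size)

h = np.zeros(state_size)
for x in cell_sequence:          # one step per child cell
    h = np.tanh(x @ W_x + h @ W_h + b)

print(h.shape)  # (16,) -- the "last" output of the recurrent encoder
```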
It takes the following optional parameters:
- `embedding_size` (default `10`): it is the maximum embedding size adopted.
- `embeddings_on_cpu` (default `false`): by default embedding matrices are stored on GPU memory if a GPU is used, as it allows for faster access, but in some cases the embedding matrix may be really big and this parameter forces the placement of the embedding matrix in regular memory, using the CPU to resolve the embeddings, slightly slowing down the process as a result of data transfer between CPU and GPU memory.
- `num_layers` (default `1`): the number of stacked recurrent layers.
- `state_size` (default `256`): the size of the state of the rnn.
- `cell_type` (default `rnn`): the type of recurrent cell to use. Available values are: `rnn`, `lstm`, `lstm_block`, `lstm_ln`, `lstm_cudnn`, `gru`, `gru_block`, `gru_cudnn`. For reference about the differences between the cells please refer to TensorFlow's documentation. We suggest using the `block` variants on CPU and the `cudnn` variants on GPU because of their increased speed.
- `bidirectional` (default `false`): if `true` two recurrent networks will perform encoding in the forward and backward direction and their outputs will be concatenated.
- `activation` (default `'tanh'`): activation function to use.
- `recurrent_activation` (default `'sigmoid'`): activation function to use in the recurrent step.
- `use_bias` (default `true`): boolean, whether the layer uses a bias vector.
- `unit_forget_bias` (default `true`): if `true`, add 1 to the bias of the forget gate at initialization.
- `weights_initializer` (default `'glorot_uniform'`): initializer for the weights matrix. Options are: `constant`, `identity`, `zeros`, `ones`, `orthogonal`, `normal`, `uniform`, `truncated_normal`, `variance_scaling`, `glorot_normal`, `glorot_uniform`, `xavier_normal`, `xavier_uniform`, `he_normal`, `he_uniform`, `lecun_normal`, `lecun_uniform`. Alternatively it is possible to specify a dictionary with a key `type` that identifies the type of initializer and other keys for its parameters, e.g. `{type: normal, mean: 0, stddev: 0}`. To know the parameters of each initializer, please refer to TensorFlow's documentation.
- `recurrent_initializer` (default `'orthogonal'`): initializer for recurrent matrix weights.
- `bias_initializer` (default `'zeros'`): initializer for the bias vector. Options are: `constant`, `identity`, `zeros`, `ones`, `orthogonal`, `normal`, `uniform`, `truncated_normal`, `variance_scaling`, `glorot_normal`, `glorot_uniform`, `xavier_normal`, `xavier_uniform`, `he_normal`, `he_uniform`, `lecun_normal`, `lecun_uniform`. Alternatively it is possible to specify a dictionary with a key `type` that identifies the type of initializer and other keys for its parameters, e.g. `{type: normal, mean: 0, stddev: 0}`. To know the parameters of each initializer, please refer to TensorFlow's documentation.
- `weights_regularizer` (default `null`): regularizer function applied to the weights matrix. Valid values are `l1`, `l2` or `l1_l2`.
- `recurrent_regularizer` (default `null`): regularizer function applied to recurrent matrix weights.
- `bias_regularizer` (default `null`): regularizer function applied to the bias vector. Valid values are `l1`, `l2` or `l1_l2`.
- `activity_regularizer` (default `null`): regularizer function applied to the output of the layer. Valid values are `l1`, `l2` or `l1_l2`.
- `dropout` (default `0.0`): dropout rate.
- `recurrent_dropout` (default `0.0`): dropout rate for recurrent state.
- `initializer` (default `null`): the initializer to use. If `null`, the default initializer of each variable is used (`glorot_uniform` in most cases). Options are: `constant`, `identity`, `zeros`, `ones`, `orthogonal`, `normal`, `uniform`, `truncated_normal`, `variance_scaling`, `glorot_normal`, `glorot_uniform`, `xavier_normal`, `xavier_uniform`, `he_normal`, `he_uniform`, `lecun_normal`, `lecun_uniform`. Alternatively it is possible to specify a dictionary with a key `type` that identifies the type of initializer and other keys for its parameters, e.g. `{type: normal, mean: 0, stddev: 0}`. To know the parameters of each initializer, please refer to TensorFlow's documentation.
- `regularize` (default `true`): if `true` the embedding weights are added to the set of weights that get regularized by a regularization loss (if the `regularization_lambda` in `training` is greater than 0).
- `reduce_output` (default `last`): defines how to reduce the output tensor along the sequence length dimension if the rank of the tensor is greater than 2. Available values are: `sum`, `mean` or `avg`, `max`, `concat` (concatenates along the first dimension), `last` (returns the last vector of the first dimension) and `null` (which does not reduce and returns the full tensor).
Example H3 feature entry in the input features list using an RNN encoder:
```yaml
name: h3_column_name
type: h3
encoder: rnn
embedding_size: 10
embeddings_on_cpu: false
num_layers: 1
cell_type: rnn
state_size: 10
bidirectional: false
activation: tanh
recurrent_activation: sigmoid
use_bias: true
unit_forget_bias: true
weights_initializer: glorot_uniform
recurrent_initializer: orthogonal
bias_initializer: zeros
weights_regularizer: null
recurrent_regularizer: null
bias_regularizer: null
activity_regularizer: null
dropout: 0.0
recurrent_dropout: 0.0
initializer: null
regularize: true
reduce_output: last
```
### H3 Output Features and Decoders
There are no H3 decoders at the moment (WIP), so H3 cannot be used as an output feature.
### H3 Features Measures
As no H3 decoders are available at the moment, there are also no H3 measures.