Trainer
Overview¶
The trainer section of the configuration lets you specify parameters that
configure the training process, like the number of epochs or the learning rate.
By default, the ECD trainer is used.
trainer:
  early_stop: 5
  learning_rate: 0.001
  epochs: 100
  batch_size: auto
  regularization_type: l2
  use_mixed_precision: false
  compile: false
  checkpoints_per_epoch: 0
  eval_steps: null
  effective_batch_size: auto
  gradient_accumulation_steps: auto
  regularization_lambda: 0.0
  enable_gradient_checkpointing: false
  validation_field: null
  validation_metric: null
  train_steps: null
  steps_per_checkpoint: 0
  max_batch_size: 1099511627776
  eval_batch_size: null
  evaluate_training_set: false
  should_shuffle: true
  increase_batch_size_on_plateau: 0
  increase_batch_size_on_plateau_patience: 5
  increase_batch_size_on_plateau_rate: 2.0
  increase_batch_size_eval_metric: loss
  increase_batch_size_eval_split: training
  learning_rate_scaling: linear
  bucketing_field: null
  skip_all_evaluation: false
  enable_profiling: false
  profiler:
    wait: 1
    warmup: 1
    active: 3
    repeat: 5
    skip_first: 0
  learning_rate_scheduler:
    decay: null
    decay_rate: 0.96
    decay_steps: 10000
    staircase: false
    reduce_on_plateau: 0
    reduce_on_plateau_patience: 10
    reduce_on_plateau_rate: 0.1
    warmup_evaluations: 0
    warmup_fraction: 0.0
    reduce_eval_metric: loss
    reduce_eval_split: training
    t_0: null
    t_mult: 1
    eta_min: 0
  optimizer:
    type: adam
    betas:
    - 0.9
    - 0.999
    eps: 1.0e-08
    weight_decay: 0.0
    amsgrad: false
  gradient_clipping:
    clipglobalnorm: 0.5
    clipnorm: null
    clipvalue: null
  layers_to_freeze_regex: null
For LLM fine-tuning, the finetune trainer is used:
trainer:
  type: finetune
  learning_rate: 0.001
  validation_field: null
  validation_metric: null
  early_stop: 5
  skip_all_evaluation: false
  enable_profiling: false
  profiler:
    wait: 1
    warmup: 1
    active: 3
    repeat: 5
    skip_first: 0
  learning_rate_scheduler:
    decay: null
    decay_rate: 0.96
    decay_steps: 10000
    staircase: false
    reduce_on_plateau: 0
    reduce_on_plateau_patience: 10
    reduce_on_plateau_rate: 0.1
    warmup_evaluations: 0
    warmup_fraction: 0.0
    reduce_eval_metric: loss
    reduce_eval_split: training
    t_0: null
    t_mult: 1
    eta_min: 0
  epochs: 100
  checkpoints_per_epoch: 0
  train_steps: null
  eval_steps: null
  steps_per_checkpoint: 0
  effective_batch_size: auto
  batch_size: 1
  max_batch_size: 1099511627776
  gradient_accumulation_steps: auto
  eval_batch_size: 2
  evaluate_training_set: false
  optimizer:
    type: adam
    betas:
    - 0.9
    - 0.999
    eps: 1.0e-08
    weight_decay: 0.0
    amsgrad: false
  regularization_type: l2
  regularization_lambda: 0.0
  should_shuffle: true
  increase_batch_size_on_plateau: 0
  increase_batch_size_on_plateau_patience: 5
  increase_batch_size_on_plateau_rate: 2.0
  increase_batch_size_eval_metric: loss
  increase_batch_size_eval_split: training
  gradient_clipping:
    clipglobalnorm: 0.5
    clipnorm: null
    clipvalue: null
  learning_rate_scaling: linear
  bucketing_field: null
  use_mixed_precision: false
  compile: false
  enable_gradient_checkpointing: false
  layers_to_freeze_regex: null
  base_learning_rate: 0.0
Trainer parameters¶
- early_stop (default: 5): Number of consecutive rounds of evaluation without any improvement on the validation_metric that triggers training to stop. Can be set to -1, which disables early stopping entirely.
- learning_rate (default: null): Controls how much to change the model in response to the estimated error each time the model weights are updated. If 'auto', the optimal learning rate is estimated by choosing the learning rate that produces the smallest non-diverging gradient update.
- epochs (default: 100): Number of epochs the algorithm is intended to be run over. Overridden if train_steps is set.
- batch_size (default: auto): The number of training examples utilized in one training step of the model. If 'auto', the batch size that maximizes training throughput (samples/sec) will be used. For CPU training, the tuned batch size is capped at 128, as the throughput benefits of large batch sizes are less noticeable without a GPU.
- regularization_type (default: l2): Type of regularization. Options: l1, l2, l1_l2, null.
- use_mixed_precision (default: false): Enable automatic mixed-precision (AMP) during training.
- compile (default: false): Whether to compile the model before training.
- checkpoints_per_epoch (default: 0): Number of checkpoints per epoch. For example, 2 -> checkpoints are written every half epoch. Note that it is invalid to specify both a non-zero steps_per_checkpoint and a non-zero checkpoints_per_epoch.
- eval_steps (default: null): The number of steps to use for evaluation. If None, the entire evaluation set will be used.
- effective_batch_size (default: auto): The total number of samples used to compute a single gradient update to the model weights. This differs from batch_size by taking gradient_accumulation_steps and the number of training worker processes into account. In practice, effective_batch_size = batch_size * gradient_accumulation_steps * num_workers. If 'auto', the effective batch size is derived implicitly from batch_size. If set explicitly, then one of batch_size or gradient_accumulation_steps must be set to something other than 'auto', and the other will be set following the formula above.
- gradient_accumulation_steps (default: auto): Number of steps to accumulate gradients over before performing a weight update.
- regularization_lambda (default: 0.0): Strength of the regularization.
- enable_gradient_checkpointing (default: false): Whether to enable gradient checkpointing, which trades compute for memory. This is useful for training very deep models with limited memory.
- validation_field (default: null): The field for which the validation_metric is used for validation-related mechanics like early stopping and parameter-change plateaus, as well as what hyperparameter optimization uses to determine the best trial. If unset (default), the first output feature is used. If explicitly specified, neither validation_field nor validation_metric is overwritten.
- validation_metric (default: null): Metric from validation_field that is used. If validation_field is not explicitly specified, this is overwritten to be the first output feature type's default_validation_metric, consistent with validation_field. If validation_metric is specified, then the first output feature that produces this metric is used as the validation_field.
- train_steps (default: null): Maximum number of training steps the algorithm is intended to be run over. Unset by default. If set, it will override epochs; if left unset, epochs is used to determine training length.
- steps_per_checkpoint (default: 0): How often the model is checkpointed. Also dictates the maximum evaluation frequency. If 0, the model is checkpointed after every epoch.
- max_batch_size (default: 1099511627776): Auto batch size tuning and increasing the batch size on plateau will be capped at this value. The default value is 2^40.
- eval_batch_size (default: null): Size of batch to pass to the model for evaluation. If it is 0 or None, the same value as batch_size is used. This is useful to speed up evaluation with a much bigger batch size than training, if enough memory is available. If 'auto', the biggest batch size (power of 2) that can fit in memory will be used.
- evaluate_training_set (default: false): Whether to evaluate on the entire training set during evaluation. By default, training metrics are computed at the end of each training step and accumulated up to the evaluation phase. In practice, computing training set metrics during training is up to 30% faster than running a separate evaluation pass over the training set, but results in noisier training metrics, particularly during the earlier epochs. It's recommended to only set this to true if you need very exact training set metrics and are willing to pay a significant performance penalty for them.
- should_shuffle (default: true): Whether to shuffle batches during training.
- increase_batch_size_on_plateau (default: 0): The number of times to increase the batch size on a plateau.
- increase_batch_size_on_plateau_patience (default: 5): How many epochs to wait for before increasing the batch size.
- increase_batch_size_on_plateau_rate (default: 2.0): Rate at which the batch size increases.
- increase_batch_size_eval_metric (default: loss): Which metric to listen on for increasing the batch size.
- increase_batch_size_eval_split (default: training): Which dataset split to listen on for increasing the batch size.
- learning_rate_scaling (default: linear): Scale by which to increase the learning rate as the number of distributed workers increases. Traditionally the learning rate is scaled linearly with the number of workers to reflect the proportion by which the effective batch size is increased. For very large batch sizes, a softer square-root scale can sometimes lead to better model performance. If the learning rate is hand-tuned for a given number of workers, setting this value to constant disables scale-up. Options: constant, sqrt, linear.
- bucketing_field (default: null): Feature to use for bucketing datapoints.
- skip_all_evaluation (default: false): Whether to skip all evaluation.
- enable_profiling (default: false): Whether to enable profiling of the training process.
- profiler (default: null): Configuration for the profiler.
- profiler.wait (default: 1): The number of steps to wait before profiling.
- profiler.warmup (default: 1): The number of steps for profiler warmup after waiting finishes.
- profiler.active (default: 3): The number of steps that are actively recorded. Values over 10 will dramatically slow down TensorBoard loading.
- profiler.repeat (default: 5): The optional number of profiling cycles. Use 0 to profile the entire training run.
- profiler.skip_first (default: 0): The number of steps to skip at the beginning of training.
- learning_rate_scheduler (default: null): Configuration for the learning rate scheduler.
- learning_rate_scheduler.decay (default: null): Turn on decay of the learning rate. Options: linear, exponential, cosine, null.
- learning_rate_scheduler.decay_rate (default: 0.96): Decay per epoch (%): factor by which to decrease the learning rate.
- learning_rate_scheduler.decay_steps (default: 10000): The number of steps to take in the exponential learning rate decay.
- learning_rate_scheduler.staircase (default: false): Decays the learning rate at discrete intervals.
- learning_rate_scheduler.reduce_on_plateau (default: 0): How many times to reduce the learning rate when the algorithm hits a plateau (i.e., the performance on the training set does not improve).
- learning_rate_scheduler.reduce_on_plateau_patience (default: 10): How many evaluation steps have to pass before the learning rate reduces when reduce_on_plateau > 0.
- learning_rate_scheduler.reduce_on_plateau_rate (default: 0.1): Rate at which the learning rate is reduced when reduce_on_plateau > 0.
- learning_rate_scheduler.warmup_evaluations (default: 0): Number of evaluation steps to warm up the learning rate for.
- learning_rate_scheduler.warmup_fraction (default: 0.0): Fraction of total training steps to warm up the learning rate for.
- learning_rate_scheduler.reduce_eval_metric (default: loss): Metric whose plateau triggers reducing the learning rate when reduce_on_plateau > 0.
- learning_rate_scheduler.reduce_eval_split (default: training): Which dataset split to listen on for reducing the learning rate when reduce_on_plateau > 0.
- learning_rate_scheduler.t_0 (default: null): Number of steps before the first restart for cosine annealing decay. If not specified, it will be set to steps_per_checkpoint.
- learning_rate_scheduler.t_mult (default: 1): Period multiplier after each restart for cosine annealing decay. Defaults to 1, i.e., restart every t_0 steps. If set to a larger value, the period between restarts increases by that multiplier. For example, if t_mult is 2, the periods would be t_0, 2*t_0, 2^2*t_0, 2^3*t_0, etc.
- learning_rate_scheduler.eta_min (default: 0): Minimum learning rate allowed for cosine annealing decay.
- optimizer (default: null): See Optimizer parameters for details.
- gradient_clipping (default: null): Configuration for gradient clipping.
- gradient_clipping.clipglobalnorm (default: 0.5): Maximum allowed global norm of the gradients.
- gradient_clipping.clipnorm (default: null): Maximum allowed norm of the gradients.
- gradient_clipping.clipvalue (default: null): Maximum allowed value of the gradients.
- layers_to_freeze_regex (default: null): Freeze specific layers based on the provided regex. Freezing specific layers can improve a pretrained model's performance in a number of ways. At a basic level, freezing early layers can prevent overfitting by retaining more general features (beneficial for small datasets). It can also reduce computational resource use and lower overall training time due to fewer gradient calculations.
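The effective batch size formula above can be made concrete with a small sketch (the values are illustrative, not recommendations). With batch_size and gradient_accumulation_steps set explicitly and a single training worker, the effective batch size works out to 128 * 4 * 1 = 512:

```yaml
trainer:
  batch_size: 128                # samples per forward/backward pass
  gradient_accumulation_steps: 4 # accumulate 4 steps before each weight update
  # effective_batch_size = batch_size * gradient_accumulation_steps * num_workers
  # with 1 worker: 128 * 4 * 1 = 512
```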
- type (default: finetune): Options: finetune.
- learning_rate (default: null): Controls how much to change the model in response to the estimated error each time the model weights are updated. If 'auto', the optimal learning rate is estimated by choosing the learning rate that produces the smallest non-diverging gradient update.
- batch_size (default: 1): The number of training examples utilized in one training step of the model. If 'auto', the batch size that maximizes training throughput (samples/sec) will be used.
- eval_batch_size (default: 2): Size of batch to pass to the model for evaluation. If it is 0 or None, the same value as batch_size is used. This is useful to speed up evaluation with a much bigger batch size than training, if enough memory is available. If 'auto', the biggest batch size (power of 2) that can fit in memory will be used.
- base_learning_rate (default: 0.0): Base learning rate used for training in the LLM trainer.

All other parameters (early_stop, learning_rate_scheduler, profiler, optimizer, gradient_clipping, and so on) have the same meaning as in the ECD trainer parameters above; note the finetune-specific defaults for batch_size (1) and eval_batch_size (2).
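As an illustration of the learning rate scheduler parameters documented above, the following sketch configures cosine annealing decay with warm restarts and a linear warmup (all values are illustrative, not recommendations):

```yaml
trainer:
  learning_rate: 0.001
  learning_rate_scheduler:
    decay: cosine
    t_0: 5000              # steps before the first restart
    t_mult: 2              # restart periods: 5000, 10000, 20000, ...
    eta_min: 1.0e-06       # floor for the annealed learning rate
    warmup_fraction: 0.05  # warm up over the first 5% of training steps
```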
Optimizer parameters¶
The available optimizers wrap the ones available in PyTorch. For details about the parameters that can be used to configure different optimizers, please refer to the PyTorch documentation.
The learning_rate parameter used by the optimizer comes from the trainer section.
Other optimizer-specific parameters, shown with their Ludwig default settings, follow:
sgd¶
optimizer:
  type: sgd
  momentum: 0.0
  weight_decay: 0.0
  dampening: 0.0
  nesterov: false

- momentum (default: 0.0): Momentum factor.
- weight_decay (default: 0.0): Weight decay (L2 penalty).
- dampening (default: 0.0): Dampening for momentum.
- nesterov (default: false): Enables Nesterov momentum.
lbfgs¶
optimizer:
  type: lbfgs
  max_iter: 20
  max_eval: null
  tolerance_grad: 1.0e-07
  tolerance_change: 1.0e-09
  history_size: 100
  line_search_fn: null

- max_iter (default: 20): Maximum number of iterations per optimization step.
- max_eval (default: null): Maximum number of function evaluations per optimization step. Default: max_iter * 1.25.
- tolerance_grad (default: 1e-07): Termination tolerance on first-order optimality.
- tolerance_change (default: 1e-09): Termination tolerance on function value/parameter changes.
- history_size (default: 100): Update history size.
- line_search_fn (default: null): Line search function to use. Options: strong_wolfe, null.
adam¶
optimizer:
  type: adam
  betas:
  - 0.9
  - 0.999
  eps: 1.0e-08
  weight_decay: 0.0
  amsgrad: false

- betas (default: [0.9, 0.999]): Coefficients used for computing running averages of the gradient and its square.
- eps (default: 1e-08): Term added to the denominator to improve numerical stability.
- weight_decay (default: 0.0): Weight decay (L2 penalty).
- amsgrad (default: false): Whether to use the AMSGrad variant of this algorithm from the paper 'On the Convergence of Adam and Beyond'.
adamw¶
optimizer:
  type: adamw
  betas:
  - 0.9
  - 0.999
  eps: 1.0e-08
  weight_decay: 0.0
  amsgrad: false

- betas (default: [0.9, 0.999]): Coefficients used for computing running averages of the gradient and its square.
- eps (default: 1e-08): Term added to the denominator to improve numerical stability.
- weight_decay (default: 0.0): Weight decay (L2 penalty).
- amsgrad (default: false): Whether to use the AMSGrad variant of this algorithm from the paper 'On the Convergence of Adam and Beyond'.
adadelta¶
optimizer:
  type: adadelta
  rho: 0.9
  eps: 1.0e-06
  weight_decay: 0.0

- rho (default: 0.9): Coefficient used for computing a running average of squared gradients.
- eps (default: 1e-06): Term added to the denominator to improve numerical stability.
- weight_decay (default: 0.0): Weight decay (L2 penalty).
adagrad¶
optimizer:
  type: adagrad
  initial_accumulator_value: 0
  lr_decay: 0
  weight_decay: 0
  eps: 1.0e-10

- initial_accumulator_value (default: 0): Initial value for the sum-of-squared-gradients accumulator.
- lr_decay (default: 0): Learning rate decay.
- weight_decay (default: 0): Weight decay (L2 penalty).
- eps (default: 1e-10): Term added to the denominator to improve numerical stability.
adamax¶
optimizer:
  type: adamax
  betas:
  - 0.9
  - 0.999
  eps: 1.0e-08
  weight_decay: 0.0

- betas (default: [0.9, 0.999]): Coefficients used for computing running averages of the gradient and its square.
- eps (default: 1e-08): Term added to the denominator to improve numerical stability.
- weight_decay (default: 0.0): Weight decay (L2 penalty).
nadam¶
optimizer:
  type: nadam
  betas:
  - 0.9
  - 0.999
  eps: 1.0e-08
  weight_decay: 0.0
  momentum_decay: 0.004

- betas (default: [0.9, 0.999]): Coefficients used for computing running averages of the gradient and its square.
- eps (default: 1e-08): Term added to the denominator to improve numerical stability.
- weight_decay (default: 0.0): Weight decay (L2 penalty).
- momentum_decay (default: 0.004): Momentum decay.
rmsprop¶
optimizer:
  type: rmsprop
  momentum: 0.0
  alpha: 0.99
  eps: 1.0e-08
  centered: false
  weight_decay: 0.0

- momentum (default: 0.0): Momentum factor.
- alpha (default: 0.99): Smoothing constant.
- eps (default: 1e-08): Term added to the denominator to improve numerical stability.
- centered (default: false): If true, computes the centered RMSProp, where the gradient is normalized by an estimate of its variance.
- weight_decay (default: 0.0): Weight decay (L2 penalty).
Note
Gradient clipping is also configurable, through the trainer's gradient_clipping section, with the following parameters:
trainer:
  gradient_clipping:
    clipglobalnorm: 0.5
    clipnorm: null
    clipvalue: null
Training length¶
The length of the training process is configured by:
- epochs (default: 100): One epoch is one pass through the entire dataset. By default, epochs is 100, which means that the training process will run for a maximum of 100 epochs before terminating.
- train_steps (default: null): The maximum number of steps to train for, using one mini-batch per step. By default this is unset, and epochs will be used to determine training length.
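For example, to cap training at a fixed number of mini-batch steps regardless of dataset size (the step count is illustrative), set train_steps, which takes precedence over epochs:

```yaml
trainer:
  train_steps: 10000  # stop after at most 10000 steps, overriding epochs
```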
Tip
In general, it's a good idea to set up a long training runway, relying on
early stopping criteria (early_stop) to stop training when there
hasn't been any improvement for a long time.
Early stopping¶
Machine learning models, when trained for too long, are often prone to overfitting. It's generally a good policy to set up some early stopping criteria, since it isn't useful to keep training a model after it has maximized what it can learn, at the expense of its ability to generalize to new data.
How early stopping works in Ludwig¶
By default, Ludwig sets trainer.early_stop=5, which means that if there have
been 5 consecutive rounds of evaluation where there hasn't been any
improvement on the validation subset, then training will terminate.
Ludwig runs evaluation once per checkpoint, which by default is once per epoch.
Checkpoint frequency can be configured using checkpoints_per_epoch or
steps_per_checkpoint (both default: 0, i.e., one checkpoint per epoch). See
this section for more details.
Changing the early stopping metric¶
The metric that dictates early stopping is determined by
trainer.validation_field and trainer.validation_metric. By default, early
stopping uses the combined loss on the validation subset.
trainer:
  validation_field: combined
  validation_metric: loss
However, this can be configured to use other metrics. For example, if we had an
output feature called recommended, then we can configure early stopping on that
output feature's accuracy like so:
trainer:
  validation_field: recommended
  validation_metric: accuracy
Disabling early stopping¶
trainer.early_stop can be set to -1, which disables early stopping entirely.
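For example:

```yaml
trainer:
  early_stop: -1  # never stop early; train until epochs/train_steps is reached
```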
Checkpoint-evaluation frequency¶
Evaluation is run every time the model is checkpointed.
By default, checkpoint-evaluation will occur once every epoch.
The frequency of checkpoint-evaluation can be configured using:
- steps_per_checkpoint (default: 0): checkpoint every n training steps
- checkpoints_per_epoch (default: 0): checkpoint n times per epoch
Note
It is invalid to specify both non-zero steps_per_checkpoint and non-zero
checkpoints_per_epoch.
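For example, to run checkpoint-evaluation every 1000 training steps (an illustrative value):

```yaml
trainer:
  steps_per_checkpoint: 1000
  # alternatively, set checkpoints_per_epoch: 2 to evaluate twice per epoch
  # (but never both, since specifying both non-zero values is invalid)
```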
Tip
Running evaluation once per epoch is an appropriate fit for small datasets that fit in memory and train quickly. However, this can be a poor fit for unstructured datasets, which tend to be much larger, and train more slowly due to larger models.
Running evaluation too frequently can be wasteful while running evaluation not frequently enough can be uninformative. In large-scale training runs, it's common for evaluation to be configured to run on a sub-epoch time scale, or every few thousand steps.
We recommend configuring evaluation such that new evaluation results are available at least several times an hour. In general, it is not necessary for models to train over the entirety of a dataset, nor evaluate over the entirety of a test set, to produce useful monitoring metrics and signals to indicate model performance.
Increasing throughput on GPUs¶
Increase batch size¶
trainer:
  batch_size: auto
Users training on GPUs can often increase training throughput by increasing
the batch_size so that more examples are computed every training step. Set
batch_size to auto to use the largest batch size that can fit in memory.
Use mixed precision¶
trainer:
  use_mixed_precision: true
Speeds up training by using float16 parameters where it makes sense. Mixed precision training on GPU can dramatically speed up training, with some risk to model convergence. In practice, it works particularly well when fine-tuning a pretrained model like a HuggingFace transformer.