• learning_rate_scheduler
    • cosine_decay
    • exponential_decay
    • inverse_time_decay
    • linear_lr_warmup
    • natural_exp_decay
    • noam_decay
    • piecewise_decay
    • polynomial_decay

    learning_rate_scheduler

    cosine_decay

    • paddle.fluid.layers.cosine_decay(learning_rate, step_each_epoch, epochs)
    • Adjusts the learning rate using cosine decay.

    When training a model, it is recommended to lower the learning rate as training progresses. With this method, the learning rate is decayed according to the following cosine decay schedule:

    decayed_lr = learning_rate * 0.5 * (cos(epoch * pi / epochs) + 1)

    • Parameters:
      • learning_rate (Variable|float) - The initial learning rate.
      • step_each_epoch (int) - The number of steps in each epoch.
      • epochs - The total number of epochs.

    Code example

    import paddle.fluid as fluid
    base_lr = 0.1
    lr = fluid.layers.cosine_decay(
        learning_rate=base_lr, step_each_epoch=10000, epochs=120)
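
    As a sanity check on the formula above, the schedule can be tabulated with a short numpy sketch (illustrative only, not part of the Paddle API):

    import numpy as np
    base_lr, epochs = 0.1, 120
    for epoch in [0, 60, 119]:
        # lr starts at base_lr, reaches half of it mid-way, and approaches 0
        lr = base_lr * 0.5 * (np.cos(epoch * np.pi / epochs) + 1)
        print(epoch, lr)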

    exponential_decay

    • paddle.fluid.layers.exponential_decay(learning_rate, decay_steps, decay_rate, staircase=False)
    • Applies exponential decay to the learning rate. When training a model, it is recommended to lower the learning rate during training. The learning rate is decayed by decay_rate every decay_steps steps.
    if staircase == True:
        decayed_learning_rate = learning_rate * decay_rate ^ floor(global_step / decay_steps)
    else:
        decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)
    • Parameters:
      • learning_rate (Variable|float) - The initial learning rate.
      • decay_steps (int) - See the decay computation above.
      • decay_rate (float) - The decay rate. See the decay computation above.
      • staircase (Boolean) - If True, decays the learning rate at discrete intervals. Default: False

    Returns: the decayed learning rate

    Return type: Variable

    Code example

    import paddle.fluid as fluid
    base_lr = 0.1
    sgd_optimizer = fluid.optimizer.SGD(
        learning_rate=fluid.layers.exponential_decay(
            learning_rate=base_lr,
            decay_steps=10000,
            decay_rate=0.5,
            staircase=True))
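
    To see the effect of staircase, the two branches of the formula above can be evaluated directly with numpy (an illustrative sketch, not part of the Paddle API):

    import numpy as np
    base_lr, decay_rate, decay_steps = 0.1, 0.5, 10000
    for step in [0, 5000, 10000, 15000]:
        smooth = base_lr * decay_rate ** (step / decay_steps)
        stair = base_lr * decay_rate ** np.floor(step / decay_steps)
        # staircase=True keeps the rate constant within each decay_steps window
        print(step, smooth, stair)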

    inverse_time_decay

    • paddle.fluid.layers.inverse_time_decay(learning_rate, decay_steps, decay_rate, staircase=False)
    • Applies inverse time decay to the initial learning rate.

    When training a model, it is best to lower the learning rate during training. Executing this function applies an inverse time decay function to the initial learning rate.

    if staircase == True:
        decayed_learning_rate = learning_rate / (1 + decay_rate * floor(global_step / decay_steps))
    else:
        decayed_learning_rate = learning_rate / (1 + decay_rate * global_step / decay_steps)
    • Parameters:
      • learning_rate (Variable|float) - The initial learning rate.
      • decay_steps (int) - See the decay computation above.
      • decay_rate (float) - The decay rate. See the decay computation above.
      • staircase (Boolean) - If True, decays the learning rate at discrete intervals. Default: False

    Returns: the decayed learning rate

    Return type: Variable

    Code example:

    import paddle.fluid as fluid
    base_lr = 0.1
    sgd_optimizer = fluid.optimizer.SGD(
        learning_rate=fluid.layers.inverse_time_decay(
            learning_rate=base_lr,
            decay_steps=10000,
            decay_rate=0.5,
            staircase=True))
    sgd_optimizer.minimize(avg_cost)  # avg_cost: the loss Variable defined elsewhere in the program
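
    For reference, the continuous branch of the formula can be tabulated in plain Python (an illustrative sketch, not part of the Paddle API):

    base_lr, decay_rate, decay_steps = 0.1, 0.5, 10000
    for step in [0, 10000, 20000, 40000]:
        lr = base_lr / (1 + decay_rate * step / decay_steps)
        print(step, lr)  # 0.1, ~0.0667, 0.05, ~0.0333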

    linear_lr_warmup

    • paddle.fluid.layers.linear_lr_warmup(learning_rate, warmup_steps, start_lr, end_lr)
    • Applies linear learning rate warm-up as an initial adjustment before the regular learning rate schedule takes effect.
    if global_step < warmup_steps:
        linear_step = end_lr - start_lr
        lr = start_lr + linear_step * (global_step / warmup_steps)
    • Parameters:
      • learning_rate (float|Variable) - The learning rate, as a float value or a Variable.
      • warmup_steps (int) - The number of steps in the warm-up phase.
      • start_lr (float) - The starting learning rate of the warm-up.
      • end_lr (float) - The final learning rate of the warm-up.

    Returns: the learning rate after warm-up.

    Code example

    import paddle.fluid as fluid
    boundaries = [100, 200]
    lr_steps = [0.1, 0.01, 0.001]
    warmup_steps = 50
    start_lr = 1. / 3.
    end_lr = 0.1
    decayed_lr = fluid.layers.linear_lr_warmup(
        fluid.layers.piecewise_decay(boundaries, lr_steps),
        warmup_steps, start_lr, end_lr)
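
    The combined behavior of the warm-up and the wrapped piecewise schedule in this example can be sketched in plain Python, assuming the wrapped schedule takes over once global_step reaches warmup_steps; the helper lr_at below is hypothetical, not a Paddle API:

    import numpy as np
    boundaries, lr_steps = [100, 200], [0.1, 0.01, 0.001]
    warmup_steps, start_lr, end_lr = 50, 1. / 3., 0.1

    def lr_at(step):  # hypothetical helper for illustration
        if step < warmup_steps:
            # linear ramp from start_lr towards end_lr
            return start_lr + (end_lr - start_lr) * (step / warmup_steps)
        # afterwards the piecewise_decay schedule applies
        return lr_steps[int(np.searchsorted(boundaries, step, side='right'))]

    for step in [0, 25, 50, 150, 250]:
        print(step, lr_at(step))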

    natural_exp_decay

    • paddle.fluid.layers.natural_exp_decay(learning_rate, decay_steps, decay_rate, staircase=False)
    • Applies natural exponential decay to the initial learning rate.
    if not staircase:
        decayed_learning_rate = learning_rate * exp(- decay_rate * (global_step / decay_steps))
    else:
        decayed_learning_rate = learning_rate * exp(- decay_rate * floor(global_step / decay_steps))
    • Parameters:
      • learning_rate - A scalar float32 value or a Variable. The initial learning rate used in training.
      • decay_steps - A Python int32 number.
      • decay_rate - A Python float number.
      • staircase - Boolean. If set to True, decays the learning rate every decay_steps.

    Returns: the decayed learning rate

    Code example:

    import paddle.fluid as fluid
    base_lr = 0.1
    sgd_optimizer = fluid.optimizer.SGD(
        learning_rate=fluid.layers.natural_exp_decay(
            learning_rate=base_lr,
            decay_steps=10000,
            decay_rate=0.5,
            staircase=True))
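
    The continuous branch of this formula can likewise be tabulated with numpy (an illustrative sketch, not part of the Paddle API):

    import numpy as np
    base_lr, decay_rate, decay_steps = 0.1, 0.5, 10000
    for step in [0, 10000, 20000]:
        lr = base_lr * np.exp(-decay_rate * step / decay_steps)
        print(step, lr)  # 0.1, ~0.0607, ~0.0368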

    noam_decay

    • paddle.fluid.layers.noam_decay(d_model, warmup_steps)
    • The Noam decay method. A numpy implementation of noam decay is shown below.
    import numpy as np

    # set the hyperparameters
    d_model = 2
    current_steps = 20
    warmup_steps = 200

    # compute the decayed learning rate
    lr_value = np.power(d_model, -0.5) * np.min([
        np.power(current_steps, -0.5),
        np.power(warmup_steps, -1.5) * current_steps])

    Please refer to Attention Is All You Need.

    • Parameters:
      • d_model (Variable) - The input and output dimension of the model.
      • warmup_steps (Variable) - A hyperparameter.

    Returns: the decayed learning rate

    Code example

    import paddle.fluid as fluid
    warmup_steps = 100
    learning_rate = 0.01
    lr = fluid.layers.learning_rate_scheduler.noam_decay(
        1 / (warmup_steps * (learning_rate ** 2)),
        warmup_steps)
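
    Plugging this choice of d_model into the numpy formula above suggests why 1 / (warmup_steps * (learning_rate ** 2)) is used: the schedule then peaks at exactly learning_rate when the step count reaches warmup_steps (an illustrative check, not part of the Paddle API):

    import numpy as np
    warmup_steps, learning_rate = 100, 0.01
    d_model = 1 / (warmup_steps * (learning_rate ** 2))
    for step in [1, 50, 100, 400]:
        lr = np.power(d_model, -0.5) * min(
            np.power(step, -0.5), np.power(warmup_steps, -1.5) * step)
        print(step, lr)  # rises during warm-up, peaks at 0.01, then decays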

    piecewise_decay

    • paddle.fluid.layers.piecewise_decay(boundaries, values)
    • Applies piecewise decay to the initial learning rate.

    The algorithm can be described by the following code.

    boundaries = [10000, 20000]
    values = [1.0, 0.5, 0.1]
    if step < 10000:
        learning_rate = 1.0
    elif 10000 <= step < 20000:
        learning_rate = 0.5
    else:
        learning_rate = 0.1
    • Parameters:
      • boundaries - A list of numbers representing step boundaries.
      • values - A list of learning rate values; one is selected according to the step boundaries.

    Returns: the decayed learning rate

    Code example

    import paddle.fluid as fluid
    boundaries = [10000, 20000]
    values = [1.0, 0.5, 0.1]
    optimizer = fluid.optimizer.Momentum(
        momentum=0.9,
        learning_rate=fluid.layers.piecewise_decay(boundaries=boundaries, values=values),
        regularization=fluid.regularizer.L2Decay(1e-4))

    polynomial_decay

    • paddle.fluid.layers.polynomial_decay(learning_rate, decay_steps, end_learning_rate=0.0001, power=1.0, cycle=False)
    • Applies polynomial decay to the initial learning rate.
    if cycle:
        decay_steps = decay_steps * ceil(global_step / decay_steps)
    else:
        global_step = min(global_step, decay_steps)
    decayed_learning_rate = (learning_rate - end_learning_rate) *
        (1 - global_step / decay_steps) ^ power + end_learning_rate
    • Parameters:
      • learning_rate (Variable|float32) - A scalar float32 value or a Variable. The initial learning rate used in training.
      • decay_steps (int32) - A Python int32 number.
      • end_learning_rate (float) - A Python float number.
      • power (float) - A Python float number.
      • cycle (bool) - If set to True, the decay restarts after every decay_steps.

    Returns: the decayed learning rate

    Return type: Variable

    Code example

    import paddle.fluid as fluid
    start_lr = 0.01
    total_step = 5000
    end_lr = 0
    lr = fluid.layers.polynomial_decay(
        start_lr, total_step, end_lr, power=1)
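
    With these settings (power=1, end_lr=0) the schedule is a straight line from start_lr down to 0 over total_step steps, which can be verified against the formula above (an illustrative sketch, not part of the Paddle API):

    start_lr, end_lr, total_step, power = 0.01, 0.0, 5000, 1.0
    for step in [0, 2500, 5000, 7500]:
        gs = min(step, total_step)  # cycle=False clamps global_step
        lr = (start_lr - end_lr) * (1 - gs / total_step) ** power + end_lr
        print(step, lr)  # 0.01, 0.005, 0.0, 0.0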