
    fluid.regularizer


    L1Decay


    • paddle.fluid.regularizer.L1Decay
    • Alias of L1DecayRegularizer; see the sketch below.
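
    Because L1Decay is only another name for the same class, either spelling can be passed as an optimizer's regularization argument. A minimal sketch; the SGD optimizer and the 0.1 coefficient are arbitrary choices for illustration:

    import paddle.fluid as fluid

    # The two names construct the same regularizer type
    reg = fluid.regularizer.L1Decay(regularization_coeff=0.1)
    same_reg = fluid.regularizer.L1DecayRegularizer(regularization_coeff=0.1)

    # Either object can be handed to an optimizer
    optimizer = fluid.optimizer.SGD(learning_rate=1e-3, regularization=reg)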

    L1DecayRegularizer


    • class paddle.fluid.regularizer.L1DecayRegularizer(regularization_coeff=0.0)
    • Implements L1 weight decay regularization.

    L1 regularization sparsifies the weight matrix, driving many weights toward zero.

    The added loss term is computed as:

        loss = regularization_coeff * sum(|parameter|)
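
    For intuition only, the penalty above can be reproduced in NumPy; the weight values and the 0.1 coefficient below are made-up illustration data, not part of the API:

    import numpy as np

    w = np.array([0.5, -0.3, 0.0, 2.0])      # made-up weight values
    coeff = 0.1                               # stands in for regularization_coeff

    # L1 penalty: coefficient times the sum of absolute weights
    l1_penalty = coeff * np.sum(np.abs(w))    # 0.1 * (0.5 + 0.3 + 0.0 + 2.0) = 0.28
    print(l1_penalty)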

    • Parameters:
      • regularization_coeff (float) – the regularization coefficient

    Code example:
    import paddle.fluid as fluid

    main_prog = fluid.Program()
    startup_prog = fluid.Program()
    with fluid.program_guard(main_prog, startup_prog):
        data = fluid.layers.data(name='image', shape=[3, 28, 28], dtype='float32')
        label = fluid.layers.data(name='label', shape=[1], dtype='int64')
        hidden = fluid.layers.fc(input=data, size=128, act='relu')
        prediction = fluid.layers.fc(input=hidden, size=10, act='softmax')
        loss = fluid.layers.cross_entropy(input=prediction, label=label)
        avg_loss = fluid.layers.mean(loss)
        optimizer = fluid.optimizer.Adagrad(
            learning_rate=1e-4,
            regularization=fluid.regularizer.L1DecayRegularizer(
                regularization_coeff=0.1))
        optimizer.minimize(avg_loss)

    L2Decay


    • paddle.fluid.regularizer.L2Decay
    • Alias of L2DecayRegularizer.

    L2DecayRegularizer


    • class paddle.fluid.regularizer.L2DecayRegularizer(regularization_coeff=0.0)
    • Implements L2 weight decay regularization.

    Keeping the L2 norm of the weights small helps prevent overfitting to the training data.

    The added loss term is computed as:

        loss = 0.5 * regularization_coeff * sum(parameter^2)
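
    For intuition only, a NumPy sketch of the penalty above and of its gradient, which is the familiar weight-decay term regularization_coeff * parameter; the weight values and coefficient are made-up illustration data:

    import numpy as np

    w = np.array([0.5, -0.3, 0.0, 2.0])        # made-up weight values
    coeff = 0.1                                 # stands in for regularization_coeff

    # L2 penalty: half the coefficient times the sum of squared weights
    l2_penalty = 0.5 * coeff * np.sum(w ** 2)   # 0.5 * 0.1 * 4.34 = 0.217
    # Its gradient w.r.t. w is coeff * w, i.e. each weight decays toward zero
    l2_grad = coeff * w
    print(l2_penalty, l2_grad)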

    • Parameters:
      • regularization_coeff (float) – the regularization coefficient

    Code example:
    import paddle.fluid as fluid

    main_prog = fluid.Program()
    startup_prog = fluid.Program()
    with fluid.program_guard(main_prog, startup_prog):
        data = fluid.layers.data(name='image', shape=[3, 28, 28], dtype='float32')
        label = fluid.layers.data(name='label', shape=[1], dtype='int64')
        hidden = fluid.layers.fc(input=data, size=128, act='relu')
        prediction = fluid.layers.fc(input=hidden, size=10, act='softmax')
        loss = fluid.layers.cross_entropy(input=prediction, label=label)
        avg_loss = fluid.layers.mean(loss)
        optimizer = fluid.optimizer.Adagrad(
            learning_rate=1e-4,
            regularization=fluid.regularizer.L2DecayRegularizer(
                regularization_coeff=0.1))
        optimizer.minimize(avg_loss)