TensorFlow Implementation

You can implement a stacked autoencoder very much like a regular deep MLP. In particular, the same techniques we used in Chapter 11 for training deep nets can be applied. For example, the following code builds a stacked autoencoder for MNIST using He initialization, the ELU activation function, and l2 regularization. The code should look very familiar, except that there are no labels (no y):

    import tensorflow as tf
    from tensorflow.contrib.layers import fully_connected

    n_inputs = 28 * 28  # for MNIST
    n_hidden1 = 300
    n_hidden2 = 150  # codings
    n_hidden3 = n_hidden1
    n_outputs = n_inputs

    learning_rate = 0.01
    l2_reg = 0.001

    X = tf.placeholder(tf.float32, shape=[None, n_inputs])

    with tf.contrib.framework.arg_scope(
            [fully_connected],
            activation_fn=tf.nn.elu,
            weights_initializer=tf.contrib.layers.variance_scaling_initializer(),
            weights_regularizer=tf.contrib.layers.l2_regularizer(l2_reg)):
        hidden1 = fully_connected(X, n_hidden1)
        hidden2 = fully_connected(hidden1, n_hidden2)  # codings
        hidden3 = fully_connected(hidden2, n_hidden3)
        outputs = fully_connected(hidden3, n_outputs, activation_fn=None)

    reconstruction_loss = tf.reduce_mean(tf.square(outputs - X))  # MSE
    reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
    loss = tf.add_n([reconstruction_loss] + reg_losses)

    optimizer = tf.train.AdamOptimizer(learning_rate)
    training_op = optimizer.minimize(loss)

    init = tf.global_variables_initializer()
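
The arg_scope() block is just a convenience: it sets default arguments (activation function, weight initializer, weight regularizer) for every fully_connected() call inside it. As a rough sketch of what this expands to, the first hidden layer could equivalently be written with those arguments spelled out explicitly:

    # equivalent to the hidden1 line inside the arg_scope above (sketch)
    hidden1 = fully_connected(
        X, n_hidden1,
        activation_fn=tf.nn.elu,
        weights_initializer=tf.contrib.layers.variance_scaling_initializer(),
        weights_regularizer=tf.contrib.layers.l2_regularizer(l2_reg))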

You can then train the model as usual. Note that the digit labels (y_batch) are not used:

    from tensorflow.examples.tutorials.mnist import input_data
    mnist = input_data.read_data_sets("/tmp/data/")  # load the MNIST data used below

    n_epochs = 5
    batch_size = 150

    with tf.Session() as sess:
        init.run()
        for epoch in range(n_epochs):
            n_batches = mnist.train.num_examples // batch_size
            for iteration in range(n_batches):
                X_batch, y_batch = mnist.train.next_batch(batch_size)
                sess.run(training_op, feed_dict={X: X_batch})
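
Once training is done, you can check the reconstructions by feeding a few test images through the network and fetching outputs (and hidden2 if you also want the compressed codings). A minimal sketch, assuming mnist.test is available and the same session is still open:

    # run inside the same Session, after the training loop (sketch)
    X_test = mnist.test.images[:10]
    codings, reconstructions = sess.run([hidden2, outputs],
                                        feed_dict={X: X_test})
    print(codings.shape)          # (10, 150)
    print(reconstructions.shape)  # (10, 784)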