
Convolutional autoencoder only learning one channel

IanQ · Technical Community · 6 years ago

The relevant parts of my code are:

Setup

    self.encoder_input = tf.placeholder(tf.float32, input_shape, name='x')
    self.regularizer = tf.contrib.layers.l2_regularizer(scale=0.1)
    

    with tf.variable_scope("encoder"):
       conv1 = tf.layers.conv2d(self.encoder_input, filters=32, kernel_size=(2, 2),
       activation=tf.nn.relu, padding='same', kernel_regularizer=self.regularizer)
       mp1 = tf.layers.max_pooling2d(conv1, pool_size=(4, 1), strides=(4, 1))
       conv2 = tf.layers.conv2d(mp1, filters=64, kernel_size=(2, 2),
       activation=None, padding='same', kernel_regularizer=self.regularizer)
       return conv2
    
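For reference, the input shape is not shown above, but the resize targets in the decoder below and its final two-filter conv imply an NHWC input of roughly (None, 100, 2, 2); that exact value is an assumption. A minimal standalone sketch under that assumption, tracing the shapes through the encoder (regularizer omitted since it does not affect shapes):

    import tensorflow as tf  # TF 1.x API, matching the snippets above

    # Assumed input: (batch, height=100, width=2, channels=2)
    x = tf.placeholder(tf.float32, (None, 100, 2, 2), name='x')

    conv1 = tf.layers.conv2d(x, filters=32, kernel_size=(2, 2),
                             activation=tf.nn.relu, padding='same')
    print(conv1.shape)  # (?, 100, 2, 32): 'same' padding keeps height and width

    mp1 = tf.layers.max_pooling2d(conv1, pool_size=(4, 1), strides=(4, 1))
    print(mp1.shape)    # (?, 25, 2, 32): pooling shrinks only the height

    conv2 = tf.layers.conv2d(mp1, filters=64, kernel_size=(2, 2),
                             activation=None, padding='same')
    print(conv2.shape)  # (?, 25, 2, 64): bottleneck handed to the decoder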

where conv2 is then fed into the decoder:

    def _construct_decoder(self, encoded):
        with tf.variable_scope("decoder"):
            upsample1 = tf.image.resize_images(encoded, size=(50, 2),
                                               method=tf.image.ResizeMethod.BILINEAR)
            conv4 = tf.layers.conv2d(inputs=upsample1, filters=32, kernel_size=(2, 2),
                                     padding='same', activation=tf.nn.relu,
                                     kernel_regularizer=self.regularizer)
            upsample2 = tf.image.resize_images(conv4, size=(100, 2),
                                               method=tf.image.ResizeMethod.BILINEAR)
            conv5 = tf.layers.conv2d(inputs=upsample2, filters=2, kernel_size=(2, 2),
                                     padding='same', activation=None,
                                     kernel_regularizer=self.regularizer)
            self.decoder = conv5
    
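The `return conv2` and `self.decoder` above suggest these snippets live inside a model class; a minimal sketch of how the pieces might be wired together, where the class name and `_construct_encoder` are hypothetical and only `_construct_decoder` comes from the post:

    class ConvAutoencoder(object):  # hypothetical name, not from the post
        def __init__(self, input_shape, lr):
            self.lr = lr
            self.encoder_input = tf.placeholder(tf.float32, input_shape, name='x')
            self.regularizer = tf.contrib.layers.l2_regularizer(scale=0.1)
            encoded = self._construct_encoder()  # the "encoder" block above, ending in `return conv2`
            self._construct_decoder(encoded)     # sets self.decoder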

    base_loss = tf.losses.mean_squared_error(labels=self.encoder_input, predictions=self.decoder)
    reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
    loss = tf.add_n([base_loss] + reg_losses, name="loss")
    
    cost = tf.reduce_mean(loss)
    tf.summary.scalar('cost', cost)
    optimizer = tf.train.AdamOptimizer(self.lr)
    
    grads = optimizer.compute_gradients(cost)
    # Update the weights w.r.t. the gradients
    train_op = optimizer.apply_gradients(grads)
    # Log each gradient with tf.summary.histogram
    # (each entry of `grads` is a (gradient, variable) pair)
    for grad, var in grads:
        tf.summary.histogram("{}-grad".format(var.name), grad)
    
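To run the graph, a minimal training-loop sketch; the batch data, step count, and log directory are stand-ins, and the batch shape follows the same assumed (None, 100, 2, 2) input as above:

    import numpy as np

    merged = tf.summary.merge_all()
    # Placeholder data under the assumed (None, 100, 2, 2) input shape
    batch = np.random.rand(16, 100, 2, 2).astype(np.float32)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        writer = tf.summary.FileWriter('./logs', sess.graph)
        for step in range(1000):  # step count is arbitrary
            _, batch_cost, summary = sess.run([train_op, cost, merged],
                                              feed_dict={self.encoder_input: batch})
            writer.add_summary(summary, step)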

I know it isn't learning the second channel because I plot the max, min, standard deviation, etc. of each channel to see the difference between the actual and predicted values. I really can't tell why it learns the first channel but not the second. Does anyone have any ideas?
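For reference, the per-channel check described above amounts to something like the following sketch; `actual` and `predicted` are placeholder names for NHWC numpy arrays, not names from the code above:

    import numpy as np

    def per_channel_stats(actual, predicted):
        """Compare simple statistics channel by channel (both arrays are NHWC)."""
        for c in range(actual.shape[-1]):
            a, p = actual[..., c], predicted[..., c]
            print("channel %d | actual max/min/std: %.4f %.4f %.4f | "
                  "predicted max/min/std: %.4f %.4f %.4f"
                  % (c, a.max(), a.min(), a.std(), p.max(), p.min(), p.std()))

    # e.g. predicted = sess.run(self.decoder, feed_dict={self.encoder_input: batch})
    #      per_channel_stats(batch, predicted)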

0 replies | as of 6 years ago