
Does TensorFlow allow LSTM deconvolution (ConvLSTM2D), just as it allows 2D convolution?

  •  1
  • Asif Shahriyar Sushmit  · Tech community  · 6 years ago

    I am trying to scale up a network. For the convolutional part I am using Keras's ConvLSTM2D. Is there a way to perform the deconvolution (i.e. an LSTMDeConv2D?)
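
    For concreteness, here is a minimal sketch of the kind of convolutional part I mean (the frame size and filter count below are just placeholders):

    from keras.models import Sequential
    from keras.layers import ConvLSTM2D

    # Illustrative only: a ConvLSTM2D layer consuming sequences of 64x64 single-channel frames
    model = Sequential()
    model.add(ConvLSTM2D(filters=32, kernel_size=(3, 3), padding='same',
                         return_sequences=True,
                         input_shape=(None, 64, 64, 1)))  # (timesteps, rows, cols, channels)
    model.summary()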

    2 replies  |  as of 6 years ago
        1
  •  0
  •   Stefan    6 years ago

    It should be possible to wrap any model with the TimeDistributed wrapper. So you can build a deconv model and apply it, via the TimeDistributed wrapper, to the output of the LSTM (which is a sequence of vectors).

    An example: first create a deconv network using Conv2DTranspose layers.

    from keras.models import Model
    from keras.layers import LSTM, Conv2DTranspose, Input, Activation, Dense, Reshape, TimeDistributed

    # Hyperparameters
    lstm_dim = 64                  # dimensionality of the LSTM output vectors fed into the deconv model
    layer_filters = [32, 64]

    # Deconv model
    # (adapted from https://github.com/keras-team/keras/blob/master/examples/mnist_denoising_autoencoder.py )

    deconv_inputs = Input(shape=(lstm_dim,), name='deconv_input')
    feature_map_shape = (None, 50, 50, 64)  # deconvolve from [batch_size, 50, 50, 64] => [batch_size, 200, 200, 3]
    x = Dense(feature_map_shape[1] * feature_map_shape[2] * feature_map_shape[3])(deconv_inputs)
    x = Reshape((feature_map_shape[1], feature_map_shape[2], feature_map_shape[3]))(x)
    for filters in layer_filters[::-1]:
        x = Conv2DTranspose(filters=filters, kernel_size=3, strides=2, activation='relu', padding='same')(x)
    x = Conv2DTranspose(filters=3, kernel_size=3, padding='same')(x)  # last layer has 3 channels
    deconv_output = Activation('sigmoid', name='deconv_output')(x)
    deconv_model = Model(deconv_inputs, deconv_output, name='deconv_network')
    

    Then you can apply this deconv model to the output of the LSTM with a TimeDistributed layer.

    # LSTM
    lstm_input = Input(shape=(None, 16), name='lstm_input')            # => [batch_size, timesteps, input_dim]
    lstm_outputs = LSTM(units=64, return_sequences=True)(lstm_input)   # => [batch_size, timesteps, output_dim]
    predicted_images = TimeDistributed(deconv_model)(lstm_outputs)     # => [batch_size, timesteps, 200, 200, 3]

    model = Model(lstm_input, predicted_images, name='lstm_deconv')
    model.summary()
    
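    A quick way to sanity-check the resulting shapes (illustrative only; random inputs, and the batch size and number of timesteps are arbitrary):

    import numpy as np

    dummy_sequences = np.random.rand(2, 5, 16)   # [batch_size=2, timesteps=5, input_dim=16]
    frames = model.predict(dummy_sequences)
    print(frames.shape)                          # expected: (2, 5, 200, 200, 3)
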
        2
  •  0
  •   Ayush    6 years ago

    Conv3D can be used for this; check this example, used to predict the next frame.
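
    A minimal sketch of that kind of next-frame model (stacked ConvLSTM2D layers followed by a Conv3D output layer; the 40x40 single-channel frame size, filter counts, and loss below are illustrative assumptions, not taken from the linked example):

    from keras.models import Sequential
    from keras.layers import ConvLSTM2D, BatchNormalization, Conv3D

    # Sequences of 40x40x1 frames in, a sequence of predicted 40x40x1 frames out
    seq = Sequential()
    seq.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                       input_shape=(None, 40, 40, 1),
                       padding='same', return_sequences=True))
    seq.add(BatchNormalization())
    seq.add(ConvLSTM2D(filters=40, kernel_size=(3, 3),
                       padding='same', return_sequences=True))
    seq.add(BatchNormalization())
    seq.add(Conv3D(filters=1, kernel_size=(3, 3, 3),
                   activation='sigmoid', padding='same'))  # collapse features back to 1 channel per frame
    seq.compile(loss='binary_crossentropy', optimizer='adadelta')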