
Keras Sequential: specifying input shape with three parameters

  •   duhaime  ·  asked 6 years ago

    import numpy as np
    
    X = np.random.rand(100, 20, 3)
    

    Here there are 100 timesteps, 20 observations per timestep, and 3 attributes per observation.

    I'm trying to figure out how to pass this data into the following Keras Sequential model:

    from keras.models import Sequential, Model
    from keras.layers import Dense, LSTM, Dropout, Activation
    import keras
    
    # config
    stateful = False
    look_back = 3
    lstm_cells = 1024
    dropout_rate = 0.5
    n_features = int(X.shape[1]*3)
    input_shape = (look_back, n_features, 3)
    output_shape = n_features
    
    def loss(y_true, y_pred):
      return keras.losses.mean_squared_error(y_true, y_pred)
    
    model = Sequential()
    model.add(LSTM(lstm_cells, stateful=stateful, return_sequences=True, input_shape=input_shape))
    model.add(Dense(output_shape, activation='relu'))
    model.compile(loss=loss, optimizer='sgd')
    

    Running this throws:

    ValueError: Input 0 is incompatible with layer lstm_23: expected …

    Does anyone know how I can reshape X so it can be passed to the model? Any advice would help!
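    (For reference, the likely cause of the error: Keras prepends a batch dimension to input_shape, so a three-element input_shape produces a 4-D input tensor, while an LSTM layer expects 3-D input of the form (batch, timesteps, features). A minimal sketch of the dimension arithmetic, with the values from the question:

    ```python
    look_back, n_features = 3, 60

    # an LSTM layer expects 3-D input: (batch, timesteps, features)
    expected_ndim = 3

    # the question's input_shape has three entries; Keras adds a batch axis
    bad_input_shape = (look_back, n_features, 3)
    got_ndim = 1 + len(bad_input_shape)

    print(got_ndim)  # 4 -- one dimension too many, hence the ValueError
    ```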

    1 answer  |  up to 6 years ago

  •   duhaime  ·  answered 6 years ago

    This seems to get things moving:

    from keras.models import Sequential, Model
    from keras.layers import Dense, LSTM, Dropout, Activation
    import keras
    
    # config
    stateful = False
    look_back = 3
    lstm_cells = 1024
    dropout_rate = 0.5
    n_features = int(X.shape[1]) * 3
    input_shape = (look_back, n_features)
    output_shape = n_features
    
    def loss(y_true, y_pred):
      return keras.losses.mean_squared_error(y_true, y_pred)
    
    model = Sequential()
    model.add(LSTM(lstm_cells, stateful=stateful, return_sequences=True, input_shape=input_shape))
    model.add(LSTM(lstm_cells, stateful=stateful, return_sequences=True))
    model.add(LSTM(lstm_cells, stateful=stateful))
    model.add(Dense(output_shape, activation='relu'))
    model.compile(loss=loss, optimizer='sgd')
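    With input_shape = (look_back, n_features), each sample fed to the model must be a (3, 60) matrix: look_back consecutive timesteps, with each timestep's (20, 3) observation matrix flattened to 60 values. A quick numpy sketch of one such sample:

    ```python
    import numpy as np

    X = np.random.rand(100, 20, 3)
    look_back = 3
    n_features = X.shape[1] * X.shape[2]  # 20 * 3 = 60

    # one LSTM sample: look_back consecutive timesteps, each flattened to 60 values
    sample = X[:look_back].reshape(look_back, n_features)
    print(sample.shape)  # (3, 60)
    ```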
    

    The training data can then be partitioned as follows:

    # build training data
    train_x = []
    train_y = []
    n_time = int(X.shape[0])
    n_obs = int(X.shape[1])
    n_attrs = int(X.shape[2])
    
    # note we flatten the last dimension
    for i in range(look_back, n_time-1, 1):
      train_x.append( X[i-look_back:i].reshape(look_back, n_obs * n_attrs ) )
      train_y.append( X[i+1].ravel() )
    
    train_x = np.array(train_x)
    train_y = np.array(train_y)
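    Note that each target row is a flattened (20, 3) frame, so a model output of 60 values can be reshaped back into the original observation layout. A sketch of that round trip for the first window (the target index follows the loop above, i + 1 with i = look_back):

    ```python
    import numpy as np

    X = np.random.rand(100, 20, 3)
    look_back = 3
    n_time, n_obs, n_attrs = X.shape

    # first window: timesteps 0..2; its target is timestep 4, flattened
    window = X[0:look_back].reshape(look_back, n_obs * n_attrs)
    target = X[look_back + 1].ravel()

    print(window.shape, target.shape)  # (3, 60) (60,)
    # the flattened target restores exactly to its original (20, 3) layout
    assert np.array_equal(target.reshape(n_obs, n_attrs), X[look_back + 1])
    ```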
    

    The toy model can then be trained:

    model.fit(train_x, train_y, epochs=10, batch_size=10)