
How do I add 0.001 L2 weight decay to each layer in a CNN?

  Beginner  ·  7 years ago

    I am trying to implement this sound-classification paper: https://raw.githubusercontent.com/karoldvl/paper-2015-esc-convnet/master/Poster/MLSP2015-poster-page-1.gif
    The paper says that 0.001 L2 weight decay is added to every layer, but I don't know how to do this in TensorFlow.

    I found a similar question ( How to define weight decay for individual layers in TensorFlow? ) that uses tf.nn.l2_loss , but it is not clear how to apply that approach to my network. Also, tf.nn.l2_loss has no parameter for the 0.001 factor.
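    For reference, `tf.nn.l2_loss(w)` returns `sum(w ** 2) / 2` and takes no scale argument, so the 0.001 factor has to be multiplied in yourself. A pure-Python sketch of the arithmetic (toy values, not TensorFlow or real network weights):

    ```python
    # Sketch of the arithmetic behind tf.nn.l2_loss and a 0.001 weight-decay
    # penalty, using toy per-layer weight lists instead of tensors.

    def l2_loss(weights):
        # Mimics tf.nn.l2_loss: half the sum of squared entries.
        return sum(w * w for w in weights) / 2.0

    def weight_decay_penalty(per_layer_weights, decay=0.001):
        # Total term added to the training loss:
        # decay * sum over layers of l2_loss(layer_weights).
        return decay * sum(l2_loss(w) for w in per_layer_weights)

    toy_layers = [[1.0, -2.0], [3.0]]
    penalty = weight_decay_penalty(toy_layers)  # 0.001 * ((1 + 4)/2 + 9/2)
    ```

    In TensorFlow the same thing is written as `0.001 * tf.nn.l2_loss(w)` per weight tensor, summed over layers and added to the loss.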

    My network:

    net = tf.layers.conv2d(inputs=x, filters=80, kernel_size=[57, 6], strides=[1, 1], padding="same", activation=tf.nn.relu)
    print(net)
    net = tf.layers.max_pooling2d(inputs=net, pool_size=[4, 3], strides=[1, 3])
    print(net)
    net = tf.layers.dropout(inputs=net, rate=keep_prob)
    print(net)
    net = tf.layers.conv2d(inputs=net, filters=80, kernel_size=[1, 3], strides=[1, 1], padding="same", activation=tf.nn.relu)
    print(net)
    net = tf.layers.max_pooling2d(inputs=net, pool_size=[1, 3], strides=[1, 3])
    print(net)
    net = tf.layers.flatten(net)
    print(net)
    # Dense Layer
    net = tf.layers.dense(inputs=net, units=5000, activation=tf.nn.relu)
    print(net)
    net = tf.layers.dropout(inputs=net, rate=keep_prob)
    print(net)
    net = tf.layers.dense(inputs=net, units=5000, activation=tf.nn.relu)
    print(net)
    net = tf.layers.dropout(inputs=net, rate=keep_prob)
    print(net)
    logits = tf.layers.dense(inputs=net, units=num_classes)
    print("logits: ", logits)
    

    Output:

    Tensor("Model/conv2d/Relu:0", shape=(?, 530, 129, 80), dtype=float32)
    Tensor("Model/max_pooling2d/MaxPool:0", shape=(?, 527, 43, 80), dtype=float32)
    Tensor("Model/dropout/Identity:0", shape=(?, 527, 43, 80), dtype=float32)
    Tensor("Model/conv2d_2/Relu:0", shape=(?, 527, 43, 80), dtype=float32)
    Tensor("Model/max_pooling2d_2/MaxPool:0", shape=(?, 527, 14, 80), dtype=float32)
    Tensor("Model/flatten/Reshape:0", shape=(?, 590240), dtype=float32)
    Tensor("Model/dense/Relu:0", shape=(?, 5000), dtype=float32)
    Tensor("Model/dropout_2/Identity:0", shape=(?, 5000), dtype=float32)
    Tensor("Model/dense_2/Relu:0", shape=(?, 5000), dtype=float32)
    Tensor("Model/dropout_3/Identity:0", shape=(?, 5000), dtype=float32)
    logits:  Tensor("Model/dense_3/BiasAdd:0", shape=(?, 20), dtype=float32)
    

    I found an implementation of the paper here: https://github.com/karoldvl/paper-2015-esc-convnet/blob/master/Code/_Networks/Net-DoubleConv.ipynb — but it is written in pylearn2 .

    How do I add 0.001 L2 weight decay to my code?

    1 Answer
        
  •   Mohamed Elzarei  ·  7 years ago

    Add regularization to the conv2d layers via the kernel_regularizer parameter — that is, to apply the 0.001 L2 penalty to your network:

    net = tf.layers.conv2d(inputs=x, filters=80, kernel_size=[57, 6], strides=[1,1], padding="same", activation=tf.nn.relu, kernel_regularizer=tf.contrib.layers.l2_regularizer(0.001))
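    To match the paper, the same regularizer should go on every trainable layer — the dense layers as well as the conv ones. One important detail in graph-mode TensorFlow: the regularizer terms are only *collected* (under `tf.GraphKeys.REGULARIZATION_LOSSES`); they are not applied automatically, so you must add them to your training loss yourself, e.g. with `tf.losses.get_regularization_loss()`. A minimal sketch with toy layer sizes (not the question's exact network); it uses `tf.compat.v1` plus `tf.keras.regularizers.l2`, since `tf.contrib` from the original answer no longer exists on TensorFlow 2:

    ```python
    import tensorflow as tf

    tf1 = tf.compat.v1
    tf1.disable_eager_execution()  # graph mode, as in the question

    # tf.keras.regularizers.l2(0.001) computes 0.001 * sum(w ** 2);
    # tf.contrib.layers.l2_regularizer(0.001) computed 0.001 * sum(w ** 2) / 2 --
    # the same idea up to a factor of 2.
    reg = tf.keras.regularizers.l2(0.001)

    x = tf1.placeholder(tf.float32, [None, 32, 16, 1])   # toy input shape
    labels = tf1.placeholder(tf.int32, [None])

    # Pass the regularizer to EVERY layer with weights:
    net = tf1.layers.conv2d(x, filters=8, kernel_size=[5, 3], padding="same",
                            activation=tf.nn.relu, kernel_regularizer=reg)
    net = tf1.layers.max_pooling2d(net, pool_size=[2, 2], strides=[2, 2])
    net = tf1.layers.flatten(net)
    net = tf1.layers.dense(net, 64, activation=tf.nn.relu,
                           kernel_regularizer=reg)
    logits = tf1.layers.dense(net, 20, kernel_regularizer=reg)

    # The penalties are NOT applied automatically -- collect and add them:
    data_loss = tf1.losses.sparse_softmax_cross_entropy(labels=labels,
                                                        logits=logits)
    reg_loss = tf1.losses.get_regularization_loss()  # sum of the 0.001 terms
    total_loss = data_loss + reg_loss
    train_op = tf1.train.AdamOptimizer(1e-4).minimize(total_loss)
    ```

    Without the `get_regularization_loss()` step the `kernel_regularizer` arguments have no effect on training — a common silent mistake with this API.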