
TypeError: Input 'b' of 'MatMul' Op has type float32 that does not match type int32 of argument 'a'

  •   Filipe Ferminiano  ·  7 years ago

    I am getting the following error:

    TypeError: Input 'b' of 'MatMul' Op has type float32 that does not match type int32 of argument 'a'.
    

    It is raised by this line:

    similarity = tf.matmul(
        tf.cast(valid_embeddings, tf.int32), tf.cast(normalized_embeddings, tf.int32), transpose_b=True)

    Here is the full code:

    import math
    import tensorflow as tf

    graph = tf.Graph()
    
    with graph.as_default():
      # Input data.
      train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
      train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
      valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
      # Ops and variables pinned to the CPU because of missing GPU implementation
      with tf.device('/cpu:0'):
        # Look up embeddings for inputs.
        embeddings = tf.Variable(
            tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
        embed = tf.nn.embedding_lookup(embeddings, train_inputs)
        # Construct the variables for the NCE loss
        nce_weights = tf.Variable(
            tf.truncated_normal([vocabulary_size, embedding_size],
                                stddev=1.0 / math.sqrt(embedding_size)))
        nce_biases = tf.Variable(tf.zeros([vocabulary_size]))
      # Compute the average NCE loss for the batch.
      # tf.nce_loss automatically draws a new sample of the negative labels each
      # time we evaluate the loss.
      loss = tf.reduce_mean(
          tf.nn.nce_loss(nce_weights, nce_biases, embed, train_labels,
                         num_sampled, vocabulary_size))
      # Construct the SGD optimizer using a learning rate of 1.0.
      optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)
      # Compute the cosine similarity between minibatch examples and all embeddings.
      norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
      normalized_embeddings = embeddings / norm
      valid_embeddings = tf.nn.embedding_lookup(
          normalized_embeddings, valid_dataset)
      similarity = tf.matmul(
          tf.cast(valid_embeddings,tf.int32), tf.cast(normalized_embeddings,tf.int32), transpose_b=True)
      # Add variable initializer.
      init = tf.initialize_all_variables()
    

    2 Answers
        1
  •   Charles Liu  ·  5 years ago

    I ran into the same problem with TensorFlow r1.4 and Python 3.4.

    First, change

    tf.nn.nce_loss(nce_weights, nce_biases, embed, train_labels,
                   num_sampled, vocabulary_size)
    

    into

    tf.nn.nce_loss(nce_weights, nce_biases, train_labels, embed,
                   num_sampled, vocabulary_size)

    since in the r1.x API the signature of tf.nn.nce_loss() takes labels before inputs. To make the argument order explicit, you can pass everything by keyword:
    

    # Note: softmax_weights / softmax_biases here correspond to the
    # question's nce_weights / nce_biases.
    loss = tf.reduce_mean(tf.nn.nce_loss(
            weights = softmax_weights,
            biases = softmax_biases,
            inputs = embed,
            labels = train_labels,
            num_sampled = num_sampled,
            num_classes = vocabulary_size))
    

    Second, change the similarity computation to

    similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
    

    The tf.cast(..., tf.int32) is not necessary; in fact, tf.cast(..., tf.float32) is not needed either, because valid_embeddings and normalized_embeddings are already tf.float32.
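
    To see the rule behind the error in the title, here is a minimal, self-contained sketch (the tensors a and b are hypothetical, and TF 1.x graph mode is assumed, as in the question): MatMul requires both operands to have the same dtype, so when a genuine mismatch occurs, the fix is to cast the int32 side up to float32 rather than the float side down.

    import tensorflow as tf

    a = tf.constant([[1, 2]], dtype=tf.int32)          # shape (1, 2), int32
    b = tf.constant([[1.0], [2.0]], dtype=tf.float32)  # shape (2, 1), float32

    # tf.matmul(a, b) would raise the TypeError from the title, because
    # both inputs of the MatMul op must share one dtype.
    c = tf.matmul(tf.cast(a, tf.float32), b)  # cast the int32 operand instead

    with tf.Session() as sess:
        print(sess.run(c))  # [[5.]]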

    P.S.

    This solution is also useful when you run into the same problem with tf.nn.sampled_softmax_loss(), since sampled_softmax_loss() takes its arguments in the same order as nce_loss().
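
    For example, a keyword-argument call with the question's variables would look like this (a sketch, assuming the r1.x signature of tf.nn.sampled_softmax_loss()):

    # Sketch: reuses the graph variables from the question; in r1.x this
    # signature also takes labels before inputs.
    loss = tf.reduce_mean(tf.nn.sampled_softmax_loss(
            weights=nce_weights,
            biases=nce_biases,
            labels=train_labels,
            inputs=embed,
            num_sampled=num_sampled,
            num_classes=vocabulary_size))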

        2
  •   Alexandre Passos  ·  7 years ago