TensorFlow for binary classification (Python)

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license and attribute the original authors (not this site). Original: http://stackoverflow.com/questions/35277898/


TensorFlow for binary classification

Tags: python, neural-network, tensorflow

Asked by Ricardo Cruz

I am trying to adapt this MNIST example to binary classification.


But when changing NLABELS from NLABELS=2 to NLABELS=1, the loss function always returns 0 (and accuracy 1).


from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

# Import data
mnist = input_data.read_data_sets('data', one_hot=True)
NLABELS = 2

sess = tf.InteractiveSession()

# Create the model
x = tf.placeholder(tf.float32, [None, 784], name='x-input')
W = tf.Variable(tf.zeros([784, NLABELS]), name='weights')
b = tf.Variable(tf.zeros([NLABELS]), name='bias')

y = tf.nn.softmax(tf.matmul(x, W) + b)

# Add summary ops to collect data
_ = tf.histogram_summary('weights', W)
_ = tf.histogram_summary('biases', b)
_ = tf.histogram_summary('y', y)

# Define loss and optimizer
y_ = tf.placeholder(tf.float32, [None, NLABELS], name='y-input')

# More name scopes will clean up the graph representation
with tf.name_scope('cross_entropy'):
    cross_entropy = -tf.reduce_mean(y_ * tf.log(y))
    _ = tf.scalar_summary('cross entropy', cross_entropy)
with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(10.).minimize(cross_entropy)

with tf.name_scope('test'):
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    _ = tf.scalar_summary('accuracy', accuracy)

# Merge all the summaries and write them out to /tmp/mnist_logs
merged = tf.merge_all_summaries()
writer = tf.train.SummaryWriter('logs', sess.graph_def)
tf.initialize_all_variables().run()

# Train the model, and feed in test data and record summaries every 10 steps

for i in range(1000):
    if i % 10 == 0:  # Record summary data and the accuracy
        labels = mnist.test.labels[:, 0:NLABELS]
        feed = {x: mnist.test.images, y_: labels}

        result = sess.run([merged, accuracy, cross_entropy], feed_dict=feed)
        summary_str = result[0]
        acc = result[1]
        loss = result[2]
        writer.add_summary(summary_str, i)
        print('Accuracy at step %s: %s - loss: %f' % (i, acc, loss)) 
    else:
        batch_xs, batch_ys = mnist.train.next_batch(100)
        batch_ys = batch_ys[:, 0:NLABELS]
        feed = {x: batch_xs, y_: batch_ys}
    sess.run(train_step, feed_dict=feed)

I have checked the dimensions of both batch_ys (fed into y_) and y_, and they are both 1xN matrices when NLABELS=1, so the problem seems to occur before that. Maybe something to do with the matrix multiplication?


I actually have got this same problem in a real project, so any help would be appreciated... Thanks!

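A quick NumPy sketch of why the reported accuracy comes out as 1 when NLABELS=1: tf.argmax over a single-column matrix can only ever return index 0, so correct_prediction is true for every example regardless of the predictions.

```python
import numpy as np

# Predictions shaped [batch, 1], as when NLABELS = 1
y = np.array([[0.2], [0.9], [0.4]])

# argmax along axis 1 has only one column to pick from
print(np.argmax(y, axis=1))  # [0 0 0]
```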

Accepted answer by mrry

The original MNIST example uses a one-hot encoding to represent the labels in the data: this means that if there are NLABELS = 10 classes (as in MNIST), the target output is [1 0 0 0 0 0 0 0 0 0] for class 0, [0 1 0 0 0 0 0 0 0 0] for class 1, etc. The tf.nn.softmax() operator converts the logits computed by tf.matmul(x, W) + b into a probability distribution across the different output classes, which is then compared to the fed-in value for y_.


If NLABELS = 1, this acts as if there were only a single class, and the tf.nn.softmax() op would compute a probability of 1.0 for that class, leading to a cross-entropy of 0.0, since tf.log(1.0) is 0.0 for all of the examples.

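This point can be checked in plain NumPy: with a single output unit, softmax normalizes each row to 1.0 no matter what the logit is, so the cross-entropy term is exactly 0 for every example.

```python
import numpy as np

def softmax(z):
    # standard stabilized softmax along the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Logits shaped [batch, 1], as with NLABELS = 1
logits = np.array([[2.3], [-1.7], [0.0]])
probs = softmax(logits)
print(probs)  # every row is [1.]

# cross-entropy with any one-hot target is -log(1.0) = 0
loss = -np.mean(np.log(probs))
print(loss)   # 0.0
```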

There are (at least) two approaches you could try for binary classification:


  1. The simplest would be to set NLABELS = 2 for the two possible classes, and encode your training data as [1 0] for label 0 and [0 1] for label 1. This answer has a suggestion for how to do that.

  2. You could keep the labels as integers 0 and 1 and use tf.nn.sparse_softmax_cross_entropy_with_logits(), as suggested in this answer.

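A small NumPy sketch of the two label encodings (the data here is illustrative): option 1 one-hot encodes the integer labels into [1 0] / [0 1] rows, while option 2 would keep them as plain integers for tf.nn.sparse_softmax_cross_entropy_with_logits().

```python
import numpy as np

# Integer binary labels -- usable directly with option 2
labels = np.array([0, 1, 1, 0])

# Option 1: one-hot encode via an identity-matrix lookup:
# row 0 of eye(2) is [1 0], row 1 is [0 1]
one_hot = np.eye(2)[labels]
print(one_hot)
```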

Answered by Troy D

I've been looking for good examples of how to implement binary classification in TensorFlow in a similar manner to the way it would be done in Keras. I didn't find any, but after digging through the code a bit, I think I have it figured out. I modified the problem here to implement a solution that uses sigmoid_cross_entropy_with_logits the way Keras does under the hood.


from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

# Import data
mnist = input_data.read_data_sets('data', one_hot=True)
NLABELS = 1

sess = tf.InteractiveSession()

# Create the model
x = tf.placeholder(tf.float32, [None, 784], name='x-input')
# Note: multiplying by 0.1 scales the initial weights; W becomes a Tensor,
# but gradients still flow back to the underlying variable.
W = tf.get_variable('weights', [784, NLABELS],
                    initializer=tf.truncated_normal_initializer()) * 0.1
b = tf.Variable(tf.zeros([NLABELS]), name='bias')
logits = tf.matmul(x, W) + b

# Define loss and optimizer
y_ = tf.placeholder(tf.float32, [None, NLABELS], name='y-input')

# More name scopes will clean up the graph representation
with tf.name_scope('cross_entropy'):

    # Manual calculation (the under-the-hood math). Don't use this directly:
    # it is numerically unstable and can cause gradient problems.
    # entropy = tf.multiply(tf.log(tf.sigmoid(logits)), y_) + tf.multiply((1 - y_), tf.log(1 - tf.sigmoid(logits)))
    # loss = -tf.reduce_mean(entropy, name='loss')

    entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y_, logits=logits)
    loss = tf.reduce_mean(entropy, name='loss')

with tf.name_scope('train'):
    # Using Adam instead
    # train_step = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss)
    train_step = tf.train.AdamOptimizer(learning_rate=0.002).minimize(loss)

with tf.name_scope('test'):
    # sigmoid(logit) > 0.5 is equivalent to logit > 0, so threshold the logits at 0
    preds = tf.cast((logits > 0), tf.float32)
    correct_prediction = tf.equal(preds, y_)
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

tf.initialize_all_variables().run()

# Train the model, and feed in test data and record summaries every 10 steps

for i in range(2000):
    if i % 100 == 0:  # Record summary data and the accuracy
        labels = mnist.test.labels[:, 0:NLABELS]
        feed = {x: mnist.test.images, y_: labels}
        result = sess.run([loss, accuracy], feed_dict=feed)
        print('Accuracy at step %s: %s - loss: %f' % (i, result[1], result[0]))
    else:
        batch_xs, batch_ys = mnist.train.next_batch(100)
        batch_ys = batch_ys[:, 0:NLABELS]
        feed = {x: batch_xs, y_: batch_ys}
    sess.run(train_step, feed_dict=feed)

Training:


Accuracy at step 0: 0.7373 - loss: 0.758670
Accuracy at step 100: 0.9017 - loss: 0.423321
Accuracy at step 200: 0.9031 - loss: 0.322541
Accuracy at step 300: 0.9085 - loss: 0.255705
Accuracy at step 400: 0.9188 - loss: 0.209892
Accuracy at step 500: 0.9308 - loss: 0.178372
Accuracy at step 600: 0.9453 - loss: 0.155927
Accuracy at step 700: 0.9507 - loss: 0.139031
Accuracy at step 800: 0.9556 - loss: 0.125855
Accuracy at step 900: 0.9607 - loss: 0.115340
Accuracy at step 1000: 0.9633 - loss: 0.106709
Accuracy at step 1100: 0.9667 - loss: 0.099286
Accuracy at step 1200: 0.971 - loss: 0.093048
Accuracy at step 1300: 0.9714 - loss: 0.087915
Accuracy at step 1400: 0.9745 - loss: 0.083300
Accuracy at step 1500: 0.9745 - loss: 0.079019
Accuracy at step 1600: 0.9761 - loss: 0.075164
Accuracy at step 1700: 0.9768 - loss: 0.071803
Accuracy at step 1800: 0.9777 - loss: 0.068825
Accuracy at step 1900: 0.9788 - loss: 0.066270
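As a sanity check on the loss swapped in above, tf.nn.sigmoid_cross_entropy_with_logits computes the numerically stable form max(x, 0) - x*z + log(1 + exp(-|x|)), which is algebraically equal to the naive formula shown in the commented-out "manual calculation". A short NumPy sketch (variable names are illustrative) confirms the two agree:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stable_sigmoid_xent(logits, labels):
    # the numerically stable form used by tf.nn.sigmoid_cross_entropy_with_logits
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

logits = np.array([2.0, -1.0, 0.5])
labels = np.array([1.0, 0.0, 1.0])

# naive per-example cross-entropy: -[z*log(p) + (1-z)*log(1-p)]
p = sigmoid(logits)
naive = -(labels * np.log(p) + (1 - labels) * np.log(1 - p))

print(np.allclose(naive, stable_sigmoid_xent(logits, labels)))  # True
```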