Tensorflow: feed_dict error: You must feed a value for placeholder tensor

I have an error and I cannot figure out the reason. Here is the code:

with tf.Graph().as_default():
    global_step = tf.Variable(0, trainable=False)

    images = tf.placeholder(tf.float32, shape=[FLAGS.batch_size, 33, 33, 1])
    labels = tf.placeholder(tf.float32, shape=[FLAGS.batch_size, 21, 21, 1])

    logits = inference(images)
    losses = loss(logits, labels)
    train_op = train(losses, global_step)
    saver = tf.train.Saver(tf.all_variables())
    summary_op = tf.merge_all_summaries()
    init = tf.initialize_all_variables()

    sess = tf.Session()
    sess.run(init)

    summary_writer = tf.train.SummaryWriter(FLAGS.train_dir, sess.graph)

    for step in xrange(FLAGS.max_steps):
        start_time = time.time()

        data_batch, label_batch = SRCNN_inputs.next_batch(np_data, np_label,
                                                          FLAGS.batch_size)

        _, loss_value = sess.run([train_op, losses],
                                 feed_dict={images: data_batch, labels: label_batch})

        duration = time.time() - start_time

def next_batch(np_data, np_label, batchsize,
               training_number=NUM_EXAMPLES_PER_EPOCH_TRAIN):

    perm = np.arange(training_number)
    np.random.shuffle(perm)
    data = np_data[perm]
    label = np_label[perm]
    data_batch = data[0:batchsize, :]
    label_batch = label[0:batchsize, :]

    return data_batch, label_batch
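For reference, next_batch can be exercised on its own with plain NumPy arrays; the toy sizes below are illustrative only, not the real 33×33 patches from the HDF5 file:

```python
import numpy as np

NUM_EXAMPLES_PER_EPOCH_TRAIN = 10  # toy value for illustration


def next_batch(np_data, np_label, batchsize,
               training_number=NUM_EXAMPLES_PER_EPOCH_TRAIN):
    # One permutation is applied to both arrays, so each sample
    # stays paired with its label after shuffling.
    perm = np.arange(training_number)
    np.random.shuffle(perm)
    data = np_data[perm]
    label = np_label[perm]
    return data[0:batchsize], label[0:batchsize]


data = np.arange(10, dtype=np.float32).reshape(10, 1)
labels = data * 2  # label for sample x is 2*x, so pairing is checkable

data_batch, label_batch = next_batch(data, labels, 4)
print(data_batch.shape, label_batch.shape)  # (4, 1) (4, 1)
```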

where np_data is the full set of training samples read from the HDF5 file, and the same for np_label.

After I ran the code, I got an error message like this:

2016-07-07 11:16:36.900831: step 0, loss = 55.22 (218.9 examples/sec; 0.585 sec/batch)
Traceback (most recent call last):

  File "<ipython-input-1-19672e1f8f12>", line 1, in <module>
    runfile('/home/kang/Documents/work_code_PC1/tf_SRCNN/SRCNN_train.py', wdir='/home/kang/Documents/work_code_PC1/tf_SRCNN')

  File "/usr/lib/python3/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 685, in runfile
    execfile(filename, namespace)

  File "/usr/lib/python3/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 85, in execfile
    exec(compile(open(filename, 'rb').read(), filename, 'exec'), namespace)

  File "/home/kang/Documents/work_code_PC1/tf_SRCNN/SRCNN_train.py", line 155, in <module>
    train_test()

  File "/home/kang/Documents/work_code_PC1/tf_SRCNN/SRCNN_train.py", line 146, in train_test
    summary_str = sess.run(summary_op)

  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 372, in run
    run_metadata_ptr)

  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 636, in _run
    feed_dict_string, options, run_metadata)

  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 708, in _do_run
    target_list, options, run_metadata)

  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 728, in _do_call
    raise type(e)(node_def, op, message)

InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [128,33,33,1]
     [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[128,33,33,1], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
     [[Node: truediv/_74 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_56_truediv", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Caused by op 'Placeholder', defined at:

So it shows the result for step 0, which means the data was successfully fed into the placeholders.

But why does the error about feeding data into the placeholders occur the next time around?

When I comment out the line summary_op = tf.merge_all_summaries(), the code works fine. Why is that?
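For context, the traceback points at summary_str = sess.run(summary_op) on line 146, which is not part of the code shown above. A sketch of what that call would have to look like, in the question's own (now long-deprecated) TF 0.x API and assuming the merged summaries depend on tensors computed from the placeholders:

```
# Sketch only, not runnable on modern TensorFlow: if the merged summary
# op depends on the placeholders, evaluating it needs the same
# feed_dict as the training step.
if step % 100 == 0:
    summary_str = sess.run(summary_op,
                           feed_dict={images: data_batch,
                                      labels: label_batch})
    summary_writer.add_summary(summary_str, step)
```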

Source: karl_TUM | 2016-07-07