Basic Debugging Skills

  1. Session.run() : Explicitly fetch tensor values to run small test cases, e.g.
    import tensorflow as tf
    
    x = tf.placeholder(tf.float32)
    y = tf.placeholder(tf.float32)
    bias = tf.Variable(1.0)
    y_pred = x ** 2 + bias # x -> x^2 + bias
    loss = (y - y_pred)**2 # squared (L2) loss
    
    sess = tf.Session()
    ## TF variables are not initialised until the initializer op is run
    init = tf.initialize_all_variables()
    # init = tf.global_variables_initializer()  # equivalent initialisation call in newer TensorFlow versions
    sess.run(init)
    
    print('TEST CASE 1 : Loss(x,y) = %.3f' % sess.run(loss, {x: 3.0, y: 9.0}))
    try:
        # fails because the placeholder y is not fed
        print('TEST CASE 2 : Loss(x,y) = %.3f' % sess.run(loss, {x: 3.0}))
    except tf.errors.InvalidArgumentError:
        print('TEST CASE 2 FAILED')
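
    Session.run() can also fetch several tensors in a single call, which is handy when a test case needs intermediate values as well; a minimal sketch continuing the graph above:

    # fetch the prediction and the loss together for one test input
    y_pred_val, loss_val = sess.run([y_pred, loss], {x: 3.0, y: 9.0})
    print('y_pred = %.3f, loss = %.3f' % (y_pred_val, loss_val))   # y_pred = 10.000, loss = 1.000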
    
  2. TensorFlow Objects : Extract and Evaluate. Any tensor in the graph can be added to the fetch list of session.run() and inspected as a NumPy array:
    logits = model(tf_train_dataset, dropout_probability_train)
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
    # Optimizer
    optimizer = tf.train.AdamOptimizer(0.001).minimize(loss)
    # Extract and evaluate: fetch the optimizer op, the loss and the predictions in one call
    _, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
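
    The snippet above assumes a model() function and training data defined elsewhere; the key point is that any intermediate tensor can be appended to the fetch list and comes back as a NumPy array. A hedged sketch reusing the (hypothetical) names from the snippet:

    # additionally fetch the logits to inspect them numerically
    _, l, logits_val = session.run([optimizer, loss, logits], feed_dict=feed_dict)
    print('loss = %.4f, logits shape = %s, max logit = %.3f' % (l, logits_val.shape, logits_val.max()))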
  3. tf.Print : We can also print tensor values at runtime without explicitly fetching them. tf.Print creates an identity-like op that logs the given tensors whenever it is evaluated.
    # note the capital P in tf.Print
    tf.Print(input_, data, message=None, first_n=None, summarize=None, name=None)
    import tensorflow as tf

    x = tf.Variable([1.0, 2.0])
    x = tf.Print(x, [x])   # logs the current value of x whenever this op is evaluated
    x = 2 * x

    init = tf.initialize_all_variables()

    sess = tf.Session()
    sess.run(init)
    sess.run(x)                    # the tf.Print op logs the original value [1.0, 2.0] to stderr
    print(x.eval(session=sess))    # prints the evaluated result [2.0, 4.0]
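
    The message and summarize arguments label the log line and limit how many entries are shown; a small sketch (the variable w is illustrative only):

    w = tf.Variable(tf.random_normal([100]))
    w = tf.Print(w, [w], message='w = ', summarize=5)   # log only the first 5 entries
    sess.run(tf.initialize_all_variables())
    sess.run(w)   # a line like "w = [0.53 -1.2 ...]" is written to stderr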
     
  4. tf.Assert : If the condition evaluates to false, the tensors in data are printed and an error is thrown. The summarize argument determines how many entries of each tensor to print.
    tf.Assert(condition, data, summarize=None, name=None)
    def multilayer_perceptron(x):
        # layers refers to tf.contrib.layers (TF 1.x)
        fc1 = layers.fully_connected(x, 256, activation_fn=tf.nn.relu, scope='fc1')
        fc2 = layers.fully_connected(fc1, 256, activation_fn=tf.nn.relu, scope='fc2')
        out = layers.fully_connected(fc2, 10, activation_fn=None, scope='out')
        # let's ensure that all the outputs in `out` are positive
        assert_op = tf.Assert(tf.reduce_all(out > 0), [out], name='assert_out_positive')
        with tf.control_dependencies([assert_op]):
            out = tf.identity(out, name='out')
        return out

    You can also store all the assertions in a collection, merge them into a single operation, and explicitly evaluate them with session.run():

    def multilayer_perceptron(x):
        fc1 = layers.fully_connected(x, 256, activation_fn=tf.nn.relu, scope='fc1')
        fc2 = layers.fully_connected(fc1, 256, activation_fn=tf.nn.relu, scope='fc2')
        out = layers.fully_connected(fc2, 10, activation_fn=None, scope='out')
        tf.add_to_collection('Asserts',
            tf.Assert(tf.reduce_all(out > 0), [out], name='assert_out_gt_0')
        )
        return out
    
    # merge all assertion ops from the collection
    assert_op = tf.group(*tf.get_collection('Asserts'))
    
    ... = session.run([train_op, assert_op], feed_dict={...})
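
    When the condition is false, evaluating the assert op raises tf.errors.InvalidArgumentError, and the tensors listed in data are printed (truncated to summarize entries). A small standalone sketch:

    vals = tf.constant([-1.0, 2.0])
    assert_op = tf.Assert(tf.reduce_all(vals > 0), [vals], summarize=2, name='assert_vals_positive')
    with tf.Session() as sess:
        try:
            sess.run(assert_op)
        except tf.errors.InvalidArgumentError as e:
            print('Assertion failed :', e.message)   # the message includes the offending values of vals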
  5. tf.py_func : Run arbitrary Python code inside the computation graph:
    def multilayer_perceptron(x):
        fc1 = layers.fully_connected(x, 256, activation_fn=tf.nn.relu, scope='fc1')
        fc2 = layers.fully_connected(fc1, 256, activation_fn=tf.nn.relu, scope='fc2')
        out = layers.fully_connected(fc2, 10, activation_fn=None, scope='out')
    
        # receives the evaluated tensor values as NumPy arrays and runs as ordinary Python code
        def _debug_print_func(fc1_val, fc2_val):
            print('FC1 : {}, FC2 : {}'.format(fc1_val.shape, fc2_val.shape))
            print('min, max of FC2 = {}, {}'.format(fc2_val.min(), fc2_val.max()))
            return False   # dummy boolean output, matching Tout=[tf.bool]

        debug_print_op = tf.py_func(_debug_print_func, [fc1, fc2], [tf.bool])
        with tf.control_dependencies(debug_print_op):
            out = tf.identity(out, name='out')
        return out
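
    Because out now depends on debug_print_op, the Python function runs every time out is evaluated; a hedged usage sketch, assuming layers = tf.contrib.layers and a flattened 784-dimensional input:

    import numpy as np
    x = tf.placeholder(tf.float32, [None, 784])
    out = multilayer_perceptron(x)
    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        sess.run(out, feed_dict={x: np.random.rand(32, 784).astype(np.float32)})
        # _debug_print_func prints FC1 : (32, 256), FC2 : (32, 256) and the min/max of FC2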

Ref:

https://wookayin.github.io/tensorflow-talk-debugging/#24
