Variables: Use variables sparingly.
As with any variable, dedicated memory is assigned the moment it is created, and every value stored or updated in it is physically held in that memory. For small operations this is neither computationally nor memory intensive, but in TensorFlow, where we deal with large matrices, it is far more efficient to use variables as sparingly as possible.

This saves memory footprint: intermediate results do not have to be held in memory unnecessarily.
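The same effect is easy to see outside TensorFlow. Here is a minimal NumPy sketch of the idea (the array size is an illustrative assumption, not from the original): named intermediates keep their buffers alive for as long as the names stay bound, while a nested expression lets each intermediate be freed as soon as the next operation has consumed it.

import numpy as np

a = np.ones((10000, 10000), dtype=np.float32)  # ~400 MB buffer

# Named intermediates: b and c each keep a ~400 MB buffer alive
# until the names are rebound or deleted.
b = a * 2.0
c = b + 1.0
result_named = c.sum()

# Nested expression: the product is consumed by the addition and
# freed immediately, so at most one intermediate is alive at a time.
result_nested = ((a * 2.0) + 1.0).sum()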

Hence, instead of the clear solution, it may be wise to go for the long, complex solution below: it leaves no variable memory footprint for conv1, hidden1, conv2, hidden2, max_pool1, max_pool2, etc., which can meaningfully cut memory use and run time on large models.

max_pool2 = tf.nn.max_pool(
    # hidden layer 2
    tf.nn.relu(
        # conv layer 2
        tf.nn.conv2d(
            # max pool layer 1
            tf.nn.max_pool(
                # hidden layer 1
                tf.nn.relu(
                    # conv layer 1
                    tf.nn.conv2d(data, layer1_weights,
                                 strides=[1, 1, 1, 1], padding='SAME')
                    + layer1_biases),
                ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME'),
            layer2_weights, strides=[1, 1, 1, 1], padding='SAME')
        + layer2_biases),
    ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

VS
strides = [1, 1, 1, 1]  # sliding-window stride for each dimension of the 4-D input: [batch, height, width, channels]
conv1 = tf.nn.conv2d(data, layer1_weights, strides=strides, padding='SAME')  # sweep a 2-D filter over a batch of images
hidden1 = tf.nn.relu(conv1 + layer1_biases)
max_pool1 = tf.nn.max_pool(hidden1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
conv2 = tf.nn.conv2d(max_pool1, layer2_weights, strides=strides, padding='SAME')
hidden2 = tf.nn.relu(conv2 + layer2_biases)
max_pool2 = tf.nn.max_pool(hidden2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
shape = max_pool2.get_shape().as_list()
reshape = tf.reshape(max_pool2, [shape[0], shape[1] * shape[2] * shape[3]])  # flatten for the fully connected layers
hidden3 = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)
return tf.matmul(hidden3, layer4_weights) + layer4_biases
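Both versions assume that data and the layer weights and biases already exist. To make the snippets runnable, here is one possible setup, a sketch assuming the TensorFlow 1.x graph-mode API and MNIST-like 28x28 grayscale inputs; every shape and hyperparameter below (batch_size, patch_size, depth, num_hidden, ...) is an illustrative assumption, not part of the original.

import tensorflow as tf  # assumes the TensorFlow 1.x graph-mode API

# Illustrative hyperparameters (assumed, not from the original post).
batch_size, image_size, num_channels, num_labels = 16, 28, 1, 10
patch_size, depth, num_hidden = 5, 16, 64

data = tf.placeholder(
    tf.float32, shape=[batch_size, image_size, image_size, num_channels])

layer1_weights = tf.Variable(
    tf.truncated_normal([patch_size, patch_size, num_channels, depth], stddev=0.1))
layer1_biases = tf.Variable(tf.zeros([depth]))
layer2_weights = tf.Variable(
    tf.truncated_normal([patch_size, patch_size, depth, depth], stddev=0.1))
layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]))
# Two 2x2 max pools shrink each spatial dimension by a factor of 4.
layer3_weights = tf.Variable(
    tf.truncated_normal([(image_size // 4) ** 2 * depth, num_hidden], stddev=0.1))
layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]))
layer4_weights = tf.Variable(
    tf.truncated_normal([num_hidden, num_labels], stddev=0.1))
layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]))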
