Batch Normalization with TensorFlow
By Eric Antoine Scuccimarra
I was trying to use tf.layers.batch_normalization to improve the accuracy of my CIFAR classifier, but it seemed to have little to no effect. According to this StackOverflow post, you need to do something extra, which is not mentioned in the documentation, to get batch normalization to work.
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
sess.run([train_op, extra_update_ops], ...)
The batch norm update operations (which maintain the moving mean and variance) are added to the UPDATE_OPS collection, so you need to fetch that collection and run it in the session along with the training op. Before I added extra_update_ops, batch normalization was definitely not running; now it is, though whether it helps remains to be seen.
Also make sure to pass training=[BOOLEAN | TENSOR] in the call to batch_normalization() so that during evaluation it uses the moving averages instead of the current batch's statistics. I use a placeholder and pass in whether it is training or not via the feed_dict:
training = tf.placeholder(dtype=tf.bool)
And then use this in my batch norm and dropout layers:
There were a few other things I had to do to get batch normalization to work properly:
- I had been using local response normalization, which apparently doesn't help that much. I removed those layers and replaced them with batch normalization layers.
- I removed the activation from the conv2d layers, run the output through the batch normalization layers, and then apply the ReLU.
Before I made these changes, the model with batch normalization didn't seem to be training at all; the accuracy just bounced up and down around the 0.10 baseline. After these changes it seems to be training properly.
Labels: data_science, machine_learning, tensor_flow