11.5 Tool: tf-explain
tf-explain implements interpretability methods as TensorFlow 2.x callbacks to ease the understanding of neural networks. See the blog post Introducing tf-explain, Interpretability for TensorFlow 2.0.
The code is available on GitHub.
- Implements interpretability methods as TensorFlow 2.x callbacks
- Several methods to look into different aspects of the network:
- Activations Visualization
- Vanilla Gradients: visualize gradients on the inputs towards the decision.
- Gradients*Inputs: variant of Vanilla Gradients, weighting the gradients with the input values.
- Occlusion Sensitivity: visualize how parts of the image affect the neural network's confidence by occluding parts iteratively.
- Grad CAM (Class Activation Maps): visualize how parts of the image affect the neural network's output by looking into the activation maps.
- SmoothGrad: visualize stabilized gradients on the inputs towards the decision.
- Integrated Gradients: visualize an average of the gradients along the construction of the input towards the decision.
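To make one of these methods concrete, the core idea of Occlusion Sensitivity can be sketched independently of the library: slide an occluding patch over the image and record how much the model's confidence for the target class drops in each region. This is a minimal NumPy sketch of the technique, not tf-explain's actual implementation; the toy model and all names are hypothetical.

```python
import numpy as np

def occlusion_sensitivity(model_fn, image, class_index, patch_size=4,
                          occlusion_value=0.0):
    """Occlude the image patch by patch and record the confidence drop
    for `class_index`; large drops mark regions the model relies on."""
    h, w = image.shape[:2]
    baseline = model_fn(image)[class_index]
    sensitivity = np.zeros((h // patch_size, w // patch_size))
    for i, top in enumerate(range(0, h - patch_size + 1, patch_size)):
        for j, left in enumerate(range(0, w - patch_size + 1, patch_size)):
            occluded = image.copy()
            occluded[top:top + patch_size, left:left + patch_size] = occlusion_value
            # Confidence drop = importance of the occluded region
            sensitivity[i, j] = baseline - model_fn(occluded)[class_index]
    return sensitivity

# Hypothetical toy "model": class-0 confidence is the mean of the
# top-left quadrant, class-1 the mean of the bottom-right quadrant.
def toy_model(img):
    return np.array([img[:4, :4].mean(), img[4:, 4:].mean()])

img = np.zeros((8, 8))
img[:4, :4] = 1.0  # the "evidence" for class 0 sits in the top-left
heatmap = occlusion_sensitivity(toy_model, img, class_index=0, patch_size=4)
print(heatmap)  # only the top-left cell shows a confidence drop
```

In tf-explain itself, the analogous computation is wrapped as a Keras callback (or a core explainer object), so the heatmaps are produced and logged during training rather than computed by hand as above.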