Several approaches for understanding and visualizing Convolutional Networks have been developed in the literature, partly in response to the criticism that the features learned by a Neural Network are not interpretable. I have explored some of these approaches in the Jupyter Notebook. These are:
- Visualizing Filters/Weights
- Visualizing inputs optimised by gradient ascent to maximally activate a chosen filter (activation maximisation)
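Both techniques above can be sketched briefly in PyTorch. This is a minimal, self-contained illustration, not the notebook's actual code: the small `nn.Sequential` model, the layer/filter indices, and the `maximize_activation` helper are all hypothetical stand-ins. The first part rescales the first-layer convolution kernels to `[0, 1]` so they can be displayed as small images; the second performs gradient ascent on a random input to maximise the mean activation of one filter.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for a trained CNN (untrained here, for illustration).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=5),
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3),
    nn.ReLU(),
)

# 1) Visualizing filters/weights: rescale each first-layer kernel to [0, 1]
# so it can be shown as a small RGB image (e.g. with matplotlib's imshow).
filters = model[0].weight.detach().clone()           # shape (8, 3, 5, 5)
f_min, f_max = filters.min(), filters.max()
filter_images = (filters - f_min) / (f_max - f_min)

# 2) Activation maximisation: gradient ascent on a random input image so it
# maximises the mean activation of one filter in a chosen layer.
def maximize_activation(model, layer_idx=2, filter_idx=0, steps=30, lr=0.1):
    x = torch.randn(1, 3, 32, 32, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        act = x
        for i, layer in enumerate(model):
            act = layer(act)
            if i == layer_idx:
                break
        # Ascend on the activation by minimising its negative mean.
        loss = -act[0, filter_idx].mean()
        loss.backward()
        opt.step()
    return x.detach()

optimized_input = maximize_activation(model)
```

In practice the optimised input is usually regularised (e.g. blurring or jitter between steps) to produce cleaner visualisations; the loop above omits that for brevity.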