Deep Learning Model Interpretation & Style Transfer
PyTorch · NumPy · Captum
Convolutional neural networks (CNNs) are powerful models for image classification tasks, but their inner workings are often opaque. This project explores two key areas that make these models more interpretable and expressive: interpretability, which reveals what parts of an image a model uses to make decisions, and style transfer, which creatively recombines the content of one image with the artistic style of another using gradient-based optimization. Built around the lightweight SqueezeNet architecture, this work demonstrates how gradients can be used both to interpret a model's reasoning and to generate visually compelling transformations.
Model Interpretability
To better understand how neural networks classify images, I implemented several visualization methods after training the classifier:
Saliency Maps (manual and Captum-based): Highlight pixel-level influences on predictions.
Grad-CAM and Guided Backpropagation: Reveal important image regions contributing to a classification, offering more intuitive, region-focused insights.
Class Visualizations: Synthesize images that maximize the model’s score for specific classes (e.g., “Gorilla” and “Yorkshire Terrier”), using L2 regularization and periodic blurring to enhance clarity.
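The core of the saliency-map method is a single backward pass: the gradient of the target class score with respect to the input pixels shows which pixels most affect the prediction. The sketch below illustrates this with a tiny stand-in CNN (the project itself used pretrained SqueezeNet, and Captum's `Saliency` attribution produces the equivalent result):

```python
import torch
import torch.nn as nn

# Tiny stand-in classifier; the project used pretrained SqueezeNet instead.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

def saliency_map(model, image, target_class):
    """Gradient of the target class score w.r.t. the input pixels."""
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]   # unnormalized class score
    score.backward()
    # Collapse the colour channels by taking the max absolute gradient.
    return image.grad.abs().max(dim=1).values

img = torch.rand(1, 3, 32, 32)  # placeholder for a preprocessed photo
sal = saliency_map(model, img, target_class=3)
print(sal.shape)  # torch.Size([1, 32, 32])
```

With Captum, the same map comes from `Saliency(model).attribute(img, target=3)` followed by the same channel-wise max.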

Neural Style Transfer
The second half of the project focused on implementing a neural style transfer pipeline based on the well-known approach introduced by Gatys et al., which uses CNN feature maps to blend the content of one image with the artistic style of another:
Content and Style Losses were computed from intermediate CNN layers.
Total Variation Loss was added to promote image smoothness.
Stylized outputs combined content from real-world images with artistic styles from paintings like Starry Night and The Scream, along with a custom creative piece.
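The three losses above can be written compactly in PyTorch. This is a minimal sketch following the Gatys et al. formulation: content loss is a squared error between feature maps, style loss compares Gram matrices of channel activations, and total variation loss penalizes differences between adjacent pixels (function names and weights here are illustrative, not the project's exact API):

```python
import torch

def content_loss(feat, target_feat, weight=1.0):
    # Squared error between generated- and content-image feature maps.
    return weight * ((feat - target_feat) ** 2).sum()

def gram_matrix(feat, normalize=True):
    # feat: (N, C, H, W) -> (N, C, C) matrix of channel correlations.
    n, c, h, w = feat.shape
    f = feat.view(n, c, h * w)
    gram = torch.bmm(f, f.transpose(1, 2))
    if normalize:
        gram = gram / (c * h * w)
    return gram

def style_loss(feat, target_gram, weight=1.0):
    # Match the Gram matrix of the generated image to the style image's.
    return weight * ((gram_matrix(feat) - target_gram) ** 2).sum()

def tv_loss(img, weight=1.0):
    # Penalize neighbouring-pixel differences to smooth the output.
    dh = ((img[:, :, 1:, :] - img[:, :, :-1, :]) ** 2).sum()
    dw = ((img[:, :, :, 1:] - img[:, :, :, :-1]) ** 2).sum()
    return weight * (dh + dw)
```

In the full pipeline these terms are summed over the chosen CNN layers into one scalar loss, and an optimizer (e.g. Adam) updates the generated image's pixels directly via backpropagation.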
Together, these components demonstrate both model introspection and creative applications of deep learning. The full implementation relied on the PyTorch and Captum libraries, with attention to vectorization and efficient gradient handling.
Published April 2025