<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Transfer Learning |</title><link>https://annelizekrause.com/tags/transfer-learning/</link><atom:link href="https://annelizekrause.com/tags/transfer-learning/index.xml" rel="self" type="application/rss+xml"/><description>Transfer Learning</description><generator>HugoBlox Kit (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Tue, 03 Feb 2026 00:00:00 +0000</lastBuildDate><image><url>https://annelizekrause.com/media/sharing.png</url><title>Transfer Learning</title><link>https://annelizekrause.com/tags/transfer-learning/</link></image><item><title>CIFAR-10 Image Classification</title><link>https://annelizekrause.com/projects/cifar-10-image-classification/</link><pubDate>Tue, 03 Feb 2026 00:00:00 +0000</pubDate><guid>https://annelizekrause.com/projects/cifar-10-image-classification/</guid><description>&lt;p&gt;A multi-class image classification pipeline that uses &lt;strong&gt;transfer learning with ResNet50&lt;/strong&gt; to classify 32×32 RGB images from the CIFAR-10 dataset
into ten everyday categories: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. The workflow covers pixel normalisation, a custom classification head on top of a pre-trained ResNet50 backbone, and a two-phase training approach: a frozen-base phase to map ImageNet features onto CIFAR&amp;rsquo;s classes, followed by fine-tuning with a small learning rate to adapt the base model itself. Final test accuracy reached &lt;strong&gt;40.2%&lt;/strong&gt;, roughly four times the 10% random-guess baseline on this ten-class problem.&lt;/p&gt;
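&lt;p&gt;The two-phase approach above can be sketched in Keras roughly as follows. This is a minimal illustration, not the project&amp;rsquo;s exact code: the head size (256 units) and the learning rates (1e-3, then 1e-5) are assumptions chosen to show the pattern of a frozen-base phase followed by low-learning-rate fine-tuning.&lt;/p&gt;

```python
# Minimal sketch of two-phase transfer learning with a ResNet50 backbone.
# Layer sizes and learning rates are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

# Pre-trained backbone; include_top=False drops the ImageNet classifier,
# pooling="avg" yields a flat feature vector per image.
base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet",
    input_shape=(32, 32, 3), pooling="avg")

# Phase 1: freeze the backbone and train only the new classification head,
# mapping fixed ImageNet features onto the ten CIFAR-10 classes.
base.trainable = False
model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dense(10, activation="softmax"),  # ten CIFAR-10 classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, ...) would run the frozen-base phase here.

# Phase 2: unfreeze the backbone and fine-tune end-to-end with a small
# learning rate so the pre-trained weights adapt without being destroyed.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, ...) would run the fine-tuning phase here.
```

&lt;p&gt;Recompiling after toggling &lt;code&gt;base.trainable&lt;/code&gt; matters: Keras only picks up the change in trainable weights at compile time.&lt;/p&gt;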
&lt;p&gt;The project also includes an honest analysis of what the model handled well and where it struggled. Classes with distinctive shapes, colour signatures, or consistent backgrounds (ship, automobile, frog) were classified reliably, while visually similar pairs (cat-dog, automobile-truck, airplane-ship) confused the model at 32×32 resolution. The clear next steps are data augmentation, training on the full 50,000-image set, dropout regularisation, and lighter architectures such as EfficientNet-B0 or MobileNetV2.&lt;/p&gt;
&lt;p&gt;Built locally in VS Code as part of Masterschool&amp;rsquo;s deep learning module.&lt;/p&gt;</description></item></channel></rss>