Francis Burnet – AI Engineering Portfolio

Capstone portfolio spanning AI engineering, applied data science, machine learning, and deep learning.

Francis Burnet headshot

Capstone 10 Evidence Map

Capstone 10 evidence image
Capstone Summary

This capstone, part of the 2026 Microsoft AI Engineering Program, builds a face-mask detection system. The project applies transfer learning, comparing the performance of two deep learning architectures, EfficientNetB0 and ResNet50, implemented with TensorFlow and Keras. A significant portion of the documentation covers data preparation, including the automated generation of training and testing directories from a raw image dataset. It also explicitly tracks discrepancies between the original project requirements and the actual dataset, such as sizing the output layer to match the available image classes. The final outputs consist of performance metrics, training histories, and visual prediction examples used to identify the most effective model. Comprehensive project artifacts, including JSON summaries and executed notebooks, validate the engineering workflow.

Capstone 10 Scope

Capstone 10 converts the copied face-mask transfer-learning assignment into an executed TensorFlow workflow with generated train/test folders, training histories, model comparison, and saved prediction examples.

Primary staged source folders: data/with_mask, data/without_mask, and data/mask_worn_incorrect (labelled mask_weared_incorrect in the original source archive).

The notebook resolves the source folder name via alias lookup and uses all 3 classes for generated train/test splits and model output.
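The alias lookup described above can be sketched in a few lines (a hypothetical helper name `resolve` is used here for illustration; the notebook itself builds a `CLASS_NAME_LOOKUP` dict inline in cell 17):

```python
# Minimal sketch of the alias lookup; `resolve` is an illustrative name,
# not the notebook's identifier.
PDF_EXPECTED_CLASSES = ['with_mask', 'without_mask', 'mask_worn_incorrect']
PDF_CLASS_ALIASES = {
    'mask_worn_incorrect': {'mask_worn_incorrect', 'mask_weared_incorrect'},
}

def resolve(expected_name, available_folders):
    # A PDF-expected class matches either its own name or a known alias.
    alias_set = PDF_CLASS_ALIASES.get(expected_name, {expected_name})
    for candidate in available_folders:
        if candidate in alias_set:
            return candidate
    return None

available = ['mask_weared_incorrect', 'with_mask', 'without_mask']
print([resolve(name, available) for name in PDF_EXPECTED_CLASSES])
# ['with_mask', 'without_mask', 'mask_weared_incorrect']
```

This is why the run proceeds with all three classes even when the archive spells the third folder `mask_weared_incorrect`.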

Original Project PDF

The copied project directions are embedded here for direct comparison against the notebook and output artifacts.

Requirement Checklist

1a

- [x] Build a transfer learning model to detect face masks on humans.

Source mapping: Requirements file

1b

- [x] Load image training and test datasets for Task A at image size `128 x 128 x 3`.

Source mapping: Requirements file

1c

- [x] Load Task A training dataset using Keras `ImageDataGenerator` with `validation_split=0.2`.

Source mapping: Requirements file

1d

- [x] Load Task A test dataset using Keras `ImageDataGenerator`.

Source mapping: Requirements file

1e

- [x] Build Task A transfer network using `EfficientNetB0` as initial layers.

Source mapping: Requirements file

1f

- [x] Add `GlobalAveragePooling2D` in Task A model.

Source mapping: Requirements file

1g

- [x] Add `Dropout(0.2)` in Task A model.

Source mapping: Requirements file

1h

- [x] Add final `SoftMax` dense output layer for class prediction in Task A.

Source mapping: Requirements file

1i

- [x] Compile Task A model with Adam, `categorical_crossentropy`, and `accuracy`.

Source mapping: Requirements file

1j

- [x] Train Task A model for up to 25 epochs with ReduceLROnPlateau and EarlyStopping on validation loss.

Source mapping: Requirements file

1k

- [x] Plot Task A training and validation accuracy/loss vs epochs.

Source mapping: Requirements file

1l

- [x] Load image training and test datasets for Task B at image size `128 x 128 x 3`.

Source mapping: Requirements file

1m

- [x] Load Task B training dataset using Keras `ImageDataGenerator` with `validation_split=0.2`.

Source mapping: Requirements file

1n

- [x] Load Task B test dataset using Keras `ImageDataGenerator`.

Source mapping: Requirements file

1o

- [x] Build Task B transfer network using `ResNet50` as initial layers.

Source mapping: Requirements file

1p

- [x] Add `GlobalAveragePooling2D` in Task B model.

Source mapping: Requirements file

1q

- [x] Add `Dropout(0.5)` in Task B model.

Source mapping: Requirements file

1r

- [x] Add final `SoftMax` dense output layer for class prediction in Task B.

Source mapping: Requirements file

1s

- [x] Compile Task B model with Adam, `categorical_crossentropy`, and `accuracy`.

Source mapping: Requirements file

1t

- [x] Train Task B model for up to 25 epochs with ReduceLROnPlateau and EarlyStopping on validation loss.

Source mapping: Requirements file

1u

- [x] Plot Task B training and validation accuracy/loss vs epochs.

Source mapping: Requirements file

1v

- [x] Predict on test set with best model and plot 10 test images with true vs predicted labels.

Source mapping: Requirements file

1w

- [x] Compare `EfficientNetB0` and `ResNet50` performance and identify best model.

Source mapping: Requirements file

1x

- [x] Provide best-model prediction evidence for test images in final outputs.

Source mapping: Requirements file

2a

- [x] Keep PDF task order and deliverables as source of truth for requirement coverage.

Source mapping: Requirements file

2b

- [x] Use approved GitHub-backed dataset artifacts as source of truth for notebook input files.

Source mapping: Requirements file

2c

- [x] Record any mismatch between PDF-expected class list and available staged source classes in notebook outputs.

Source mapping: Requirements file

2d

- [x] Generate runtime `train/` and `test/` folders when staged source is class-folder layout instead of pre-made split folders.

Source mapping: Requirements file

2e

- [x] Keep TensorFlow Playground and Teachable Machine website add-on as optional presentation layer that does not replace graded notebook evidence.

Source mapping: Requirements file

3a

- [x] PDF expected class values recorded: `with_mask`, `without_mask`, `mask_worn_incorrect`.

Source mapping: Requirements file

3b

- [x] GitHub-backed staged source classes recorded from current archive/extracted data.

Source mapping: Requirements file

3c

- [x] Missing PDF-expected classes from source are reported in notebook summary metadata when absent.

Source mapping: Requirements file

Requirement Walkthrough

Each walkthrough block maps the copied PDF requirements to the executed notebook cells, exported outputs, and reviewable evidence staged with this capstone.

10a

Generate The Missing Train And Test Folders From The Staged Source Images

Notebook section: Generated split cells

Requirement: Prepare train and test folder structure for transfer-learning experiments.

The notebook uses all 3 GitHub-backed class folders (with_mask, without_mask, mask_worn_incorrect) to create a fixed generated train/test split and records the counts in the split manifest.

Results Capture
  • Source image counts: {"with_mask":3725,"without_mask":3828,"mask_worn_incorrect":1815}.
  • Generated split counts: {"with_mask":{"train":120,"test":30},"without_mask":{"train":120,"test":30},"mask_worn_incorrect":{"train":120,"test":30}}.
for class_name in CLASS_NAMES:  # ['with_mask', 'without_mask', 'mask_worn_incorrect']
    source_files = sorted([path for path in (SOURCE_DATA_DIR / class_name).iterdir() if path.is_file()])
    random.shuffle(source_files)
    train_files = source_files[:TRAIN_IMAGES_PER_CLASS]
    test_files = source_files[TRAIN_IMAGES_PER_CLASS:TRAIN_IMAGES_PER_CLASS + TEST_IMAGES_PER_CLASS]
    for file_path in train_files:
        shutil.copy2(file_path, GENERATED_SPLIT_DIR / 'train' / class_name / file_path.name)
10b

Train EfficientNetB0 And ResNet50 Transfer Models

Notebook section: Model build and fit cells

Requirement: Build the two transfer-learning models, add the pooling/dropout head, and compare their test performance.

The notebook freezes the pretrained bases, uses the copied 128x128 image size, trains both models with ReduceLROnPlateau and EarlyStopping, and exports both training histories.

Results Capture
  • Best current model by test accuracy: ResNet50.
  • The saved outputs preserve both EfficientNetB0 and ResNet50 histories for direct comparison.
base_model = tf.keras.applications.EfficientNetB0(include_top=False, weights='imagenet', input_shape=(128, 128, 3))
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Lambda(preprocess),
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(dropout_rate),
    tf.keras.layers.Dense(NUM_CLASSES, activation='softmax'),  # NUM_CLASSES=3
])
Associated Artifact

EfficientNetB0 History

Saved training history for the EfficientNetB0 run.

EfficientNetB0 History
Associated Artifact

ResNet50 History

Saved training history for the ResNet50 run.

ResNet50 History
Associated Artifact

Model Comparison

Saved test-accuracy comparison across the two transfer models.

Model Comparison
10c

Export Prediction Examples And Record The PDF Mismatch Notes

Notebook section: Summary and prediction-example cells

Requirement: Compare the models, select the best one, and surface prediction examples from the held-out test images.

The notebook exports a best-model prediction panel plus a JSON summary that explicitly preserves the PDF-versus-source class-name mismatch notes (the `mask_weared_incorrect` folder alias) and the generated-split note.

Results Capture
  • Best-model prediction examples are saved as a PNG artifact.
  • The mismatch notes are preserved in session_10_summary.json instead of being hidden or guessed away.
summary = {'source_governance_notes': [...], 'model_results': evaluations, 'best_model': evaluations_df.iloc[0].to_dict()}
json.dump(summary, handle, indent=2)
Associated Artifact

Best-Model Predictions

Saved panel showing true and predicted labels for sample test images.

Best-Model Predictions

Interactive Neural Network Lab

This TensorFlow Playground embed is a concept sandbox for the classification behavior behind the face-mask project. It does not load the image dataset or run the CNN notebook; instead, it preloads small synthetic problems that make depth, activation choice, and regularization easier to see before you compare them to the real Session 10 training plots and predictions.

What This Is
  • The playground does not run the actual mask-detection model and does not use image pixels from the capstone dataset.
  • Each preset loads a different synthetic dataset and network configuration so you can compare simple versus harder classification behavior.
  • Use it as a visual analogy for why deeper networks and nonlinear activations matter before you return to the real CNN outputs on the page.
How To Use It
  1. Read the preset card closest to the behavior you want to compare, then click its Load button just above the playground.
  2. Press the top-left Play button inside the playground to start training that preset.
  3. Watch the loss and the colored decision regions change as the model learns the class boundary.
  4. Switch presets to compare simple separation, deeper capacity, nonlinear activations, and regularization under noisy conditions.
What To Look For
  • Decision Boundary Basics: expect a clean nonlinear split that introduces the basic binary-classification idea.
  • Hidden Layers On Spiral Data: expect a more difficult pattern that shows why extra model capacity helps.
  • ReLU On XOR: expect a clear example of why nonlinear activations matter for learnable separation.
  • Regularization Under Noise: expect a smoother boundary that supports the overfitting discussion behind image-model generalization.
Preset 1

Decision Boundary Basics

Uses the same circle dataset and compact tanh network, but here the value is intuition for binary image classification: the model starts with a simple class split and gradually learns a curved separation, similar to how a mask-vs-no-mask classifier must learn a nonlinear boundary instead of relying on one obvious pixel rule.

Preset 2

Hidden Layers On Spiral Data

Uses a deeper `8,6,4` tanh network on the spiral dataset to show why extra representational capacity helps on harder visual patterns. This is the closest preset to the feature-hierarchy idea behind the CNN models in the face-mask project, where deeper layers help separate more complex structures.

Preset 3

ReLU On XOR

Uses a ReLU network on XOR to show how nonlinear activations unlock separations that a shallow linear rule cannot achieve. That maps well to the face-mask project because image classifiers also depend on stacked nonlinear transformations rather than a single linear threshold over raw inputs.
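The XOR point can be made concrete with a tiny hand-built ReLU network (a standalone sketch with hand-chosen weights, unrelated to the playground internals or the capstone models): two ReLU units are enough to compute XOR, a function no single linear threshold on the raw inputs can represent.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def xor_relu(a, b):
    # Hand-chosen weights: h1 counts active inputs, h2 fires only
    # when both are on; their weighted difference is exactly XOR.
    h1 = relu(a + b)        # 0, 1, 1, 2 over the four input pairs
    h2 = relu(a + b - 1.0)  # 0, 0, 0, 1 over the four input pairs
    return h1 - 2.0 * h2

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_relu(a, b))
# 0 0 0.0 / 0 1 1.0 / 1 0 1.0 / 1 1 0.0
```

The same idea scales up in the capstone CNNs, where stacked ReLU layers compose many such nonlinear separations.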

Preset 4

Regularization Under Noise

Adds Gaussian noise and regularization so you can watch the boundary stay smoother instead of chasing every noisy point. For the mask-detection project, this is the closest analogy to controlling overfitting when image backgrounds, lighting, and framing vary across examples.
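The smoother-boundary effect can be sketched numerically (a hypothetical illustration, not code from the capstone): an L2 penalty shrinks the coefficients of an over-flexible polynomial fitted to noisy samples, which is the same mechanism the playground's regularization control visualizes.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 20)
y = x**2 + rng.normal(0.0, 0.3, x.size)   # noisy quadratic samples

X = np.vander(x, 10)                       # degree-9 polynomial features
w_ols = np.linalg.lstsq(X, y, rcond=None)[0]

lam = 1.0                                  # L2 penalty strength
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# The penalized fit uses smaller coefficients, i.e. a smoother curve
# that chases the noise less.
print(np.linalg.norm(w_ols) > np.linalg.norm(w_ridge))  # True
```

Dropout in the capstone models plays an analogous role: it discourages the network from memorizing incidental detail in individual training images.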


Use these presets as short visual analogies for the Session 10 classifier. They explain why deeper networks, nonlinear activations, and smoother boundaries matter, but they do not replace the real capstone evidence, which still comes from the notebook, screenshots, training-history plots, model comparison, and prediction artifacts.

Colab Notebook

This section provides the notebook preview, launch link, and project file links.

The notebook opens in Google Colab when a launch URL is configured, and the project files and outputs remain available here on the site.

Capstone 10 Notebook Workspace
Launch Colab
Embedded Notebook Preview
Cell 1 Markdown

Capstone Session 10

This notebook is generated from the copied Capstone_Session_10.pdf directions and the staged extracted image folders under data/.

Cell 2 Markdown

Objective

Build and compare transfer-learning classifiers for Session 10 (Task A: EfficientNetB0, Task B: ResNet50, Task C: best-model comparison and prediction evidence) following the copied PDF requirements.

PDF class values for this task are: with_mask, without_mask, and mask_worn_incorrect.

Cell 3 Markdown

Source Governance Note

The PDF directions remain the source of truth for capstone objectives, model tasks, and deliverables. Dataset inputs follow the project workflow rule from Project_DEV_Rules_PROMPT.md: approved GitHub-backed dataset files are the source of truth for notebook inputs.

When the staged dataset does not include every PDF-expected class folder (for example mask_weared_incorrect), the notebook records the gap explicitly and executes with the classes that are actually present in the GitHub-backed source.

Cell 4 Markdown

Runtime Scope Note

The copied PDF expects pre-made train/ and test/ folders. The staged GitHub-backed source for this capstone is an archive/extracted class-folder layout under data/, so this notebook generates runtime train/ and test/ folders before modeling.

To keep transfer-learning runs executable on the current CPU-only Windows environment, a fixed stratified subset is used for generated splits and all outputs are saved under outputs/.

Cell 5 Code · python
print('Kernel probe: cell 5 started')
probe_value = 2 + 2
print('Kernel probe: cell 5 completed, value =', probe_value)
Output
Kernel probe: cell 5 started
Kernel probe: cell 5 completed, value = 4
Cell 6 Code · python
import json
print('Import OK: json')
Output
Import OK: json
Cell 7 Code · python
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
print('Import OK: os and TF log level set')
Output
Import OK: os and TF log level set
Cell 8 Code · python
import random
random.seed(42)
print('Import OK: random (seed=42)')
Output
Import OK: random (seed=42)
Cell 9 Code · python
import shutil
print('Import OK: shutil')
Output
Import OK: shutil
Cell 10 Code · python
import subprocess
print('Import OK: subprocess')
Output
Import OK: subprocess
Cell 11 Code · python
import sys
print('Import OK: sys', sys.version.split()[0])
Output
Import OK: sys 3.12.10
Cell 12 Code · python
import matplotlib.pyplot as plt
print('Import OK: matplotlib')
Output
Import OK: matplotlib
Cell 13 Code · python
import numpy as np
print('Import OK: numpy', np.__version__)
Output
Import OK: numpy 1.26.4
Cell 14 Code · python
import pandas as pd
print('Import OK: pandas', pd.__version__)
Output
Import OK: pandas 3.0.3
Cell 15 Code · python
import seaborn as sns
sns.set_theme(style='whitegrid')
print('Import OK: seaborn')
Output
Import OK: seaborn
Cell 16 Code · python
import tensorflow as tf
tf.keras.utils.set_random_seed(42)
print('Import OK: tensorflow', tf.__version__)
Output
Import OK: tensorflow 2.21.0
Cell 17 Code · python
from pathlib import Path

IS_COLAB = 'google.colab' in sys.modules
GITHUB_REPO_OWNER = 'FrancisBurnet'
GITHUB_REPO_NAME = 'francisburnet'
GITHUB_REPO_BRANCH = 'main'
GITHUB_REPO_URL = f'https://github.com/{GITHUB_REPO_OWNER}/{GITHUB_REPO_NAME}.git'
CAPSTONE_ROOT = Path('Incremental Capstones/Deep Learning Specialization/Capstone Session 10')
PDF_EXPECTED_CLASSES = ['with_mask', 'without_mask', 'mask_worn_incorrect']
PDF_CLASS_ALIASES = {
    'mask_worn_incorrect': {'mask_worn_incorrect', 'mask_weared_incorrect'},
}


def resolve_capstone_dir() -> Path | None:
    current = Path.cwd().resolve()
    capstone_parts = CAPSTONE_ROOT.parts
    for candidate in [current, *current.parents]:
        if len(candidate.parts) >= len(capstone_parts) and candidate.parts[-len(capstone_parts):] == capstone_parts:
            return candidate
        nested_candidate = candidate / CAPSTONE_ROOT
        if nested_candidate.exists():
            return nested_candidate
    return None


def ensure_colab_capstone_dir() -> Path:
    repo_root = Path('/content') / GITHUB_REPO_NAME
    capstone_dir = repo_root / CAPSTONE_ROOT
    data_dir = capstone_dir / 'data'

    if not repo_root.exists():
        subprocess.run(['git', 'clone', '--depth', '1', '--branch', GITHUB_REPO_BRANCH, GITHUB_REPO_URL, str(repo_root)], check=True)

    has_lfs = subprocess.run(['bash', '-lc', 'command -v git-lfs >/dev/null 2>&1'], check=False).returncode == 0
    if not has_lfs:
        subprocess.run(['apt-get', 'update', '-qq'], check=True)
        subprocess.run(['apt-get', 'install', '-y', 'git-lfs'], check=True)

    subprocess.run(['git', 'lfs', 'install', '--local'], cwd=repo_root, check=True)
    subprocess.run(['git', 'lfs', 'pull', '--include', f'{CAPSTONE_ROOT.as_posix()}/data/**'], cwd=repo_root, check=True)

    if not data_dir.exists():
        raise RuntimeError(f'Expected GitHub-backed data directory at {data_dir}, but it was not found after clone + LFS pull.')

    # Verify actual image content was pulled — LFS pointer stubs are <200 bytes
    image_files = [f for f in data_dir.rglob('*') if f.is_file() and f.suffix.lower() in {'.jpg', '.jpeg', '.png'}]
    if not image_files:
        raise RuntimeError(f'No image files found under {data_dir} — LFS pull may not have downloaded content.')
    tiny = [f for f in image_files[:10] if f.stat().st_size < 1000]
    if tiny:
        raise RuntimeError(
            f'LFS pointer file detected — actual image content was not downloaded.\n'
            f'File: {tiny[0]} ({tiny[0].stat().st_size} bytes)\n'
            f'Fix: run `git lfs pull` inside {repo_root}'
        )
    print(f'LFS verified: {len(image_files)} image files found under data/')

    return capstone_dir


if IS_COLAB:
    CAPSTONE_DIR = ensure_colab_capstone_dir()
else:
    CAPSTONE_DIR = resolve_capstone_dir()

if CAPSTONE_DIR is None:
    raise RuntimeError(f'Could not resolve capstone directory from {CAPSTONE_ROOT}')

OUTPUT_ROOT = CAPSTONE_DIR
OUTPUT_MODE = 'permanent capstone outputs'
OUTPUT_DISPLAY = (CAPSTONE_ROOT / 'outputs').as_posix()

SOURCE_DATA_DIR = CAPSTONE_DIR / 'data'
if not SOURCE_DATA_DIR.exists():
    raise RuntimeError(f'Expected staged dataset at {SOURCE_DATA_DIR}, but it does not exist.')

AVAILABLE_CLASS_FOLDERS = sorted([path.name for path in SOURCE_DATA_DIR.iterdir() if path.is_dir()])
if len(AVAILABLE_CLASS_FOLDERS) < 2:
    raise RuntimeError(f'Expected at least two class folders under {SOURCE_DATA_DIR}, found {AVAILABLE_CLASS_FOLDERS}.')

CLASS_NAME_LOOKUP = {}
for expected_name in PDF_EXPECTED_CLASSES:
    alias_set = PDF_CLASS_ALIASES.get(expected_name, {expected_name})
    for candidate_name in AVAILABLE_CLASS_FOLDERS:
        if candidate_name in alias_set:
            CLASS_NAME_LOOKUP[expected_name] = candidate_name
            break

ordered_classes = [CLASS_NAME_LOOKUP[name] for name in PDF_EXPECTED_CLASSES if name in CLASS_NAME_LOOKUP]
remaining_classes = [name for name in AVAILABLE_CLASS_FOLDERS if name not in ordered_classes]
CLASS_NAMES = ordered_classes + remaining_classes
NUM_CLASSES = len(CLASS_NAMES)
MISSING_PDF_CLASSES = [name for name in PDF_EXPECTED_CLASSES if name not in CLASS_NAME_LOOKUP]

OUTPUTS_DIR = (OUTPUT_ROOT / 'outputs').resolve()
PLOTS_DIR = OUTPUTS_DIR / 'plots'
GENERATED_SPLIT_DIR = OUTPUTS_DIR / 'generated_split'
OUTPUTS_DIR.mkdir(parents=True, exist_ok=True)
PLOTS_DIR.mkdir(parents=True, exist_ok=True)
IMAGE_SIZE = (128, 128)
BATCH_SIZE = 16
TRAIN_IMAGES_PER_CLASS = 120
TEST_IMAGES_PER_CLASS = 30

print('Runtime:', 'Google Colab' if IS_COLAB else 'Notebook runtime')
print('GitHub source repo:', f'https://github.com/{GITHUB_REPO_OWNER}/{GITHUB_REPO_NAME}/tree/{GITHUB_REPO_BRANCH}')
print('Capstone artifact path:', CAPSTONE_ROOT.as_posix())
print('Source data (GitHub-backed):', f'{CAPSTONE_ROOT.as_posix()}/data')
print('Output mode:', OUTPUT_MODE)
print('Output target:', OUTPUT_DISPLAY)
print('PDF expected classes:', PDF_EXPECTED_CLASSES)
print('Available source classes:', AVAILABLE_CLASS_FOLDERS)
print('Missing PDF classes from source:', MISSING_PDF_CLASSES if MISSING_PDF_CLASSES else 'None')
print('Classes used for this run:', CLASS_NAMES)
Output
Runtime: Notebook runtime
GitHub source repo: https://github.com/FrancisBurnet/francisburnet/tree/main
Capstone artifact path: Incremental Capstones/Deep Learning Specialization/Capstone Session 10
Source data (GitHub-backed): Incremental Capstones/Deep Learning Specialization/Capstone Session 10/data
Output mode: permanent capstone outputs
Output target: Incremental Capstones/Deep Learning Specialization/Capstone Session 10/outputs
PDF expected classes: ['with_mask', 'without_mask', 'mask_worn_incorrect']
Available source classes: ['mask_worn_incorrect', 'with_mask', 'without_mask']
Missing PDF classes from source: None
Classes used for this run: ['with_mask', 'without_mask', 'mask_worn_incorrect']
Cell 18 Code · python
# Clear any previous generated split so re-runs start clean
if GENERATED_SPLIT_DIR.exists():
    shutil.rmtree(GENERATED_SPLIT_DIR)

split_manifest = {'source_counts': {}, 'generated_counts': {}}
for split_name in ['train', 'test']:
    for class_name in CLASS_NAMES:
        (GENERATED_SPLIT_DIR / split_name / class_name).mkdir(parents=True, exist_ok=True)

for class_name in CLASS_NAMES:
    source_files = sorted([path for path in (SOURCE_DATA_DIR / class_name).iterdir() if path.is_file()])
    split_manifest['source_counts'][class_name] = len(source_files)
    random.shuffle(source_files)
    train_files = source_files[:TRAIN_IMAGES_PER_CLASS]
    test_files = source_files[TRAIN_IMAGES_PER_CLASS:TRAIN_IMAGES_PER_CLASS + TEST_IMAGES_PER_CLASS]
    split_manifest['generated_counts'][class_name] = {'train': len(train_files), 'test': len(test_files)}
    for file_path in train_files:
        shutil.copy2(file_path, GENERATED_SPLIT_DIR / 'train' / class_name / file_path.name)
    for file_path in test_files:
        shutil.copy2(file_path, GENERATED_SPLIT_DIR / 'test' / class_name / file_path.name)

with open(OUTPUTS_DIR / 'session_10_split_manifest.json', 'w', encoding='utf-8') as handle:
    json.dump(split_manifest, handle, indent=2)
split_manifest
Output
{'source_counts': {'with_mask': 3725,
  'without_mask': 3828,
  'mask_worn_incorrect': 1815},
 'generated_counts': {'with_mask': {'train': 120, 'test': 30},
  'without_mask': {'train': 120, 'test': 30},
  'mask_worn_incorrect': {'train': 120, 'test': 30}}}
Cell 19 Code · python
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255.0,
    validation_split=0.2,
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.08,
    zoom_range=0.15,
    horizontal_flip=True,
    brightness_range=(0.85, 1.15),
    fill_mode='nearest',
)
test_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1.0 / 255.0)

train_generator = train_datagen.flow_from_directory(
    GENERATED_SPLIT_DIR / 'train',
    target_size=IMAGE_SIZE,
    batch_size=BATCH_SIZE,
    class_mode='categorical',
    subset='training',
    shuffle=True,
)
validation_generator = train_datagen.flow_from_directory(
    GENERATED_SPLIT_DIR / 'train',
    target_size=IMAGE_SIZE,
    batch_size=BATCH_SIZE,
    class_mode='categorical',
    subset='validation',
    shuffle=False,
)
test_generator = test_datagen.flow_from_directory(
    GENERATED_SPLIT_DIR / 'test',
    target_size=IMAGE_SIZE,
    batch_size=BATCH_SIZE,
    class_mode='categorical',
    shuffle=False,
)
class_indices = train_generator.class_indices
class_labels = {index: label for label, index in class_indices.items()}
class_indices
Output
Found 288 images belonging to 3 classes.
Found 72 images belonging to 3 classes.
Found 90 images belonging to 3 classes.
{'mask_worn_incorrect': 0, 'with_mask': 1, 'without_mask': 2}
Cell 20 Code · python
callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=2, verbose=0),
    tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=4, restore_best_weights=True, verbose=0),
]
PRETRAIN_EPOCHS = 15
FINE_TUNE_LR = 1e-5

def build_transfer_model(base_model_name):
    if base_model_name == 'EfficientNetB0':
        preprocess = tf.keras.applications.efficientnet.preprocess_input
        base_model = tf.keras.applications.EfficientNetB0(include_top=False, weights='imagenet', input_shape=(128, 128, 3))
        dropout_rate = 0.2
        fine_tune_at = max(0, len(base_model.layers) - 40)
    elif base_model_name == 'ResNet50':
        preprocess = tf.keras.applications.resnet50.preprocess_input
        base_model = tf.keras.applications.ResNet50(include_top=False, weights='imagenet', input_shape=(128, 128, 3))
        dropout_rate = 0.5
        fine_tune_at = max(0, len(base_model.layers) - 30)
    else:
        raise ValueError(base_model_name)

    base_model.trainable = False
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(128, 128, 3)),
        tf.keras.layers.Lambda(preprocess),
        base_model,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(dropout_rate),
        tf.keras.layers.Dense(NUM_CLASSES, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    model._capstone_fine_tune_at = fine_tune_at
    return model

def fine_tune_model(model, base_model_name):
    base_model = next(layer for layer in model.layers if isinstance(layer, tf.keras.Model))
    fine_tune_at = getattr(model, '_capstone_fine_tune_at', max(0, len(base_model.layers) - 30))
    base_model.trainable = True
    for layer in base_model.layers[:fine_tune_at]:
        layer.trainable = False
    for layer in base_model.layers[fine_tune_at:]:
        if isinstance(layer, tf.keras.layers.BatchNormalization):
            layer.trainable = False
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=FINE_TUNE_LR),
        loss='categorical_crossentropy',
        metrics=['accuracy'],
    )
    return model
Cell 21 Code · python
histories = {}
evaluations = []
prediction_examples = {}
for model_name in ['EfficientNetB0', 'ResNet50']:
    tf.keras.backend.clear_session()
    train_generator.reset()
    validation_generator.reset()
    test_generator.reset()
    model = build_transfer_model(model_name)
    frozen_history = model.fit(
        train_generator,
        validation_data=validation_generator,
        epochs=PRETRAIN_EPOCHS,
        callbacks=callbacks,
        verbose=0,
    )
    model = fine_tune_model(model, model_name)
    fine_tune_history = model.fit(
        train_generator,
        validation_data=validation_generator,
        initial_epoch=len(frozen_history.history['loss']),
        epochs=25,
        callbacks=callbacks,
        verbose=0,
    )
    histories[model_name] = pd.concat([
        pd.DataFrame(frozen_history.history),
        pd.DataFrame(fine_tune_history.history),
    ], ignore_index=True)
    test_loss, test_accuracy = model.evaluate(test_generator, verbose=0)
    test_generator.reset()
    probabilities = model.predict(test_generator, verbose=0)
    predicted_labels = np.argmax(probabilities, axis=1)
    true_labels = test_generator.classes.copy()
    evaluations.append({
        'model': model_name,
        'epochs_ran': int(len(histories[model_name])),
        'test_loss': float(test_loss),
        'test_accuracy': float(test_accuracy),
    })
    prediction_examples[model_name] = {
        'filenames': test_generator.filenames[:10],
        'true_labels': [class_labels[index] for index in true_labels[:10]],
        'predicted_labels': [class_labels[index] for index in predicted_labels[:10]],
    }
evaluations_df = pd.DataFrame(evaluations).sort_values('test_accuracy', ascending=False).reset_index(drop=True)
evaluations_df
Output
WARNING:tensorflow:5 out of the last 13 calls to <function TensorFlowTrainer.make_predict_function.<locals>.one_step_on_data_distributed at 0x000001D9A94B6C00> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for  more details.
WARNING:tensorflow:5 out of the last 13 calls to <function TensorFlowTrainer.make_predict_function.<locals>.one_step_on_data_distributed at 0x000001D9E9CA28E0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for  more details.
            model  epochs_ran  test_loss  test_accuracy
0        ResNet50          24   0.766586       0.655556
1  EfficientNetB0          15   1.097273       0.333333
Cell 22 Code · python
for model_name, history_df in histories.items():
    history_df.to_csv(OUTPUTS_DIR / f'{model_name.lower()}_history.csv', index=False)
    fig, axes = plt.subplots(1, 2, figsize=(12, 4))
    axes[0].plot(history_df['accuracy'], label='train')
    axes[0].plot(history_df['val_accuracy'], label='validation')
    axes[0].set_title(f'{model_name} Accuracy by Epoch')
    axes[0].legend()
    axes[1].plot(history_df['loss'], label='train')
    axes[1].plot(history_df['val_loss'], label='validation')
    axes[1].set_title(f'{model_name} Loss by Epoch')
    axes[1].legend()
    fig.tight_layout()
    fig.savefig(PLOTS_DIR / f'{model_name.lower()}_training_history.png', dpi=150)
    plt.show()
    plt.close(fig)

fig, ax = plt.subplots(figsize=(10, 5))
evaluations_df.plot(x='model', y='test_accuracy', kind='bar', ax=ax, color=['#1f77b4', '#ff7f0e'])
ax.set_title('Session 10 Model Comparison')
ax.set_ylim(0, 1.0)
fig.tight_layout()
fig.savefig(PLOTS_DIR / 'model_comparison.png', dpi=150)
plt.show()
plt.close(fig)
Output
<Figure size 1200x400 with 2 Axes>
<Figure size 1200x400 with 2 Axes>
<Figure size 1000x500 with 1 Axes>
Cell 23 Code · python
best_model_name = evaluations_df.iloc[0]['model']
example_info = prediction_examples[best_model_name]
figure, axes = plt.subplots(2, 5, figsize=(16, 7))
for axis, filename, true_label, predicted_label in zip(axes.flatten(), example_info['filenames'], example_info['true_labels'], example_info['predicted_labels']):
    image_path = GENERATED_SPLIT_DIR / 'test' / filename
    image = tf.keras.utils.load_img(image_path, target_size=IMAGE_SIZE)
    axis.imshow(image)
    axis.set_title(f'T:{true_label}\nP:{predicted_label}', fontsize=10)
    axis.axis('off')
figure.tight_layout()
figure.savefig(PLOTS_DIR / 'best_model_prediction_examples.png', dpi=150)
plt.show()
plt.close(figure)
Output
<Figure size 1600x700 with 10 Axes>
Cell 24 Code · python
evaluations_df.to_csv(OUTPUTS_DIR / 'session_10_model_comparison.csv', index=False)
summary = {
    'source_counts': split_manifest['source_counts'],
    'generated_counts': split_manifest['generated_counts'],
    'class_indices': class_indices,
    'pdf_expected_classes': PDF_EXPECTED_CLASSES,
    'available_source_classes': AVAILABLE_CLASS_FOLDERS,
    'missing_pdf_classes_from_source': MISSING_PDF_CLASSES,
    'source_governance_notes': [
        'PDF directions are the source of truth for task sequence and deliverables (Task A, Task B, Task C).',
        'GitHub-backed dataset assets are the source of truth for notebook inputs per Project_DEV_Rules_PROMPT.md.',
        'When PDF-expected classes are missing from source files, execution uses available classes and records the gap explicitly.',
        'The generated train/test split is a fixed stratified sample to keep transfer-learning runs executable on the current CPU-only environment.',
    ],
    'model_results': evaluations,
    'best_model': evaluations_df.iloc[0].to_dict(),
    'best_model_example_predictions': prediction_examples[best_model_name],
}
with open(OUTPUTS_DIR / 'session_10_summary.json', 'w', encoding='utf-8') as handle:
    json.dump(summary, handle, indent=2)
summary
Output
{'source_counts': {'with_mask': 3725,
  'without_mask': 3828,
  'mask_worn_incorrect': 1815},
 'generated_counts': {'with_mask': {'train': 120, 'test': 30},
  'without_mask': {'train': 120, 'test': 30},
  'mask_worn_incorrect': {'train': 120, 'test': 30}},
 'class_indices': {'mask_worn_incorrect': 0,
  'with_mask': 1,
  'without_mask': 2},
 'pdf_expected_classes': ['with_mask', 'without_mask', 'mask_worn_incorrect'],
 'available_source_classes': ['mask_worn_incorrect',
  'with_mask',
  'without_mask'],
 'missing_pdf_classes_from_source': [],
 'source_governance_notes': ['PDF directions are the source of truth for task sequence and deliverables (Task A, Task B, Task C).',
  'GitHub-backed dataset assets are the source of truth for notebook inputs per Project_DEV_Rules_PROMPT.md.',
  'When PDF-expected classes are missing from source files, execution uses available classes and records the gap explicitly.',
  'The generated train/test split is a fixed stratified sample to keep transfer-learning runs executable on the current CPU-only environment.'],
 'model_results': [{'model': 'EfficientNetB0',
   'epochs_ran': 15,
   'test_loss': 1.0972731113433838,
   'test_accuracy': 0.3333333432674408},
  {'model': 'ResNet50',
   'epochs_ran': 24,
   'test_loss': 0.7665861248970032,
   'test_accuracy': 0.6555555462837219}],
 'best_model': {'model': 'ResNet50',
  'epochs_ran': 24,
  'test_loss': 0.7665861248970032,
  'test_accuracy': 0.6555555462837219},
 'best_model_example_predictions': {'filenames': ['mask_worn_incorrect\\1010.png',
   'mask_worn_incorrect\\1014.png',
   'mask_worn_incorrect\\1059.png',
   'mask_worn_incorrect\\1078.png',
   'mask_worn_incorrect\\1146.png',
   'mask_worn_incorrect\\1220.png',
   'mask_worn_incorrect\\1237.png',
   'mask_worn_incorrect\\143.png',
   'mask_worn_incorrect\\1610.png',
   'mask_worn_incorrect\\1631.png'],
  'true_labels': ['mask_worn_incorrect',
   'mask_worn_incorrect',
   'mask_worn_incorrect',
   'mask_worn_incorrect',
   'mask_worn_incorrect',
   'mask_worn_incorrect',
   'mask_worn_incorrect',
   'mask_worn_incorrect',
   'mask_worn_incorrect',
   'mask_worn_incorrect'],
  'predicted_labels': ['mask_worn_incorrect',
   'mask_worn_incorrect',
   'mask_worn_incorrect',
   'mask_worn_incorrect',
   'mask_worn_incorrect',
   'mask_worn_incorrect',
   'mask_worn_incorrect',
   'mask_worn_incorrect',
   'mask_worn_incorrect',
   'mask_worn_incorrect']}}
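The summary JSON above can be re-validated after a run by checking that the recorded best model really is the top-scoring entry. A minimal sketch, assuming only the key names shown in the output (`model_results`, `best_model`, `test_accuracy`); the inline dict reproduces the relevant slice of `session_10_summary.json`:

```python
import json

# Relevant slice of session_10_summary.json, reconstructed inline;
# in practice this would be json.loads(Path(...).read_text()).
raw = json.dumps({
    'model_results': [
        {'model': 'EfficientNetB0', 'test_accuracy': 0.3333333432674408},
        {'model': 'ResNet50', 'test_accuracy': 0.6555555462837219},
    ],
    'best_model': {'model': 'ResNet50', 'test_accuracy': 0.6555555462837219},
})
summary = json.loads(raw)

# The recorded best model should match the highest-accuracy result.
top = max(summary['model_results'], key=lambda r: r['test_accuracy'])
assert top['model'] == summary['best_model']['model']
print(top['model'])  # ResNet50
```

A check like this is cheap insurance that the summary artifact and the comparison CSV cannot silently disagree.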
Project Notes
  • Generated train/test split from the staged source folders.
  • EfficientNetB0 and ResNet50 transfer-learning runs.
  • Saved training histories and model-comparison outputs.
  • Best-model prediction examples and explicit PDF mismatch notes.
Launch Controls

Notebook Launch

Open the matching notebook in Google Colab or review the tracked notebook source in GitHub.

Project File Links
  • Notebook File: Open Notebook File
    Executed Session 10 transfer-learning notebook.
  • Source ZIP: Open Source ZIP
    Original staged ZIP archive for the face-mask images.
  • Split Manifest JSON: Open Split Manifest JSON
    Generated train/test split manifest based on the staged source images.
  • Summary JSON: Open Summary JSON
    Structured summary of mismatch notes, model results, and best-model evidence.

Outputs And Results

Key Outputs
  • Executed notebook artifact saved as capstone_session_10.ipynb.
  • Generated train/test image folders are staged under outputs/generated_split/.
  • Training histories, model comparison, and best-model prediction examples are saved as reviewable artifacts.
Key Findings
  • The current best transfer-learning result is ResNet50 with accuracy 0.6556.
  • The page surfaces the train/test-folder and class-count mismatches between the copied directions and the staged dataset explicitly, rather than silently papering over the differences.
  • TensorFlow Playground remains optional concept support; the notebook and saved outputs are the actual evidence layer.
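The EfficientNetB0 score is worth reading against the chance baseline: with three balanced classes of 30 test images each, a model that collapses to a single predicted class scores exactly 1/3, which is what 0.3333 corresponds to. A minimal check, assuming the per-class test counts from the split manifest:

```python
# Per-class test counts from the generated split manifest.
test_counts = {'with_mask': 30, 'without_mask': 30, 'mask_worn_incorrect': 30}
total = sum(test_counts.values())

# Accuracy of a degenerate model that always predicts one class:
# it is correct only on that class's share of the test set.
chance_baseline = max(test_counts.values()) / total
print(round(chance_baseline, 4))  # 0.3333
```

ResNet50's 0.6556 clears this baseline; EfficientNetB0's 0.3333 does not, which suggests its run degenerated to a single-class predictor rather than merely underperforming.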

Live Mask Detection Demo

This section keeps both browser demos visible at once so visitors can compare the hosted Teachable Machine version against the exported Session 10 capstone model using the same sample images, uploads, and optional webcam flow.

Teachable Machine

Hosted browser model for the lightweight comparison path.

Comparison Card

Teachable Machine Model

This version stays online as the fast hosted benchmark. It remains useful because its sample gallery, uploads, and webcam flow are already tuned for low-friction public interaction.

Capstone Export

Actual Session 10 ResNet50 weights exported for in-browser comparison.

Our Model

ResNet50 Browser Export

This panel uses the actual Session 10 ResNet50 transfer-learning export. The current browser-ready checkpoint tested at 0.5833 accuracy, so visitors can compare the hosted model against the capstone weights directly on the same page.

Privacy-Friendly

No Camera Required

The primary flow is sample-driven and upload-first, so both demos still work for visitors who do not want to use their webcam or personal photos.

  • Built-in sample thumbnails trigger predictions immediately.
  • Uploading one image is optional, not required.
  • Webcam remains available as a secondary action only.
Capstone Fit

Same Page, Same Story

The paired demos sit directly beside the capstone evidence so users can test the hosted Teachable Machine model and the exported capstone model without leaving the Session 10 story.