VP – Cell Segmentation & Tracking Showcase

Individual project by Varad Krishna Panchal
CSE488/588 — Cell Segmentation & Tracking Challenge

Project Overview

This project implements a complete segmentation pipeline on the Cell Tracking Challenge dataset using both classical machine-learning models and a deep-learning U-Net.

Dataset

Goals


Workflow Summary

1. Data Preparation

```bash
python scripts/setup_data.py Fluo-N2DH-GOWT1 --splits training test
```

The script places the data in the following layout:

```
artifacts/datasets/Fluo-N2DH-GOWT1/
  training/
    01, 01_ST, 01_GT, ...
    02, 02_ST, 02_GT, ...
  test/
```
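After setup, the expected folders can be sanity-checked before training. This is a hypothetical helper, not part of the repo's scripts:

```python
from pathlib import Path

def check_layout(root="artifacts/datasets/Fluo-N2DH-GOWT1"):
    """Report which expected track folders exist after setup_data.py runs."""
    root = Path(root)
    expected = [f"training/{t}{suf}" for t in ("01", "02")
                for suf in ("", "_ST", "_GT")] + ["test"]
    return {rel: (root / rel).is_dir() for rel in expected}

for rel, ok in check_layout().items():
    print(f"{'ok     ' if ok else 'MISSING'} {rel}")
```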

2. Classical Segmentation Pipeline

3. Deep Model (U-Net)


Segmentation Results (Track 01, ST Masks)

All results below are reported on Fluo-N2DH-GOWT1, Track 01, ST masks, using Mean IoU / Jaccard.

| Model | Mean IoU |
| --- | --- |
| Naive Bayes | 0.763 |
| Logistic Regression | 0.829 |
| SVM (RBF Kernel) | 0.817 |
| SVM (Linear) | 0.831 |
| MLP (Shallow NN) | 0.866 |
| U-Net (Deep Model) | 0.811 |

IoU Bar Chart

A visual comparison of Mean IoU across all models:


This bar chart summarizes the segmentation performance of all classical baselines and the deep U-Net on the same track and silver-truth masks.
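The chart can be regenerated from the results table with matplotlib; a minimal sketch (the output filename is arbitrary):

```python
import matplotlib
matplotlib.use("Agg")          # headless backend, suitable for scripts
import matplotlib.pyplot as plt

# Mean IoU values from the results table above
results = {
    "Naive Bayes": 0.763,
    "Logistic Regression": 0.829,
    "SVM (RBF)": 0.817,
    "SVM (Linear)": 0.831,
    "MLP": 0.866,
    "U-Net": 0.811,
}

fig, ax = plt.subplots(figsize=(8, 4))
ax.bar(results.keys(), results.values(), color="steelblue")
ax.set_ylabel("Mean IoU")
ax.set_ylim(0.7, 0.9)          # zoom in: all scores fall in this band
ax.set_title("Fluo-N2DH-GOWT1, Track 01 (ST masks)")
plt.xticks(rotation=30, ha="right")
fig.tight_layout()
fig.savefig("iou_bar_chart.png", dpi=150)
```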


Qualitative Results (Example Frame t000)

Below is a qualitative comparison for time frame t000.

- Raw Image (t000)
- Ground Truth (ST Mask)
- Predicted Mask

Classical Pipeline Details

Feature Extraction

Classical Models Overview

| Model | Description |
| --- | --- |
| Naive Bayes | Very fast generative baseline; its strong independence assumptions hurt it on noisy patches |
| Logistic Regression | Linear discriminative model; good trade-off between speed and performance |
| Linear SVM | Margin-based linear classifier; robust on this feature space |
| RBF SVM | Non-linear classifier; higher capacity but limited gain here |
| MLP (Shallow NN) | One-hidden-layer neural network on handcrafted features; best classical IoU |

Each model produces a probability or score per pixel, which is converted into a binary segmentation mask and then evaluated via IoU and SEGMeasure.
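The threshold-then-score step can be sketched in NumPy; the 0.5 threshold is an assumption (the repo's scripts may use a different cutoff):

```python
import numpy as np

def binarize(prob_map, threshold=0.5):
    """Turn a per-pixel probability/score map into a binary mask."""
    return (np.asarray(prob_map) >= threshold).astype(np.uint8)

def mean_iou(pred, target):
    """Binary IoU (Jaccard): |pred ∩ target| / |pred ∪ target|."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    union = np.logical_or(pred, target).sum()
    inter = np.logical_and(pred, target).sum()
    return inter / union if union else 1.0

probs = np.array([[0.9, 0.2], [0.7, 0.4]])
gt    = np.array([[1, 0], [1, 1]])
mask = binarize(probs)          # [[1, 0], [1, 0]]
print(mean_iou(mask, gt))       # 2 / 3 ≈ 0.667
```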


Deep Model: U-Net

Architecture (Simplified)

```
Input (1 × H × W)
        ↓
Encoder: [Conv → ReLU → Downsample] × 4
        ↓
      Bottleneck
        ↓
Decoder: [Upsample → Skip Connection → Conv → ReLU] × 4
        ↓
Output (1 × H × W logits)
        ↓
Sigmoid → Binary mask
```

Skip connections from encoder to decoder help preserve fine-grained spatial detail, which is crucial for accurate cell boundaries.
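The shape above can be sketched in PyTorch. This is a minimal two-level version (the actual model uses four encoder/decoder levels) with hypothetical channel widths, not the repo's implementation:

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    """One Conv → ReLU stage."""
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Two-level encoder/bottleneck/decoder with skip connections."""
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bott = block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = block(64, 32)          # 32 (skip) + 32 (upsampled)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)          # 16 (skip) + 16 (upsampled)
        self.head = nn.Conv2d(16, 1, 1)    # per-pixel logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bott(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)               # sigmoid + threshold applied outside

x = torch.randn(1, 1, 64, 64)
print(TinyUNet()(x).shape)                 # same H × W as the input
```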

Training Settings

| Parameter | Value |
| --- | --- |
| Epochs | 10 |
| Batch Size | 4 |
| Loss | BCEWithLogitsLoss |
| Optimizer | Adam |
| Learning Rate | 0.001 |
| Hardware | CPU |

Even under these modest settings, the U-Net produces sharp and realistic masks, especially for medium-to-large cells.
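BCEWithLogitsLoss fuses the sigmoid and binary cross-entropy into one numerically stable operation on raw logits. A small NumPy sketch of the stable formulation it uses:

```python
import numpy as np

def bce_with_logits(logits, targets):
    """Numerically stable binary cross-entropy on raw logits
    (mean reduction), matching PyTorch's BCEWithLogitsLoss:
    loss = max(x, 0) - x·z + log(1 + exp(-|x|))."""
    x, z = np.asarray(logits, float), np.asarray(targets, float)
    loss = np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))
    return loss.mean()

# A confident, correct logit costs little; a confident, wrong one costs a lot.
print(bce_with_logits([4.0], [1.0]))   # ≈ 0.018
print(bce_with_logits([4.0], [0.0]))   # ≈ 4.018
```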


Reproduction Checklist

This section describes how to reproduce the main experiments from a fresh clone of the repo.

1. Environment Setup

```bash
conda env create -f environment.yml
conda activate cse488-cell-tracking
pip install -e .
```

2. Download Dataset

```bash
python scripts/setup_data.py Fluo-N2DH-GOWT1 --splits training test
```

3. Train Classical Models (Example Commands)

```bash
# Naive Bayes
python scripts/train_naive_bayes.py Fluo-N2DH-GOWT1 --track 01

# Logistic Regression
python scripts/train_logreg.py Fluo-N2DH-GOWT1 --track 01

# SVM with RBF kernel
python scripts/train_svm.py Fluo-N2DH-GOWT1 --track 01 --kernel rbf

# Linear SVM
python scripts/train_svm_linear.py Fluo-N2DH-GOWT1 --track 01

# MLP (shallow neural net)
python scripts/train_mlp.py Fluo-N2DH-GOWT1 --track 01
```

Each script saves the trained model into artifacts/models/ (e.g., nb_track01.pkl, logreg_track01.pkl, etc.).

4. Train U-Net

```bash
python scripts/train_unet.py Fluo-N2DH-GOWT1 --track 01 --epochs 10 --batch-size 4 --lr 0.001 --model-path artifacts/models/unet_track01.pt
```

5. Evaluate Models

Example evaluation commands (with verbose per-label IoU):

```bash
# Naive Bayes
python scripts/eval_seg.py Fluo-N2DH-GOWT1 --track 01 --model-type naive_bayes --model-path artifacts/models/nb_track01.pkl --verbose

# Logistic Regression
python scripts/eval_seg.py Fluo-N2DH-GOWT1 --track 01 --model-type logreg --model-path artifacts/models/logreg_track01.pkl --verbose

# Linear SVM
python scripts/eval_seg.py Fluo-N2DH-GOWT1 --track 01 --model-type svm --model-path artifacts/models/svm_linear_track01.pkl --verbose

# MLP
python scripts/eval_seg.py Fluo-N2DH-GOWT1 --track 01 --model-type mlp --model-path artifacts/models/mlp_track01.pkl --verbose

# U-Net
python scripts/eval_seg.py Fluo-N2DH-GOWT1 --track 01 --model-type unet --model-path artifacts/models/unet_track01.pt --verbose
```

The script reports Mean IoU (Jaccard) and per-label IoU values, along with a summary Jaccard index consistent with MySEGMeasure.py.
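The per-label score can be illustrated with a simplified SEG-style metric. This is a sketch of the Cell Tracking Challenge convention, not the exact MySEGMeasure.py implementation: a predicted label matches a ground-truth label only if it covers more than half of it, and the final score is the mean Jaccard over all ground-truth labels:

```python
import numpy as np

def seg_score(gt_labels, pred_labels):
    """Simplified SEG: mean Jaccard over ground-truth labels,
    counting a predicted label only if it covers > 50% of the reference."""
    scores = []
    for r in np.unique(gt_labels):
        if r == 0:                          # 0 is background
            continue
        ref = gt_labels == r
        overlap = pred_labels[ref]
        overlap = overlap[overlap > 0]      # predicted labels under this cell
        if overlap.size == 0:
            scores.append(0.0)              # cell entirely missed
            continue
        s = np.bincount(overlap).argmax()   # best-overlapping predicted label
        seg = pred_labels == s
        inter = np.logical_and(ref, seg).sum()
        if inter * 2 <= ref.sum():          # majority-coverage requirement
            scores.append(0.0)
        else:
            scores.append(inter / np.logical_or(ref, seg).sum())
    return float(np.mean(scores)) if scores else 0.0

gt   = np.array([[1, 1, 0], [0, 2, 2]])
pred = np.array([[1, 1, 0], [0, 0, 2]])
print(seg_score(gt, pred))   # cell 1 matched perfectly, cell 2 only half-covered → 0.5
```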


Conclusion

This project successfully reproduces the required classical segmentation baseline and extends it with a U-Net deep-learning model on the Fluo-N2DH-GOWT1 dataset.

Overall, the repository provides a fully reproducible, modular, and extensible segmentation system, ready for future improvements such as: