This project implements a complete segmentation pipeline on the Cell Tracking Challenge dataset using both classical machine-learning models and a deep-learning U-Net.
Evaluation uses a wrapper consistent with MySEGMeasure.py. To download and arrange the dataset, run:

python scripts/setup_data.py Fluo-N2DH-GOWT1 --splits training test

This produces the following layout:
artifacts/datasets/Fluo-N2DH-GOWT1/
├── training/
│   ├── 01/, 01_ST/, 01_GT/, ...
│   └── 02/, 02_ST/, 02_GT/, ...
└── test/
All results below are reported on Fluo-N2DH-GOWT1, Track 01, ST masks, using Mean IoU / Jaccard.
| Model | Mean IoU |
|---|---|
| Naive Bayes | 0.763 |
| Logistic Regression | 0.829 |
| SVM (RBF Kernel) | 0.817 |
| SVM (Linear) | 0.831 |
| MLP (Shallow NN) | 0.866 |
| U-Net (Deep Model) | 0.811 |
A visual comparison of Mean IoU across all models:
This bar chart summarizes the segmentation performance of all classical baselines and the deep U-Net on the same track and silver-truth masks.
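The chart itself is not embedded in this text. A minimal matplotlib sketch that regenerates it from the numbers in the results table (the output filename and styling here are illustrative assumptions, not part of the repo):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Mean IoU values copied from the results table above
models = ["Naive Bayes", "LogReg", "SVM (RBF)", "SVM (Linear)", "MLP", "U-Net"]
scores = [0.763, 0.829, 0.817, 0.831, 0.866, 0.811]

fig, ax = plt.subplots(figsize=(8, 4))
ax.bar(models, scores)
ax.set_ylabel("Mean IoU (Jaccard)")
ax.set_ylim(0.7, 0.9)  # zoom in so the differences are visible
ax.set_title("Fluo-N2DH-GOWT1, Track 01, ST masks")
fig.tight_layout()
fig.savefig("mean_iou_comparison.png")
```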
Below is a qualitative comparison for time frame t000.
| Model | Description |
|---|---|
| Naive Bayes | Very fast generative baseline; its strong independence assumptions limit accuracy on noisy patches |
| Logistic Regression | Linear discriminative model; good trade-off between speed and performance |
| Linear SVM | Margin-based linear classifier; robust on this feature space |
| RBF SVM | Non-linear classifier; higher capacity but limited gain here |
| MLP (Shallow NN) | One-hidden-layer neural network on handcrafted features; best classical IoU |
Each model produces a probability or score per pixel, which is converted into a binary segmentation mask and then evaluated via IoU and SEGMeasure.
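This threshold-then-score step can be sketched in NumPy (function names here are illustrative, not the repo's actual API):

```python
import numpy as np

def binarize(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Turn a per-pixel probability/score map into a binary mask."""
    return (prob_map >= threshold).astype(np.uint8)

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union (Jaccard index) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union > 0 else 1.0  # two empty masks agree perfectly

# Toy example: a 4x4 score map vs. a ground-truth mask with one 2x2 "cell"
scores = np.array([[0.9, 0.8, 0.1, 0.0],
                   [0.7, 0.6, 0.2, 0.1],
                   [0.1, 0.2, 0.1, 0.0],
                   [0.0, 0.1, 0.0, 0.0]])
gt = np.zeros((4, 4), dtype=np.uint8)
gt[:2, :2] = 1
print(iou(binarize(scores), gt))  # 1.0: thresholding at 0.5 recovers the cell
```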
Input (1 × H × W)
↓
Encoder: [Conv → ReLU → Downsample] × 4
↓
Bottleneck
↓
Decoder: [Upsample → Skip Connection → Conv → ReLU] × 4
↓
Output (1 × H × W logits)
↓
Sigmoid → Binary mask
Skip connections from encoder to decoder help preserve fine-grained spatial detail, which is crucial for accurate cell boundaries.
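The wiring above can be illustrated with a compact two-level PyTorch sketch. This is an assumption-laden miniature, not the project's actual model (which, per the diagram, uses four levels); it exists only to show how the skip connections concatenate encoder features into the decoder:

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3x3 convs with ReLU; padding=1 keeps the spatial size unchanged
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level U-Net: just enough to show encoder/skip/decoder wiring."""
    def __init__(self, base=16):
        super().__init__()
        self.enc1 = conv_block(1, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)  # *4 in: upsampled + skip
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)           # 1-channel logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)  # logits; sigmoid + threshold yields a mask

x = torch.randn(1, 1, 64, 64)
logits = TinyUNet()(x)
print(logits.shape)  # torch.Size([1, 1, 64, 64])
```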
| Parameter | Value |
|---|---|
| Epochs | 10 |
| Batch Size | 4 |
| Loss | BCEWithLogitsLoss |
| Optimizer | Adam |
| Learning Rate | 0.001 |
| Hardware | CPU |
Even under these modest settings, the U-Net produces sharp and realistic masks, especially for medium-to-large cells.
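Under these settings the training loop reduces to the standard PyTorch pattern. In the sketch below, the dataset and model are stand-ins (random tensors and a single conv layer); only the loss, optimizer, learning rate, batch size, and epoch count come from the table:

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data: replace with the real frame/mask dataset in practice
images = torch.randn(8, 1, 64, 64)
masks = (torch.rand(8, 1, 64, 64) > 0.5).float()
loader = DataLoader(TensorDataset(images, masks), batch_size=4, shuffle=True)

model = nn.Conv2d(1, 1, 3, padding=1)   # placeholder for the U-Net
criterion = nn.BCEWithLogitsLoss()       # loss from the table above
optimizer = optim.Adam(model.parameters(), lr=0.001)

for epoch in range(10):                  # 10 epochs, as in the table
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)    # BCE applied directly to logits
        loss.backward()
        optimizer.step()
```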
This section describes how to reproduce the main experiments from a fresh clone of the repo.
conda env create -f environment.yml
conda activate cse488-cell-tracking
pip install -e .
python scripts/setup_data.py Fluo-N2DH-GOWT1 --splits training test
# Naive Bayes
python scripts/train_naive_bayes.py Fluo-N2DH-GOWT1 --track 01
# Logistic Regression
python scripts/train_logreg.py Fluo-N2DH-GOWT1 --track 01
# SVM with RBF kernel
python scripts/train_svm.py Fluo-N2DH-GOWT1 --track 01 --kernel rbf
# Linear SVM
python scripts/train_svm_linear.py Fluo-N2DH-GOWT1 --track 01
# MLP (shallow neural net)
python scripts/train_mlp.py Fluo-N2DH-GOWT1 --track 01
Each script saves the trained model into artifacts/models/ (e.g., nb_track01.pkl, logreg_track01.pkl, etc.).
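The .pkl files are presumably standard pickled scikit-learn estimators; a save/load round-trip looks like the following sketch (the path, estimator, and toy data are illustrative, not the repo's actual code):

```python
import os
import pickle
import tempfile

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy per-pixel features and labels standing in for the real handcrafted ones
X = np.random.rand(100, 5)
y = (X[:, 0] > 0.5).astype(int)
model = LogisticRegression().fit(X, y)

# Save and restore the estimator, as the training scripts presumably do
path = os.path.join(tempfile.mkdtemp(), "logreg_track01.pkl")
with open(path, "wb") as f:
    pickle.dump(model, f)
with open(path, "rb") as f:
    restored = pickle.load(f)

probs = restored.predict_proba(X)[:, 1]  # per-pixel foreground scores
```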
python scripts/train_unet.py Fluo-N2DH-GOWT1 --track 01 --epochs 10 --batch-size 4 --lr 0.001 --model-path artifacts/models/unet_track01.pt
Example evaluation commands (with verbose per-label IoU):
# Naive Bayes
python scripts/eval_seg.py Fluo-N2DH-GOWT1 --track 01 --model-type naive_bayes --model-path artifacts/models/nb_track01.pkl --verbose
# Logistic Regression
python scripts/eval_seg.py Fluo-N2DH-GOWT1 --track 01 --model-type logreg --model-path artifacts/models/logreg_track01.pkl --verbose
# Linear SVM
python scripts/eval_seg.py Fluo-N2DH-GOWT1 --track 01 --model-type svm --model-path artifacts/models/svm_linear_track01.pkl --verbose
# MLP
python scripts/eval_seg.py Fluo-N2DH-GOWT1 --track 01 --model-type mlp --model-path artifacts/models/mlp_track01.pkl --verbose
# U-Net
python scripts/eval_seg.py Fluo-N2DH-GOWT1 --track 01 --model-type unet --model-path artifacts/models/unet_track01.pt --verbose
The script reports Mean IoU (Jaccard) and per-label IoU values, along with a summary Jaccard index consistent with MySEGMeasure.py.
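The Cell Tracking Challenge's SEG measure is commonly defined as the mean Jaccard over ground-truth objects, where a predicted object counts as a match only if it covers more than half of the ground-truth object. A NumPy sketch of that standard formulation (not necessarily the repo's exact implementation):

```python
import numpy as np

def seg_measure(gt_labels: np.ndarray, pred_labels: np.ndarray) -> float:
    """Mean Jaccard over GT objects; a prediction matches a GT object
    only if it covers more than half of that object's pixels."""
    scores = []
    for r in np.unique(gt_labels):
        if r == 0:  # 0 = background
            continue
        gt_mask = gt_labels == r
        # predicted label with the largest overlap on this GT object
        labels, counts = np.unique(pred_labels[gt_mask], return_counts=True)
        best = labels[np.argmax(counts)]
        inter = counts.max()
        if best == 0 or inter <= 0.5 * gt_mask.sum():
            scores.append(0.0)  # object missed: contributes zero
        else:
            union = np.logical_or(gt_mask, pred_labels == best).sum()
            scores.append(inter / union)
    return float(np.mean(scores)) if scores else 0.0

# Toy example: one 3x3 GT cell, predicted as a slightly wider 3x4 region
gt = np.zeros((6, 6), dtype=int); gt[1:4, 1:4] = 1
pred = np.zeros((6, 6), dtype=int); pred[1:4, 1:5] = 1
print(seg_measure(gt, pred))  # 9/12 = 0.75
```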
This project successfully reproduces the required classical segmentation baseline and extends it with a U-Net deep-learning model on the Fluo-N2DH-GOWT1 dataset.
Overall, the repository provides a fully reproducible, modular, and extensible segmentation system, ready for future improvements such as: