FFI Stability Classifier

Pipeline Summary

Directory layout

.
├─ assets/
│  └─ plots/                   # Example plots committed to git
├─ config/
│  ├─ train_example.yaml        # Example training config (EDIT data_dir!)
│  └─ slurm_example.yaml        # Example Slurm config template
├─ hpc/
│  └─ train.slurm               # Generic Slurm wrapper (expects env vars)
├─ scripts/
│  └─ submit_slurm.py           # Builds sbatch command from YAML
├─ src/
│  └─ ntrno/
│     ├─ __init__.py            # PROJECT_ROOT resolution
│     ├─ cli/train.py           # main entrypoint: python -m ntrno.cli.train
│     ├─ config.py              # dataclasses + defaults
│     ├─ data.py                # NPZ loading, weights, split, scaling, loaders
│     ├─ inference.py           # inference latency benchmark helper
│     ├─ metrics.py             # threshold sweep + PR/F1 utilities
│     ├─ models.py              # MLP classifier
│     ├─ plots.py               # plot writers (dark/cyber styling)
│     └─ train.py               # training loop + saving artifacts
├─ tests/
│  └─ test_train_smoke.py       # tiny synthetic smoke test
├─ Makefile
├─ requirements.txt
└─ README.md

Data you must provide

By default, the trainer expects a data/ directory at the repo root (datasets available from https://zenodo.org/records) containing exactly these four files:

data/
├─ train_data_stable_zerofluxfac.npz
├─ train_data_stable_oneflavor.npz
├─ train_data_random.npz
└─ train_data_NSM_stable.npz

Expected NPZ keys

Each NPZ must contain the following arrays:

| File                              | Features key    | Labels key             |
| --------------------------------- | --------------- | ---------------------- |
| train_data_stable_zerofluxfac.npz | `X_zerofluxfac` | `unstable_zerofluxfac` |
| train_data_stable_oneflavor.npz   | `X_oneflavor`   | `unstable_oneflavor`   |
| train_data_random.npz             | `X_random`      | `unstable_random`      |
| train_data_NSM_stable.npz         | `X_NSM_stable`  | `unstable_NSM_stable`  |
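
A minimal loading sketch, assuming only numpy and the file/key names above (the load_all helper is ours for illustration, not the ntrno.data API):

import numpy as np
from pathlib import Path

# (file, features key, labels key) triples from the table above
SETS = [
    ("train_data_stable_zerofluxfac.npz", "X_zerofluxfac", "unstable_zerofluxfac"),
    ("train_data_stable_oneflavor.npz", "X_oneflavor", "unstable_oneflavor"),
    ("train_data_random.npz", "X_random", "unstable_random"),
    ("train_data_NSM_stable.npz", "X_NSM_stable", "unstable_NSM_stable"),
]

def load_all(data_dir="data"):
    """Load all four NPZ files and stack features/labels (FFI present = 1)."""
    xs, ys = [], []
    for fname, xkey, ykey in SETS:
        with np.load(Path(data_dir) / fname) as npz:
            xs.append(np.asarray(npz[xkey], dtype=np.float32))
            ys.append(np.asarray(npz[ykey], dtype=np.float32).reshape(-1))
    return np.concatenate(xs), np.concatenate(ys)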


Plots at a glance

Loss curves (kept just these two):

[plots: seed 17 loss, seed 43 loss; committed under assets/plots/]

F1 vs threshold sweeps:

[plots: seed 43 sweep, seed 17 sweep; committed under assets/plots/]
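
The sweep behind these plots is conceptually simple; a rough sketch of what metrics.py's threshold utilities compute (the function name and threshold grid are assumptions):

import numpy as np

def sweep_thresholds(probs, labels, grid=np.linspace(0.01, 0.99, 99)):
    """Sweep thresholds; return the (thr, precision, recall, F1) row with best F1."""
    rows = []
    for thr in grid:
        pred = probs >= thr
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        prec = tp / max(tp + fp, 1)
        rec = tp / max(tp + fn, 1)
        rows.append((thr, prec, rec, 2 * prec * rec / max(prec + rec, 1e-12)))
    return max(rows, key=lambda r: r[3])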

Quick start (local)

python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
PYTHONPATH=src python -m ntrno.cli.train --data-dir ./data --outputs-dir ./outputs2

Run a quick smoke test (tiny synthetic data):

PYTHONPATH=src python -m unittest tests.test_train_smoke

Makefile shortcuts:

make venv    # create virtualenv using PYTHON (default python3)
make train   # uses config/train_example.yaml and outputs2/
make test    # smoke test
make slurm   # submit via scripts/submit_slurm.py (see config/slurm_example.yaml -> .env/slurm.yaml)

Configs: config/train_example.yaml holds the training settings (edit data_dir to point at your data), and config/slurm_example.yaml is the Slurm template (copy it to .env/slurm.yaml for make slurm).
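
config.py defines the dataclasses and defaults these YAML files populate; a hypothetical sketch of their shape (field names other than data_dir are assumptions, with defaults taken from the locked setup described below):

from dataclasses import dataclass

@dataclass
class TrainConfig:
    data_dir: str = "data"        # the one field you must edit
    outputs_dir: str = "outputs2"
    layers: int = 6               # hidden layers
    hidden_size: int = 384
    dropout: float = 0.01
    weight_decay: float = 1e-4
    lr: float = 5e-3
    batch_size: int = 32768
    seed: int = 17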

On the cluster

Edit the #SBATCH --chdir in hpc/train.slurm to point at this folder, then:

sbatch hpc/train.slurm

(hpc/train.slurm expects environment variables; make slurm sets them for you via scripts/submit_slurm.py and your .env/slurm.yaml.)

Each run ends with a single-line summary in this exact format:

rm=20 layers=6 hs=384 dr=0.01 wd=0.0001 lr=0.005 bs=32768 | Loss 0.0830 Acc 0.9802 Prec0.5 0.9027 Rec0.5 0.9024 F1@0.5 0.9025 || BEST thr=0.68 Prec 0.9764 Rec 0.8562 F1 0.9124 | Epochs 418 Time 1481.4s
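
If you want to scrape these summaries out of job logs, a small parsing sketch against the format above (the regex and helper are ours, not part of ntrno):

import re

SUMMARY_RE = re.compile(
    r"rm=(?P<rm>\d+) layers=(?P<layers>\d+) hs=(?P<hs>\d+) dr=(?P<dr>[\d.]+) "
    r"wd=(?P<wd>[\d.]+) lr=(?P<lr>[\d.]+) bs=(?P<bs>\d+) \| "
    r"Loss (?P<loss>[\d.]+) Acc (?P<acc>[\d.]+) .*?"
    r"BEST thr=(?P<thr>[\d.]+) Prec (?P<prec>[\d.]+) Rec (?P<rec>[\d.]+) F1 (?P<f1>[\d.]+)"
)

def parse_summary(line):
    """Extract hyperparameters and best-threshold metrics from one summary line."""
    m = SUMMARY_RE.search(line)
    return {k: float(v) for k, v in m.groupdict().items()} if m else None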

What counts as “C++ compatible”

We export TorchScript and ONNX. Alongside each model you also get a small JSON with the mean and std used for feature scaling so your C++ code can mirror preprocessing.
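
A minimal sketch of mirroring that preprocessing (the JSON key names mean/std are an assumption; shown in Python for brevity, the same arithmetic carries straight over to C++):

import json
import numpy as np

def load_scaler(path):
    """Read the feature-scaling stats saved next to the exported model."""
    with open(path) as f:
        stats = json.load(f)
    return np.asarray(stats["mean"], np.float32), np.asarray(stats["std"], np.float32)

def scale(x, mean, std):
    """Standardize features exactly as training did: (x - mean) / std."""
    return (x - mean) / std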


Data format expected

Your NPZ files are autodetected by their key pairs: a features array X_<name> matched with a labels array unstable_<name>, as in the table above.

We concatenate all found pairs. Labels are treated as FFI present = 1.
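
A sketch of what that autodetection amounts to (the real logic lives in data.py; this is an assumption, not a copy):

import numpy as np

def detect_pairs(npz):
    """Yield (features, labels) for every X_<name> / unstable_<name> pair."""
    for key in npz.files:
        if key.startswith("X_"):
            label_key = "unstable_" + key[len("X_"):]
            if label_key in npz.files:
                yield np.asarray(npz[key]), np.asarray(npz[label_key])  # FFI present = 1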

Reproduce that exact setup

We lock the architecture to 6 hidden layers, hidden size 384, dropout 0.01, and train with the hyperparameters from the summary line above: learning rate 0.005, weight decay 0.0001, batch size 32768.
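
For reference, a sketch of that locked architecture (models.py holds the real MLP; the exact layer ordering is an assumption):

import torch.nn as nn

def make_mlp(in_features, layers=6, hidden=384, dropout=0.01):
    """6 hidden layers of width 384 with dropout 0.01; single-logit output."""
    blocks, width = [], in_features
    for _ in range(layers):
        blocks += [nn.Linear(width, hidden), nn.ReLU(), nn.Dropout(dropout)]
        width = hidden
    blocks.append(nn.Linear(width, 1))  # sigmoid applied when thresholding
    return nn.Sequential(*blocks)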

Everything else is just creature comforts.