Problem
Research on poset-based metrics is mathematically rich, but there is little tooling for comparing it against practical LDPC decoders in a reproducible way.
Approach
I built an experiment pipeline that combines AN/SP metric implementations, AN-aware decoder variants, and baseline decoders, with sweep tooling that iterates over channel settings.
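The sweep layer can be pictured as a minimal sketch like the one below. Everything here is a hypothetical stand-in for illustration: `run_trial`, the decoder names, and the toy error model are assumptions, not the project's actual code. The shape it shows is the real point: a Cartesian sweep over channel settings, seeds, and decoder variants, with each trial fully determined by its seed.

```python
import itertools
import random

def run_trial(snr_db, seed, decoder="baseline"):
    # Hypothetical stand-in for one decoder trial; the real pipeline
    # would invoke an AN-aware or baseline LDPC decoder here.
    rng = random.Random(seed)  # per-trial seed makes the trial replayable
    n_bits = 1000
    # Toy error model (assumption): error probability shrinks with SNR.
    errors = sum(rng.random() < 0.5 / (1.0 + snr_db) for _ in range(n_bits))
    return {
        "snr_db": snr_db,
        "seed": seed,
        "decoder": decoder,
        "bit_error_rate": errors / n_bits,
    }

def sweep(snr_points, seeds, decoders):
    # Cartesian sweep over channel settings, seeds, and decoder variants.
    return [
        run_trial(snr_db, seed, decoder)
        for snr_db, seed, decoder in itertools.product(snr_points, seeds, decoders)
    ]

results = sweep([0.0, 2.0], [1, 2], ["baseline", "an_aware"])
```

Because each trial re-seeds its own RNG, rerunning the same sweep yields identical results, which is what makes downstream artifacts byte-for-byte replayable.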
Outcome
The project produces deterministic, replayable experiment artifacts (metrics.json, CSV summaries, and sweep outputs) and supports paper-aligned checks and figure generation.
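One way to make such artifacts deterministic is canonical serialization: a sketch, assuming a `write_metrics` helper (hypothetical name) that writes a metrics.json file with sorted keys and fixed separators, so identical results always produce byte-identical files whose hash can be checked on replay.

```python
import hashlib
import json

def write_metrics(results, path="metrics.json"):
    # Canonical JSON: sorted keys and fixed separators mean the same
    # results dict always serializes to the same bytes.
    payload = json.dumps(results, sort_keys=True, separators=(",", ":"))
    with open(path, "w") as f:
        f.write(payload)
    # Return a content hash so a replayed run can be verified cheaply.
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```

Comparing the returned digest across runs is a simple replay check: if the hash matches, the artifact is byte-identical.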
What this demonstrates
- Research-to-engineering translation
- Reproducible benchmarking methodology
- Applied coding-theory systems work in Python