How the models work
Each sport gets its own rating system, tuned against real outcomes. The technical details are below; if you want the gist, the yellow boxes are written for someone who has never opened a stats textbook.
CBB · College World Series — per-game logit on 72,809 D1 games
Per-game logit fit on 72,809 D1-to-D1 games from 2014 through 2024, anchored to current PEA Ratings. The rating-diff slope is K_DIFF = 0.37 (95% CI [0.364, 0.379]). Home-field advantage is HFA_LOGIT = 0.23, roughly a six-percentage-point bump for evenly matched teams (CI [0.217, 0.249]). Per-game Gaussian rating noise is σ ≈ 2.0, which captures the variance introduced by starting-pitcher identity. Fatigue costs 0.40 rating points per game already played in a regional or super regional. Bracket sanity check: across 11 historical NCAA tournaments, the host-to-super advancement rate was 61.3%, which lines up with these constants better than the 25-year literature anchor of 68.8%.
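Plugging those constants into the per-game formula gives a minimal sketch like the one below; function and argument names are ours for illustration, not the production code.

```python
import math
import random

K_DIFF = 0.37     # slope on the rating difference (fit above)
HFA_LOGIT = 0.23  # home-field bump on the logit scale
SIGMA = 2.0       # per-game Gaussian rating noise (pitcher variance)
FATIGUE = 0.40    # rating-point cost per game already played in the event

def win_prob(rating_a, rating_b, a_home=False,
             games_played_a=0, games_played_b=0, rng=None):
    """P(team A beats team B) in one game. Pass an rng for sampled mode."""
    eff_a = rating_a - FATIGUE * games_played_a
    eff_b = rating_b - FATIGUE * games_played_b
    if rng is not None:  # sampled mode: jitter each team's rating per game
        eff_a += rng.gauss(0, SIGMA)
        eff_b += rng.gauss(0, SIGMA)
    z = K_DIFF * (eff_a - eff_b) + (HFA_LOGIT if a_home else 0.0)
    return 1.0 / (1.0 + math.exp(-z))
```

With equal ratings and home field, this returns about 0.557, which is where the "roughly six points" gloss comes from.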
NBA · Championship — dual-Elo with in-series state
Dual-Elo system. Each team carries a regular-season Elo and a playoff-only Elo; the two get blended for postseason matchups. In-series state (1-0, 2-1, etc.) modifies the win probability of each subsequent game, because trailing teams play differently. Rotation ratings shift live for injuries. The bias correction is tuned against 2024 and 2025 playoff outcomes (166 backfilled games), and the playoff-totals correction is learned live by a nightly self-tuning loop.
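A hedged sketch of how a dual-Elo blend plus an in-series adjustment could fit together; the blend weight and any series adjustment values here are placeholders, not the fitted ones.

```python
def blended_elo(reg_elo, playoff_elo, w_playoff=0.5):
    # Blend the two ratings for a postseason matchup.
    # w_playoff = 0.5 is a placeholder; the real weight is fit, not assumed.
    return (1 - w_playoff) * reg_elo + w_playoff * playoff_elo

def game_prob(elo_a, elo_b, series_adj=0.0):
    # Standard Elo logistic. series_adj shifts team A's effective Elo for
    # in-series state (e.g. a team down 0-2 playing with more urgency);
    # any specific value is illustrative.
    return 1.0 / (1.0 + 10 ** (-((elo_a + series_adj) - elo_b) / 400))
```

A 100-point Elo edge maps to roughly a 64% single-game win probability under the standard 400-point scale.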
CFB · Playoff — 12-team simulator with committee-proxy seeding
12-team CFP simulator. Committee-proxy seeding is learned from 2014–2024 CFP history, so the model picks the kind of teams the committee actually picks. First-round games are on campus, with home-field worth 2.5 points; QFs, SFs, and the Final are at neutral sites (no HFA). Deterministic mode picks the favorite at every node. Sampled mode adds per-game Gaussian noise σ pulled from the ratings JSON, so a single sampled bracket walk has upsets baked in.
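The two modes differ only in whether per-game noise is added before comparing ratings at a node. A minimal sketch of one bracket node, assuming a simple ratings dict (names are illustrative):

```python
import random

HFA_POINTS = 2.5  # on-campus edge for first-round games, in rating points

def game_winner(team_a, team_b, ratings, sigma=0.0, hfa_for=None, rng=random):
    """One bracket node. sigma == 0 is deterministic mode (favorite wins);
    sigma > 0 is sampled mode. hfa_for names the host, or None for a
    neutral site (QFs, SFs, final)."""
    ra = ratings[team_a] + (HFA_POINTS if hfa_for == team_a else 0.0)
    rb = ratings[team_b] + (HFA_POINTS if hfa_for == team_b else 0.0)
    if sigma > 0:  # sampled mode: per-game Gaussian noise bakes in upsets
        ra += rng.gauss(0, sigma)
        rb += rng.gauss(0, sigma)
    return team_a if ra >= rb else team_b
```

Walking the full 12-team bracket is then eleven calls to this function, with `hfa_for` set only on the four first-round games.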
NFL · Ratings — offseason-adjusted, calibrated to DK win totals
Offseason-adjusted team ratings with QB and coach overrides layered on top of a prior-year base rating. Calibrated against DraftKings 2026 win totals: 25 of 32 teams land within 1.5 wins of the market line, with a median miss of about 0.8 wins. The team-by-team breakdown lives in our internal docs/over_under_vs_vegas_predictions.md; we are cleaning up a public version for the regular-season run-up.
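The market check itself is simple arithmetic. A sketch, with hypothetical numbers standing in for the real projections and lines:

```python
from statistics import median

def market_check(projected, market, tolerance=1.5):
    """Compare model win totals to market lines (dicts keyed by team).
    Returns (teams within tolerance, median absolute miss)."""
    misses = {t: abs(projected[t] - market[t]) for t in market}
    within = sum(1 for m in misses.values() if m <= tolerance)
    return within, median(misses.values())

# Hypothetical three-team example, not real projections or lines:
proj = {"A": 9.2, "B": 7.0, "C": 11.5}
line = {"A": 9.5, "B": 8.0, "C": 9.0}
```

Run over all 32 teams with the real numbers, this is the "25 of 32 within 1.5 wins, median miss ~0.8" claim above.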
Weather — NOAA / Open-Meteo / GFS / ECMWF ensemble with EMOS bias correction
Ensemble forecast pulling from NOAA, Open-Meteo, GFS, and ECMWF, among others. Each member gets bias-corrected per city, and the result is converted to probability bands for the daily high and daily low. Two known calibration regimes are in our history: pre-EMOS and EMOS v1. The live system runs EMOS v1 with a confidence cap on predictions above 30% probability, applied after a 12K-game backtest showed the model gets overconfident in that range. Calibration tables ship to /calibration as resolved bets accumulate.
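As a rough illustration of the two pieces, here is an EMOS-style linear correction of the ensemble mean plus a post-hoc confidence cap. The coefficients and the shrink factor are placeholders, not the fitted per-city values.

```python
def emos_mean(ensemble, a, b):
    # EMOS-style correction of the ensemble mean: a + b * mean.
    # a and b are fit per city; the values passed in are placeholders.
    m = sum(ensemble) / len(ensemble)
    return a + b * m

def cap_confidence(p, cap_above=0.30, shrink=0.85):
    # Shrink probabilities above the threshold toward it, since the
    # backtest showed overconfidence in that range. The 0.85 shrink
    # factor is illustrative, not the production value.
    if p <= cap_above:
        return p
    return cap_above + shrink * (p - cap_above)
```

A full EMOS fit also regresses the predictive variance on ensemble spread; the mean correction above is the piece that matters for the point bands.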
What we publish that nobody else does — the misses
The misses. Every published rating clears a calibration check: we bin actual outcomes by predicted probability and see whether the line of best fit is y = x. Where it isn't, that is a model bias we either correct or own. Calibration tables live at /calibration as each sport accumulates enough resolved samples.
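The binning step can be sketched in a few lines; this is the generic reliability-diagram computation, not our exact pipeline.

```python
def reliability_bins(preds, outcomes, n_bins=10):
    """Bin (predicted probability, 0/1 outcome) pairs and return per-bin
    (mean predicted, observed frequency, count). A calibrated model's
    bins land near the diagonal y = x."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(preds, outcomes):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[i].append((p, y))
    out = []
    for b in bins:
        if b:  # skip empty bins rather than divide by zero
            mean_p = sum(p for p, _ in b) / len(b)
            obs = sum(y for _, y in b) / len(b)
            out.append((mean_p, obs, len(b)))
    return out
```

Bins where observed frequency sits consistently above or below mean predicted probability are the biases the calibration tables surface.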