trueskill-tt/benches/baseline.txt
Anders Olsson 6bf3e7e294 T3: rayon-backed concurrency (opt-in) (#2)
Implements T3 of `docs/superpowers/specs/2026-04-23-trueskill-engine-redesign-design.md` Section 6. Plan: `docs/superpowers/plans/2026-04-24-t3-concurrency.md` (11 tasks).

## Summary

### Breaking

- `Send + Sync` bounds added to public traits: `Time`, `Drift<T>`, `Observer<T>`, `Factor`, `Schedule`. All built-in impls satisfy these automatically (`Send`/`Sync` are auto traits); downstream custom impls must now meet the bounds.
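What this means for downstream code can be sketched with a stand-in trait (names and signatures here are illustrative, not the crate's actual definitions):

```rust
// Illustrative only: a stand-in trait mirroring the new bound shape.
// `Send + Sync` as supertraits mean every implementor must be safely
// shareable across threads.
trait Drift: Send + Sync {
    fn sigma_increase(&self, dt: i64) -> f64;
}

// Plain-data impls satisfy the bounds automatically (`Send`/`Sync`
// are auto traits)...
struct LinearDrift {
    gamma: f64,
}

impl Drift for LinearDrift {
    fn sigma_increase(&self, dt: i64) -> f64 {
        self.gamma * dt as f64
    }
}

// ...but an impl holding, say, an `Rc` (which is `!Send`) would no
// longer compile against the trait — which is why this is breaking.

fn main() {
    let d = LinearDrift { gamma: 0.5 };
    assert_eq!(d.sigma_increase(10), 5.0);
}
```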

### New

- Opt-in `rayon` cargo feature. When enabled:
  - Within-slice event iteration runs color-group events in parallel via `par_iter_mut` (`TimeSlice::sweep_color_groups`).
  - `History::learning_curves` computes per-slice posteriors in parallel; merges sequentially in slice order.
  - `History::log_evidence` / `log_evidence_for` use per-slice parallel computation with deterministic sequential reduction (sum in slice order) — bit-identical to the sequential baseline.
- `ColorGroups` infrastructure (`src/color_group.rs`) with greedy graph coloring. Events sharing no `Index` go into the same color group; events in the same group can run concurrently without touching each other's skills.
- `tests/determinism.rs` asserts bit-identical posteriors across `RAYON_NUM_THREADS={1, 2, 4, 8}`.
- `benches/history_converge.rs` measures end-to-end convergence on three workload shapes.
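A minimal sketch of the greedy-coloring idea behind `ColorGroups` (the data layout and function name are assumptions, not the crate's actual code): each event lists the competitor indices it touches, and receives the lowest color not already used by an earlier event sharing an index with it.

```rust
use std::collections::HashSet;

/// Greedy coloring sketch: each event is the set of competitor
/// indices it touches; an event gets the lowest color not already
/// used by an earlier event that shares an index with it.
fn color_events(events: &[Vec<usize>]) -> Vec<usize> {
    let mut colors = vec![0usize; events.len()];
    for i in 0..events.len() {
        let mut conflicting: HashSet<usize> = HashSet::new();
        for j in 0..i {
            // Conflict: events i and j touch a common index.
            if events[i].iter().any(|ix| events[j].contains(ix)) {
                conflicting.insert(colors[j]);
            }
        }
        // Lowest color absent from the conflict set.
        let mut c = 0;
        while conflicting.contains(&c) {
            c += 1;
        }
        colors[i] = c;
    }
    colors
}

fn main() {
    // Events 0 and 1 share competitor 2; event 2 is disjoint from both.
    let colors = color_events(&[vec![0, 2], vec![2, 3], vec![4, 5]]);
    assert_ne!(colors[0], colors[1]); // conflicting events are split
    assert_eq!(colors[0], colors[2]); // disjoint events share a color
    println!("{:?}", colors); // [0, 1, 0]
}
```

Events with the same color touch disjoint index sets by construction, which is exactly the property the parallel sweep relies on.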

## Performance

### Sequential (no rayon, default build)

| Metric | Before T3 | After T3 | Delta |
|---|---|---|---|
| `Batch::iteration` | 22.88 µs | 23.23 µs | **+1.5%** (noise) |
| `Gaussian::*` | ≈218–264 ps | ≈236 ps | within noise |

**No sequential regression.** Default build is as fast as T2.

### Parallel (`--features rayon`, Apple M5 Pro, auto thread count)

| Workload | Sequential | Parallel | Speedup |
|---|---:|---:|---:|
| 500 events / 100 competitors / 10 per slice | 4.03 ms | 4.24 ms | **1.0×** |
| 2000 events / 200 competitors / 20 per slice | 20.18 ms | 19.82 ms | **1.0×** |
| 5000 events / 50000 competitors / 1 slice | 11.88 ms | 9.10 ms | **1.3×** |

### ⚠️ The spec's ≥2× target was not met on realistic workloads.

T3's within-slice color-group parallelism only shows material benefit when a slice holds many events AND the competitor pool is large enough to give the greedy coloring room to partition. Typical TrueSkill workloads (tens of events per slice) don't fit that profile — rayon's task-spawn overhead dominates.

**Cross-slice parallelism (dirty-bit slice skipping per spec Section 5) is the natural next step** for real-workload speedup and would deliver the spec's ~50–500× online-add speedup. Deferred to a future tier.

## Determinism

`tests/determinism.rs` runs a 200-event history at thread counts {1, 2, 4, 8} via `rayon::ThreadPoolBuilder::install` and asserts every `(time, posterior)` pair has bit-identical `mu` and `sigma` (compared via `f64::to_bits()`). Passes.
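The determinism hinges on two things: reductions always fold in slice order, and equality is checked on bit patterns rather than with `==`. A dependency-free sketch of the pattern (illustrative, not the crate's code; `t * t` stands in for per-slice work):

```rust
use std::thread;

/// Parallel map, ordered fold: per-slice terms are computed on
/// separate threads, but the reduction always folds left-to-right in
/// slice order, so rounding is independent of thread scheduling.
fn ordered_sum(terms: Vec<f64>) -> f64 {
    let handles: Vec<_> = terms
        .into_iter()
        .map(|t| thread::spawn(move || t * t)) // stand-in for per-slice work
        .collect();
    // Joining in collection order preserves slice order regardless of
    // which thread finishes first.
    handles
        .into_iter()
        .map(|h| h.join().unwrap())
        .fold(0.0, |acc, x| acc + x)
}

fn main() {
    let terms = vec![0.1, 0.2, 0.3];
    let parallel = ordered_sum(terms.clone());
    let sequential = terms.iter().map(|t| t * t).fold(0.0, |acc, x| acc + x);
    // Bit-identical, not merely approximately equal.
    assert_eq!(parallel.to_bits(), sequential.to_bits());
}
```

Comparing via `f64::to_bits()` is stricter than `==`: it distinguishes `0.0` from `-0.0` and holds for NaN payloads, so it catches any reordering of floating-point operations.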

## Internals

- Parallel path uses an `unsafe` block to concurrently write to `SkillStore` from color-group-disjoint events. Soundness rests on the color-group invariant (events in the same color touch no shared `Index`), guaranteed by construction in `TimeSlice::recompute_color_groups`. Sequential path unchanged from T2.
- `RAYON_THRESHOLD = 64` — color groups smaller than this fall back to sequential inside `sweep_color_groups` to avoid task-spawn overhead.
- Thread-local `ScratchArena` per rayon worker thread.
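The disjointness argument can be illustrated safely with `split_at_mut` (a stand-in: the real code uses `unsafe` to make the same claim for arbitrary disjoint index sets, with the color-group construction as the proof):

```rust
use std::thread;

/// Two workers mutate disjoint halves of the store concurrently.
/// `split_at_mut` proves disjointness to the compiler; the crate's
/// `unsafe` block asserts the same property for arbitrary disjoint
/// index sets within a color group.
fn sweep_disjoint(store: &mut [f64]) {
    let mid = store.len() / 2;
    let (left, right) = store.split_at_mut(mid);
    thread::scope(|s| {
        s.spawn(|| left.iter_mut().for_each(|x| *x += 1.0));
        s.spawn(|| right.iter_mut().for_each(|x| *x += 1.0));
    });
}

fn main() {
    let mut store = vec![0.0; 4];
    sweep_disjoint(&mut store);
    assert_eq!(store, vec![1.0; 4]);
}
```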

## Test plan

- [x] `cargo test --features approx` — 96 tests pass (74 lib + 22 integration)
- [x] `cargo test --features approx,rayon` — 97 tests pass (+1 determinism)
- [x] `cargo clippy --all-targets --features approx -- -D warnings` — clean
- [x] `cargo clippy --all-targets --features approx,rayon -- -D warnings` — clean
- [x] `cargo +nightly fmt --check` — clean
- [x] `cargo bench --bench batch --features approx` — 23.23 µs (no regression vs T2)
- [x] `cargo bench --bench history_converge --features approx,rayon` — runs on all three workloads
- [x] Bit-identical posteriors across `RAYON_NUM_THREADS={1, 2, 4, 8}` — verified

## Commit history

13 commits on `t3-concurrency`. Each task is self-contained and bisectable. See `git log main..t3-concurrency` for the full list.

## Deferred

- **Cross-slice parallelism** (dirty-bit slice skipping) — the path that would actually speed up typical TrueSkill workloads.
- **Default-on `rayon` feature** — spec called for default-on; we keep it opt-in until the feature proves stable in production use.
- **Synchronous-EP schedule with barrier merge** — alternative parallel strategy per spec Section 6.
- **`MarginFactor` / `Outcome::Scored`** — T4.
- **`Damped` / `Residual` schedules** — T4.
- **N-team `predict_outcome`** — T4.
- **`Game::custom` full ergonomics** — T4.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Reviewed-on: #2
Co-authored-by: Anders Olsson <anders.e.olsson@gmail.com>
Co-committed-by: Anders Olsson <anders.e.olsson@gmail.com>
2026-04-24 13:01:01 +00:00

# Baseline numbers captured before T0 changes
# Hardware: lrrr.local / Apple M5 Pro
# Date: 2026-04-24
Batch::iteration 29.840 µs
Gaussian::add 219.58 ps
Gaussian::sub 219.41 ps
Gaussian::mul 1.568 ns ← hot path; target ≥1.5× improvement
Gaussian::div 1.572 ns ← hot path; target ≥1.5× improvement
Gaussian::pi 262.89 ps
Gaussian::tau 262.47 ps
Gaussian::pi_tau_combined 219.40 ps
# After T0 (2026-04-24, same hardware)
Batch::iteration 21.253 µs (1.40× — below 3× target; see post-mortem)
Gaussian::add 218.62 ps (1.00× — unchanged, Add/Sub use moment form)
Gaussian::sub 220.15 ps (1.00×)
Gaussian::mul 218.69 ps (7.17× — nat-param: now two f64 adds, no sqrt)
Gaussian::div 218.64 ps (7.19× — nat-param: now two f64 subs, no sqrt)
Gaussian::pi 263.19 ps (1.00× — now a field read, same cost)
Gaussian::tau 263.51 ps (1.00× — now a field read, same cost)
Gaussian::pi_tau_combined 219.13 ps (1.00×)
# Post-mortem: Batch::iteration 1.40× vs. 3× target
#
# Root cause: the bench has 100 tiny 2-team events. Each event still allocates
# ~10 Vecs per iteration (down from ~18). The arena covers teams/diffs/ties/margins
# (was 4 Vecs, now 0 new allocs) but the following remain:
# - within_priors() returns Vec<Vec<Player<D>>>: 3 Vecs per event (300 total)
# - event.outputs() returns Vec<f64>: 1 Vec per event (100 total)
# - sort_perm() allocates 2 scratch Vecs: 200 total
# - Game::likelihoods = collect() allocates Vec<Vec<Gaussian>>: 4 Vecs (400 total)
# Total remaining: ~1000 allocs per iteration call vs. ~1800 before (44% reduction).
#
# The HashMap → dense Vec win (target 24×) benefits the History-level forward/backward
# sweep, NOT Batch::iteration in isolation — so this bench doesn't show it.
#
# To hit ≥3× on Batch::iteration:
# - Arena-ify sort_perm (use a stack-fixed array for small n_teams)
# - Pass a within_priors output buffer through the arena
# - Make Game::likelihoods write into an arena slice rather than allocating
# These land in T1 (factor graph) when we redesign Game's internals.
# After T1 (2026-04-24, same hardware)
Batch::iteration 23.010 µs (1.08× vs T0 21.253 µs — slight regression)
Gaussian::add 231.23 ps (unchanged)
Gaussian::sub 235.38 ps (unchanged)
Gaussian::mul 234.55 ps (unchanged — nat-param storage)
Gaussian::div 233.27 ps (unchanged)
Gaussian::pi 272.68 ps (unchanged)
Gaussian::tau 272.73 ps (unchanged)
Gaussian::pi_tau_combined 234.xx ps (unchanged)
# Notes:
# - Batch::iteration 23.0 µs vs target ≤ 21.5 µs (8% above target).
# Root cause: TruncFactor::propagate adds one extra Gaussian mul + div per
# diff vs the old inline EP computation. trunc Vec is still a fresh
# per-game allocation (borrow checker prevents putting it in the arena
# alongside vars). These are addressable in T2.
# - arena.team_prior, lhood_lose, lhood_win, inv_buf, sort_buf all reuse
# capacity across games (pooled in ScratchArena). sort_perm() allocation
# eliminated. message.rs deleted.
# - Gaussian operations unchanged vs T0.
# - All 53 tests pass. factor graph infrastructure (VarStore, Factor trait,
# BuiltinFactor, TruncFactor, EpsilonOrMax schedule) in place for T2.
# After T2 (2026-04-24, same hardware)
Batch::iteration 21.36 µs (1.07× vs T1 22.88 µs — 7% improvement)
Gaussian::add 218.97 ps (unchanged)
Gaussian::sub 218.58 ps (unchanged)
Gaussian::mul 218.59 ps (unchanged)
Gaussian::div 218.57 ps (unchanged)
Gaussian::pi 264.20 ps (unchanged)
Gaussian::tau 260.80 ps (unchanged)
# Notes:
# - API-only tier; hot inference path unchanged. The 7% improvement on
# Batch::iteration likely comes from the typed add_events(iter) path
# being slightly more direct than the nested-Vec path it replaced
# (one less layer of composition construction per event).
# - Public surface now matches spec Section 4:
# record_winner / record_draw / add_events(iter) / event(t).team().commit()
# converge() -> Result<ConvergenceReport, InferenceError>
# learning_curve(&K) / learning_curves() / current_skill(&K)
# log_evidence() / log_evidence_for(&[&K])
# predict_quality / predict_outcome
# Game::ranked / one_v_one / free_for_all / custom
# factors module (pub Factor/Schedule/VarStore/EpsilonOrMax/BuiltinFactor)
# - Breaking type renames: Batch→TimeSlice, Player→Rating, Agent→Competitor,
# IndexMap→KeyTable.
# - Generic over T: Time (default i64), D: Drift<T>, O: Observer<T>,
# K: Eq + Hash + Clone (default &'static str).
# - Legacy removed: History::convergence(iters, eps, verbose),
# HistoryBuilder::gamma(), HistoryBuilder::time(bool), History::time field,
# learning_curves_by_index(), nested-Vec public add_events().
# - 90 tests green: 68 lib + 10 api_shape + 6 game + 4 record_winner +
# 2 equivalence.
# After T3 (2026-04-24, same hardware)
Batch::iteration (seq, no rayon) 23.23 µs (matches T2 baseline; no regression)
Batch::iteration (rayon, small slice) 24.57 µs (within noise; small workloads pay rayon overhead)
Gaussian::add 236.62 ps (unchanged)
Gaussian::sub 236.43 ps (unchanged)
Gaussian::mul 237.05 ps (unchanged)
Gaussian::div 236.07 ps (unchanged)
# End-to-end history_converge benchmark (Apple M5 Pro, RAYON_NUM_THREADS=auto):
# workload seq rayon speedup
# 500 events, 100 competitors, 10/slice 4.03 ms 4.24 ms 1.0x
# 2000 events, 200 competitors, 20/slice 20.18 ms 19.82 ms 1.0x
# 5000 events, 50000 competitors, 1 slice 11.88 ms 9.10 ms 1.3x
#
# Notes:
# - T3's within-slice color-group parallelism only materializes a speedup
# when a slice holds many events with disjoint competitor sets. Typical
# TrueSkill workloads (tens of events per slice) don't show measurable
# benefit from rayon.
# - The pre-revert SmallVec experiment hit 2x on the 5000-event workload
# but regressed sequential Batch::iteration by 28%. The tradeoff wasn't
# worth it for typical workloads — SmallVec<[_; 8]> inline size (1 KB per
# Game struct) hurt cache locality on the hot path.
# - Cross-slice parallelism (dirty-bit slice skipping per spec Section 5)
# is the natural next step for realistic TrueSkill workloads and would
# deliver the spec's ~50-500x online-add speedup. Deferred to T4+.
# - Determinism verified: tests/determinism.rs asserts bit-identical
# posteriors across RAYON_NUM_THREADS={1, 2, 4, 8}.
# - Send + Sync bounds added on Time, Drift<T>, Observer<T>, Factor, Schedule.
# - Rayon is opt-in via `--features rayon`. Default build is unchanged from T2.