Implements tiers T0, T1, T2 of `docs/superpowers/specs/2026-04-23-trueskill-engine-redesign-design.md`. All three tiers have landed together on this branch because they build on one another; this PR rolls them up for a single review pass.

Per-tier plans:

- T0: `docs/superpowers/plans/2026-04-23-t0-numerical-parity.md`
- T1: `docs/superpowers/plans/2026-04-24-t1-factor-graph.md`
- T2: `docs/superpowers/plans/2026-04-24-t2-new-api-surface.md`

## Summary

### T0 — Numerical parity (internal)

- `Gaussian` switched to natural-parameter storage `(pi, tau)`; mul/div are now ~7× faster (218 ps vs 1.57 ns; see the sketch just before the final numbers below).
- `HashMap<Index, _>` → dense `Vec<_>` keyed by `Index.0` (via `AgentStore<D>`, `SkillStore`).
- `ScratchArena` eliminates per-event allocations in `Game::likelihoods`.
- `InferenceError` seed type added (1 variant).
- 38 → 53 tests passing through T1.
- Benchmark: `Batch::iteration` 29.84 → 21.25 µs.

### T1 — Factor graph machinery (internal)

- `Factor` trait + `BuiltinFactor` enum (TeamSum / RankDiff / Trunc) driving within-game inference.
- `VarStore` flat storage for variable marginals.
- `Schedule` trait + `EpsilonOrMax` impl replacing the hand-rolled EP loop.
- `Game::likelihoods` rebuilt on the factor-graph machinery; iteration counts and goldens preserved to within 1e-6.
- 53 tests passing.
- Benchmark: `Batch::iteration` 23.01 µs (slight regression absorbed in T2).

### T2 — New API surface (breaking)

**Renames:**

- `IndexMap → KeyTable`, `Player → Rating`, `Agent → Competitor`, `Batch → TimeSlice`

**New types:**

- `Time` trait with `Untimed` ZST and `i64` impls; `Drift<T>`, `Rating<T, D>`, `Competitor<T, D>`, `TimeSlice<T>`, `History<T, D, O, K>` all generic.
- `Event<T, K>`, `Team<K>`, `Member<K>`, `Outcome` (`Ranked` variant; `#[non_exhaustive]`).
- `Observer<T>` trait + `NullObserver`.
- `ConvergenceOptions`, `ConvergenceReport`.
- `GameOptions`, `OwnedGame<T, D>`.

**Three-tier ingestion:**

- `history.record_winner(&K, &K, T)` / `record_draw(&K, &K, T)` — 1v1 convenience.
- `history.add_events(iter)` — typed bulk.
- `history.event(T).team([...]).weights([...]).ranking([...]).commit()` — fluent.

**Query API:** `current_skill`, `learning_curve`, `learning_curves` (keyed on `K`), `log_evidence`, `log_evidence_for`, `predict_quality`, `predict_outcome`.

**Game constructors:** `ranked`, `one_v_one`, `free_for_all`, `custom` — all returning `Result<_, InferenceError>`.

**`factors` module:** `Factor`, `Schedule`, `VarStore`, `VarId`, `BuiltinFactor`, `EpsilonOrMax`, `ScheduleReport`, `TeamSumFactor`, `RankDiffFactor`, `TruncFactor` now public.

**Errors:** `InferenceError` gains `MismatchedShape`, `InvalidProbability`, `ConvergenceFailed`; boundary panics converted to `Result`.

**Removed (breaking):** `History::convergence(iters, eps, verbose)`, `HistoryBuilder::gamma(f64)`, `HistoryBuilder::time(bool)`, `History.time: bool`, `learning_curves_by_index`, nested-Vec public `add_events`.

## Behavior change (documented in CHANGELOG)

`Time = Untimed` has `elapsed_to → 0`, so no drift accumulates between slices. The old `time=false` mode implicitly forced `elapsed=1` on reappearance via an `i64::MAX` sentinel — that quirk is not reproducible under a typed time axis. Tests that depended on it now use `History::<i64, _>` with explicit `1..=n` timestamps. One test (`test_env_ttt`) had 3 Gaussian goldens updated to reflect the corrected semantics; documented in commit `33a7d90`.
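For reviewers who want the shape of the new time axis without opening the diff, here is a minimal sketch of how `Untimed` gets its `elapsed_to → 0` behavior. Only the names `Time`, `Untimed`, `elapsed_to`, and the `i64` impl come from this PR; the exact bounds and the `f64` return type are illustrative assumptions.

```rust
// Minimal sketch of the typed time axis; signatures are illustrative
// assumptions, not the crate's exact definitions.
pub trait Time: Copy {
    /// Elapsed span between two appearances of a competitor, in whatever
    /// units `Drift<T>` scales by.
    fn elapsed_to(self, later: Self) -> f64;
}

/// Zero-sized "no time axis" marker.
#[derive(Clone, Copy, Debug, Default)]
pub struct Untimed;

impl Time for Untimed {
    // Always 0: no drift accumulates between slices, replacing the old
    // `time=false` mode and its `i64::MAX` sentinel.
    fn elapsed_to(self, _later: Self) -> f64 {
        0.0
    }
}

impl Time for i64 {
    fn elapsed_to(self, later: Self) -> f64 {
        (later - self) as f64
    }
}
```

The mul/div rows in the table below follow directly from the T0 storage change: in natural parameters, multiplying or dividing Gaussian densities reduces to component-wise addition or subtraction. A sketch of the idea; only the field names `pi`/`tau` come from the summary above, and the struct and methods here are an illustrative reconstruction, not the crate's code.

```rust
// Why natural-parameter storage makes mul/div cheap: each is two f64
// additions (or subtractions), with no sqrt or division on the hot path.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Gaussian {
    pi: f64,  // precision: 1 / sigma^2
    tau: f64, // precision-adjusted mean: mu / sigma^2
}

impl Gaussian {
    fn from_mu_sigma(mu: f64, sigma: f64) -> Self {
        let pi = 1.0 / (sigma * sigma);
        Self { pi, tau: mu * pi }
    }

    /// Product of two Gaussian densities: add natural parameters.
    fn mul(self, rhs: Self) -> Self {
        Self { pi: self.pi + rhs.pi, tau: self.tau + rhs.tau }
    }

    /// Quotient of two Gaussian densities: subtract natural parameters.
    fn div(self, rhs: Self) -> Self {
        Self { pi: self.pi - rhs.pi, tau: self.tau - rhs.tau }
    }

    /// Moment reads pay the division and sqrt instead.
    fn mu(self) -> f64 { self.tau / self.pi }
    fn sigma(self) -> f64 { (1.0 / self.pi).sqrt() }
}
```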
## Final numbers

| Metric | Before T0 | After T2 | Delta |
|---|---|---|---|
| `Batch::iteration` | 29.84 µs | 21.36 µs | **-28%** |
| `Gaussian::mul` | 1.57 ns | 219 ps | **-86%** |
| `Gaussian::div` | 1.57 ns | 219 ps | **-86%** |
| Tests passing | 38 | 90 | +52 |

All other Gaussian ops unchanged (~219 ps add/sub, ~264 ps pi/tau reads).

## Test plan

- [x] `cargo test --features approx` — 90/90 pass (68 lib + 10 api_shape + 6 game + 4 record_winner + 2 equivalence)
- [x] `cargo clippy --all-targets --features approx -- -D warnings` — clean
- [x] `cargo +nightly fmt --check` — clean
- [x] `cargo bench --bench batch` — 21.36 µs
- [x] `cargo bench --bench gaussian` — unchanged from T1
- [x] `cargo run --example atp --features approx` — rewritten in the new API, runs clean
- [x] Historical Game-level goldens preserved in `tests/equivalence.rs`
- [x] Public API matches spec Section 4 (verified by integration tests in `tests/api_shape.rs`)

## Commit history

~45 commits total across T0 + T1 + T2. Each task is self-contained and individually tested; the branch is bisectable. See `git log main..t2-new-api-surface` for the full list.

## Deferred to later tiers

- `Outcome::Scored` + `MarginFactor` — T4
- `Damped` / `Residual` schedules — T4
- `Send + Sync` bounds + Rayon parallelism — T3
- N-team `predict_outcome` — T4
- `Game::custom` full ergonomics — T4

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Reviewed-on: #1
Co-authored-by: Anders Olsson <anders.e.olsson@gmail.com>
Co-committed-by: Anders Olsson <anders.e.olsson@gmail.com>
```rust
//! Outcome of a match.
//!
//! In T2, only `Ranked` is supported; `Scored` will be added together with
//! `MarginFactor` in T4. The enum is `#[non_exhaustive]` so adding `Scored`
//! is non-breaking for downstream `match` expressions.

use smallvec::SmallVec;

/// Final outcome of a match.
///
/// `Ranked(ranks)`: lower rank = better. Equal ranks mean a tie between those
/// teams. `ranks.len()` must equal the number of teams in the event.
#[derive(Clone, Debug, PartialEq)]
#[non_exhaustive]
pub enum Outcome {
    Ranked(SmallVec<[u32; 4]>),
}

impl Outcome {
    /// `n`-team outcome where team `winner` won and everyone else tied for last.
    ///
    /// Panics if `winner >= n`.
    pub fn winner(winner: u32, n: u32) -> Self {
        assert!(winner < n, "winner index {winner} out of range 0..{n}");
        let ranks: SmallVec<[u32; 4]> = (0..n).map(|i| if i == winner { 0 } else { 1 }).collect();
        Self::Ranked(ranks)
    }

    /// All `n` teams tied.
    pub fn draw(n: u32) -> Self {
        Self::Ranked(SmallVec::from_vec(vec![0; n as usize]))
    }

    /// Explicit per-team ranking.
    pub fn ranking<I: IntoIterator<Item = u32>>(ranks: I) -> Self {
        Self::Ranked(ranks.into_iter().collect())
    }

    /// Number of teams this outcome ranks.
    pub fn team_count(&self) -> usize {
        match self {
            Self::Ranked(r) => r.len(),
        }
    }

    /// Borrow the underlying rank slice.
    #[allow(dead_code)]
    pub(crate) fn as_ranks(&self) -> &[u32] {
        match self {
            Self::Ranked(r) => r,
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn winner_two_teams() {
        let o = Outcome::winner(0, 2);
        assert_eq!(o.as_ranks(), &[0u32, 1]);
        assert_eq!(o.team_count(), 2);
    }

    #[test]
    fn winner_three_teams_second_wins() {
        let o = Outcome::winner(1, 3);
        assert_eq!(o.as_ranks(), &[1u32, 0, 1]);
    }

    #[test]
    fn draw_three_teams() {
        let o = Outcome::draw(3);
        assert_eq!(o.as_ranks(), &[0u32, 0, 0]);
    }

    #[test]
    fn ranking_from_iter() {
        let o = Outcome::ranking([2, 0, 1]);
        assert_eq!(o.as_ranks(), &[2u32, 0, 1]);
    }

    #[test]
    #[should_panic(expected = "winner index 2 out of range")]
    fn winner_out_of_range_panics() {
        let _ = Outcome::winner(2, 2);
    }
}
```
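One design note for reviewers: the `SmallVec<[u32; 4]>` backing keeps rankings of up to four teams inline on the stack, so the common 1v1 and small free-for-all outcomes allocate nothing; larger events spill to the heap transparently. A quick illustration using only this file's public constructors; the module path is an assumption.

```rust
// Illustrative snippet; `crate::outcome` is a hypothetical module path.
use crate::outcome::Outcome;

fn demo() {
    let duel = Outcome::winner(0, 2); // team 0 beat team 1
    let tie = Outcome::draw(3); // three-way draw: ranks [0, 0, 0]
    let podium = Outcome::ranking([2, 0, 1]); // team 1 first, team 2 second, team 0 last

    assert_eq!(duel.team_count(), 2);
    assert_eq!(tie.team_count(), 3);
    assert_eq!(podium.team_count(), 3);
}
```

Per the ingestion summary above, `record_winner`/`record_draw` presumably construct the first two shapes internally, and the fluent builder's `ranking([...])` mirrors `Outcome::ranking`; neither linkage is verified here.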