Implements tiers T0, T1, T2 of `docs/superpowers/specs/2026-04-23-trueskill-engine-redesign-design.md`. All three tiers have landed together on this branch because they build on one another; this PR rolls them up for a single review pass.

Per-tier plans:

- T0: `docs/superpowers/plans/2026-04-23-t0-numerical-parity.md`
- T1: `docs/superpowers/plans/2026-04-24-t1-factor-graph.md`
- T2: `docs/superpowers/plans/2026-04-24-t2-new-api-surface.md`

## Summary

### T0 — Numerical parity (internal)

- `Gaussian` switched to natural-parameter storage `(pi, tau)`; mul/div now ~7× faster (218 ps vs 1.57 ns).
- `HashMap<Index, _>` → dense `Vec<_>` keyed by `Index.0` (via `AgentStore<D>`, `SkillStore`).
- `ScratchArena` eliminates per-event allocations in `Game::likelihoods`.
- `InferenceError` seed type added (1 variant).
- 38 → 53 tests passing through T1.
- Benchmark: `Batch::iteration` 29.84 → 21.25 µs.

### T1 — Factor graph machinery (internal)

- `Factor` trait + `BuiltinFactor` enum (TeamSum / RankDiff / Trunc) driving within-game inference.
- `VarStore` flat storage for variable marginals.
- `Schedule` trait + `EpsilonOrMax` impl replacing the hand-rolled EP loop.
- `Game::likelihoods` rebuilt on the factor-graph machinery; iteration counts and goldens preserved to within 1e-6.
- 53 tests passing.
- Benchmark: `Batch::iteration` 23.01 µs (slight regression absorbed in T2).

### T2 — New API surface (breaking)

**Renames:**

- `IndexMap → KeyTable`, `Player → Rating`, `Agent → Competitor`, `Batch → TimeSlice`

**New types:**

- `Time` trait with `Untimed` ZST and `i64` impls; `Drift<T>`, `Rating<T, D>`, `Competitor<T, D>`, `TimeSlice<T>`, `History<T, D, O, K>` all generic.
- `Event<T, K>`, `Team<K>`, `Member<K>`, `Outcome` (`Ranked` variant; `#[non_exhaustive]`).
- `Observer<T>` trait + `NullObserver`.
- `ConvergenceOptions`, `ConvergenceReport`.
- `GameOptions`, `OwnedGame<T, D>`.
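For review context, the T0 headline claim (mul becomes pure natural-parameter addition) can be sanity-checked with plain `f64` arithmetic, independent of the crate. A minimal standalone sketch; the variable names are illustrative, and the golden values are the ones the crate's `test_mul` asserts against:

```rust
fn main() {
    // Two Gaussians in moment form: N(25, (25/3)^2) and N(0, 1).
    let (mu1, s1) = (25.0_f64, 25.0_f64 / 3.0);
    let (mu2, s2) = (0.0_f64, 1.0_f64);

    // Convert to natural parameters: pi = 1/sigma^2, tau = mu * pi.
    let pi1 = 1.0 / (s1 * s1);
    let tau1 = mu1 * pi1;
    let pi2 = 1.0 / (s2 * s2);
    let tau2 = mu2 * pi2;

    // The density product is just two additions: no sqrt, no reciprocal.
    let (pi, tau) = (pi1 + pi2, tau1 + tau2);

    // Convert back to moment form only when a caller asks for mu/sigma.
    let (mu, sigma) = (tau / pi, 1.0 / pi.sqrt());
    assert!((mu - 0.35488958990536273).abs() < 1e-10);
    assert!((sigma - 0.992876838486922).abs() < 1e-10);
    println!("product: mu = {mu:.6}, sigma = {sigma:.6}");
}
```

Division (the EP cavity) is the same picture with subtractions, which is why both ops land at the same ~219 ps.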
**Three-tier ingestion:**

- `history.record_winner(&K, &K, T)` / `record_draw(&K, &K, T)` — 1v1 convenience.
- `history.add_events(iter)` — typed bulk.
- `history.event(T).team([...]).weights([...]).ranking([...]).commit()` — fluent.

**Query API:** `current_skill`, `learning_curve`, `learning_curves` (keyed on `K`), `log_evidence`, `log_evidence_for`, `predict_quality`, `predict_outcome`.

**Game constructors:** `ranked`, `one_v_one`, `free_for_all`, `custom` — all returning `Result<_, InferenceError>`.

**`factors` module:** `Factor`, `Schedule`, `VarStore`, `VarId`, `BuiltinFactor`, `EpsilonOrMax`, `ScheduleReport`, `TeamSumFactor`, `RankDiffFactor`, `TruncFactor` now public.

**Errors:** `InferenceError` gains `MismatchedShape`, `InvalidProbability`, `ConvergenceFailed`; boundary panics converted to `Result`.

**Removed (breaking):** `History::convergence(iters, eps, verbose)`, `HistoryBuilder::gamma(f64)`, `HistoryBuilder::time(bool)`, `History.time: bool`, `learning_curves_by_index`, nested-Vec public `add_events`.

## Behavior change (documented in CHANGELOG)

`Time = Untimed` has `elapsed_to → 0`, so no drift accumulates between slices. The old `time=false` mode implicitly forced `elapsed=1` on reappearance via an `i64::MAX` sentinel — that quirk is not reproducible under a typed time axis. Tests that depended on it now use `History::<i64, _>` with explicit `1..=n` timestamps. One test (`test_env_ttt`) had 3 Gaussian goldens updated to reflect the corrected semantics; documented in commit `33a7d90`.

## Final numbers

| Metric | Before T0 | After T2 | Delta |
|---|---|---|---|
| `Batch::iteration` | 29.84 µs | 21.36 µs | **-28%** |
| `Gaussian::mul` | 1.57 ns | 219 ps | **-86%** |
| `Gaussian::div` | 1.57 ns | 219 ps | **-86%** |
| Tests passing | 38 | 90 | +52 |

All other Gaussian ops unchanged (~219 ps add/sub, ~264 ps pi/tau reads).
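The behavior change above reduces to how much drift variance gets injected between slices. A standalone sketch of that arithmetic; the gamma²-per-elapsed-unit model here is an illustrative assumption, not the crate's exact `Drift<T>` contract:

```rust
/// Additive-variance drift: sigma'^2 = sigma^2 + gamma^2 * elapsed.
/// (Illustrative model only; `drifted_sigma` is not a crate API.)
fn drifted_sigma(sigma: f64, gamma: f64, elapsed: i64) -> f64 {
    (sigma * sigma + gamma * gamma * elapsed as f64).sqrt()
}

fn main() {
    let (sigma, gamma) = (6.0_f64, 0.03_f64);

    // Untimed: elapsed_to -> 0, so uncertainty is unchanged between slices.
    assert_eq!(drifted_sigma(sigma, gamma, 0), sigma);

    // i64 time axis with explicit 1..=n timestamps: one unit of drift per step,
    // which is what the old time=false mode forced implicitly on reappearance.
    let one_step = drifted_sigma(sigma, gamma, 1);
    assert!(one_step > sigma);
    println!("sigma after one step: {one_step:.6}");
}
```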
## Test plan

- [x] `cargo test --features approx` — 90/90 pass (68 lib + 10 api_shape + 6 game + 4 record_winner + 2 equivalence)
- [x] `cargo clippy --all-targets --features approx -- -D warnings` — clean
- [x] `cargo +nightly fmt --check` — clean
- [x] `cargo bench --bench batch` — 21.36 µs
- [x] `cargo bench --bench gaussian` — unchanged from T1
- [x] `cargo run --example atp --features approx` — rewritten in new API, runs clean
- [x] Historical Game-level goldens preserved in `tests/equivalence.rs`
- [x] Public API matches spec Section 4 (verified by integration tests in `tests/api_shape.rs`)

## Commit history

~45 commits total across T0 + T1 + T2. Each task is self-contained and individually tested; the branch is bisectable. See `git log main..t2-new-api-surface` for the full list.

## Deferred to later tiers

- `Outcome::Scored` + `MarginFactor` — T4
- `Damped` / `Residual` schedules — T4
- `Send + Sync` bounds + Rayon parallelism — T3
- N-team `predict_outcome` — T4
- `Game::custom` full ergonomics — T4

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Reviewed-on: #1
Co-authored-by: Anders Olsson <anders.e.olsson@gmail.com>
Co-committed-by: Anders Olsson <anders.e.olsson@gmail.com>
```rust
use std::ops;

use crate::{MU, N_INF, SIGMA};

/// A Gaussian distribution stored in natural parameters.
///
/// `pi = 1 / sigma^2` (precision)
/// `tau = mu * pi` (precision-adjusted mean)
///
/// Multiplication and division in message passing become pure adds/subs of
/// the stored fields with no `sqrt` or reciprocal in the hot path. `mu()` and
/// `sigma()` are accessors computed on demand.
#[derive(Clone, Copy, PartialEq, Debug)]
pub struct Gaussian {
    pi: f64,
    tau: f64,
}

impl Gaussian {
    /// Construct from mean and standard deviation.
    pub const fn from_ms(mu: f64, sigma: f64) -> Self {
        if sigma == f64::INFINITY {
            Self { pi: 0.0, tau: 0.0 }
        } else if sigma == 0.0 {
            // Point mass at mu. tau = mu * pi = mu * inf.
            // For mu == 0 this is 0; for mu != 0 it is inf * mu = inf (IEEE).
            // Only N00 (mu=0, sigma=0) is used in practice.
            Self {
                pi: f64::INFINITY,
                tau: if mu == 0.0 { 0.0 } else { f64::INFINITY },
            }
        } else {
            let pi = 1.0 / (sigma * sigma);
            Self { pi, tau: mu * pi }
        }
    }

    /// Construct directly from natural parameters.
    #[inline]
    pub(crate) const fn from_natural(pi: f64, tau: f64) -> Self {
        Self { pi, tau }
    }

    #[inline]
    pub fn pi(&self) -> f64 {
        self.pi
    }

    #[inline]
    pub fn tau(&self) -> f64 {
        self.tau
    }

    #[inline]
    pub fn mu(&self) -> f64 {
        if self.pi == 0.0 {
            0.0
        } else {
            self.tau / self.pi
        }
    }

    #[inline]
    pub fn sigma(&self) -> f64 {
        if self.pi == 0.0 {
            f64::INFINITY
        } else if self.pi.is_infinite() {
            0.0
        } else {
            1.0 / self.pi.sqrt()
        }
    }

    pub(crate) fn delta(&self, other: Gaussian) -> (f64, f64) {
        (
            (self.mu() - other.mu()).abs(),
            (self.sigma() - other.sigma()).abs(),
        )
    }

    pub(crate) fn exclude(&self, other: Gaussian) -> Self {
        let var = self.sigma().powi(2) - other.sigma().powi(2);
        if var <= 0.0 {
            // When sigma_self ≈ sigma_other (including ULP-level rounding differences
            // from the pi→sigma accessor round-trip), the excluded contribution is N00.
            // Computing from_ms(tiny_mu, 0.0) would give {pi:inf, tau:inf}, whose
            // mu() = inf/inf = NaN. Returning N00 is correct: when both Gaussians
            // carry the same variance, the residual is a point mass at 0.
            return Gaussian::from_ms(0.0, 0.0);
        }
        let mu = self.mu() - other.mu();
        Self::from_ms(mu, var.sqrt())
    }

    pub(crate) fn forget(&self, variance_delta: f64) -> Self {
        let var = self.sigma().powi(2) + variance_delta;
        Self::from_ms(self.mu(), var.sqrt())
    }
}

impl Default for Gaussian {
    fn default() -> Self {
        Self::from_ms(MU, SIGMA)
    }
}

impl ops::Add<Gaussian> for Gaussian {
    type Output = Gaussian;
    /// Variance addition: (mu1 + mu2, sqrt(σ1² + σ2²)).
    /// Used for combining performance and noise; rare relative to mul/div.
    fn add(self, rhs: Gaussian) -> Self::Output {
        let mu = self.mu() + rhs.mu();
        let var = self.sigma().powi(2) + rhs.sigma().powi(2);
        Self::from_ms(mu, var.sqrt())
    }
}

impl ops::Sub<Gaussian> for Gaussian {
    type Output = Gaussian;
    /// (mu1 - mu2, sqrt(σ1² + σ2²)). Same sigma combination as Add.
    fn sub(self, rhs: Gaussian) -> Self::Output {
        let mu = self.mu() - rhs.mu();
        let var = self.sigma().powi(2) + rhs.sigma().powi(2);
        Self::from_ms(mu, var.sqrt())
    }
}

impl ops::Mul<Gaussian> for Gaussian {
    type Output = Gaussian;
    /// Factor product: nat-param add. Hot path — two f64 additions, no sqrt.
    fn mul(self, rhs: Gaussian) -> Self::Output {
        Self::from_natural(self.pi + rhs.pi, self.tau + rhs.tau)
    }
}

impl ops::Mul<f64> for Gaussian {
    type Output = Gaussian;
    fn mul(self, scalar: f64) -> Self::Output {
        if !scalar.is_finite() {
            return N_INF;
        }
        if scalar == 0.0 {
            // Scaling by 0 collapses to a point mass at 0 (sigma' = 0, mu' = 0).
            // This is N00, the additive identity, NOT N_INF.
            return Gaussian::from_ms(0.0, 0.0);
        }
        // sigma' = sigma * |scalar| => pi'  = pi / scalar²
        // mu'    = mu * scalar      => tau' = tau / scalar
        Self::from_natural(self.pi / (scalar * scalar), self.tau / scalar)
    }
}

impl ops::Div<Gaussian> for Gaussian {
    type Output = Gaussian;
    /// Cavity: nat-param sub. Hot path — two f64 subtractions, no sqrt.
    fn div(self, rhs: Gaussian) -> Self::Output {
        Self::from_natural(self.pi - rhs.pi, self.tau - rhs.tau)
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_add() {
        let n = Gaussian::from_ms(25.0, 25.0 / 3.0);
        let m = Gaussian::from_ms(0.0, 1.0);
        let r = n + m;
        assert!((r.mu() - 25.0).abs() < 1e-12);
        assert!((r.sigma() - 8.393118874676116).abs() < 1e-10);
    }

    #[test]
    fn test_sub() {
        let n = Gaussian::from_ms(25.0, 25.0 / 3.0);
        let m = Gaussian::from_ms(1.0, 1.0);
        let r = n - m;
        assert!((r.mu() - 24.0).abs() < 1e-12);
        assert!((r.sigma() - 8.393118874676116).abs() < 1e-10);
    }

    #[test]
    fn test_mul() {
        let n = Gaussian::from_ms(25.0, 25.0 / 3.0);
        let m = Gaussian::from_ms(0.0, 1.0);
        let r = n * m;
        assert!((r.mu() - 0.35488958990536273).abs() < 1e-10);
        assert!((r.sigma() - 0.992876838486922).abs() < 1e-10);
    }

    #[test]
    fn test_div() {
        let n = Gaussian::from_ms(25.0, 25.0 / 3.0);
        let m = Gaussian::from_ms(0.0, 1.0);
        let r = m / n;
        assert!((r.mu() - (-0.3652597402597402)).abs() < 1e-10);
        assert!((r.sigma() - 1.0072787050317253).abs() < 1e-10);
    }

    #[test]
    fn test_n00_is_add_identity() {
        // N00 (sigma=0) is the additive identity for the variance-convolution Add op.
        // N_INF (sigma=inf) is the identity for the EP-product Mul op.
        let g = Gaussian::from_ms(3.0, 2.0);
        let n00 = Gaussian::from_ms(0.0, 0.0);
        let r = n00 + g;
        assert!((r.mu() - g.mu()).abs() < 1e-12);
        assert!((r.sigma() - g.sigma()).abs() < 1e-12);
    }

    #[test]
    fn test_mul_is_factor_product() {
        // n * m in nat-params should be pi_n + pi_m, tau_n + tau_m
        let n = Gaussian::from_ms(2.0, 3.0);
        let m = Gaussian::from_ms(1.0, 2.0);
        let r = n * m;
        let expected_pi = n.pi() + m.pi();
        let expected_tau = n.tau() + m.tau();
        assert!((r.pi() - expected_pi).abs() < 1e-15);
        assert!((r.tau() - expected_tau).abs() < 1e-15);
    }

    #[test]
    fn test_div_is_cavity() {
        let n = Gaussian::from_ms(2.0, 1.0);
        let m = Gaussian::from_ms(1.0, 2.0);
        let r = n / m;
        let expected_pi = n.pi() - m.pi();
        let expected_tau = n.tau() - m.tau();
        assert!((r.pi() - expected_pi).abs() < 1e-15);
        assert!((r.tau() - expected_tau).abs() < 1e-15);
    }
}
```