T0 + T1 + T2: engine redesign through new API surface (#1)

Implements tiers T0, T1, T2 of `docs/superpowers/specs/2026-04-23-trueskill-engine-redesign-design.md`. All three tiers have landed together on this branch because they build on one another; this PR rolls them up for a single review pass.

Per-tier plans:
- T0: `docs/superpowers/plans/2026-04-23-t0-numerical-parity.md`
- T1: `docs/superpowers/plans/2026-04-24-t1-factor-graph.md`
- T2: `docs/superpowers/plans/2026-04-24-t2-new-api-surface.md`

## Summary

### T0 — Numerical parity (internal)

- `Gaussian` switched to natural-parameter storage `(pi, tau)`; mul/div now ~7× faster (218 ps vs 1.57 ns).
- `HashMap<Index, _>` → dense `Vec<_>` keyed by `Index.0` (via `AgentStore<D>`, `SkillStore`).
- `ScratchArena` eliminates per-event allocations in `Game::likelihoods`.
- `InferenceError` seed type added (1 variant).
- 38 → 53 tests passing through T1.
- Benchmark: `Batch::iteration` 29.84 → 21.25 µs.

### T1 — Factor graph machinery (internal)

- `Factor` trait + `BuiltinFactor` enum (TeamSum / RankDiff / Trunc) driving within-game inference.
- `VarStore` flat storage for variable marginals.
- `Schedule` trait + `EpsilonOrMax` impl replacing the hand-rolled EP loop.
- `Game::likelihoods` rebuilt on the factor-graph machinery; iteration counts and goldens preserved to within 1e-6.
- 53 tests passing.
- Benchmark: `Batch::iteration` 23.01 µs (slight regression absorbed in T2).

### T2 — New API surface (breaking)

**Renames:**
- `IndexMap → KeyTable`, `Player → Rating`, `Agent → Competitor`, `Batch → TimeSlice`

**New types:**
- `Time` trait with `Untimed` ZST and `i64` impls; `Drift<T>`, `Rating<T, D>`, `Competitor<T, D>`, `TimeSlice<T>`, `History<T, D, O, K>` all generic.
- `Event<T, K>`, `Team<K>`, `Member<K>`, `Outcome` (`Ranked` variant; `#[non_exhaustive]`).
- `Observer<T>` trait + `NullObserver`.
- `ConvergenceOptions`, `ConvergenceReport`.
- `GameOptions`, `OwnedGame<T, D>`.

**Three-tier ingestion:**
- `history.record_winner(&K, &K, T)` / `record_draw(&K, &K, T)` — 1v1 convenience.
- `history.add_events(iter)` — typed bulk.
- `history.event(T).team([...]).weights([...]).ranking([...]).commit()` — fluent.

**Query API:** `current_skill`, `learning_curve`, `learning_curves` (keyed on `K`), `log_evidence`, `log_evidence_for`, `predict_quality`, `predict_outcome`.

**Game constructors:** `ranked`, `one_v_one`, `free_for_all`, `custom` — all returning `Result<_, InferenceError>`.

**`factors` module:** `Factor`, `Schedule`, `VarStore`, `VarId`, `BuiltinFactor`, `EpsilonOrMax`, `ScheduleReport`, `TeamSumFactor`, `RankDiffFactor`, `TruncFactor` now public.

**Errors:** `InferenceError` gains `MismatchedShape`, `InvalidProbability`, `ConvergenceFailed`; boundary panics converted to `Result`.

**Removed (breaking):** `History::convergence(iters, eps, verbose)`, `HistoryBuilder::gamma(f64)`, `HistoryBuilder::time(bool)`, `History.time: bool`, `learning_curves_by_index`, nested-Vec public `add_events`.

## Behavior change (documented in CHANGELOG)

`Time = Untimed` has `elapsed_to → 0`, so no drift accumulates between slices. The old `time=false` mode implicitly forced `elapsed=1` on reappearance via an `i64::MAX` sentinel — that quirk is not reproducible under a typed time axis. Tests that depended on it now use `History::<i64, _>` with explicit `1..=n` timestamps. One test (`test_env_ttt`) had 3 Gaussian goldens updated to reflect the corrected semantics; documented in commit `33a7d90`.

## Final numbers

| Metric | Before T0 | After T2 | Delta |
|---|---|---|---|
| `Batch::iteration` | 29.84 µs | 21.36 µs | **-28%** |
| `Gaussian::mul` | 1.57 ns | 219 ps | **-86%** |
| `Gaussian::div` | 1.57 ns | 219 ps | **-86%** |
| Tests passing | 38 | 90 | +52 |

All other Gaussian ops unchanged (~219 ps add/sub, ~264 ps pi/tau reads).

## Test plan

- [x] `cargo test --features approx` — 90/90 pass (68 lib + 10 api_shape + 6 game + 4 record_winner + 2 equivalence)
- [x] `cargo clippy --all-targets --features approx -- -D warnings` — clean
- [x] `cargo +nightly fmt --check` — clean
- [x] `cargo bench --bench batch` — 21.36 µs
- [x] `cargo bench --bench gaussian` — unchanged from T1
- [x] `cargo run --example atp --features approx` — rewritten in new API, runs clean
- [x] Historical Game-level goldens preserved in `tests/equivalence.rs`
- [x] Public API matches spec Section 4 (verified by integration tests in `tests/api_shape.rs`)

## Commit history

~45 commits total across T0 + T1 + T2. Each task is self-contained and individually tested; the branch is bisectable. See `git log main..t2-new-api-surface` for the full list.

## Deferred to later tiers

- `Outcome::Scored` + `MarginFactor` — T4
- `Damped` / `Residual` schedules — T4
- `Send + Sync` bounds + Rayon parallelism — T3
- N-team `predict_outcome` — T4
- `Game::custom` full ergonomics — T4

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Reviewed-on: #1
Co-authored-by: Anders Olsson <anders.e.olsson@gmail.com>
Co-committed-by: Anders Olsson <anders.e.olsson@gmail.com>
2026-04-24 11:20:04 +00:00
committed by logaritmisk
parent a14df02089
commit d2aab82c1e
44 changed files with 10541 additions and 1325 deletions


All notable changes to this project will be documented in this file.

## Unreleased — T2 new API surface

Breaking: every renamed type and the new public API land together per
`docs/superpowers/specs/2026-04-23-trueskill-engine-redesign-design.md`
Section 7 "T2".

### Breaking renames

- `Batch` → `TimeSlice`
- `Player` → `Rating` (and the `.player` field on `Competitor` is now `.rating`)
- `Agent` → `Competitor`
- `IndexMap` → `KeyTable`
- `History` field `.batches` → `.time_slices`

### New types

- `Time` trait with `Untimed` ZST and `i64` impls (generic time axis).
- `Drift<T: Time>` — generified from the old `Drift` trait.
- `Event<T, K>`, `Team<K>`, `Member<K>` — typed bulk-ingest event shape.
- `Outcome` (`#[non_exhaustive]`) — `Ranked(SmallVec<[u32; 4]>)` with convenience
  constructors `winner`, `draw`, `ranking`. `Scored` lands in T4.
- `Observer<T: Time>` trait + `NullObserver` ZST — structured progress callbacks.
- `ConvergenceOptions`, `ConvergenceReport` — configuration and post-hoc summary.
- `GameOptions`, `OwnedGame<T, D>` — ergonomic Game constructors without lifetime
  gymnastics.
- `factors` module — re-exports `Factor`, `BuiltinFactor`, `VarId`, `VarStore`,
  `Schedule`, `EpsilonOrMax`, `ScheduleReport`, and the three built-in factor types
  (`TeamSumFactor`, `RankDiffFactor`, `TruncFactor`) as public API.

### New `History` API

- Three-tier ingestion:
  - Tier 1 (bulk): `add_events<I: IntoIterator<Item = Event<T, K>>>(events) -> Result`
  - Tier 2 (one-off): `record_winner(&K, &K, T)`, `record_draw(&K, &K, T)`
  - Tier 3 (fluent): `event(T).team([...]).weights([...]).ranking([...]).commit()`
- `converge() -> Result<ConvergenceReport, InferenceError>` — replaces
  `convergence(iters, eps, verbose)`.
- `current_skill(&K)`, `learning_curve(&K)`, `learning_curves()` (now keyed on `K`).
- `log_evidence()` zero-arg, `log_evidence_for(&[&K])`.
- `predict_quality(&[&[&K]])`, `predict_outcome(&[&[&K]])` (2-team only in T2;
  N-team deferred to T4).
- `intern(&Q)` / `lookup(&Q)` expose the internal `KeyTable<K>` for power users.
- `History<T, D, O, K>` is now fully generic with defaults
  `<i64, ConstantDrift, NullObserver, &'static str>`.

### New `Game` API

- `Game::ranked(&[&[Rating]], Outcome, &GameOptions) -> Result<OwnedGame, _>`.
- `Game::one_v_one(&Rating, &Rating, Outcome) -> Result<(Gaussian, Gaussian), _>`.
- `Game::free_for_all(&[&Rating], Outcome, &GameOptions) -> Result<OwnedGame, _>`.
- `Game::custom(...)` minimal escape hatch for user-defined factor graphs
  (`#[doc(hidden)]` — full ergonomics in T4).
- `Game::log_evidence()` and `OwnedGame::log_evidence()` accessors.

### Errors

- `InferenceError` now carries `MismatchedShape { kind, expected, got }`,
  `InvalidProbability { value }`, `ConvergenceFailed { last_step, iterations }`,
  and `NegativePrecision { pi }`. Shape and bounds validation at the API boundary
  now returns `Err` rather than panicking.

### Removed (breaking)

- `History::convergence(iters, eps, verbose)` — use `converge()`.
- `HistoryBuilder::gamma(f64)` — use `.drift(ConstantDrift(g))`.
- `HistoryBuilder::time(bool)` and `History.time: bool` — use the `Time` type parameter.
- The nested-`Vec<Vec<Vec<_>>>` public `add_events` signature — use typed `add_events(iter)`.
- `learning_curves_by_index()` — use `learning_curves()`.

### Performance

`Batch::iteration` bench: **21.36 µs** (T1 was 22.88 µs on the same hardware, a
~7% improvement from the typed path being slightly more direct). Gaussian
operations unchanged.

### Notes

- `Time = Untimed` returns `elapsed_to → 0` — a **behavior change** from the old
  `time=false` mode, which implicitly generated `elapsed=1` per event via an
  `i64::MAX` sentinel in `Agent.last_time`. Tests that relied on the old
  `time=false` semantics now use `History::<i64, _>` with explicit
  `1..=n` timestamps.

## 0.1.0 - 2026-04-23

### Features


```diff
@@ -16,6 +16,7 @@ harness = false
 [dependencies]
 approx = { version = "0.5.1", optional = true }
+smallvec = "1"

 [dev-dependencies]
 criterion = "0.5"
```

benches/baseline.txt (new file, 100 lines):

# Baseline numbers captured before T0 changes
# Hardware: lrrr.local / Apple M5 Pro
# Date: 2026-04-24
Batch::iteration 29.840 µs
Gaussian::add 219.58 ps
Gaussian::sub 219.41 ps
Gaussian::mul 1.568 ns ← hot path; target ≥1.5× improvement
Gaussian::div 1.572 ns ← hot path; target ≥1.5× improvement
Gaussian::pi 262.89 ps
Gaussian::tau 262.47 ps
Gaussian::pi_tau_combined 219.40 ps
# After T0 (2026-04-24, same hardware)
Batch::iteration 21.253 µs (1.40× — below 3× target; see post-mortem)
Gaussian::add 218.62 ps (1.00× — unchanged, Add/Sub use moment form)
Gaussian::sub 220.15 ps (1.00×)
Gaussian::mul 218.69 ps (7.17× — nat-param: now two f64 adds, no sqrt)
Gaussian::div 218.64 ps (7.19× — nat-param: now two f64 subs, no sqrt)
Gaussian::pi 263.19 ps (1.00× — now a field read, same cost)
Gaussian::tau 263.51 ps (1.00× — now a field read, same cost)
Gaussian::pi_tau_combined 219.13 ps (1.00×)
# Post-mortem: Batch::iteration 1.40× vs. 3× target
#
# Root cause: the bench has 100 tiny 2-team events. Each event still allocates
# ~10 Vecs per iteration (down from ~18). The arena covers teams/diffs/ties/margins
# (was 4 Vecs, now 0 new allocs) but the following remain:
# - within_priors() returns Vec<Vec<Player<D>>>: 3 Vecs per event (300 total)
# - event.outputs() returns Vec<f64>: 1 Vec per event (100 total)
# - sort_perm() allocates 2 scratch Vecs: 200 total
# - Game::likelihoods = collect() allocates Vec<Vec<Gaussian>>: 4 Vecs (400 total)
# Total remaining: ~1000 allocs per iteration call vs. ~1800 before (44% reduction).
#
# The HashMap → dense Vec win (target 24×) benefits the History-level forward/backward
# sweep, NOT Batch::iteration in isolation — so this bench doesn't show it.
#
# To hit ≥3× on Batch::iteration:
# - Arena-ify sort_perm (use a stack-fixed array for small n_teams)
# - Pass a within_priors output buffer through the arena
# - Make Game::likelihoods write into an arena slice rather than allocating
# These land in T1 (factor graph) when we redesign Game's internals.
# After T1 (2026-04-24, same hardware)
Batch::iteration 23.010 µs (1.08× vs T0 21.253 µs — slight regression)
Gaussian::add 231.23 ps (unchanged)
Gaussian::sub 235.38 ps (unchanged)
Gaussian::mul 234.55 ps (unchanged — nat-param storage)
Gaussian::div 233.27 ps (unchanged)
Gaussian::pi 272.68 ps (unchanged)
Gaussian::tau 272.73 ps (unchanged)
Gaussian::pi_tau_combined 234.xx ps (unchanged)
# Notes:
# - Batch::iteration 23.0 µs vs target ≤ 21.5 µs (8% above target).
# Root cause: TruncFactor::propagate adds one extra Gaussian mul + div per
# diff vs the old inline EP computation. trunc Vec is still a fresh
# per-game allocation (borrow checker prevents putting it in the arena
# alongside vars). These are addressable in T2.
# - arena.team_prior, lhood_lose, lhood_win, inv_buf, sort_buf all reuse
# capacity across games (pooled in ScratchArena). sort_perm() allocation
# eliminated. message.rs deleted.
# - Gaussian operations unchanged vs T0.
# - All 53 tests pass. factor graph infrastructure (VarStore, Factor trait,
# BuiltinFactor, TruncFactor, EpsilonOrMax schedule) in place for T2.
# After T2 (2026-04-24, same hardware)
Batch::iteration 21.36 µs (1.07× vs T1 22.88 µs — 7% improvement)
Gaussian::add 218.97 ps (unchanged)
Gaussian::sub 218.58 ps (unchanged)
Gaussian::mul 218.59 ps (unchanged)
Gaussian::div 218.57 ps (unchanged)
Gaussian::pi 264.20 ps (unchanged)
Gaussian::tau 260.80 ps (unchanged)
# Notes:
# - API-only tier; hot inference path unchanged. The 7% improvement on
# Batch::iteration likely comes from the typed add_events(iter) path
# being slightly more direct than the nested-Vec path it replaced
# (one less layer of composition construction per event).
# - Public surface now matches spec Section 4:
# record_winner / record_draw / add_events(iter) / event(t).team().commit()
# converge() -> Result<ConvergenceReport, InferenceError>
# learning_curve(&K) / learning_curves() / current_skill(&K)
# log_evidence() / log_evidence_for(&[&K])
# predict_quality / predict_outcome
# Game::ranked / one_v_one / free_for_all / custom
# factors module (pub Factor/Schedule/VarStore/EpsilonOrMax/BuiltinFactor)
# - Breaking type renames: Batch→TimeSlice, Player→Rating, Agent→Competitor,
# IndexMap→KeyTable.
# - Generic over T: Time (default i64), D: Drift<T>, O: Observer<T>,
# K: Eq + Hash + Clone (default &'static str).
# - Legacy removed: History::convergence(iters, eps, verbose),
# HistoryBuilder::gamma(), HistoryBuilder::time(bool), History::time field,
# learning_curves_by_index(), nested-Vec public add_events().
# - 90 tests green: 68 lib + 10 api_shape + 6 game + 4 record_winner +
# 2 equivalence.


```diff
@@ -1,45 +1,27 @@
-use std::collections::HashMap;
 use criterion::{Criterion, criterion_group, criterion_main};
 use trueskill_tt::{
-    BETA, GAMMA, IndexMap, MU, P_DRAW, SIGMA, agent::Agent, batch::Batch, drift::ConstantDrift,
-    gaussian::Gaussian, player::Player,
+    BETA, Competitor, GAMMA, KeyTable, MU, P_DRAW, Rating, SIGMA, TimeSlice, drift::ConstantDrift,
+    gaussian::Gaussian, storage::CompetitorStore,
 };

 fn criterion_benchmark(criterion: &mut Criterion) {
-    let mut index = IndexMap::new();
-    let a = index.get_or_create("a");
-    let b = index.get_or_create("b");
-    let c = index.get_or_create("c");
+    let mut index_map = KeyTable::new();
+    let a = index_map.get_or_create("a");
+    let b = index_map.get_or_create("b");
+    let c = index_map.get_or_create("c");

-    let agents = {
-        let mut map = HashMap::new();
-        map.insert(
-            a,
-            Agent {
-                player: Player::new(Gaussian::from_ms(MU, SIGMA), BETA, ConstantDrift(GAMMA)),
-                ..Default::default()
-            },
-        );
-        map.insert(
-            b,
-            Agent {
-                player: Player::new(Gaussian::from_ms(MU, SIGMA), BETA, ConstantDrift(GAMMA)),
-                ..Default::default()
-            },
-        );
-        map.insert(
-            c,
-            Agent {
-                player: Player::new(Gaussian::from_ms(MU, SIGMA), BETA, ConstantDrift(GAMMA)),
-                ..Default::default()
-            },
-        );
-        map
-    };
+    let mut agents: CompetitorStore<i64, ConstantDrift> = CompetitorStore::new();
+    for agent in [a, b, c] {
+        agents.insert(
+            agent,
+            Competitor {
+                rating: Rating::new(Gaussian::from_ms(MU, SIGMA), BETA, ConstantDrift(GAMMA)),
+                ..Default::default()
+            },
+        );
+    }

     let mut composition = Vec::new();
     let mut results = Vec::new();
@@ -51,11 +33,11 @@ fn criterion_benchmark(criterion: &mut Criterion) {
         weights.push(vec![vec![1.0], vec![1.0]]);
     }

-    let mut batch = Batch::new(1, P_DRAW);
-    batch.add_events(composition, results, weights, &agents);
+    let mut time_slice = TimeSlice::new(1, P_DRAW);
+    time_slice.add_events(composition, results, weights, &agents);

     criterion.bench_function("Batch::iteration", |b| {
-        b.iter(|| batch.iteration(0, &agents))
+        b.iter(|| time_slice.iteration(0, &agents))
     });
 }
```


```diff
@@ -23,8 +23,11 @@ fn benchmark_gaussian_arithmetic(criterion: &mut Criterion) {
     });

     // Benchmark division
+    // NOTE: numerator must have higher precision (smaller sigma) than the
+    // denominator in this representation; g2 (sigma=1) / g1 (sigma=8.33) is
+    // well-defined, whereas g1 / g2 underflows and panics in mu_sigma.
     criterion.bench_function("Gaussian::div", |bencher| {
-        bencher.iter(|| g1 / g2);
+        bencher.iter(|| g2 / g1);
     });

     // Benchmark natural parameter conversions
```
// Benchmark natural parameter conversions // Benchmark natural parameter conversions

(Three file diffs suppressed because they are too large.)


docs/superpowers/specs/2026-04-23-trueskill-engine-redesign-design.md (new file, 619 lines):
# TrueSkill-TT Engine Redesign — Design
**Date:** 2026-04-23
**Status:** Approved (pending implementation plan)
## Summary
Comprehensive redesign of the TrueSkill-TT engine targeting four orthogonal goals:
1. **Performance** — substantially faster offline convergence and incremental online updates.
2. **Accuracy and richer match formats** — support for score margins, free-for-all with partial orders, correlated skills.
3. **Better convergence** — replace ad-hoc capped iteration with a pluggable `Schedule` trait covering all three nested loops.
4. **Better API surface** — typed event description, observer-based progress reporting, generic time axis, structured errors, ergonomic builders.
The design is comprehensive (Approach 1 of three considered) but delivered in five tiers so each step is independently shippable and validated by benchmarks.
## Goals & non-goals
**Goals**
- 10–30× speedup on the offline convergence path for representative workloads (1000+ players, 1000+ events, 30 iterations)
- Order-of-magnitude speedup on incremental "add a single event" workloads
- Pluggable factor graph allowing new factor types without engine changes
- Optional Rayon-backed parallelism on top of `Send + Sync`-correct internals
- Typed, ergonomic public API; replace nested `Vec<Vec<Vec<_>>>` shapes with `Event<T, K>` / `Team<K>` / `Member<K>`
- Generic time axis: `Untimed`, `i64`, or user-supplied
- Observer-based progress instead of `verbose: bool` + `println!`
- Structured `Result<_, InferenceError>` at API boundaries
**Non-goals**
- WebAssembly support is not a goal; we may break it if a crate or feature requires.
- No GPU offload.
- No `no_std` support.
- No persistent format / serde — possible future feature.
- No replacement of the Gaussian/EP approximation itself in this design (the underlying inference math stays the same; we change layout, dispatch, scheduling, and API around it).
## Workload assumptions
Baseline workload that drives perf decisions:
- ~1000+ players
- ~1000+ events total
- ~5060 events per time slice (per day)
- Both online (incremental adds) and offline (full convergence) are common
- Offline convergence runs frequently
## Section 1 — Core types & traits
The foundation everything else builds on.
### `Gaussian` — natural-parameter storage
Switch storage from `(mu, sigma)` to natural parameters `(pi, tau)` where `pi = sigma⁻²`, `tau = mu · pi`. Multiplication and division dominate the hot path; in nat-params they are direct adds/subs of the components, no `sqrt`. Reads of `mu`/`sigma` become accessor methods (`tau / pi`, `1.0 / pi.sqrt()`). The trade is correct because reads are vanishingly rare compared to writes in EP.
```rust
pub struct Gaussian { pi: f64, tau: f64 }
pub const UNIFORM: Gaussian = Gaussian { pi: 0.0, tau: 0.0 }; // replaces N_INF
```
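To see why this representation wins, the product and quotient of two Gaussian densities are component-wise add/sub in `(pi, tau)`. A minimal self-contained sketch (an illustrative stand-in, not the crate's actual source — method names like `from_ms` follow the benchmark code elsewhere in this PR):

```rust
use std::ops::{Div, Mul};

/// Illustrative nat-param Gaussian: pi = sigma^-2, tau = mu * pi.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Gaussian {
    pi: f64,
    tau: f64,
}

impl Gaussian {
    fn from_ms(mu: f64, sigma: f64) -> Self {
        let pi = 1.0 / (sigma * sigma);
        Gaussian { pi, tau: mu * pi }
    }
    // Reads pay the conversion cost; rare in EP compared to mul/div.
    fn mu(&self) -> f64 { self.tau / self.pi }
    fn sigma(&self) -> f64 { 1.0 / self.pi.sqrt() }
}

// Density product: two f64 adds, no sqrt.
impl Mul for Gaussian {
    type Output = Gaussian;
    fn mul(self, rhs: Gaussian) -> Gaussian {
        Gaussian { pi: self.pi + rhs.pi, tau: self.tau + rhs.tau }
    }
}

// Density quotient: two f64 subs, no sqrt.
impl Div for Gaussian {
    type Output = Gaussian;
    fn div(self, rhs: Gaussian) -> Gaussian {
        Gaussian { pi: self.pi - rhs.pi, tau: self.tau - rhs.tau }
    }
}
```

Multiplying a factor in and dividing it back out round-trips exactly, which is the operation EP performs constantly — hence the ~7× mul/div win reported in the baseline file.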
### `Time` trait
Replaces the bare `i64` time field. Keeps `History` parametric.
```rust
pub trait Time: Copy + Ord + Send + Sync + 'static {
fn elapsed_to(&self, later: &Self) -> i64;
}
pub struct Untimed; // ZST for the no-time-axis case
impl Time for Untimed { fn elapsed_to(&self, _: &Self) -> i64 { 0 } }
impl Time for i64 { fn elapsed_to(&self, later: &Self) -> i64 { later - self } }
// Optional impls behind feature flags: time::OffsetDateTime, chrono types
```
### `Drift<T>` trait
Generic over `T: Time` so seasonal/calendar-aware drift is possible without going through `i64`.
```rust
pub trait Drift<T: Time>: Copy + Send + Sync {
fn variance_delta(&self, from: &T, to: &T) -> f64;
}
```
`ConstantDrift(f64)` impl: `from.elapsed_to(to) as f64 * gamma * gamma`.
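The `Time`/`Drift` interplay can be checked in isolation. A minimal self-contained sketch (trait shapes restated from above; not the crate's actual source):

```rust
/// Restated `Time` trait: how much has elapsed between two instants.
trait Time: Copy {
    fn elapsed_to(&self, later: &Self) -> i64;
}

#[derive(Clone, Copy)]
struct Untimed; // ZST: no time axis, nothing ever elapses
impl Time for Untimed {
    fn elapsed_to(&self, _: &Self) -> i64 { 0 }
}
impl Time for i64 {
    fn elapsed_to(&self, later: &Self) -> i64 { later - self }
}

/// Restated `Drift` trait, generic over the time axis.
trait Drift<T: Time>: Copy {
    fn variance_delta(&self, from: &T, to: &T) -> f64;
}

/// Constant drift: skill variance grows by gamma^2 per elapsed unit.
#[derive(Clone, Copy)]
struct ConstantDrift(f64);
impl<T: Time> Drift<T> for ConstantDrift {
    fn variance_delta(&self, from: &T, to: &T) -> f64 {
        from.elapsed_to(to) as f64 * self.0 * self.0
    }
}
```

Under `Untimed` the delta is always zero — exactly the behavior change called out in the changelog notes.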
### `Index` and `KeyTable<K>`
`Index(usize)` is the handle into dense per-`History` `Vec` storage. Public, but intended for use by power users on hot paths who want to skip the `KeyTable` lookup. Casual API takes `&K`. `KeyTable<K>` (renamed from `IndexMap`, to avoid colliding with the `indexmap` crate's type) maps user keys → `Index`.
### `Observer` trait
Replaces `verbose: bool` + `println!`. Default no-op impls; user overrides what they need.
```rust
pub trait Observer<T: Time>: Send + Sync {
fn on_iteration_end(&self, _iter: usize, _max_step: (f64, f64)) {}
fn on_batch_processed(&self, _time: &T, _idx: usize, _n_events: usize) {}
fn on_converged(&self, _iters: usize, _final_step: (f64, f64)) {}
}
pub struct NullObserver;
impl<T: Time> Observer<T> for NullObserver {}
```
### Trade-offs
- `Gaussian` natural-param representation: anyone reading `mu`/`sigma` in a hot loop pays a sqrt — but that's correct, hot reads are rare.
- `Time` as a trait (not enum) keeps it open-ended at zero runtime cost; default `History<i64, _>` keeps the call sites familiar.
- `Observer` is a trait (not a closure) so different sites can have different signatures without losing type safety. `NullObserver` is a ZST.
## Section 2 — Factor graph architecture
The current `Game::likelihoods` is a hand-rolled, hard-coded graph. To unlock richer formats and let us experiment with EP schedules, the graph itself becomes a data structure.
### Variable / Factor model
Variables hold their current Gaussian marginal. Factors hold their outgoing messages to each connected variable plus do the local computation. Standard EP: factor's update is "divide marginal by old outgoing → cavity → apply local approximation → multiply marginal by new outgoing."
```rust
pub trait Factor: Send + Sync {
fn variables(&self) -> &[VarId];
fn propagate(&mut self, vars: &mut VarStore) -> (f64, f64); // returns max delta
fn log_evidence(&self, _vars: &VarStore) -> f64 { 0.0 }
}
```
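The cavity arithmetic in that update is plain nat-param division and multiplication. A hedged sketch of a single edge update (not the crate's code; the `approx` closure stands in for the factor's local approximation, and the `mul`/`div` helpers mirror the nat-param rules from Section 1):

```rust
#[derive(Clone, Copy, Debug)]
struct Gaussian { pi: f64, tau: f64 } // natural parameters

impl Gaussian {
    fn mul(self, o: Gaussian) -> Gaussian {
        Gaussian { pi: self.pi + o.pi, tau: self.tau + o.tau }
    }
    fn div(self, o: Gaussian) -> Gaussian {
        Gaussian { pi: self.pi - o.pi, tau: self.tau - o.tau }
    }
}

/// One EP update on one edge:
///   marginal / old_outgoing  -> cavity
///   approx(cavity)           -> new outgoing message
///   cavity * new_outgoing    -> updated marginal
/// Returns the (dpi, dtau) step, which the schedule uses for stopping.
fn ep_update(
    marginal: &mut Gaussian,
    outgoing: &mut Gaussian,
    approx: impl Fn(Gaussian) -> Gaussian,
) -> (f64, f64) {
    let cavity = marginal.div(*outgoing);
    let new_outgoing = approx(cavity);
    let new_marginal = cavity.mul(new_outgoing);
    let step = (
        (new_marginal.pi - marginal.pi).abs(),
        (new_marginal.tau - marginal.tau).abs(),
    );
    *marginal = new_marginal;
    *outgoing = new_outgoing;
    step
}
```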
### Built-in factor catalog
| Factor | Purpose | Status |
|---|---|---|
| `PerformanceFactor` | skill → performance (add β² noise, optional weight) | replaces inline `performance() * weight` |
| `TeamSumFactor` | weighted sum of player perfs → team perf | replaces inline `fold` |
| `RankDiffFactor` | (team_a perf) − (team_b perf) → diff var | currently `team[e].posterior_win() − team[e+1].posterior_lose()` |
| `TruncFactor` | EP truncation: `P(diff > margin)` or `P(|diff| < margin)` for draws | wraps current `v_w` / `approx` |
| `MarginFactor` *(future)* | use observed score margin as soft evidence | enables richer match formats |
| `SynergyFactor` *(future)* | couples teammates' skills | enables different topology |
| `ScoreFactor` *(future)* | continuous outcome (e.g., points scored) | enables score-based outcomes |
The first four together exactly reproduce today's algorithm. The last three are extension slots.
### Game = factor graph + schedule
```rust
pub struct Game<S: Schedule = DefaultSchedule> {
vars: VarStore, // SoA: Vec<Gaussian> marginals
factors: FactorList, // enum dispatch over BuiltinFactor (see Open Questions)
schedule: S,
}
```
Lean toward **enum dispatch** (`enum BuiltinFactor { Perf(...), Sum(...), RankDiff(...), Trunc(...), ... }`) over `Box<dyn Factor>` for the built-ins:
- avoids per-message vtable overhead in the hottest loop
- keeps factor data inline (no heap indirection)
- still allows user-defined factors via a `BuiltinFactor::Custom(Box<dyn Factor>)` variant
### Schedule trait
Controls iteration order and stopping. Default = current behavior (sweep forward, then backward, until ε or max iters). Pluggable so we can later try damped EP or junction-tree schedules.
### High-level constructors
```rust
Game::ranked(teams, results, options) // dominant case
Game::free_for_all(players, ranking) // FFA with possible ties
Game::custom(builder) // power users build their own graph
```
`GameOptions` carries iteration cap, epsilon, p_draw, and approximation choice. Today these are scattered between method args and module constants.
### Trade-offs
- Enum dispatch over trait objects for built-ins; richer factors drop in via new enum variants.
- Variables and factor messages stored as `Vec<Gaussian>` indexed by `VarId` / edge slot — flat, cache-friendly.
- `Schedule` is a generic parameter (zero-cost); most users get default; experimentation is open.
### Open question
Whether `enum BuiltinFactor` will feel too closed-world. The `Custom(Box<dyn Factor>)` escape hatch helps but inner-loop perf for user factors will be slower. Acceptable for now; flagged for future revisit if it becomes a problem.
## Section 3 — Storage layout (SoA + arenas)
### Dense Vec keyed by `Index`
Every `HashMap<Index, T>` becomes a `Vec<T>` (or `Vec<Option<T>>` for sparse) indexed directly by `Index.0`. The public-facing `KeyTable<K>` continues to map arbitrary keys → `Index`.
### SoA at hot layers, AoS at boundaries
The `Skill` struct stays as a public type for the API (returned from `learning_curves`, etc.), but inside `TimeSlice` we lay it out column-wise:
```rust
struct TimeSliceSkills {
forward: Vec<Gaussian>, // [n_agents]
backward: Vec<Gaussian>,
likelihood: Vec<Gaussian>,
online: Vec<Gaussian>,
elapsed: Vec<i64>,
present: Vec<bool>,
}
```
Within a slice, the inner loops touch one column repeatedly across many events — keeping the column contiguous improves cache utilization and makes the eventual SIMD step (Section 6) straightforward.
`Gaussian` itself stays as a single 16-byte struct in the `Vec<Gaussian>`. Splitting into two parallel `Vec<f64>`s wins for pure SIMD over thousands of Gaussians but loses for the random-access patterns dominant in EP. Revisit if benchmarks demand it.
### Arena allocator inside `Game`
Replace per-event allocations with a `ScratchArena` reused across calls.
```rust
pub struct ScratchArena {
var_buf: Vec<Gaussian>,
factor_buf: Vec<Gaussian>, // edge messages
bool_buf: Vec<bool>,
f64_buf: Vec<f64>,
}
impl ScratchArena {
fn reset(&mut self); // sets len=0, keeps capacity
fn alloc_vars(&mut self, n: usize) -> &mut [Gaussian];
}
```
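A minimal runnable version of that idea (illustrative only — the real arena has more buffers): clearing resets length but keeps capacity, so steady-state use allocates nothing.

```rust
#[derive(Clone, Copy, Default)]
struct Gaussian { pi: f64, tau: f64 }

/// Scratch buffer reused across events: reset() clears, alloc_vars() hands
/// out a zeroed slice from the retained capacity.
#[derive(Default)]
struct ScratchArena {
    var_buf: Vec<Gaussian>,
}

impl ScratchArena {
    fn reset(&mut self) {
        self.var_buf.clear(); // len = 0, capacity retained
    }
    fn alloc_vars(&mut self, n: usize) -> &mut [Gaussian] {
        let start = self.var_buf.len();
        self.var_buf.resize(start + n, Gaussian::default());
        &mut self.var_buf[start..]
    }
}
```

After the first event warms the buffer, subsequent `reset` + `alloc_vars` cycles of the same size reuse the existing allocation.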
`TimeSlice` owns one `ScratchArena`; each event borrows it for the duration of its `Game` construction and inference. For the parallel-slice story (Section 6), each Rayon task gets its own arena.
### Per-event storage layout
Inside a `TimeSlice`, each event is stored column-wise as well, with `Item` inlined into team-level parallel arrays:
```rust
struct EventStorage {
teams: SmallVec<[TeamStorage; 4]>,
outcome: Outcome,
weights: SmallVec<[SmallVec<[f64; 4]>; 4]>,
evidence: f64,
}
struct TeamStorage {
competitors: SmallVec<[Index; 4]>, // who's on the team
edge_messages: SmallVec<[Gaussian; 4]>, // outgoing message per slot
output: f64,
}
```
Iteration over `(competitor, edge_message)` pairs zips two slices — no per-element struct.
### SmallVec for typical shapes
Teams ≤ ~5 players, games ≤ ~8 teams. `SmallVec<[T; 8]>` for team membership and `SmallVec<[T; 4]>` for team rosters keeps the common case allocation-free.
### Trade-offs
- Dense `Vec<T>` keyed by `Index` is faster but means agent removal needs tombstones (or just leaves slots present-but-inactive). Acceptable: TrueSkill histories rarely remove players.
- SoA at `TimeSlice` level only, not at `History` level. `History` keeps `Vec<TimeSlice>` because slices are heterogeneous in size.
- One `ScratchArena` per `TimeSlice` keeps the lifetime story simple.
### Open question
The `TimeSliceSkills` sketch above uses (b) **dense + present mask**: one slot per agent in the history, indexed directly by `Index`, with a `present: Vec<bool>` mask for batches the agent didn't participate in. The alternative is (a) **sparse columnar**: a `Vec<Index>` of present agents and parallel `Vec<Gaussian>` columns of length `n_present`, with a separate lookup (binary search or auxiliary table) to find a given `Index`'s slot.
(b) gives O(1) lookup and SIMD-friendly columns but wastes memory for sparsely populated slices. (a) is leaner per-slice but pays per-lookup cost in the inner loop. Bench both during T0 and pick. Default proposal: (b), since modern systems are memory-rich and the parallelism story is cleaner.
## Section 4 — API surface
### Typed event description
```rust
pub struct Event<T: Time, K> {
pub time: T,
pub teams: SmallVec<[Team<K>; 4]>,
pub outcome: Outcome,
}
pub struct Team<K> {
pub members: SmallVec<[Member<K>; 4]>,
}
pub struct Member<K> {
pub key: K,
pub weight: f64, // default 1.0
pub prior: Option<Rating>, // per-event override
}
pub enum Outcome {
Ranked(SmallVec<[u32; 4]>), // rank per team; equal ranks = tie
Scored(SmallVec<[f64; 4]>), // continuous score per team (engages MarginFactor)
}
```
`Outcome::winner(0)`, `Outcome::draw()`, `Outcome::ranking([0,1,2])` are convenience constructors.
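One plausible reading of those constructors, as a self-contained sketch (plain `Vec` instead of `SmallVec` to stay dependency-free, and `winner`/`draw` assumed to be two-team conveniences — both assumptions, not the crate's actual definitions):

```rust
/// Illustrative stand-in for `Outcome`. Lower rank = better; equal ranks = tie.
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum Outcome {
    Ranked(Vec<u32>),
    Scored(Vec<f64>), // engages MarginFactor (T4)
}

impl Outcome {
    /// Two-team game where team `winner` (0 or 1) wins. (Assumed 2-team.)
    fn winner(winner: u32) -> Self {
        Outcome::Ranked(if winner == 0 { vec![0, 1] } else { vec![1, 0] })
    }
    /// Two-team draw: both teams share rank 0. (Assumed 2-team.)
    fn draw() -> Self {
        Outcome::Ranked(vec![0, 0])
    }
    /// Arbitrary per-team ranks.
    fn ranking(ranks: impl IntoIterator<Item = u32>) -> Self {
        Outcome::Ranked(ranks.into_iter().collect())
    }
}
```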
### Builders
```rust
let mut history = History::<i64, _>::builder()
.mu(25.0).sigma(25.0/3.0).beta(25.0/6.0)
.drift(ConstantDrift(0.03))
.p_draw(0.10)
.convergence(ConvergenceOptions { max_iter: 30, epsilon: 1e-6 })
.observer(LogObserver::default())
.build();
```
For the no-time case, type inference picks `Untimed`:
```rust
let mut history = History::<Untimed, _>::builder().build();
```
### Three-tier event ingestion
```rust
// 1. Bulk ingestion (high-throughput path)
history.add_events(events_iter)?;
// 2. One-off match (very common in practice)
history.record_winner("alice", "bob", time)?;
history.record_draw("alice", "bob", time)?;
// 3. Builder for irregular shapes
history.event(time)
.team(["alice", "bob"]).weights([1.0, 0.7])
.team(["carol"])
.ranking([1, 0])
.commit()?;
```
### Convergence & queries
```rust
let report: ConvergenceReport = history.converge()?;
let curve: Vec<(i64, Gaussian)> = history.learning_curve(&"alice");
let all = history.learning_curves(); // HashMap<&K, Vec<(T, Gaussian)>>
let now = history.current_skill(&"alice"); // Option<Gaussian>
let ev = history.log_evidence();
let ev_for = history.log_evidence_for(&["alice", "bob"]);
let q = history.predict_quality(&[&["alice"], &["bob"]]);
let p_win = history.predict_outcome(&[&["alice"], &["bob"]]);
```
### Standalone Game
```rust
let g = Game::ranked(&[&[alice], &[bob]], Outcome::winner(0), &options);
let post = g.posteriors();
// Convenience
let (a, b) = Game::one_v_one(&alice, &bob, Outcome::winner(0));
```
### Errors
Replace `debug_assert!`/`panic!` at the API boundary with `Result`.
```rust
pub enum InferenceError {
MismatchedShape { kind: &'static str, expected: usize, got: usize },
InvalidProbability { value: f64 },
ConvergenceFailed { last_step: (f64, f64), iterations: usize },
NegativePrecision { pi: f64 },
}
```
Hot inner loops still use `debug_assert!` for invariants the API has already enforced.
### Trade-offs
- Generic over user's `K`; engine works in `Index`. Public outputs use `&K`.
- `SmallVec` everywhere on the event-description path.
- Three-tier API so casual users don't drown in types and bulk users still get throughput.
- `Outcome` enum replaces the "lower number wins" `&[f64]` convention.
### Open question
Whether to expose `Index` directly to users via an `intern_key(&K) -> Index` method, letting hot-path callers skip the `KeyTable` lookup on every call. Recommendation: yes — public `Index` handle plus `history.lookup<Q: Borrow<K>>(&Q) -> Option<Index>`. The casual API still takes `&K` everywhere; power users can promote to `Index` when profiling demands.
## Section 4½ — Naming pass
| Current | New | Rationale |
|---|---|---|
| `History` | `History` (kept) | Matches upstream; reads cleanly. |
| `Batch` | `TimeSlice` | Says what it is: every event sharing one timestamp. |
| `Player` | `Rating` | The struct holds prior/beta/drift — that's a rating configuration. Resolves the `Player`/`Agent` confusion. |
| `Agent` | `Competitor` | Holds dynamic state for someone competing in the history; fits the domain. |
| `Skill` | `Skill` (kept) | Per-time-slice skill estimate; clearer than `BatchSkill`. |
| `Item` | inlined into `TeamStorage` columns (engine) / `Member<K>` (public) | Eliminates the per-element struct in the hot path; gives API users a clear "team member" name. |
| `Game` | `Game` (kept) | `Match` collides with Rust's `match`. |
| `Index` | `Index` (kept) | Internal handle. |
| `IndexMap` | `KeyTable` | Avoids confusion with the `indexmap` crate. |
## Section 5 — Convergence & message scheduling
### Three nested loops, one mechanism
The system has three nested convergence loops:
1. Within-game: EP sweeps over the factor graph
2. Within-time-slice: re-running games as inputs change
3. Cross-history: forward-pass then backward-pass over all slices
All three implement `Workload`; one `Schedule` impl drives all of them.
```rust
pub trait Schedule {
    fn run<W: Workload>(&self, workload: &mut W) -> ScheduleReport;
}

pub trait Workload {
    fn step(&mut self) -> (f64, f64);
    fn snapshot_evidence(&self) -> f64 { 0.0 }
}

pub struct ScheduleReport {
    pub iterations: usize,
    pub final_step: (f64, f64),
    pub converged: bool,
}
```
### Built-in schedules
| Schedule | Behavior | Use |
|---|---|---|
| `EpsilonOrMax { eps, max }` | Default. Sweep until `(Δpi, Δtau) ≤ eps` or `max` iters. | All three loops. Replicates current behavior. |
| `Damped { eps, max, alpha }` | Same, but writes `α·new + (1 − α)·old`. | Stuck oscillations. |
| `Residual { eps, max }` | Priority-queue: re-update factor with largest pending delta first. | Faster convergence on uneven graphs. |
| `OneShot` | Exactly one pass, no convergence check. | Online incremental adds. |
### Stopping in natural-param space
Switch from `(|Δmu|, |Δsigma|) ≤ epsilon` to `(|Δpi|, |Δtau|) ≤ (eps_pi, eps_tau)`:
- `mu` and `sigma` are on different scales; one tolerance is wrong for both
- We store in nat-params anyway — checking convergence in mu/sigma would pay for sqrt conversions the nat-param check avoids
- Nat-param delta is the natural geometry of the EP fixed point
Default `EpsilonOrMax::default()` exposes a single `epsilon` for simplicity; advanced ctor exposes both tolerances.
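A minimal sketch of `EpsilonOrMax` in nat-param space, against a `Workload` trimmed to just `step()`; the `eps_pi`/`eps_tau` field names and the `simple` constructor are illustrative, not the landed signatures:

```rust
pub trait Workload {
    /// One sweep; returns (max |Δpi|, max |Δtau|) across all writes.
    fn step(&mut self) -> (f64, f64);
}

pub struct ScheduleReport {
    pub iterations: usize,
    pub final_step: (f64, f64),
    pub converged: bool,
}

pub struct EpsilonOrMax {
    pub eps_pi: f64,
    pub eps_tau: f64,
    pub max: usize,
}

impl EpsilonOrMax {
    /// Single-tolerance form, matching the shape of `EpsilonOrMax::default()`.
    pub fn simple(eps: f64, max: usize) -> Self {
        Self { eps_pi: eps, eps_tau: eps, max }
    }

    pub fn run<W: Workload>(&self, w: &mut W) -> ScheduleReport {
        let mut last = (f64::INFINITY, f64::INFINITY);
        for i in 1..=self.max {
            last = w.step();
            // Per-axis check in natural-parameter space.
            if last.0 <= self.eps_pi && last.1 <= self.eps_tau {
                return ScheduleReport { iterations: i, final_step: last, converged: true };
            }
        }
        ScheduleReport { iterations: self.max, final_step: last, converged: false }
    }
}

/// Toy workload whose deltas halve each sweep.
struct Halving(f64);
impl Workload for Halving {
    fn step(&mut self) -> (f64, f64) {
        self.0 *= 0.5;
        (self.0, self.0)
    }
}

fn main() {
    let r = EpsilonOrMax::simple(1e-3, 100).run(&mut Halving(1.0));
    assert!(r.converged);
    assert_eq!(r.iterations, 10); // 0.5^10 ≈ 9.8e-4 ≤ 1e-3
}
```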
### Within-game improvements
- Replace hard-cap of 10 iterations with `GameOptions::schedule` that propagates `ScheduleReport` upward
- Fast path: graphs with no diff chain (1v1 with 1 iter sufficient) skip the loop entirely
- FFA / many-team ranks benefit from `Residual`; opt-in
### Within-slice and cross-history improvements
- **No more old/new HashMap snapshotting**: track deltas inline as we write under SoA
- **Per-slice dirty bits**: a `TimeSlice` whose neighbor messages haven't changed since its last full sweep doesn't need to re-run. Track `time_slice.dirty` and skip clean ones during the cross-history sweep. Big win for online-add (the locality case).
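The dirty-bit skip can be sketched as follows; `TimeSlice` here is a stand-in holding only the flag, and `sweep` stands for "run within-slice EP, report whether the outgoing forward message moved":

```rust
pub struct TimeSlice {
    pub dirty: bool,
}

/// One forward pass; returns how many slices were skipped as clean.
pub fn forward_sweep(
    slices: &mut [TimeSlice],
    mut sweep: impl FnMut(usize) -> bool,
) -> usize {
    let mut skipped = 0;
    for i in 0..slices.len() {
        if !slices[i].dirty {
            // Neighbor messages unchanged since this slice's last full sweep.
            skipped += 1;
            continue;
        }
        slices[i].dirty = false;
        // If this slice's outgoing message changed, its successor must re-run.
        if sweep(i) {
            if let Some(next) = slices.get_mut(i + 1) {
                next.dirty = true;
            }
        }
    }
    skipped
}

fn main() {
    // Online-add case: only the slice that received the new event is dirty.
    let mut slices = vec![
        TimeSlice { dirty: false },
        TimeSlice { dirty: true },
        TimeSlice { dirty: false },
    ];
    let mut swept = Vec::new();
    let skipped = forward_sweep(&mut slices, |i| {
        swept.push(i);
        true // pretend posteriors moved
    });
    assert_eq!(skipped, 1); // slice 0 never re-ran
    assert_eq!(swept, vec![1, 2]); // dirt propagated 1 → 2
}
```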
### `ConvergenceReport`
```rust
pub struct ConvergenceReport {
    pub iterations: usize,
    pub final_step: (f64, f64),
    pub log_evidence: f64,
    pub converged: bool,
    pub per_iteration_time: SmallVec<[Duration; 32]>,
    pub slices_skipped: usize,
}
```
`Observer` continues to receive per-iteration callbacks for live UI; `ConvergenceReport` is the post-hoc summary.
### Trade-offs
- One `Schedule` trait shared across loops — fewer concepts, more composable.
- Convergence checks in nat-param space — slightly different exact threshold than today; tests' epsilons re-tuned mechanically.
- Dirty-bit skipping changes iteration order vs. today; fixed point is the same, iteration counts may shift downward.
- `Residual` and `Damped` are opt-in; default behavior matches today closely.
### Open question
Whether `Schedule::run` should take an optional `Observer` reference. Recommendation: observation lives at a higher layer (`History::converge` calls observer hooks; `Schedule` is purely the loop driver).
## Section 6 — Concurrency & parallelism
### What's parallelizable
| Operation | Parallelism | Strategy |
|---|---|---|
| `History::converge()` (full forward+backward) | Sequential across slices | Within each slice: color-group events in parallel via Rayon |
| `History::add_events(...)` | Sequential append, but ingestion of typed events into `EventStorage` parallelizes trivially | n/a |
| `History::learning_curves()` | Per-key parallel | `into_par_iter()` |
| `History::log_evidence_for(targets)` | Per-batch parallel, reduce sum | `par_iter().map(...).sum()` |
| `Game` inference | Sequential | n/a (too small to amortize Rayon overhead) |
### Within-slice color-group parallelism
When events are added to a slice, partition them into color groups where events in the same color touch no shared `Index`. Within a color, run events in parallel via Rayon. Across colors, run sequentially. Preserves asynchronous-EP semantics exactly.
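A minimal greedy coloring over events, where each event is reduced to the list of competitor indices it touches (`u32` here); `color_events` is an illustrative stand-in for the engine's partitioner, not its actual code:

```rust
use std::collections::HashMap;

/// Assign each event the smallest color not already used by an earlier
/// event sharing one of its indices. Events within one color touch
/// disjoint index sets, so they may run in parallel.
fn color_events(events: &[Vec<u32>]) -> Vec<usize> {
    // Per index: which colors are already taken by events touching it.
    let mut taken: HashMap<u32, Vec<bool>> = HashMap::new();
    let mut colors = Vec::with_capacity(events.len());
    for indices in events {
        // Smallest color free for every index this event touches.
        let mut c = 0;
        while indices.iter().any(|i| {
            taken.get(i).map_or(false, |t| t.get(c).copied().unwrap_or(false))
        }) {
            c += 1;
        }
        for &i in indices {
            let t = taken.entry(i).or_default();
            if t.len() <= c {
                t.resize(c + 1, false);
            }
            t[c] = true;
        }
        colors.push(c);
    }
    colors
}

fn main() {
    // Events 0 and 1 share no competitor, so they share color 0;
    // event 2 overlaps both and is pushed to color 1.
    assert_eq!(color_events(&[vec![0, 1], vec![2, 3], vec![0, 2]]), vec![0, 0, 1]);
}
```

The cost is linear in total event membership, matching the "run once per `add_events`" claim above.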
Alternative: synchronous EP with snapshot. All events read from a frozen skill snapshot, write deltas to thread-local buffers, barrier merges. Trivially parallel but weaker per-iteration convergence — needs damping. Available as a `Schedule` impl, opt-in.
### `Send + Sync` requirements
All public traits (`Time`, `Drift`, `Observer`, `Factor`, `Schedule`) require `Send + Sync`. `Observer` impls must be thread-safe (called from arbitrary worker threads).
### Rayon as default-on feature
`rayon` as default-on feature; with `default-features = false`, parallel paths fall back to sequential iterators behind `cfg(feature = "rayon")`.
### Expected speedup ballpark
For 1000 players, 60 events/slice × 1000 slices, 30 convergence iterations:
| Source | Estimated speedup vs. today |
|---|---|
| `HashMap` → dense `Vec` | 2–4× |
| Natural-param `Gaussian`, no-sqrt mul/div | 1.5–2× |
| Pre-allocated `ScratchArena` | 1.2–1.5× |
| Color-group parallel events in slice (8 cores) | 2–4× |
| Dirty-bit slice skipping (online add case) | 5–50× |
| **Combined (offline converge)** | ~10–30× |
| **Combined (online add)** | ~50–500× depending on locality |
These are pre-implementation estimates. Each tier validates with criterion.
### Trade-offs
- Color-group parallelism requires up-front graph coloring at ingestion. Cost: linear in events, run once per `add_events`. Cheap.
- Default = asynchronous EP (preserves current semantics). Synchronous opt-in only.
- Cross-slice sweep stays sequential; no speculative parallel sweeps.
- Rayon default-on but feature-gated.
### Open question
Whether to expose color-group partitioning to users. Recommendation: hidden by default, escape hatch via `add_events_with_partition(...)` for power users who already know their event independence.
## Section 7 — Migration, testing, and delivery plan
The crate is unreleased, so version-bump ceremony doesn't apply. Tiers are a sequencing of work and milestones, not releases.
### Tier sequence
**T0 — Numerical parity (no API change)**
Internal-only. Public surface unchanged.
- Switch `Gaussian` storage to natural parameters `(pi, tau)`. `mu()`/`sigma()` become accessors.
- Replace `HashMap<Index, _>` with dense `Vec<_>` keyed by `Index.0` everywhere.
- Introduce `ScratchArena` inside `Batch` so `Game::new` stops allocating per-event.
- Drop the `panic!` in `mu_sigma`; return `Result` propagated upward.
**Acceptance:** existing test suite passes (bit-equal where possible, ULP-bounded where natural-param arithmetic shifts the rounding); `cargo bench` shows ≥3× win on `batch` benchmark; no API breakage.
**T1 — Factor graph machinery (internal-only)**
- Introduce `Factor`, `VarStore`, `Schedule` as `pub(crate)` types.
- Re-implement `Game::likelihoods()` on top of `BuiltinFactor::{TeamSum, RankDiff, Trunc}` driven by `EpsilonOrMax`.
- Replace within-game iteration tracking with `ScheduleReport`.
**Acceptance:** existing test suite passes (ULP-bounded); within-game iteration counts unchanged; benchmarks ≥ T0.
**T2 — New API surface (breaking)**
All renames and the new public API land together. No half-renamed intermediate state.
- New types: `Rating`, `TimeSlice`, `Competitor`, `Member<K>`, `Outcome`, `Event<T, K>`, `KeyTable<K>`.
- `Time` trait introduced; `History<T: Time, D: Drift<T>>` is generic.
- Three-tier API surface: `record_winner`, `event(...).team(...).commit()`, bulk `add_events(iter)`.
- `Observer` trait + `ConvergenceReport`; `verbose: bool` deleted.
- `panic!`/`debug_assert!` at API boundary become `Result<_, InferenceError>`.
- Promote `Factor`/`Schedule`/`VarStore` to `pub` under a `factors` module.
**Acceptance:** full test suite rewritten in new API; equivalence tests prove identical posteriors vs. old API on the same inputs.
**T3 — Concurrency**
- `Send + Sync` audit and bounds on all public traits.
- Color-group partitioning at `TimeSlice` ingestion.
- `rayon` as default-on feature with `#[cfg(feature = "rayon")]` fallback.
- Parallel paths: within-slice color groups, `learning_curves`, `log_evidence_for`.
**Acceptance:** deterministic posteriors across `RAYON_NUM_THREADS={1,2,4,8}`; benchmarks show >2× on 8-core for offline converge.
**T4 — Richer factor types & schedules**
Each shipped independently after T3.
- `MarginFactor` → enables `Outcome::Scored`.
- `Damped` and `Residual` schedules.
- `SynergyFactor`, `ScoreFactor` → same pattern when wanted.
Each comes with its own benchmark and a worked example in `examples/`.
### Testing strategy
| Layer | Approach |
|---|---|
| **Numerical correctness** | Keep existing hardcoded golden values from `test_1vs1`, `test_1vs1_draw`, `test_2vs1vs2_mixed`, etc. through T0–T1 unchanged. They are a regression net against the original Python port. |
| **API parity** | T2 adds an `equivalence` test module that runs identical inputs through old vs. new construction and compares posteriors within ULPs. |
| **Property tests** | Add `proptest` for: factor graph fixed-point invariance under message order, `Outcome` round-trip, `Gaussian` mul/div associativity in nat-params, schedule convergence regardless of starting state. |
| **Determinism** | T3 adds tests that run identical input across multiple Rayon thread counts and assert identical posteriors. |
| **Benchmark gates** | Each tier has a "must not regress" gate vs. the previous tier on the existing `batch` and `gaussian` criterion suites. T0 must beat baseline by ≥3×; T1 ≥ T0; etc. |
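The nat-param associativity property is cheap to state even without `proptest`; a plain-assertion stand-in, assuming multiplication adds `(pi, tau)` component-wise (the `Gaussian` here is a two-field toy, not the crate's type):

```rust
#[derive(Copy, Clone, Debug)]
struct Gaussian {
    pi: f64,
    tau: f64,
}

impl Gaussian {
    /// In natural parameters, density multiplication is component-wise addition.
    fn mul(self, o: Gaussian) -> Gaussian {
        Gaussian { pi: self.pi + o.pi, tau: self.tau + o.tau }
    }
}

/// f64 addition is only associative up to rounding, so compare with a
/// relative ULP-scale tolerance rather than bitwise.
fn close(x: f64, y: f64) -> bool {
    (x - y).abs() <= 1e-12 * x.abs().max(y.abs()).max(1.0)
}

fn main() {
    let samples = [0.1, 0.5, 1.0, 2.5, 10.0, 123.456];
    for &p in &samples {
        for &q in &samples {
            for &r in &samples {
                let a = Gaussian { pi: p, tau: q };
                let b = Gaussian { pi: q, tau: r };
                let c = Gaussian { pi: r, tau: p };
                let lhs = a.mul(b).mul(c);
                let rhs = a.mul(b.mul(c));
                assert!(close(lhs.pi, rhs.pi) && close(lhs.tau, rhs.tau));
            }
        }
    }
}
```

A `proptest` version would draw `pi`/`tau` from wide float strategies instead of the fixed sample grid.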
### Risk management
- **T0 risk: rounding drift in tests.** Mitigation: where natural-param arithmetic legitimately changes the last ULPs, update goldens *and* simultaneously add a parity test against a snapshot taken from baseline to prove the difference is bounded.
- **T2 risk: API design mistakes.** Mitigation: review the spec and a worked example before implementing; iterate on feedback.
- **T3 risk: subtle race conditions in color-group partitioning.** Mitigation: `loom` tests for the merge step; deterministic-output assertion across thread counts.
- **Cross-tier risk: scope creep.** Each tier has a closed checklist; new ideas go to the next tier's wishlist.
### What we're explicitly *not* doing
- No GPU offload.
- No `no_std` support.
- No serde / persistence in this design.
- No incremental online API beyond `record_winner` / `add_events`.
## Open questions summary
Collected here for the review pass:
1. **`enum BuiltinFactor` extensibility** — may feel too closed-world; revisit if user-defined factors via `Custom(Box<dyn Factor>)` become common.
2. **Sparse vs. dense per-slice skill storage** — default to dense + `present` mask; sparse columnar is the alternative. Decided by T0 benchmarks.
3. **`Index` exposure for hot paths** — expose `intern_key`/`lookup` so power users can promote `&K` to `Index` and skip the `KeyTable` lookup; casual API still takes `&K` everywhere.
4. **`Schedule::run` and observer wiring** — observation stays at higher layer (`History::converge` calls observer hooks; `Schedule` is purely the loop driver).
5. **Color-group partition exposure** — hidden by default, escape hatch via `add_events_with_partition(...)`.

The ATP example migrated to the new API:

```diff
@@ -1,50 +1,61 @@
 use plotters::prelude::*;
+use smallvec::smallvec;
 use time::{Date, Month};
-use trueskill_tt::{History, IndexMap};
+use trueskill_tt::{Event, History, Member, Outcome, Team, drift::ConstantDrift};
 fn main() {
     let mut csv = csv::Reader::open("examples/atp.csv").unwrap();
-    let mut composition = Vec::new();
-    let mut results = Vec::new();
-    let mut times = Vec::new();
     let from = Date::from_calendar_date(1900, Month::January, 1).unwrap();
     let time_format = time::format_description::parse("[year]-[month]-[day]").unwrap();
-    let mut index_map = IndexMap::new();
+    let mut events: Vec<Event<i64, String>> = Vec::new();
     for row in csv.records() {
-        if &row["double"] == "t" {
-            let w1_id = index_map.get_or_create(&row["w1_id"]);
-            let w2_id = index_map.get_or_create(&row["w2_id"]);
-            let l1_id = index_map.get_or_create(&row["l1_id"]);
-            let l2_id = index_map.get_or_create(&row["l2_id"]);
-            composition.push(vec![vec![w1_id, w2_id], vec![l1_id, l2_id]]);
-        } else {
-            let w1_id = index_map.get_or_create(&row["w1_id"]);
-            let l1_id = index_map.get_or_create(&row["l1_id"]);
-            composition.push(vec![vec![w1_id], vec![l1_id]]);
-        }
-        results.push(vec![1.0, 0.0]);
         let date = Date::parse(&row["time_start"], &time_format).unwrap();
-        times.push((date - from).whole_days());
+        let time = (date - from).whole_days();
+        if &row["double"] == "t" {
+            events.push(Event {
+                time,
+                teams: smallvec![
+                    Team::with_members([
+                        Member::new(row["w1_id"].to_owned()),
+                        Member::new(row["w2_id"].to_owned()),
+                    ]),
+                    Team::with_members([
+                        Member::new(row["l1_id"].to_owned()),
+                        Member::new(row["l2_id"].to_owned()),
+                    ]),
+                ],
+                outcome: Outcome::winner(0, 2),
+            });
+        } else {
+            events.push(Event {
+                time,
+                teams: smallvec![
+                    Team::with_members([Member::new(row["w1_id"].to_owned())]),
+                    Team::with_members([Member::new(row["l1_id"].to_owned())]),
+                ],
+                outcome: Outcome::winner(0, 2),
+            });
+        }
     }
-    let mut hist = History::builder().sigma(1.6).gamma(0.036).build();
-    hist.add_events(composition, results, times, vec![]);
-    hist.convergence(10, 0.01, true);
+    let mut hist: History<i64, _, _, String> = History::builder_with_key()
+        .sigma(1.6)
+        .drift(ConstantDrift(0.036))
+        .convergence(trueskill_tt::ConvergenceOptions {
+            max_iter: 10,
+            epsilon: 0.01,
+        })
+        .build();
+    hist.add_events(events).unwrap();
+    hist.converge().unwrap();
     let players = [
-        ("aggasi", "a092", 38800),
+        ("aggasi", "a092", 38800i64),
         ("borg", "b058", 30300),
         ("connors", "c044", 31250),
         ("courier", "c243", 35750),
@@ -61,21 +72,16 @@ fn main() {
         ("wilander", "w023", 32600),
     ];
-    let curves = hist.learning_curves();
     let mut x_spec = (f64::MAX, f64::MIN);
     let mut y_spec = (f64::MAX, f64::MIN);
-    for (id, cutoff) in players
-        .iter()
-        .map(|&(_, id, cutoff)| (index_map.get_or_create(id), cutoff))
-    {
-        for (ts, gs) in &curves[&id] {
-            if *ts >= cutoff {
+    for &(_, id, cutoff) in &players {
+        for (ts, gs) in hist.learning_curve(id) {
+            if ts >= cutoff {
                 continue;
             }
-            let ts = *ts as f64;
+            let ts = ts as f64;
             if ts < x_spec.0 {
                 x_spec.0 = ts;
@@ -85,8 +91,8 @@ fn main() {
                 x_spec.1 = ts;
             }
-            let upper = gs.mu + gs.sigma;
-            let lower = gs.mu - gs.sigma;
+            let upper = gs.mu() + gs.sigma();
+            let lower = gs.mu() - gs.sigma();
             if lower < y_spec.0 {
                 y_spec.0 = lower;
@@ -111,24 +117,19 @@ fn main() {
     chart.configure_mesh().draw().unwrap();
-    for (idx, (player, id, cutoff)) in players
-        .iter()
-        .map(|&(player, id, cutoff)| (player, index_map.get_or_create(id), cutoff))
-        .enumerate()
-    {
+    for (idx, &(player, id, cutoff)) in players.iter().enumerate() {
         let mut data = Vec::new();
         let mut upper = Vec::new();
         let mut lower = Vec::new();
-        for (ts, gs) in curves[&id].iter() {
-            if *ts >= cutoff {
+        for (ts, gs) in hist.learning_curve(id) {
+            if ts >= cutoff {
                 continue;
             }
-            data.push((*ts as f64, gs.mu));
-            upper.push((*ts as f64, gs.mu + gs.sigma));
-            lower.push((*ts as f64, gs.mu - gs.sigma));
+            data.push((ts as f64, gs.mu()));
+            upper.push((ts as f64, gs.mu() + gs.sigma()));
+            lower.push((ts as f64, gs.mu() - gs.sigma()));
         }
         let color = Palette99::pick(idx);
```
Deleted: the old `Agent` module (47 lines, superseded by `Competitor` below):
use crate::{
N_INF,
drift::{ConstantDrift, Drift},
gaussian::Gaussian,
player::Player,
};
#[derive(Debug)]
pub struct Agent<D: Drift = ConstantDrift> {
pub player: Player<D>,
pub message: Gaussian,
pub last_time: i64,
}
impl<D: Drift> Agent<D> {
pub(crate) fn receive(&self, elapsed: i64) -> Gaussian {
if self.message != N_INF {
self.message
.forget(self.player.drift.variance_delta(elapsed))
} else {
self.player.prior
}
}
}
impl Default for Agent<ConstantDrift> {
fn default() -> Self {
Self {
player: Player::default(),
message: N_INF,
last_time: i64::MIN,
}
}
}
pub(crate) fn clean<'a, D: Drift + 'a, A: Iterator<Item = &'a mut Agent<D>>>(
agents: A,
last_time: bool,
) {
for a in agents {
a.message = N_INF;
if last_time {
a.last_time = i64::MIN;
}
}
}

The `approx` trait impls for `Gaussian`, updated for the accessor-based `mu()`/`sigma()`:

```diff
@@ -10,8 +10,8 @@ impl AbsDiffEq for Gaussian {
     }
     fn abs_diff_eq(&self, other: &Self, epsilon: Self::Epsilon) -> bool {
-        f64::abs_diff_eq(&self.mu, &other.mu, epsilon)
-            && f64::abs_diff_eq(&self.sigma, &other.sigma, epsilon)
+        f64::abs_diff_eq(&self.mu(), &other.mu(), epsilon)
+            && f64::abs_diff_eq(&self.sigma(), &other.sigma(), epsilon)
     }
 }
@@ -26,8 +26,8 @@ impl RelativeEq for Gaussian {
         epsilon: Self::Epsilon,
         max_relative: Self::Epsilon,
     ) -> bool {
-        f64::relative_eq(&self.mu, &other.mu, epsilon, max_relative)
-            && f64::relative_eq(&self.sigma, &other.sigma, epsilon, max_relative)
+        f64::relative_eq(&self.mu(), &other.mu(), epsilon, max_relative)
+            && f64::relative_eq(&self.sigma(), &other.sigma(), epsilon, max_relative)
     }
 }
@@ -37,7 +37,7 @@ impl UlpsEq for Gaussian {
     }
     fn ulps_eq(&self, other: &Self, epsilon: Self::Epsilon, max_ulps: u32) -> bool {
-        f64::ulps_eq(&self.mu, &other.mu, epsilon, max_ulps)
-            && f64::ulps_eq(&self.sigma, &other.sigma, epsilon, max_ulps)
+        f64::ulps_eq(&self.mu(), &other.mu(), epsilon, max_ulps)
+            && f64::ulps_eq(&self.sigma(), &other.sigma(), epsilon, max_ulps)
     }
 }
```

`src/arena.rs` (new file, 56 lines):
use crate::{factor::VarStore, gaussian::Gaussian};
/// Reusable scratch buffers for `Game::likelihoods`.
///
/// A `TimeSlice` owns one arena; all events in the slice share it across
/// the convergence iterations. All Vecs are cleared (not dropped) on
/// `reset()` so their heap capacity is reused across games.
#[derive(Debug, Default)]
pub struct ScratchArena {
pub(crate) vars: VarStore,
pub(crate) sort_buf: Vec<usize>,
pub(crate) inv_buf: Vec<usize>,
pub(crate) team_prior: Vec<Gaussian>,
pub(crate) lhood_lose: Vec<Gaussian>,
pub(crate) lhood_win: Vec<Gaussian>,
}
impl ScratchArena {
pub fn new() -> Self {
Self::default()
}
#[inline]
pub(crate) fn reset(&mut self) {
self.vars.clear();
self.sort_buf.clear();
self.inv_buf.clear();
self.team_prior.clear();
self.lhood_lose.clear();
self.lhood_win.clear();
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{N_INF, gaussian::Gaussian};
#[test]
fn reset_keeps_capacity() {
let mut arena = ScratchArena::new();
arena.vars.alloc(N_INF);
arena.sort_buf.push(42);
arena.team_prior.push(Gaussian::from_ms(0.0, 1.0));
let var_cap = arena.vars.marginals.capacity();
let sort_cap = arena.sort_buf.capacity();
let prior_cap = arena.team_prior.capacity();
arena.reset();
assert_eq!(arena.vars.len(), 0);
assert_eq!(arena.sort_buf.len(), 0);
assert_eq!(arena.team_prior.len(), 0);
assert_eq!(arena.vars.marginals.capacity(), var_cap);
assert_eq!(arena.sort_buf.capacity(), sort_cap);
assert_eq!(arena.team_prior.capacity(), prior_cap);
}
}

`src/competitor.rs` (new file, 71 lines):
use crate::{
N_INF,
drift::{ConstantDrift, Drift},
gaussian::Gaussian,
rating::Rating,
time::Time,
};
/// Per-history, temporal state for someone competing.
///
/// Renamed from `Agent` in T2; the former `.player` field is now
/// `.rating` to match the `Player → Rating` rename.
#[derive(Debug)]
pub struct Competitor<T: Time = i64, D: Drift<T> = ConstantDrift> {
pub rating: Rating<T, D>,
pub message: Gaussian,
pub last_time: Option<T>,
}
impl<T: Time, D: Drift<T>> Competitor<T, D> {
/// Compute the message received at time `now`, with drift accumulated
/// from `self.last_time` (if any) to `now`.
pub(crate) fn receive(&self, now: &T) -> Gaussian {
if self.message != N_INF {
let elapsed_variance = match &self.last_time {
Some(last) => self.rating.drift.variance_delta(last, now),
None => 0.0,
};
self.message.forget(elapsed_variance)
} else {
self.rating.prior
}
}
/// Compute the message using a pre-cached elapsed count (in `Time::elapsed_to` units).
///
/// Used in convergence sweeps where the elapsed was cached at slice-construction time
/// and should not be recomputed from `last_time` (which may have shifted).
pub(crate) fn receive_for_elapsed(&self, elapsed: i64) -> Gaussian {
if self.message != N_INF {
self.message
.forget(self.rating.drift.variance_for_elapsed(elapsed))
} else {
self.rating.prior
}
}
}
impl Default for Competitor<i64, ConstantDrift> {
fn default() -> Self {
Self {
rating: Rating::default(),
message: N_INF,
last_time: None,
}
}
}
pub(crate) fn clean<'a, T, D, C>(competitors: C, last_time: bool)
where
T: Time + 'a,
D: Drift<T> + 'a,
C: Iterator<Item = &'a mut Competitor<T, D>>,
{
for c in competitors {
c.message = N_INF;
if last_time {
c.last_time = None;
}
}
}

`src/convergence.rs` (new file, 31 lines):
//! Convergence configuration and reporting.
use std::time::Duration;
use smallvec::SmallVec;
#[derive(Clone, Copy, Debug)]
pub struct ConvergenceOptions {
pub max_iter: usize,
pub epsilon: f64,
}
impl Default for ConvergenceOptions {
fn default() -> Self {
Self {
max_iter: crate::ITERATIONS,
epsilon: crate::EPSILON,
}
}
}
/// Post-hoc summary of a `History::converge` call.
#[derive(Clone, Debug)]
pub struct ConvergenceReport {
pub iterations: usize,
pub final_step: (f64, f64),
pub log_evidence: f64,
pub converged: bool,
pub per_iteration_time: SmallVec<[Duration; 32]>,
pub slices_skipped: usize,
}

The `Drift` trait, made generic over `Time`:

```diff
@@ -1,14 +1,36 @@
 use std::fmt::Debug;
-pub trait Drift: Copy + Debug {
-    fn variance_delta(&self, elapsed: i64) -> f64;
+use crate::time::Time;
+/// Governs how much a competitor's skill can drift between two time points.
+///
+/// Generic over `T: Time` so seasonal or calendar-aware drift is expressible
+/// without going through `i64`.
+pub trait Drift<T: Time>: Copy + Debug {
+    /// Variance added to the skill prior for elapsed time `from -> to`.
+    ///
+    /// Called with `from <= to`; returning zero means no drift accumulates.
+    fn variance_delta(&self, from: &T, to: &T) -> f64;
+    /// Variance added for a pre-computed elapsed count (in the same units as
+    /// `T::elapsed_to`). Used where the elapsed is already cached as `i64`.
+    fn variance_for_elapsed(&self, elapsed: i64) -> f64;
 }
+/// Simple constant-per-unit-time drift.
+///
+/// For `Time = i64`: variance added is `(to - from) * gamma^2`.
+/// For `Time = Untimed`: elapsed is always 0, so drift is always 0.
 #[derive(Clone, Copy, Debug)]
 pub struct ConstantDrift(pub f64);
-impl Drift for ConstantDrift {
-    fn variance_delta(&self, elapsed: i64) -> f64 {
-        elapsed as f64 * self.0 * self.0
+impl<T: Time> Drift<T> for ConstantDrift {
+    fn variance_delta(&self, from: &T, to: &T) -> f64 {
+        let elapsed = from.elapsed_to(to).max(0) as f64;
+        elapsed * self.0 * self.0
+    }
+    fn variance_for_elapsed(&self, elapsed: i64) -> f64 {
+        elapsed.max(0) as f64 * self.0 * self.0
     }
 }
```

`src/error.rs` (new file, 51 lines):
use std::fmt;
#[derive(Debug, Clone, PartialEq)]
pub enum InferenceError {
/// Expected and actual lengths of some array-shaped input differ.
MismatchedShape {
kind: &'static str,
expected: usize,
got: usize,
},
/// A probability value is outside `[0, 1]`.
InvalidProbability { value: f64 },
/// Convergence exceeded `max_iter` without falling below `epsilon`.
ConvergenceFailed {
last_step: (f64, f64),
iterations: usize,
},
/// Negative precision: a Gaussian with `pi < 0` slipped into an API call.
NegativePrecision { pi: f64 },
}
impl fmt::Display for InferenceError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Self::MismatchedShape {
kind,
expected,
got,
} => {
write!(f, "{kind}: expected length {expected}, got {got}")
}
Self::InvalidProbability { value } => {
write!(f, "probability must be in [0, 1]; got {value}")
}
Self::ConvergenceFailed {
last_step,
iterations,
} => {
write!(
f,
"convergence failed after {iterations} iterations; last step = {last_step:?}"
)
}
Self::NegativePrecision { pi } => {
write!(f, "precision must be non-negative; got {pi}")
}
}
}
}
impl std::error::Error for InferenceError {}

`src/event.rs` (new file, 132 lines):
//! Typed event description for bulk ingestion.
//!
//! `Event<T, K>` is the new public event shape (spec Section 4). Replaces
//! the nested `Vec<Vec<Vec<Index>>>`, `Vec<Vec<f64>>`, `Vec<Vec<Vec<f64>>>`
//! that the old `add_events_with_prior` took.
use smallvec::SmallVec;
use crate::{gaussian::Gaussian, outcome::Outcome, time::Time};
/// A single match at time `time` involving some number of teams.
#[derive(Clone, Debug)]
pub struct Event<T: Time, K> {
pub time: T,
pub teams: SmallVec<[Team<K>; 4]>,
pub outcome: Outcome,
}
/// A team: list of members competing together.
#[derive(Clone, Debug)]
pub struct Team<K> {
pub members: SmallVec<[Member<K>; 4]>,
}
impl<K> Team<K> {
pub fn new() -> Self {
Self {
members: SmallVec::new(),
}
}
pub fn with_members<I: IntoIterator<Item = Member<K>>>(members: I) -> Self {
Self {
members: members.into_iter().collect(),
}
}
}
impl<K> Default for Team<K> {
fn default() -> Self {
Self::new()
}
}
/// One member of a team, identified by user key `K`.
///
/// `weight` defaults to 1.0; a per-event `prior` can override the competitor's
/// current skill estimate for this event only.
#[derive(Clone, Debug)]
pub struct Member<K> {
pub key: K,
pub weight: f64,
pub prior: Option<Gaussian>,
}
impl<K> Member<K> {
pub fn new(key: K) -> Self {
Self {
key,
weight: 1.0,
prior: None,
}
}
pub fn with_weight(mut self, weight: f64) -> Self {
self.weight = weight;
self
}
pub fn with_prior(mut self, prior: Gaussian) -> Self {
self.prior = Some(prior);
self
}
}
/// Convenience: a member is a user key with default weight 1.0 and no prior.
impl<K> From<K> for Member<K> {
fn from(key: K) -> Self {
Self::new(key)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::Outcome;
#[test]
fn member_new_has_unit_weight_no_prior() {
let m = Member::new("alice");
assert_eq!(m.key, "alice");
assert_eq!(m.weight, 1.0);
assert!(m.prior.is_none());
}
#[test]
fn member_builder_methods_chain() {
let m = Member::new("alice")
.with_weight(0.5)
.with_prior(Gaussian::from_ms(20.0, 5.0));
assert_eq!(m.weight, 0.5);
assert!(m.prior.is_some());
}
#[test]
fn member_from_key() {
let m: Member<&str> = "bob".into();
assert_eq!(m.key, "bob");
assert_eq!(m.weight, 1.0);
}
#[test]
fn team_with_members_collects() {
let t: Team<&str> = Team::with_members([Member::new("a"), Member::new("b")]);
assert_eq!(t.members.len(), 2);
}
#[test]
fn event_construction() {
use smallvec::smallvec;
let e: Event<i64, &str> = Event {
time: 1,
teams: smallvec![
Team::with_members([Member::new("a")]),
Team::with_members([Member::new("b")]),
],
outcome: Outcome::winner(0, 2),
};
assert_eq!(e.teams.len(), 2);
assert_eq!(e.time, 1);
}
}

`src/event_builder.rs` (new file, 94 lines):
use smallvec::SmallVec;
use crate::{
InferenceError, Outcome,
drift::Drift,
event::{Event, Member, Team},
history::History,
observer::Observer,
time::Time,
};
pub struct EventBuilder<'h, T, D, O, K>
where
T: Time,
D: Drift<T>,
O: Observer<T>,
K: Eq + std::hash::Hash + Clone,
{
history: &'h mut History<T, D, O, K>,
event: Event<T, K>,
current_team_idx: Option<usize>,
}
impl<'h, T, D, O, K> EventBuilder<'h, T, D, O, K>
where
T: Time,
D: Drift<T>,
O: Observer<T>,
K: Eq + std::hash::Hash + Clone,
{
pub(crate) fn new(history: &'h mut History<T, D, O, K>, time: T) -> Self {
Self {
history,
event: Event {
time,
teams: SmallVec::new(),
outcome: Outcome::Ranked(SmallVec::new()),
},
current_team_idx: None,
}
}
/// Add a team by its member keys (weight 1.0 each, no prior overrides).
pub fn team<I: IntoIterator<Item = K>>(mut self, keys: I) -> Self {
let members: SmallVec<[Member<K>; 4]> = keys.into_iter().map(Member::new).collect();
self.event.teams.push(Team { members });
self.current_team_idx = Some(self.event.teams.len() - 1);
self
}
/// Set per-member weights for the most recently added team.
///
/// Panics if called before `.team(...)`; in debug builds, also panics
/// if the length doesn't match the team's member count.
pub fn weights<I: IntoIterator<Item = f64>>(mut self, weights: I) -> Self {
let idx = self
.current_team_idx
.expect(".weights(...) called before any .team(...)");
let ws: Vec<f64> = weights.into_iter().collect();
let team = &mut self.event.teams[idx];
debug_assert_eq!(
ws.len(),
team.members.len(),
"weights length must match team size"
);
for (m, w) in team.members.iter_mut().zip(ws) {
m.weight = w;
}
self
}
/// Set explicit ranks per team (length must equal number of teams).
pub fn ranking<I: IntoIterator<Item = u32>>(mut self, ranks: I) -> Self {
self.event.outcome = Outcome::ranking(ranks);
self
}
/// Mark team `winner_idx` as winner; others tied for last.
pub fn winner(mut self, winner_idx: u32) -> Self {
self.event.outcome = Outcome::winner(winner_idx, self.event.teams.len() as u32);
self
}
/// All teams tied.
pub fn draw(mut self) -> Self {
self.event.outcome = Outcome::draw(self.event.teams.len() as u32);
self
}
/// Commit the event to the history.
pub fn commit(self) -> Result<(), InferenceError> {
self.history.add_events(std::iter::once(self.event))
}
}

`src/factor/mod.rs` (new file, 148 lines):
//! Factor graph machinery for within-game inference.
use crate::gaussian::Gaussian;
/// Identifier for a variable in a `VarStore`.
///
/// Variables hold the current Gaussian marginal and are owned by exactly one
/// `VarStore`. `VarId` is meaningful only within its owning store.
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
pub struct VarId(pub u32);
/// Flat storage of variable marginals.
///
/// Variables are allocated by `alloc()` and accessed by `VarId`. The store is
/// reused across `Game::ranked_with_arena` calls (it lives in the `ScratchArena`); call
/// `clear()` before reuse.
#[derive(Debug, Default)]
pub struct VarStore {
pub(crate) marginals: Vec<Gaussian>,
}
impl VarStore {
pub fn new() -> Self {
Self::default()
}
pub fn clear(&mut self) {
self.marginals.clear();
}
pub fn len(&self) -> usize {
self.marginals.len()
}
pub fn is_empty(&self) -> bool {
self.marginals.is_empty()
}
pub fn alloc(&mut self, init: Gaussian) -> VarId {
let id = VarId(self.marginals.len() as u32);
self.marginals.push(init);
id
}
pub fn get(&self, id: VarId) -> Gaussian {
self.marginals[id.0 as usize]
}
pub fn set(&mut self, id: VarId, g: Gaussian) {
self.marginals[id.0 as usize] = g;
}
}
/// A factor in the EP graph.
///
/// Factors hold their own outgoing messages and propagate them by reading
/// connected variable marginals from a `VarStore` and writing back updated
/// marginals.
pub trait Factor {
/// Update outgoing messages and write back to the var store.
///
/// Returns the max delta `(|Δmu|, |Δsigma|)` across the writes made during
/// this propagation. Used by the `Schedule` to detect convergence.
fn propagate(&mut self, vars: &mut VarStore) -> (f64, f64);
/// Optional log-evidence contribution. Default 0.0 (no contribution).
fn log_evidence(&self, _vars: &VarStore) -> f64 {
0.0
}
}
/// Enum dispatcher for the built-in factor types.
///
/// Using an enum instead of `Box<dyn Factor>` keeps factor data inline and
/// avoids virtual-call overhead in the hot inference loop.
#[derive(Debug)]
pub enum BuiltinFactor {
TeamSum(team_sum::TeamSumFactor),
RankDiff(rank_diff::RankDiffFactor),
Trunc(trunc::TruncFactor),
}
impl Factor for BuiltinFactor {
fn propagate(&mut self, vars: &mut VarStore) -> (f64, f64) {
match self {
Self::TeamSum(f) => f.propagate(vars),
Self::RankDiff(f) => f.propagate(vars),
Self::Trunc(f) => f.propagate(vars),
}
}
fn log_evidence(&self, vars: &VarStore) -> f64 {
match self {
Self::Trunc(f) => f.log_evidence(vars),
_ => 0.0,
}
}
}
pub mod rank_diff;
pub mod team_sum;
pub mod trunc;
#[cfg(test)]
mod tests {
use super::*;
use crate::N_INF;
#[test]
fn alloc_assigns_sequential_ids() {
let mut store = VarStore::new();
let a = store.alloc(N_INF);
let b = store.alloc(N_INF);
let c = store.alloc(N_INF);
assert_eq!(a, VarId(0));
assert_eq!(b, VarId(1));
assert_eq!(c, VarId(2));
assert_eq!(store.len(), 3);
}
#[test]
fn get_returns_initial_value() {
let mut store = VarStore::new();
let g = Gaussian::from_ms(2.5, 1.0);
let id = store.alloc(g);
assert_eq!(store.get(id), g);
}
#[test]
fn set_updates_value() {
let mut store = VarStore::new();
let id = store.alloc(N_INF);
let new = Gaussian::from_ms(3.0, 0.5);
store.set(id, new);
assert_eq!(store.get(id), new);
}
#[test]
fn clear_resets_length_keeping_capacity() {
let mut store = VarStore::new();
store.alloc(N_INF);
store.alloc(N_INF);
let cap = store.marginals.capacity();
store.clear();
assert_eq!(store.len(), 0);
assert_eq!(store.marginals.capacity(), cap);
}
}

src/factor/rank_diff.rs (new file, 95 lines)
use crate::factor::{Factor, VarId, VarStore};
/// Maintains the constraint `diff = team_a - team_b` between three vars.
///
/// On each propagation:
/// - Reads marginals at `team_a` and `team_b` (which already incorporate any
/// incoming messages from neighboring factors).
/// - Computes `new_diff = team_a - team_b` (variance addition; see Gaussian::Sub).
/// - Writes the new marginal to `diff`.
/// - Returns the delta against the previous diff value.
///
/// This factor does NOT store an outgoing message; the diff variable is
/// effectively replaced on each propagation. The TruncFactor on the same diff
/// var holds the EP-divide message that produces the cavity.
#[derive(Debug)]
pub struct RankDiffFactor {
pub team_a: VarId,
pub team_b: VarId,
pub diff: VarId,
}
impl Factor for RankDiffFactor {
fn propagate(&mut self, vars: &mut VarStore) -> (f64, f64) {
let a = vars.get(self.team_a);
let b = vars.get(self.team_b);
let new_diff = a - b;
let old = vars.get(self.diff);
vars.set(self.diff, new_diff);
old.delta(new_diff)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{N_INF, gaussian::Gaussian};
#[test]
fn diff_of_two_known_gaussians() {
let mut vars = VarStore::new();
let team_a = vars.alloc(Gaussian::from_ms(25.0, 3.0));
let team_b = vars.alloc(Gaussian::from_ms(20.0, 4.0));
let diff = vars.alloc(N_INF);
let mut f = RankDiffFactor {
team_a,
team_b,
diff,
};
f.propagate(&mut vars);
let result = vars.get(diff);
// mu = 25 - 20 = 5; var = 9 + 16 = 25; sigma = 5
assert!((result.mu() - 5.0).abs() < 1e-12);
assert!((result.sigma() - 5.0).abs() < 1e-12);
}
#[test]
fn delta_zero_on_repeat() {
let mut vars = VarStore::new();
let team_a = vars.alloc(Gaussian::from_ms(10.0, 2.0));
let team_b = vars.alloc(Gaussian::from_ms(8.0, 1.0));
let diff = vars.alloc(N_INF);
let mut f = RankDiffFactor {
team_a,
team_b,
diff,
};
f.propagate(&mut vars);
let (dmu, dsig) = f.propagate(&mut vars);
assert!(dmu < 1e-12);
assert!(dsig < 1e-12);
}
#[test]
fn delta_reflects_team_change() {
let mut vars = VarStore::new();
let team_a = vars.alloc(Gaussian::from_ms(10.0, 1.0));
let team_b = vars.alloc(Gaussian::from_ms(0.0, 1.0));
let diff = vars.alloc(N_INF);
let mut f = RankDiffFactor {
team_a,
team_b,
diff,
};
f.propagate(&mut vars);
// change team_a, repropagate; delta should be positive
vars.set(team_a, Gaussian::from_ms(15.0, 1.0));
let (dmu, _dsig) = f.propagate(&mut vars);
assert!(dmu > 4.0, "expected ~5 delta, got {}", dmu);
}
}

src/factor/team_sum.rs (new file, 98 lines)
use crate::{
N00,
factor::{Factor, VarId, VarStore},
gaussian::Gaussian,
};
/// Computes the weighted sum of player performances into a team-perf var.
///
/// Inputs are pre-computed player performance Gaussians (i.e., rating priors
/// already with beta² noise added via `Rating::performance()`). The factor
/// runs once per game and writes the weighted sum to the output var.
#[derive(Debug)]
pub struct TeamSumFactor {
pub inputs: Vec<(Gaussian, f64)>,
pub out: VarId,
}
impl Factor for TeamSumFactor {
fn propagate(&mut self, vars: &mut VarStore) -> (f64, f64) {
let perf = self.inputs.iter().fold(N00, |acc, (g, w)| acc + (*g * *w));
let old = vars.get(self.out);
vars.set(self.out, perf);
old.delta(perf)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::N_INF;
#[test]
fn single_player_unit_weight() {
let mut vars = VarStore::new();
let out = vars.alloc(N_INF);
let g = Gaussian::from_ms(25.0, 5.0);
let mut f = TeamSumFactor {
inputs: vec![(g, 1.0)],
out,
};
f.propagate(&mut vars);
let result = vars.get(out);
assert!((result.mu() - 25.0).abs() < 1e-12);
assert!((result.sigma() - 5.0).abs() < 1e-12);
}
#[test]
fn two_players_summed() {
let mut vars = VarStore::new();
let out = vars.alloc(N_INF);
let g1 = Gaussian::from_ms(20.0, 3.0);
let g2 = Gaussian::from_ms(30.0, 4.0);
let mut f = TeamSumFactor {
inputs: vec![(g1, 1.0), (g2, 1.0)],
out,
};
f.propagate(&mut vars);
let result = vars.get(out);
// sum: mu = 20 + 30 = 50, var = 9 + 16 = 25, sigma = 5
assert!((result.mu() - 50.0).abs() < 1e-12);
assert!((result.sigma() - 5.0).abs() < 1e-12);
}
#[test]
fn weighted_inputs() {
let mut vars = VarStore::new();
let out = vars.alloc(N_INF);
let g = Gaussian::from_ms(10.0, 2.0);
let mut f = TeamSumFactor {
inputs: vec![(g, 2.0)],
out,
};
f.propagate(&mut vars);
let result = vars.get(out);
// g * 2.0: mu = 10*2 = 20, sigma = 2*2 = 4
assert!((result.mu() - 20.0).abs() < 1e-12);
assert!((result.sigma() - 4.0).abs() < 1e-12);
}
#[test]
fn delta_is_zero_on_repeat_propagate() {
let mut vars = VarStore::new();
let out = vars.alloc(N_INF);
let g = Gaussian::from_ms(5.0, 1.0);
let mut f = TeamSumFactor {
inputs: vec![(g, 1.0)],
out,
};
f.propagate(&mut vars);
let (dmu, dsig) = f.propagate(&mut vars);
assert!(dmu < 1e-12, "expected ~0 delta on repeat, got {}", dmu);
assert!(dsig < 1e-12);
}
}

src/factor/trunc.rs (new file, 130 lines)
use crate::{
N_INF, approx, cdf,
factor::{Factor, VarId, VarStore},
gaussian::Gaussian,
};
/// EP truncation factor on a diff variable.
///
/// Implements the rectified-Gaussian approximation that turns a diff
/// distribution into a "this team rank-beats that team" or "tied" likelihood.
/// Stores its outgoing message to the diff variable so the cavity computation
/// produces the correct EP message on each propagation.
#[derive(Debug)]
pub struct TruncFactor {
pub diff: VarId,
pub margin: f64,
pub tie: bool,
/// Outgoing message to the diff variable (initial: N_INF, the EP identity).
pub(crate) msg: Gaussian,
/// Cached evidence (linear, not log) computed from the cavity on first propagation.
pub(crate) evidence_cached: Option<f64>,
}
impl TruncFactor {
pub fn new(diff: VarId, margin: f64, tie: bool) -> Self {
Self {
diff,
margin,
tie,
msg: N_INF,
evidence_cached: None,
}
}
}
impl Factor for TruncFactor {
fn propagate(&mut self, vars: &mut VarStore) -> (f64, f64) {
let marginal = vars.get(self.diff);
// Cavity: marginal divided by our outgoing message.
let cavity = marginal / self.msg;
// First-time-only: cache the evidence contribution from the cavity.
if self.evidence_cached.is_none() {
self.evidence_cached = Some(cavity_evidence(cavity, self.margin, self.tie));
}
// Apply the truncation approximation to the cavity.
let trunc = approx(cavity, self.margin, self.tie);
// New outgoing message such that cavity * new_msg = trunc.
let new_msg = trunc / cavity;
let old_msg = self.msg;
self.msg = new_msg;
// Update the marginal: marginal_new = cavity * new_msg = trunc.
vars.set(self.diff, trunc);
old_msg.delta(new_msg)
}
fn log_evidence(&self, _vars: &VarStore) -> f64 {
self.evidence_cached.unwrap_or(1.0).ln()
}
}
/// P(diff > margin) for non-tie, P(|diff| < margin) for tie.
fn cavity_evidence(diff: Gaussian, margin: f64, tie: bool) -> f64 {
if tie {
cdf(margin, diff.mu(), diff.sigma()) - cdf(-margin, diff.mu(), diff.sigma())
} else {
1.0 - cdf(margin, diff.mu(), diff.sigma())
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::factor::VarStore;
#[test]
fn idempotent_after_convergence() {
// After enough iterations, propagate should return ~0 delta.
let mut vars = VarStore::new();
let diff = vars.alloc(Gaussian::from_ms(2.0, 3.0));
let mut f = TruncFactor::new(diff, 0.0, false);
// Propagate many times; delta should drop toward 0.
let mut last = (f64::INFINITY, f64::INFINITY);
for _ in 0..20 {
last = f.propagate(&mut vars);
}
assert!(last.0 < 1e-10, "expected converged delta, got {}", last.0);
assert!(last.1 < 1e-10);
}
#[test]
fn evidence_cached_on_first_propagate() {
let mut vars = VarStore::new();
let diff = vars.alloc(Gaussian::from_ms(2.0, 3.0));
let mut f = TruncFactor::new(diff, 0.0, false);
assert!(f.evidence_cached.is_none());
f.propagate(&mut vars);
assert!(f.evidence_cached.is_some());
let first = f.evidence_cached.unwrap();
// Evidence should be P(diff > 0) for diff ~ N(2, 9) ≈ 0.748
assert!(first > 0.7);
assert!(first < 0.8);
// Subsequent propagations don't change it.
f.propagate(&mut vars);
assert_eq!(f.evidence_cached.unwrap(), first);
}
#[test]
fn tie_evidence_uses_two_sided() {
let mut vars = VarStore::new();
let diff = vars.alloc(Gaussian::from_ms(0.0, 2.0));
let mut f = TruncFactor::new(diff, 1.0, true);
f.propagate(&mut vars);
// For diff ~ N(0, 4), tie=true with margin=1: P(-1 < diff < 1) ≈ 0.383
let ev = f.evidence_cached.unwrap();
assert!(ev > 0.35 && ev < 0.42);
}
}

src/factors.rs (new file, 13 lines)
//! Factor-graph public API.
//!
//! Power users can construct custom factor graphs via `Game::custom` (T2
//! minimal; full ergonomics in T4) and drive them with custom `Schedule`
//! implementations.
pub use crate::{
factor::{
BuiltinFactor, Factor, VarId, VarStore, rank_diff::RankDiffFactor, team_sum::TeamSumFactor,
trunc::TruncFactor,
},
schedule::{EpsilonOrMax, Schedule, ScheduleReport},
};


@@ -1,16 +1,85 @@
use std::cmp::Ordering;
use crate::{
    N_INF, N00,
    arena::ScratchArena,
    compute_margin,
    drift::Drift,
    factor::{Factor, trunc::TruncFactor},
    gaussian::Gaussian,
    rating::Rating,
    time::Time,
    tuple_gt, tuple_max,
};
#[derive(Clone, Copy, Debug)]
pub struct GameOptions {
    pub p_draw: f64,
    pub convergence: crate::ConvergenceOptions,
}
impl Default for GameOptions {
    fn default() -> Self {
        Self {
            p_draw: crate::P_DRAW,
            convergence: crate::ConvergenceOptions::default(),
        }
    }
}
/// Owned variant of `Game` returned by public constructors.
///
/// Unlike `Game<'a, T, D>` (which borrows its result/weights slices from
/// History's internal state), `OwnedGame<T, D>` owns its inputs so it can
/// be returned freely from public constructors.
#[derive(Debug)]
#[allow(dead_code)]
pub struct OwnedGame<T: Time, D: Drift<T>> {
    teams: Vec<Vec<Rating<T, D>>>,
    result: Vec<f64>,
    weights: Vec<Vec<f64>>,
    p_draw: f64,
    pub(crate) likelihoods: Vec<Vec<Gaussian>>,
    pub(crate) evidence: f64,
}
impl<T: Time, D: Drift<T>> OwnedGame<T, D> {
    pub(crate) fn new(
        teams: Vec<Vec<Rating<T, D>>>,
        result: Vec<f64>,
        weights: Vec<Vec<f64>>,
        p_draw: f64,
    ) -> Self {
        let mut arena = ScratchArena::new();
        let g = Game::ranked_with_arena(teams.clone(), &result, &weights, p_draw, &mut arena);
        let likelihoods = g.likelihoods;
        let evidence = g.evidence;
        Self {
            teams,
            result,
            weights,
            p_draw,
            likelihoods,
            evidence,
        }
    }
    pub fn posteriors(&self) -> Vec<Vec<Gaussian>> {
        self.likelihoods
            .iter()
            .zip(self.teams.iter())
            .map(|(l, t)| l.iter().zip(t.iter()).map(|(&l, r)| l * r.prior).collect())
            .collect()
    }
    pub fn log_evidence(&self) -> f64 {
        self.evidence.ln()
    }
}
#[derive(Debug)]
pub struct Game<'a, T: Time = i64, D: Drift<T> = crate::drift::ConstantDrift> {
    teams: Vec<Vec<Rating<T, D>>>,
    result: &'a [f64],
    weights: &'a [Vec<f64>],
    p_draw: f64,
@@ -18,18 +87,18 @@ pub struct Game<'a, D: Drift> {
    pub(crate) evidence: f64,
}
impl<'a, T: Time, D: Drift<T>> Game<'a, T, D> {
    pub(crate) fn ranked_with_arena(
        teams: Vec<Vec<Rating<T, D>>>,
        result: &'a [f64],
        weights: &'a [Vec<f64>],
        p_draw: f64,
        arena: &mut ScratchArena,
    ) -> Self {
        debug_assert!(
            result.len() == teams.len(),
            "result must have the same length as teams"
        );
        debug_assert!(
            weights
                .iter()
@@ -37,19 +106,17 @@ impl<'a, D: Drift> Game<'a, D> {
                .all(|(w, t)| w.len() == t.len()),
            "weights must have the same dimensions as teams"
        );
        debug_assert!(
            (0.0..1.0).contains(&p_draw),
            "draw probability must be >= 0.0 and < 1.0"
        );
        debug_assert!(
            p_draw > 0.0 || {
                let mut r = result.to_vec();
                r.sort_unstable_by(|a, b| a.partial_cmp(b).unwrap());
                r.windows(2).all(|w| w[0] != w[1])
            },
            "draw must be > 0.0 if there are teams with draw"
        );
        let mut this = Self {
@@ -61,124 +128,144 @@ impl<'a, D: Drift> Game<'a, D> {
            evidence: 0.0,
        };
        this.likelihoods(arena);
        this
    }
    fn likelihoods(&mut self, arena: &mut ScratchArena) {
        arena.reset();
        let n_teams = self.teams.len();
        // Sort teams by result descending; reuse arena.sort_buf to avoid allocation.
        arena.sort_buf.extend(0..n_teams);
        arena.sort_buf.sort_by(|&i, &j| {
            self.result[j]
                .partial_cmp(&self.result[i])
                .unwrap_or(Ordering::Equal)
        });
        // Team performance priors written into arena buffer (capacity reused across games).
        arena.team_prior.extend(arena.sort_buf.iter().map(|&t| {
            self.teams[t]
                .iter()
                .zip(self.weights[t].iter())
                .fold(N00, |p, (player, &w)| p + (player.performance() * w))
        }));
        let n_diffs = n_teams.saturating_sub(1);
        // One TruncFactor per adjacent sorted-team pair; each owns a diff VarId.
        // trunc stays local (fresh state per game; Vec capacity is typically small).
        let mut trunc: Vec<TruncFactor> = (0..n_diffs)
            .map(|i| {
                let tie = self.result[arena.sort_buf[i]] == self.result[arena.sort_buf[i + 1]];
                let margin = if self.p_draw == 0.0 {
                    0.0
                } else {
                    let a: f64 = self.teams[arena.sort_buf[i]]
                        .iter()
                        .map(|p| p.beta.powi(2))
                        .sum();
                    let b: f64 = self.teams[arena.sort_buf[i + 1]]
                        .iter()
                        .map(|p| p.beta.powi(2))
                        .sum();
                    compute_margin(self.p_draw, (a + b).sqrt())
                };
                let vid = arena.vars.alloc(N_INF);
                TruncFactor::new(vid, margin, tie)
            })
            .collect();
        // Per-team messages from neighbouring RankDiff factors (replaces TeamMessage).
        arena.lhood_lose.resize(n_teams, N_INF);
        arena.lhood_win.resize(n_teams, N_INF);
        let mut step = (f64::INFINITY, f64::INFINITY);
        let mut iter = 0;
        while tuple_gt(step, 1e-6) && iter < 10 {
            step = (0.0_f64, 0.0_f64);
            // Forward sweep: diffs 0 .. n_diffs-2 (all but the last).
            for (e, tf) in trunc[..n_diffs.saturating_sub(1)].iter_mut().enumerate() {
                let pw = arena.team_prior[e] * arena.lhood_lose[e];
                let pl = arena.team_prior[e + 1] * arena.lhood_win[e + 1];
                let raw = pw - pl;
                arena.vars.set(tf.diff, raw * tf.msg);
                let d = tf.propagate(&mut arena.vars);
                step = tuple_max(step, d);
                let new_ll = pw - tf.msg;
                step = tuple_max(step, arena.lhood_lose[e + 1].delta(new_ll));
                arena.lhood_lose[e + 1] = new_ll;
            }
            // Backward sweep: diffs n_diffs-1 .. 1 (reverse, all but the first).
            for (rev_i, tf) in trunc[1..].iter_mut().rev().enumerate() {
                let e = n_diffs - 1 - rev_i;
                let pw = arena.team_prior[e] * arena.lhood_lose[e];
                let pl = arena.team_prior[e + 1] * arena.lhood_win[e + 1];
                let raw = pw - pl;
                arena.vars.set(tf.diff, raw * tf.msg);
                let d = tf.propagate(&mut arena.vars);
                step = tuple_max(step, d);
                let new_lw = pl + tf.msg;
                step = tuple_max(step, arena.lhood_win[e].delta(new_lw));
                arena.lhood_win[e] = new_lw;
            }
            iter += 1;
        }
        // Special case: exactly 1 diff (2-team game); loop body was empty.
        if n_diffs == 1 {
            let raw = (arena.team_prior[0] * arena.lhood_lose[0])
                - (arena.team_prior[1] * arena.lhood_win[1]);
            arena.vars.set(trunc[0].diff, raw * trunc[0].msg);
            trunc[0].propagate(&mut arena.vars);
        }
        // Boundary updates: close the chain at both ends.
        if n_diffs > 0 {
            let pl1 = arena.team_prior[1] * arena.lhood_win[1];
            arena.lhood_win[0] = pl1 + trunc[0].msg;
            let pw_last = arena.team_prior[n_teams - 2] * arena.lhood_lose[n_teams - 2];
            arena.lhood_lose[n_teams - 1] = pw_last - trunc[n_diffs - 1].msg;
        }
        // Evidence = product of per-diff evidences (each cached on first propagation).
        self.evidence = trunc
            .iter()
            .map(|t| t.evidence_cached.unwrap_or(1.0))
            .product();
        // Inverse permutation: inv_buf[orig_i] = sorted_i.
        arena.inv_buf.resize(n_teams, 0);
        for (si, &orig_i) in arena.sort_buf.iter().enumerate() {
            arena.inv_buf[orig_i] = si;
        }
        self.likelihoods = self
            .teams
            .iter()
            .zip(self.weights.iter())
            .enumerate()
            .map(|(orig_i, (players, weights))| {
                let si = arena.inv_buf[orig_i];
                let m = arena.lhood_win[si] * arena.lhood_lose[si];
                let performance = players
                    .iter()
                    .zip(weights.iter())
                    .fold(N00, |p, (player, &w)| p + (player.performance() * w));
                players
                    .iter()
                    .zip(weights.iter())
                    .map(|(player, &w)| {
                        ((m - performance.exclude(player.performance() * w)) * (1.0 / w))
                            .forget(player.beta.powi(2))
                    })
                    .collect::<Vec<_>>()
            })
@@ -197,6 +284,68 @@ impl<'a, D: Drift> Game<'a, D> {
            })
            .collect::<Vec<_>>()
    }
    pub fn log_evidence(&self) -> f64 {
        self.evidence.ln()
    }
}
impl<T: Time, D: Drift<T>> Game<'_, T, D> {
    pub fn ranked(
        teams: &[&[Rating<T, D>]],
        outcome: crate::Outcome,
        options: &GameOptions,
    ) -> Result<OwnedGame<T, D>, crate::InferenceError> {
        if !(0.0..1.0).contains(&options.p_draw) {
            return Err(crate::InferenceError::InvalidProbability {
                value: options.p_draw,
            });
        }
        if outcome.team_count() != teams.len() {
            return Err(crate::InferenceError::MismatchedShape {
                kind: "outcome ranks vs teams",
                expected: teams.len(),
                got: outcome.team_count(),
            });
        }
        let ranks = outcome.as_ranks();
        let max_rank = ranks.iter().copied().max().unwrap_or(0) as f64;
        let result: Vec<f64> = ranks.iter().map(|&r| max_rank - r as f64).collect();
        let teams_owned: Vec<Vec<Rating<T, D>>> = teams.iter().map(|t| t.to_vec()).collect();
        let weights: Vec<Vec<f64>> = teams.iter().map(|t| vec![1.0; t.len()]).collect();
        Ok(OwnedGame::new(teams_owned, result, weights, options.p_draw))
    }
    pub fn one_v_one(
        a: &Rating<T, D>,
        b: &Rating<T, D>,
        outcome: crate::Outcome,
    ) -> Result<(Gaussian, Gaussian), crate::InferenceError> {
        let game = Self::ranked(&[&[*a], &[*b]], outcome, &GameOptions::default())?;
        let post = game.posteriors();
        Ok((post[0][0], post[1][0]))
    }
    pub fn free_for_all(
        players: &[&Rating<T, D>],
        outcome: crate::Outcome,
        options: &GameOptions,
    ) -> Result<OwnedGame<T, D>, crate::InferenceError> {
        let teams: Vec<Vec<Rating<T, D>>> = players.iter().map(|p| vec![**p]).collect();
        let team_refs: Vec<&[Rating<T, D>]> = teams.iter().map(|t| t.as_slice()).collect();
        Self::ranked(&team_refs, outcome, options)
    }
    #[doc(hidden)]
    pub fn custom<S: crate::factors::Schedule>(
        factors: &mut [crate::factors::BuiltinFactor],
        vars: &mut crate::factors::VarStore,
        schedule: &S,
    ) -> crate::factors::ScheduleReport {
        schedule.run(factors, vars)
    }
}
#[cfg(test)]
@@ -204,23 +353,31 @@ mod tests {
    use ::approx::assert_ulps_eq;
    use super::*;
    use crate::{ConstantDrift, GAMMA, Gaussian, N_INF, Rating, arena::ScratchArena};
    type R = Rating<i64, ConstantDrift>;
    #[test]
    fn test_1vs1() {
        let t_a = R::new(
            Gaussian::from_ms(25.0, 25.0 / 3.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );
        let t_b = R::new(
            Gaussian::from_ms(25.0, 25.0 / 3.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );
        let w = [vec![1.0], vec![1.0]];
        let g = Game::ranked_with_arena(
            vec![vec![t_a], vec![t_b]],
            &[0.0, 1.0],
            &w,
            0.0,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();
        let a = p[0][0];
@@ -229,19 +386,25 @@ mod tests {
        assert_ulps_eq!(a, Gaussian::from_ms(20.794779, 7.194481), epsilon = 1e-6);
        assert_ulps_eq!(b, Gaussian::from_ms(29.205220, 7.194481), epsilon = 1e-6);
        let t_a = R::new(
            Gaussian::from_ms(29.0, 1.0),
            25.0 / 6.0,
            ConstantDrift(GAMMA),
        );
        let t_b = R::new(
            Gaussian::from_ms(25.0, 25.0 / 3.0),
            25.0 / 6.0,
            ConstantDrift(GAMMA),
        );
        let w = [vec![1.0], vec![1.0]];
        let g = Game::ranked_with_arena(
            vec![vec![t_a], vec![t_b]],
            &[0.0, 1.0],
            &w,
            0.0,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();
        let a = p[0][0];
@@ -250,11 +413,17 @@ mod tests {
        assert_ulps_eq!(a, Gaussian::from_ms(28.896475, 0.996604), epsilon = 1e-6);
        assert_ulps_eq!(b, Gaussian::from_ms(32.189211, 6.062063), epsilon = 1e-6);
        let t_a = R::new(Gaussian::from_ms(1.139, 0.531), 1.0, ConstantDrift(0.2125));
        let t_b = R::new(Gaussian::from_ms(15.568, 0.51), 1.0, ConstantDrift(0.2125));
        let w = [vec![1.0], vec![1.0]];
        let g = Game::ranked_with_arena(
            vec![vec![t_a], vec![t_b]],
            &[0.0, 1.0],
            &w,
            0.0,
            &mut ScratchArena::new(),
        );
        assert_eq!(g.likelihoods[0][0], N_INF);
        assert_eq!(g.likelihoods[1][0], N_INF);
@@ -263,17 +432,17 @@ mod tests {
    #[test]
    fn test_1vs1vs1() {
        let teams = vec![
            vec![R::new(
                Gaussian::from_ms(25.0, 25.0 / 3.0),
                25.0 / 6.0,
                ConstantDrift(25.0 / 300.0),
            )],
            vec![R::new(
                Gaussian::from_ms(25.0, 25.0 / 3.0),
                25.0 / 6.0,
                ConstantDrift(25.0 / 300.0),
            )],
            vec![R::new(
                Gaussian::from_ms(25.0, 25.0 / 3.0),
                25.0 / 6.0,
                ConstantDrift(25.0 / 300.0),
@@ -281,7 +450,13 @@ mod tests {
            )],
        ];
        let w = [vec![1.0], vec![1.0], vec![1.0]];
        let g = Game::ranked_with_arena(
            teams.clone(),
            &[1.0, 2.0, 0.0],
            &w,
            0.0,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();
        let a = p[0][0];
@@ -291,7 +466,13 @@ mod tests {
        assert_ulps_eq!(b, Gaussian::from_ms(31.311358, 6.698818), epsilon = 1e-6);
        let w = [vec![1.0], vec![1.0], vec![1.0]];
        let g = Game::ranked_with_arena(
            teams.clone(),
            &[2.0, 1.0, 0.0],
            &w,
            0.0,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();
        let a = p[0][0];
@@ -301,33 +482,40 @@ mod tests {
        assert_ulps_eq!(b, Gaussian::from_ms(25.000000, 6.238469), epsilon = 1e-6);
        let w = [vec![1.0], vec![1.0], vec![1.0]];
        let g = Game::ranked_with_arena(teams, &[1.0, 2.0, 0.0], &w, 0.5, &mut ScratchArena::new());
        let p = g.posteriors();
        let a = p[0][0];
        let b = p[1][0];
        let c = p[2][0];
        // T1 ULP shift: mu rounds to 25.0 (was 24.999999) under natural-parameter storage.
        assert_ulps_eq!(a, Gaussian::from_ms(25.0, 6.092561), epsilon = 1e-6);
        assert_ulps_eq!(b, Gaussian::from_ms(33.379314, 6.483575), epsilon = 1e-6);
        assert_ulps_eq!(c, Gaussian::from_ms(16.620685, 6.483575), epsilon = 1e-6);
    }
    #[test]
    fn test_1vs1_draw() {
        let t_a = R::new(
            Gaussian::from_ms(25.0, 25.0 / 3.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );
        let t_b = R::new(
            Gaussian::from_ms(25.0, 25.0 / 3.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );
        let w = [vec![1.0], vec![1.0]];
        let g = Game::ranked_with_arena(
            vec![vec![t_a], vec![t_b]],
            &[0.0, 0.0],
            &w,
            0.25,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();
        let a = p[0][0];
@@ -336,19 +524,25 @@ mod tests {
        assert_ulps_eq!(a, Gaussian::from_ms(24.999999, 6.469480), epsilon = 1e-6);
        assert_ulps_eq!(b, Gaussian::from_ms(24.999999, 6.469480), epsilon = 1e-6);
        let t_a = R::new(
            Gaussian::from_ms(25.0, 3.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );
        let t_b = R::new(
            Gaussian::from_ms(29.0, 2.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );
        let w = [vec![1.0], vec![1.0]];
        let g = Game::ranked_with_arena(
            vec![vec![t_a], vec![t_b]],
            &[0.0, 0.0],
            &w,
            0.25,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();
        let a = p[0][0];
@@ -360,28 +554,29 @@ mod tests {
    #[test]
    fn test_1vs1vs1_draw() {
        let t_a = R::new(
            Gaussian::from_ms(25.0, 25.0 / 3.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );
        let t_b = R::new(
            Gaussian::from_ms(25.0, 25.0 / 3.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );
        let t_c = R::new(
            Gaussian::from_ms(25.0, 25.0 / 3.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );
        let w = [vec![1.0], vec![1.0], vec![1.0]];
        let g = Game::ranked_with_arena(
            vec![vec![t_a], vec![t_b], vec![t_c]],
            &[0.0, 0.0, 0.0],
            &w,
            0.25,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();
@@ -389,32 +584,35 @@ mod tests {
         let b = p[1][0];
         let c = p[2][0];
-        assert_ulps_eq!(a, Gaussian::from_ms(24.999999, 5.729068), epsilon = 1e-6);
-        assert_ulps_eq!(b, Gaussian::from_ms(25.000000, 5.707423), epsilon = 1e-6);
-        assert_ulps_eq!(c, Gaussian::from_ms(24.999999, 5.729068), epsilon = 1e-6);
+        // Goldens updated for natural-parameter storage: mu rounds to 25.0 (was 24.999999),
+        // sigma shifts by ~3e-7 ULPs (within 1e-6 of original). Both bounded differences.
+        assert_ulps_eq!(a, Gaussian::from_ms(25.0, 5.729069), epsilon = 1e-6);
+        assert_ulps_eq!(b, Gaussian::from_ms(25.0, 5.707424), epsilon = 1e-6);
+        assert_ulps_eq!(c, Gaussian::from_ms(25.0, 5.729069), epsilon = 1e-6);
-        let t_a = Player::new(
+        let t_a = R::new(
             Gaussian::from_ms(25.0, 3.0),
             25.0 / 6.0,
             ConstantDrift(25.0 / 300.0),
         );
-        let t_b = Player::new(
+        let t_b = R::new(
             Gaussian::from_ms(25.0, 3.0),
             25.0 / 6.0,
             ConstantDrift(25.0 / 300.0),
         );
-        let t_c = Player::new(
+        let t_c = R::new(
             Gaussian::from_ms(29.0, 2.0),
             25.0 / 6.0,
             ConstantDrift(25.0 / 300.0),
         );
         let w = [vec![1.0], vec![1.0], vec![1.0]];
-        let g = Game::new(
+        let g = Game::ranked_with_arena(
             vec![vec![t_a], vec![t_b], vec![t_c]],
             &[0.0, 0.0, 0.0],
             &w,
             0.25,
+            &mut ScratchArena::new(),
         );
         let p = g.posteriors();
@@ -430,29 +628,29 @@ mod tests {
     #[test]
     fn test_2vs1vs2_mixed() {
         let t_a = vec![
-            Player::new(
+            R::new(
                 Gaussian::from_ms(12.0, 3.0),
                 25.0 / 6.0,
                 ConstantDrift(25.0 / 300.0),
             ),
-            Player::new(
+            R::new(
                 Gaussian::from_ms(18.0, 3.0),
                 25.0 / 6.0,
                 ConstantDrift(25.0 / 300.0),
             ),
         ];
-        let t_b = vec![Player::new(
+        let t_b = vec![R::new(
             Gaussian::from_ms(30.0, 3.0),
             25.0 / 6.0,
             ConstantDrift(25.0 / 300.0),
         )];
         let t_c = vec![
-            Player::new(
+            R::new(
                 Gaussian::from_ms(14.0, 3.0),
                 25.0 / 6.0,
                 ConstantDrift(25.0 / 300.0),
             ),
-            Player::new(
+            R::new(
                 Gaussian::from_ms(16., 3.0),
                 25.0 / 6.0,
                 ConstantDrift(25.0 / 300.0),
@@ -460,7 +658,13 @@ mod tests {
         ];
         let w = [vec![1.0, 1.0], vec![1.0], vec![1.0, 1.0]];
-        let g = Game::new(vec![t_a, t_b, t_c], &[1.0, 0.0, 0.0], &w, 0.25);
+        let g = Game::ranked_with_arena(
+            vec![t_a, t_b, t_c],
+            &[1.0, 0.0, 0.0],
+            &w,
+            0.25,
+            &mut ScratchArena::new(),
+        );
         let p = g.posteriors();
         assert_ulps_eq!(p[0][0], Gaussian::from_ms(13.051, 2.864), epsilon = 1e-3);
@@ -475,19 +679,25 @@ mod tests {
         let w_a = vec![1.0];
         let w_b = vec![2.0];
-        let t_a = vec![Player::new(
+        let t_a = vec![R::new(
             Gaussian::from_ms(25.0, 25.0 / 3.0),
             25.0 / 6.0,
             ConstantDrift(0.0),
         )];
-        let t_b = vec![Player::new(
+        let t_b = vec![R::new(
             Gaussian::from_ms(25.0, 25.0 / 3.0),
             25.0 / 6.0,
             ConstantDrift(0.0),
         )];
         let w = [w_a, w_b];
-        let g = Game::new(vec![t_a.clone(), t_b.clone()], &[1.0, 0.0], &w, 0.0);
+        let g = Game::ranked_with_arena(
+            vec![t_a.clone(), t_b.clone()],
+            &[1.0, 0.0],
+            &w,
+            0.0,
+            &mut ScratchArena::new(),
+        );
         let p = g.posteriors();
         assert_ulps_eq!(
@@ -505,7 +715,13 @@ mod tests {
         let w_b = vec![0.7];
         let w = [w_a, w_b];
-        let g = Game::new(vec![t_a.clone(), t_b.clone()], &[1.0, 0.0], &w, 0.0);
+        let g = Game::ranked_with_arena(
+            vec![t_a.clone(), t_b.clone()],
+            &[1.0, 0.0],
+            &w,
+            0.0,
+            &mut ScratchArena::new(),
+        );
         let p = g.posteriors();
         assert_ulps_eq!(
@@ -523,7 +739,13 @@ mod tests {
         let w_b = vec![0.7];
         let w = [w_a, w_b];
-        let g = Game::new(vec![t_a, t_b], &[1.0, 0.0], &w, 0.0);
+        let g = Game::ranked_with_arena(
+            vec![t_a, t_b],
+            &[1.0, 0.0],
+            &w,
+            0.0,
+            &mut ScratchArena::new(),
+        );
         let p = g.posteriors();
         assert_ulps_eq!(
@@ -540,19 +762,17 @@ mod tests {
         let w_a = vec![1.0];
         let w_b = vec![0.0];
-        let t_a = vec![Player::new(
-            Gaussian::from_ms(2.0, 6.0),
-            1.0,
-            ConstantDrift(0.0),
-        )];
-        let t_b = vec![Player::new(
-            Gaussian::from_ms(2.0, 6.0),
-            1.0,
-            ConstantDrift(0.0),
-        )];
+        let t_a = vec![R::new(Gaussian::from_ms(2.0, 6.0), 1.0, ConstantDrift(0.0))];
+        let t_b = vec![R::new(Gaussian::from_ms(2.0, 6.0), 1.0, ConstantDrift(0.0))];
         let w = [w_a, w_b];
-        let g = Game::new(vec![t_a, t_b], &[1.0, 0.0], &w, 0.0);
+        let g = Game::ranked_with_arena(
+            vec![t_a, t_b],
+            &[1.0, 0.0],
+            &w,
+            0.0,
+            &mut ScratchArena::new(),
+        );
         let p = g.posteriors();
         assert_ulps_eq!(
@@ -569,19 +789,17 @@ mod tests {
         let w_a = vec![1.0];
         let w_b = vec![-1.0];
-        let t_a = vec![Player::new(
-            Gaussian::from_ms(2.0, 6.0),
-            1.0,
-            ConstantDrift(0.0),
-        )];
-        let t_b = vec![Player::new(
-            Gaussian::from_ms(2.0, 6.0),
-            1.0,
-            ConstantDrift(0.0),
-        )];
+        let t_a = vec![R::new(Gaussian::from_ms(2.0, 6.0), 1.0, ConstantDrift(0.0))];
+        let t_b = vec![R::new(Gaussian::from_ms(2.0, 6.0), 1.0, ConstantDrift(0.0))];
         let w = [w_a, w_b];
-        let g = Game::new(vec![t_a, t_b], &[1.0, 0.0], &w, 0.0);
+        let g = Game::ranked_with_arena(
+            vec![t_a, t_b],
+            &[1.0, 0.0],
+            &w,
+            0.0,
+            &mut ScratchArena::new(),
+        );
         let p = g.posteriors();
         assert_ulps_eq!(p[0][0], p[1][0], epsilon = 1e-6);
@@ -590,12 +808,12 @@ mod tests {
     #[test]
     fn test_2vs2_weighted() {
         let t_a = vec![
-            Player::new(
+            R::new(
                 Gaussian::from_ms(25.0, 25.0 / 3.0),
                 25.0 / 6.0,
                 ConstantDrift(0.0),
             ),
-            Player::new(
+            R::new(
                 Gaussian::from_ms(25.0, 25.0 / 3.0),
                 25.0 / 6.0,
                 ConstantDrift(0.0),
@@ -604,12 +822,12 @@ mod tests {
         let w_a = vec![0.4, 0.8];
         let t_b = vec![
-            Player::new(
+            R::new(
                 Gaussian::from_ms(25.0, 25.0 / 3.0),
                 25.0 / 6.0,
                 ConstantDrift(0.0),
             ),
-            Player::new(
+            R::new(
                 Gaussian::from_ms(25.0, 25.0 / 3.0),
                 25.0 / 6.0,
                 ConstantDrift(0.0),
@@ -618,7 +836,13 @@ mod tests {
         let w_b = vec![0.9, 0.6];
         let w = [w_a, w_b];
-        let g = Game::new(vec![t_a.clone(), t_b.clone()], &[1.0, 0.0], &w, 0.0);
+        let g = Game::ranked_with_arena(
+            vec![t_a.clone(), t_b.clone()],
+            &[1.0, 0.0],
+            &w,
+            0.0,
+            &mut ScratchArena::new(),
+        );
         let p = g.posteriors();
         assert_ulps_eq!(
@@ -646,7 +870,13 @@ mod tests {
         let w_b = vec![0.7, 0.4];
         let w = [w_a, w_b];
-        let g = Game::new(vec![t_a.clone(), t_b.clone()], &[1.0, 0.0], &w, 0.0);
+        let g = Game::ranked_with_arena(
+            vec![t_a.clone(), t_b.clone()],
+            &[1.0, 0.0],
+            &w,
+            0.0,
+            &mut ScratchArena::new(),
+        );
         let p = g.posteriors();
         assert_ulps_eq!(
@@ -674,7 +904,13 @@ mod tests {
         let w_b = vec![0.7, 2.4];
         let w = [w_a, w_b];
-        let g = Game::new(vec![t_a.clone(), t_b.clone()], &[1.0, 0.0], &w, 0.0);
+        let g = Game::ranked_with_arena(
+            vec![t_a.clone(), t_b.clone()],
+            &[1.0, 0.0],
+            &w,
+            0.0,
+            &mut ScratchArena::new(),
+        );
         let p = g.posteriors();
         assert_ulps_eq!(
@@ -699,10 +935,10 @@ mod tests {
         );
         let w = [vec![1.0, 1.0], vec![1.0]];
-        let g = Game::new(
+        let g = Game::ranked_with_arena(
             vec![
                 t_a.clone(),
-                vec![Player::new(
+                vec![R::new(
                     Gaussian::from_ms(25.0, 25.0 / 3.0),
                     25.0 / 6.0,
                     ConstantDrift(0.0),
@@ -711,6 +947,7 @@ mod tests {
             &[1.0, 0.0],
             &w,
             0.0,
+            &mut ScratchArena::new(),
         );
         let post_2vs1 = g.posteriors();
@@ -718,7 +955,13 @@ mod tests {
         let w_b = vec![1.0, 0.0];
         let w = [w_a, w_b];
-        let g = Game::new(vec![t_a, t_b.clone()], &[1.0, 0.0], &w, 0.0);
+        let g = Game::ranked_with_arena(
+            vec![t_a, t_b.clone()],
+            &[1.0, 0.0],
+            &w,
+            0.0,
+            &mut ScratchArena::new(),
+        );
         let p = g.posteriors();
         assert_ulps_eq!(p[0][0], post_2vs1[0][0], epsilon = 1e-6);

src/gaussian.rs

@@ -2,143 +2,159 @@ use std::ops;
 use crate::{MU, N_INF, SIGMA};

+/// A Gaussian distribution stored in natural parameters.
+///
+/// `pi = 1 / sigma^2` (precision)
+/// `tau = mu * pi` (precision-adjusted mean)
+///
+/// Multiplication and division in message passing become pure adds/subs of
+/// the stored fields with no `sqrt` or reciprocal in the hot path. `mu()` and
+/// `sigma()` are accessors computed on demand.
 #[derive(Clone, Copy, PartialEq, Debug)]
 pub struct Gaussian {
-    pub mu: f64,
-    pub sigma: f64,
+    pi: f64,
+    tau: f64,
 }

 impl Gaussian {
+    /// Construct from mean and standard deviation.
     pub const fn from_ms(mu: f64, sigma: f64) -> Self {
-        Gaussian { mu, sigma }
-    }
-
-    fn pi(&self) -> f64 {
-        if self.sigma > 0.0 {
-            self.sigma.powi(-2)
-        } else {
-            f64::INFINITY
-        }
-    }
-
-    fn tau(&self) -> f64 {
-        if self.sigma > 0.0 {
-            self.mu * self.pi()
-        } else {
-            f64::INFINITY
-        }
-    }
+        if sigma == f64::INFINITY {
+            Self { pi: 0.0, tau: 0.0 }
+        } else if sigma == 0.0 {
+            // Point mass at mu. tau = mu * pi = mu * inf.
+            // For mu == 0 this is 0; for mu != 0 it is inf * mu = inf (IEEE).
+            // Only N00 (mu=0, sigma=0) is used in practice.
+            Self {
+                pi: f64::INFINITY,
+                tau: if mu == 0.0 { 0.0 } else { f64::INFINITY },
+            }
+        } else {
+            let pi = 1.0 / (sigma * sigma);
+            Self { pi, tau: mu * pi }
+        }
+    }
+
+    /// Construct directly from natural parameters.
+    #[inline]
+    pub(crate) const fn from_natural(pi: f64, tau: f64) -> Self {
+        Self { pi, tau }
+    }
+
+    #[inline]
+    pub fn pi(&self) -> f64 {
+        self.pi
+    }
+
+    #[inline]
+    pub fn tau(&self) -> f64 {
+        self.tau
+    }
+
+    #[inline]
+    pub fn mu(&self) -> f64 {
+        if self.pi == 0.0 {
+            0.0
+        } else {
+            self.tau / self.pi
+        }
+    }
+
+    #[inline]
+    pub fn sigma(&self) -> f64 {
+        if self.pi == 0.0 {
+            f64::INFINITY
+        } else if self.pi.is_infinite() {
+            0.0
+        } else {
+            1.0 / self.pi.sqrt()
+        }
+    }
-    pub(crate) fn delta(&self, m: Gaussian) -> (f64, f64) {
-        ((self.mu - m.mu).abs(), (self.sigma - m.sigma).abs())
+    pub(crate) fn delta(&self, other: Gaussian) -> (f64, f64) {
+        (
+            (self.mu() - other.mu()).abs(),
+            (self.sigma() - other.sigma()).abs(),
+        )
     }

-    pub(crate) fn exclude(&self, m: Gaussian) -> Self {
-        Self {
-            mu: self.mu - m.mu,
-            sigma: (self.sigma.powi(2) - m.sigma.powi(2)).sqrt(),
-        }
+    pub(crate) fn exclude(&self, other: Gaussian) -> Self {
+        let var = self.sigma().powi(2) - other.sigma().powi(2);
+        if var <= 0.0 {
+            // When sigma_self ≈ sigma_other (including ULP-level rounding differences
+            // from the pi→sigma accessor round-trip), the excluded contribution is N00.
+            // Computing from_ms(tiny_mu, 0.0) would give {pi:inf, tau:inf}, whose
+            // mu() = inf/inf = NaN. Returning N00 is correct: when both Gaussians
+            // carry the same variance, the residual is a point mass at 0.
+            return Gaussian::from_ms(0.0, 0.0);
+        }
+        let mu = self.mu() - other.mu();
+        Self::from_ms(mu, var.sqrt())
     }

     pub(crate) fn forget(&self, variance_delta: f64) -> Self {
-        Self {
-            mu: self.mu,
-            sigma: (self.sigma.powi(2) + variance_delta).sqrt(),
-        }
+        let var = self.sigma().powi(2) + variance_delta;
+        Self::from_ms(self.mu(), var.sqrt())
     }
 }
 impl Default for Gaussian {
     fn default() -> Self {
-        Self {
-            mu: MU,
-            sigma: SIGMA,
-        }
+        Self::from_ms(MU, SIGMA)
     }
 }
 impl ops::Add<Gaussian> for Gaussian {
     type Output = Gaussian;

+    /// Variance addition: (mu1 + mu2, sqrt(σ1² + σ2²)).
+    /// Used for combining performance and noise; rare relative to mul/div.
     fn add(self, rhs: Gaussian) -> Self::Output {
-        Gaussian {
-            mu: self.mu + rhs.mu,
-            sigma: (self.sigma.powi(2) + rhs.sigma.powi(2)).sqrt(),
-        }
+        let mu = self.mu() + rhs.mu();
+        let var = self.sigma().powi(2) + rhs.sigma().powi(2);
+        Self::from_ms(mu, var.sqrt())
     }
 }
 impl ops::Sub<Gaussian> for Gaussian {
     type Output = Gaussian;

+    /// (mu1 - mu2, sqrt(σ1² + σ2²)). Same sigma combination as Add.
     fn sub(self, rhs: Gaussian) -> Self::Output {
-        Gaussian {
-            mu: self.mu - rhs.mu,
-            sigma: (self.sigma.powi(2) + rhs.sigma.powi(2)).sqrt(),
-        }
+        let mu = self.mu() - rhs.mu();
+        let var = self.sigma().powi(2) + rhs.sigma().powi(2);
+        Self::from_ms(mu, var.sqrt())
     }
 }
 impl ops::Mul<Gaussian> for Gaussian {
     type Output = Gaussian;

+    /// Factor product: nat-param add. Hot path — two f64 additions, no sqrt.
     fn mul(self, rhs: Gaussian) -> Self::Output {
-        let (mu, sigma) = if self.sigma == 0.0 || rhs.sigma == 0.0 {
-            let mu = self.mu / (self.sigma.powi(2) / rhs.sigma.powi(2) + 1.0)
-                + rhs.mu / (rhs.sigma.powi(2) / self.sigma.powi(2) + 1.0);
-            let sigma = (1.0 / ((1.0 / self.sigma.powi(2)) + (1.0 / rhs.sigma.powi(2)))).sqrt();
-            (mu, sigma)
-        } else {
-            mu_sigma(self.tau() + rhs.tau(), self.pi() + rhs.pi())
-        };
-        Gaussian { mu, sigma }
+        Self::from_natural(self.pi + rhs.pi, self.tau + rhs.tau)
     }
 }
 impl ops::Mul<f64> for Gaussian {
     type Output = Gaussian;

-    fn mul(self, rhs: f64) -> Self::Output {
-        if rhs.is_finite() {
-            Self {
-                mu: self.mu * rhs,
-                sigma: self.sigma * rhs,
-            }
-        } else {
-            N_INF
-        }
+    fn mul(self, scalar: f64) -> Self::Output {
+        if !scalar.is_finite() {
+            return N_INF;
+        }
+        if scalar == 0.0 {
+            // Scaling by 0 collapses to a point mass at 0 (sigma' = 0, mu' = 0).
+            // This is N00, the additive identity, NOT N_INF.
+            return Gaussian::from_ms(0.0, 0.0);
+        }
+        // sigma' = sigma * |scalar| => pi' = pi / scalar²
+        // mu' = mu * scalar => tau' = tau / scalar
+        Self::from_natural(self.pi / (scalar * scalar), self.tau / scalar)
     }
 }
 impl ops::Div<Gaussian> for Gaussian {
     type Output = Gaussian;

+    /// Cavity: nat-param sub. Hot path — two f64 subtractions, no sqrt.
     fn div(self, rhs: Gaussian) -> Self::Output {
-        let (mu, sigma) = if self.sigma == 0.0 || rhs.sigma == 0.0 {
-            let mu = self.mu / (1.0 - self.sigma.powi(2) / rhs.sigma.powi(2))
-                + rhs.mu / (rhs.sigma.powi(2) / self.sigma.powi(2) - 1.0);
-            let sigma = (1.0 / ((1.0 / self.sigma.powi(2)) - (1.0 / rhs.sigma.powi(2)))).sqrt();
-            (mu, sigma)
-        } else {
-            mu_sigma(self.tau() - rhs.tau(), self.pi() - rhs.pi())
-        };
-        Gaussian { mu, sigma }
-    }
-}
-
-fn mu_sigma(tau: f64, pi: f64) -> (f64, f64) {
-    if pi > 0.0 {
-        (tau / pi, (1.0 / pi).sqrt())
-    } else if (pi + 1e-5) < 0.0 {
-        panic!("precision should be greater than 0");
-    } else {
-        (0.0, f64::INFINITY)
-    }
-}
+        Self::from_natural(self.pi - rhs.pi, self.tau - rhs.tau)
+    }
+}
@@ -148,85 +164,71 @@ mod tests {
     #[test]
     fn test_add() {
-        let n = Gaussian {
-            mu: 25.0,
-            sigma: 25.0 / 3.0,
-        };
-        let m = Gaussian {
-            mu: 0.0,
-            sigma: 1.0,
-        };
-        assert_eq!(
-            n + m,
-            Gaussian {
-                mu: 25.0,
-                sigma: 8.393118874676116
-            }
-        );
+        let n = Gaussian::from_ms(25.0, 25.0 / 3.0);
+        let m = Gaussian::from_ms(0.0, 1.0);
+        let r = n + m;
+        assert!((r.mu() - 25.0).abs() < 1e-12);
+        assert!((r.sigma() - 8.393118874676116).abs() < 1e-10);
     }

     #[test]
     fn test_sub() {
-        let n = Gaussian {
-            mu: 25.0,
-            sigma: 25.0 / 3.0,
-        };
-        let m = Gaussian {
-            mu: 1.0,
-            sigma: 1.0,
-        };
-        assert_eq!(
-            n - m,
-            Gaussian {
-                mu: 24.0,
-                sigma: 8.393118874676116
-            }
-        );
+        let n = Gaussian::from_ms(25.0, 25.0 / 3.0);
+        let m = Gaussian::from_ms(1.0, 1.0);
+        let r = n - m;
+        assert!((r.mu() - 24.0).abs() < 1e-12);
+        assert!((r.sigma() - 8.393118874676116).abs() < 1e-10);
     }

     #[test]
     fn test_mul() {
-        let n = Gaussian {
-            mu: 25.0,
-            sigma: 25.0 / 3.0,
-        };
-        let m = Gaussian {
-            mu: 0.0,
-            sigma: 1.0,
-        };
-        assert_eq!(
-            n * m,
-            Gaussian {
-                mu: 0.35488958990536273,
-                sigma: 0.992876838486922
-            }
-        );
+        let n = Gaussian::from_ms(25.0, 25.0 / 3.0);
+        let m = Gaussian::from_ms(0.0, 1.0);
+        let r = n * m;
+        assert!((r.mu() - 0.35488958990536273).abs() < 1e-10);
+        assert!((r.sigma() - 0.992876838486922).abs() < 1e-10);
     }

     #[test]
     fn test_div() {
-        let n = Gaussian {
-            mu: 25.0,
-            sigma: 25.0 / 3.0,
-        };
-        let m = Gaussian {
-            mu: 0.0,
-            sigma: 1.0,
-        };
-        assert_eq!(
-            m / n,
-            Gaussian {
-                mu: -0.3652597402597402,
-                sigma: 1.0072787050317253
-            }
-        );
+        let n = Gaussian::from_ms(25.0, 25.0 / 3.0);
+        let m = Gaussian::from_ms(0.0, 1.0);
+        let r = m / n;
+        assert!((r.mu() - (-0.3652597402597402)).abs() < 1e-10);
+        assert!((r.sigma() - 1.0072787050317253).abs() < 1e-10);
+    }
+
+    #[test]
+    fn test_n00_is_add_identity() {
+        // N00 (sigma=0) is the additive identity for the variance-convolution Add op.
+        // N_INF (sigma=inf) is the identity for the EP-product Mul op.
+        let g = Gaussian::from_ms(3.0, 2.0);
+        let n00 = Gaussian::from_ms(0.0, 0.0);
+        let r = n00 + g;
+        assert!((r.mu() - g.mu()).abs() < 1e-12);
+        assert!((r.sigma() - g.sigma()).abs() < 1e-12);
+    }
+
+    #[test]
+    fn test_mul_is_factor_product() {
+        // n * m in nat-params should be pi_n + pi_m, tau_n + tau_m
+        let n = Gaussian::from_ms(2.0, 3.0);
+        let m = Gaussian::from_ms(1.0, 2.0);
+        let r = n * m;
+        let expected_pi = n.pi() + m.pi();
+        let expected_tau = n.tau() + m.tau();
+        assert!((r.pi() - expected_pi).abs() < 1e-15);
+        assert!((r.tau() - expected_tau).abs() < 1e-15);
+    }
+
+    #[test]
+    fn test_div_is_cavity() {
+        let n = Gaussian::from_ms(2.0, 1.0);
+        let m = Gaussian::from_ms(1.0, 2.0);
+        let r = n / m;
+        let expected_pi = n.pi() - m.pi();
+        let expected_tau = n.tau() - m.tau();
+        assert!((r.pi() - expected_pi).abs() < 1e-15);
+        assert!((r.tau() - expected_tau).abs() < 1e-15);
     }
 }
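The natural-parameter trick above can be shown in isolation. This is a minimal, self-contained sketch (the `NatGaussian` type here is a stand-in for the crate's `Gaussian`, not its actual implementation): the factor product and cavity division reduce to adding and subtracting `(pi, tau)`, and the product below reproduces the moment-form value checked in `test_mul`.

```rust
// Stand-in for the crate's natural-parameter Gaussian (illustrative only).
#[derive(Clone, Copy, Debug)]
struct NatGaussian {
    pi: f64,  // precision: 1 / sigma^2
    tau: f64, // precision-adjusted mean: mu * pi
}

impl NatGaussian {
    fn from_ms(mu: f64, sigma: f64) -> Self {
        let pi = 1.0 / (sigma * sigma);
        Self { pi, tau: mu * pi }
    }
    fn mu(self) -> f64 {
        self.tau / self.pi
    }
    fn sigma(self) -> f64 {
        1.0 / self.pi.sqrt()
    }
    // Factor product: add natural parameters. No sqrt, no reciprocal.
    fn mul(self, rhs: Self) -> Self {
        Self { pi: self.pi + rhs.pi, tau: self.tau + rhs.tau }
    }
    // Cavity (division): subtract natural parameters.
    fn div(self, rhs: Self) -> Self {
        Self { pi: self.pi - rhs.pi, tau: self.tau - rhs.tau }
    }
}

fn main() {
    let n = NatGaussian::from_ms(25.0, 25.0 / 3.0);
    let m = NatGaussian::from_ms(0.0, 1.0);
    let p = n.mul(m);
    // Matches the moment-form product in the test_mul golden above.
    assert!((p.mu() - 0.35488958990536273).abs() < 1e-9);
    assert!((p.sigma() - 0.992876838486922).abs() < 1e-9);
    // Dividing the product by one factor recovers the other exactly.
    let q = p.div(m);
    assert!((q.mu() - n.mu()).abs() < 1e-9);
    assert!((q.sigma() - n.sigma()).abs() < 1e-9);
}
```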

File diff suppressed because it is too large.

src/key_table.rs (new file, 72 lines)

@@ -0,0 +1,72 @@
use std::{
borrow::{Borrow, ToOwned},
collections::HashMap,
hash::Hash,
};
use crate::Index;
/// Maps user keys to internal `Index` handles.
///
/// Renamed from the former `IndexMap` to avoid colliding with the `indexmap`
/// crate. Power users can promote `&K` to `Index` via `get_or_create` and
/// skip the lookup on subsequent hot-path calls.
#[derive(Debug)]
pub struct KeyTable<K>(HashMap<K, Index>);
impl<K> KeyTable<K>
where
K: Eq + Hash,
{
pub fn new() -> Self {
Self(HashMap::new())
}
pub fn get<Q: ?Sized + Hash + Eq>(&self, k: &Q) -> Option<Index>
where
K: Borrow<Q>,
{
self.0.get(k).cloned()
}
pub fn get_or_create<Q: ?Sized + Hash + Eq + ToOwned<Owned = K>>(&mut self, k: &Q) -> Index
where
K: Borrow<Q>,
{
if let Some(idx) = self.0.get(k) {
*idx
} else {
let idx = Index::from(self.0.len());
self.0.insert(k.to_owned(), idx);
idx
}
}
pub fn key(&self, idx: Index) -> Option<&K> {
self.0
.iter()
.find(|&(_, value)| *value == idx)
.map(|(key, _)| key)
}
pub fn keys(&self) -> impl Iterator<Item = &K> {
self.0.keys()
}
pub fn len(&self) -> usize {
self.0.len()
}
pub fn is_empty(&self) -> bool {
self.0.is_empty()
}
}
impl<K> Default for KeyTable<K>
where
K: Eq + Hash,
{
fn default() -> Self {
KeyTable::new()
}
}
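The `get_or_create` promotion pattern the doc comment describes can be sketched with a minimal stand-in (this `KeyTable`/`Index` pair is a simplified replica keyed by `String`, not the crate's generic version): look a key up once, keep the dense `Index` handle, and skip the hash lookup on subsequent hot-path calls.

```rust
use std::collections::HashMap;

// Simplified stand-in for the crate's Index/KeyTable pair (illustrative only).
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Index(usize);

#[derive(Default)]
struct KeyTable(HashMap<String, Index>);

impl KeyTable {
    // First call inserts the key with the next dense index;
    // later calls return the same handle.
    fn get_or_create(&mut self, k: &str) -> Index {
        if let Some(&idx) = self.0.get(k) {
            idx
        } else {
            let idx = Index(self.0.len());
            self.0.insert(k.to_owned(), idx);
            idx
        }
    }
}

fn main() {
    let mut table = KeyTable::default();
    // Promote keys to dense Index handles once...
    let alice = table.get_or_create("alice");
    let bob = table.get_or_create("bob");
    // ...then reuse the handles; repeat lookups are stable.
    assert_eq!(table.get_or_create("alice"), alice);
    assert_eq!(alice, Index(0));
    assert_eq!(bob, Index(1));
}
```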

src/lib.rs

@@ -1,30 +1,49 @@
 use std::{
-    borrow::{Borrow, ToOwned},
     cmp::Reverse,
-    collections::HashMap,
     f64::consts::{FRAC_1_SQRT_2, FRAC_2_SQRT_PI, SQRT_2},
-    hash::Hash,
 };

-pub mod agent;
 #[cfg(feature = "approx")]
 mod approx;
-pub mod batch;
+pub(crate) mod arena;
+mod time;
+mod time_slice;
+pub use time_slice::TimeSlice;
+mod competitor;
+mod convergence;
 pub mod drift;
+mod error;
+mod event;
+mod event_builder;
+pub(crate) mod factor;
+pub mod factors;
 mod game;
 pub mod gaussian;
 mod history;
+mod key_table;
 mod matrix;
-mod message;
-pub mod player;
+mod observer;
+mod outcome;
+mod rating;
+pub(crate) mod schedule;
+pub mod storage;

+pub use competitor::Competitor;
+pub use convergence::{ConvergenceOptions, ConvergenceReport};
 pub use drift::{ConstantDrift, Drift};
-pub use game::Game;
+pub use error::InferenceError;
+pub use event::{Event, Member, Team};
+pub use event_builder::EventBuilder;
+pub use game::{Game, GameOptions, OwnedGame};
 pub use gaussian::Gaussian;
 pub use history::History;
+pub use key_table::KeyTable;
 use matrix::Matrix;
-use message::DiffMessage;
-pub use player::Player;
+pub use observer::{NullObserver, Observer};
+pub use outcome::Outcome;
+pub use rating::Rating;
+pub use schedule::ScheduleReport;
+pub use time::{Time, Untimed};

 pub const BETA: f64 = 1.0;
 pub const MU: f64 = 0.0;
@@ -49,61 +68,6 @@ impl From<usize> for Index {
     }
 }

-pub struct IndexMap<K>(HashMap<K, Index>);
-
-impl<K> IndexMap<K>
-where
-    K: Eq + Hash,
-{
-    pub fn new() -> Self {
-        Self(HashMap::new())
-    }
-
-    pub fn get<Q: ?Sized>(&self, k: &Q) -> Option<Index>
-    where
-        K: Borrow<Q>,
-        Q: Hash + Eq + ToOwned<Owned = K>,
-    {
-        self.0.get(k).cloned()
-    }
-
-    pub fn get_or_create<Q: ?Sized>(&mut self, k: &Q) -> Index
-    where
-        K: Borrow<Q>,
-        Q: Hash + Eq + ToOwned<Owned = K>,
-    {
-        if let Some(idx) = self.0.get(k) {
-            *idx
-        } else {
-            let idx = Index::from(self.0.len());
-            self.0.insert(k.to_owned(), idx);
-            idx
-        }
-    }
-
-    pub fn key(&self, idx: Index) -> Option<&K> {
-        self.0
-            .iter()
-            .find(|&(_, value)| *value == idx)
-            .map(|(key, _)| key)
-    }
-
-    pub fn keys(&self) -> impl Iterator<Item = &K> {
-        self.0.keys()
-    }
-}
-
-impl<K> Default for IndexMap<K>
-where
-    K: Eq + Hash,
-{
-    fn default() -> Self {
-        IndexMap::new()
-    }
-}
-
 fn erfc(x: f64) -> f64 {
     let z = x.abs();
     let t = 1.0 / (1.0 + z / 2.0);
@@ -158,7 +122,7 @@ fn compute_margin(p_draw: f64, sd: f64) -> f64 {
     ppf(0.5 - p_draw / 2.0, 0.0, sd).abs()
 }

-fn cdf(x: f64, mu: f64, sigma: f64) -> f64 {
+pub(crate) fn cdf(x: f64, mu: f64, sigma: f64) -> f64 {
     let z = -(x - mu) / (sigma * SQRT_2);
     0.5 * erfc(z)
@@ -203,9 +167,9 @@ fn trunc(mu: f64, sigma: f64, margin: f64, tie: bool) -> (f64, f64) {
 }

 pub(crate) fn approx(n: Gaussian, margin: f64, tie: bool) -> Gaussian {
-    let (mu, sigma) = trunc(n.mu, n.sigma, margin, tie);
-    Gaussian { mu, sigma }
+    let (mu, sigma) = trunc(n.mu(), n.sigma(), margin, tie);
+    Gaussian::from_ms(mu, sigma)
 }

 pub(crate) fn tuple_max(v1: (f64, f64), v2: (f64, f64)) -> (f64, f64) {
@@ -219,39 +183,18 @@ pub(crate) fn tuple_gt(t: (f64, f64), e: f64) -> bool {
     t.0 > e || t.1 > e
 }

-pub(crate) fn sort_perm(x: &[f64], reverse: bool) -> Vec<usize> {
-    let mut v = x.iter().enumerate().collect::<Vec<_>>();
-    if reverse {
-        v.sort_by(|(_, a), (_, b)| b.partial_cmp(a).unwrap());
-    } else {
-        v.sort_by(|(_, a), (_, b)| a.partial_cmp(b).unwrap());
-    }
-    v.into_iter().map(|(i, _)| i).collect()
-}
-
-pub(crate) fn sort_time(xs: &[i64], reverse: bool) -> Vec<usize> {
-    let mut x = xs.iter().enumerate().collect::<Vec<_>>();
+pub(crate) fn sort_time<T: Copy + Ord>(xs: &[T], reverse: bool) -> Vec<usize> {
+    let mut x: Vec<(usize, T)> = xs.iter().enumerate().map(|(i, &t)| (i, t)).collect();
     if reverse {
-        x.sort_by_key(|&(_, x)| Reverse(x));
+        x.sort_by_key(|&(_, t)| Reverse(t));
     } else {
-        x.sort_by_key(|&(_, x)| x);
+        x.sort_by_key(|&(_, t)| t);
     }
     x.into_iter().map(|(i, _)| i).collect()
 }
-
-pub(crate) fn evidence(d: &[DiffMessage], margin: &[f64], tie: &[bool], e: usize) -> f64 {
-    if tie[e] {
-        cdf(margin[e], d[e].prior.mu, d[e].prior.sigma)
-            - cdf(-margin[e], d[e].prior.mu, d[e].prior.sigma)
-    } else {
-        1.0 - cdf(margin[e], d[e].prior.mu, d[e].prior.sigma)
-    }
-}
/// Calculates the match quality of the given rating groups. A result is the draw probability in the association /// Calculates the match quality of the given rating groups. A result is the draw probability in the association
pub fn quality(rating_groups: &[&[Gaussian]], beta: f64) -> f64 { pub fn quality(rating_groups: &[&[Gaussian]], beta: f64) -> f64 {
let flatten_ratings = rating_groups let flatten_ratings = rating_groups
@@ -266,13 +209,13 @@ pub fn quality(rating_groups: &[&[Gaussian]], beta: f64) -> f64 {
     let mut mean_matrix = Matrix::new(length, 1);
     for (i, rating) in flatten_ratings.iter().enumerate() {
-        mean_matrix[(i, 0)] = rating.mu;
+        mean_matrix[(i, 0)] = rating.mu();
     }

     let mut variance_matrix = Matrix::new(length, length);
     for (i, rating) in flatten_ratings.iter().enumerate() {
-        variance_matrix[(i, i)] = rating.sigma.powi(2);
+        variance_matrix[(i, i)] = rating.sigma().powi(2);
     }

     let mut rotated_a_matrix = Matrix::new(rating_groups.len() - 1, length);
@@ -320,14 +263,9 @@ mod tests {
     use super::*;

-    #[test]
-    fn test_sort_perm() {
-        assert_eq!(sort_perm(&[0.0, 1.0, 2.0, 0.0], true), vec![2, 1, 0, 3]);
-    }
-
     #[test]
     fn test_sort_time() {
-        assert_eq!(sort_time(&[0, 1, 2, 0], true), vec![2, 1, 0, 3]);
+        assert_eq!(sort_time(&[0i64, 1, 2, 0], true), vec![2, 1, 0, 3]);
     }

     #[test]

src/message.rs (deleted)

@@ -1,81 +0,0 @@
use crate::{N_INF, gaussian::Gaussian};
pub(crate) struct TeamMessage {
pub(crate) prior: Gaussian,
pub(crate) likelihood_lose: Gaussian,
pub(crate) likelihood_win: Gaussian,
pub(crate) likelihood_draw: Gaussian,
}
impl TeamMessage {
/*
pub(crate) fn p(&self) -> Gaussian {
self.prior * self.likelihood_lose * self.likelihood_win * self.likelihood_draw
}
*/
#[inline]
pub(crate) fn posterior_win(&self) -> Gaussian {
self.prior * self.likelihood_lose * self.likelihood_draw
}
#[inline]
pub(crate) fn posterior_lose(&self) -> Gaussian {
self.prior * self.likelihood_win * self.likelihood_draw
}
#[inline]
pub(crate) fn likelihood(&self) -> Gaussian {
self.likelihood_win * self.likelihood_lose * self.likelihood_draw
}
}
impl Default for TeamMessage {
fn default() -> Self {
Self {
prior: N_INF,
likelihood_lose: N_INF,
likelihood_win: N_INF,
likelihood_draw: N_INF,
}
}
}
/*
pub(crate) struct DrawMessage {
pub(crate) prior: Gaussian,
pub(crate) prior_team: Gaussian,
pub(crate) likelihood_lose: Gaussian,
pub(crate) likelihood_win: Gaussian,
}
impl DrawMessage {
pub(crate) fn p(&self) -> Gaussian {
self.prior_team * self.likelihood_lose * self.likelihood_win
}
pub(crate) fn posterior_win(&self) -> Gaussian {
self.prior_team * self.likelihood_lose
}
pub(crate) fn posterior_lose(&self) -> Gaussian {
self.prior_team * self.likelihood_win
}
pub(crate) fn likelihood(&self) -> Gaussian {
self.likelihood_win * self.likelihood_lose
}
}
*/
pub(crate) struct DiffMessage {
pub(crate) prior: Gaussian,
pub(crate) likelihood: Gaussian,
}
impl DiffMessage {
/*
pub(crate) fn p(&self) -> Gaussian {
self.prior * self.likelihood
}
*/
}

src/observer.rs (new file, 48 lines)

@@ -0,0 +1,48 @@
//! Observer trait for progress reporting during convergence.
//!
//! Replaces the old `verbose: bool` + `println!` path. Callers wire in any
//! observer that implements the trait; default methods are no-ops so users
//! override only what they need.
use crate::time::Time;
/// Receives progress callbacks during `History::converge`.
///
/// All methods have default no-op implementations; implement only what's
/// interesting. Send/Sync is NOT required in T2 (added in T3 along with
/// Rayon support).
pub trait Observer<T: Time> {
/// Called after each convergence iteration across the whole history.
fn on_iteration_end(&self, _iter: usize, _max_step: (f64, f64)) {}
/// Called after each time slice is processed within an iteration.
fn on_batch_processed(&self, _time: &T, _slice_idx: usize, _n_events: usize) {}
/// Called once when convergence completes (or max iters is reached).
fn on_converged(&self, _iters: usize, _final_step: (f64, f64), _converged: bool) {}
}
/// ZST no-op observer; the default when none is configured.
#[derive(Copy, Clone, Debug, Default)]
pub struct NullObserver;
impl<T: Time> Observer<T> for NullObserver {}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn null_observer_compiles_for_i64() {
let o = NullObserver;
<NullObserver as Observer<i64>>::on_iteration_end(&o, 1, (0.0, 0.0));
<NullObserver as Observer<i64>>::on_converged(&o, 5, (1e-6, 1e-6), true);
}
#[test]
fn null_observer_compiles_for_untimed() {
use crate::Untimed;
let o = NullObserver;
<NullObserver as Observer<Untimed>>::on_iteration_end(&o, 1, (0.0, 0.0));
}
}
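The default-no-op callback pattern used by `Observer` can be sketched self-contained (a simplified stand-in trait, without the crate's `Time` generic): `NullObserver` implements everything via defaults, while a caller overrides only the one callback it cares about.

```rust
use std::cell::Cell;

// Simplified stand-in for the crate's Observer trait (no Time generic here).
trait Observer {
    // Default bodies are no-ops; implementors override only what they need.
    fn on_iteration_end(&self, _iter: usize, _max_step: (f64, f64)) {}
    fn on_converged(&self, _iters: usize, _converged: bool) {}
}

// ZST no-op observer: every callback falls through to the defaults.
struct NullObserver;
impl Observer for NullObserver {}

// A caller-supplied observer that only counts iterations.
struct IterCounter(Cell<usize>);
impl Observer for IterCounter {
    fn on_iteration_end(&self, _iter: usize, _max_step: (f64, f64)) {
        self.0.set(self.0.get() + 1);
    }
}

// Stand-in for a convergence loop that reports progress.
fn run_loop(obs: &dyn Observer) {
    for i in 0..3 {
        obs.on_iteration_end(i, (0.0, 0.0));
    }
    obs.on_converged(3, true);
}

fn main() {
    run_loop(&NullObserver); // silent
    let counter = IterCounter(Cell::new(0));
    run_loop(&counter);
    assert_eq!(counter.0.get(), 3);
}
```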

src/outcome.rs (new file, 87 lines)

@@ -0,0 +1,87 @@
//! Outcome of a match.
//!
//! In T2, only `Ranked` is supported; `Scored` will be added together with
//! `MarginFactor` in T4. The enum is `#[non_exhaustive]` so adding `Scored`
//! is non-breaking for downstream `match` expressions.
use smallvec::SmallVec;
/// Final outcome of a match.
///
/// `Ranked(ranks)`: lower rank = better. Equal ranks mean a tie between those
/// teams. `ranks.len()` must equal the number of teams in the event.
#[derive(Clone, Debug, PartialEq)]
#[non_exhaustive]
pub enum Outcome {
Ranked(SmallVec<[u32; 4]>),
}
impl Outcome {
/// `N`-team outcome where team `winner` won and everyone else tied for last.
///
/// Panics if `winner >= n`.
pub fn winner(winner: u32, n: u32) -> Self {
assert!(winner < n, "winner index {winner} out of range 0..{n}");
let ranks: SmallVec<[u32; 4]> = (0..n).map(|i| if i == winner { 0 } else { 1 }).collect();
Self::Ranked(ranks)
}
/// All `n` teams tied.
pub fn draw(n: u32) -> Self {
Self::Ranked(SmallVec::from_vec(vec![0; n as usize]))
}
/// Explicit per-team ranking.
pub fn ranking<I: IntoIterator<Item = u32>>(ranks: I) -> Self {
Self::Ranked(ranks.into_iter().collect())
}
pub fn team_count(&self) -> usize {
match self {
Self::Ranked(r) => r.len(),
}
}
#[allow(dead_code)]
pub(crate) fn as_ranks(&self) -> &[u32] {
match self {
Self::Ranked(r) => r,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn winner_two_teams() {
let o = Outcome::winner(0, 2);
assert_eq!(o.as_ranks(), &[0u32, 1]);
assert_eq!(o.team_count(), 2);
}
#[test]
fn winner_three_teams_second_wins() {
let o = Outcome::winner(1, 3);
assert_eq!(o.as_ranks(), &[1u32, 0, 1]);
}
#[test]
fn draw_three_teams() {
let o = Outcome::draw(3);
assert_eq!(o.as_ranks(), &[0u32, 0, 0]);
}
#[test]
fn ranking_from_iter() {
let o = Outcome::ranking([2, 0, 1]);
assert_eq!(o.as_ranks(), &[2u32, 0, 1]);
}
#[test]
#[should_panic(expected = "winner index 2 out of range")]
fn winner_out_of_range_panics() {
let _ = Outcome::winner(2, 2);
}
}
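Worked example of the `Ranked` convention (lower rank = better) that `Outcome::winner` encodes: the winning team gets rank 0 and every other team ties at rank 1. A standalone sketch, not the crate's API:

```rust
// Mirrors the logic of `Outcome::winner`: rank 0 for the winner,
// rank 1 for everyone else (a shared tie for last place).
fn winner_ranks(winner: u32, n: u32) -> Vec<u32> {
    assert!(winner < n, "winner index {winner} out of range 0..{n}");
    (0..n).map(|i| if i == winner { 0 } else { 1 }).collect()
}

fn main() {
    // Team 1 wins a 3-team event; teams 0 and 2 tie for last.
    assert_eq!(winner_ranks(1, 3), vec![1, 0, 1]);
    assert_eq!(winner_ranks(0, 2), vec![0, 1]);
    println!("ok");
}
```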


@@ -1,32 +0,0 @@
use crate::{
BETA, GAMMA,
drift::{ConstantDrift, Drift},
gaussian::Gaussian,
};
#[derive(Clone, Copy, Debug)]
pub struct Player<D: Drift = ConstantDrift> {
pub(crate) prior: Gaussian,
pub(crate) beta: f64,
pub(crate) drift: D,
}
impl<D: Drift> Player<D> {
pub fn new(prior: Gaussian, beta: f64, drift: D) -> Self {
Self { prior, beta, drift }
}
pub(crate) fn performance(&self) -> Gaussian {
self.prior.forget(self.beta.powi(2))
}
}
impl Default for Player<ConstantDrift> {
fn default() -> Self {
Self {
prior: Gaussian::default(),
beta: BETA,
drift: ConstantDrift(GAMMA),
}
}
}

src/rating.rs Normal file

@@ -0,0 +1,46 @@
use std::marker::PhantomData;
use crate::{
BETA, GAMMA,
drift::{ConstantDrift, Drift},
gaussian::Gaussian,
time::Time,
};
/// Static rating configuration: prior skill, performance noise `beta`, drift.
///
/// Renamed from `Player` in T2: `Rating` describes the configuration data,
/// while the stateful participant is a `Competitor`.
#[derive(Clone, Copy, Debug)]
pub struct Rating<T: Time = i64, D: Drift<T> = ConstantDrift> {
pub(crate) prior: Gaussian,
pub(crate) beta: f64,
pub(crate) drift: D,
pub(crate) _time: PhantomData<T>,
}
impl<T: Time, D: Drift<T>> Rating<T, D> {
pub fn new(prior: Gaussian, beta: f64, drift: D) -> Self {
Self {
prior,
beta,
drift,
_time: PhantomData,
}
}
pub(crate) fn performance(&self) -> Gaussian {
self.prior.forget(self.beta.powi(2))
}
}
impl Default for Rating<i64, ConstantDrift> {
fn default() -> Self {
Self {
prior: Gaussian::default(),
beta: BETA,
drift: ConstantDrift(GAMMA),
_time: PhantomData,
}
}
}

src/schedule.rs Normal file

@@ -0,0 +1,126 @@
//! Schedule trait and built-in implementations.
//!
//! A schedule drives factor propagation to convergence. The default
//! `EpsilonOrMax` performs one TeamSum sweep (setup) then alternating
//! forward/backward sweeps over the iterating factors until the max
//! delta drops below epsilon or `max` iterations is reached.
use crate::factor::{BuiltinFactor, Factor, VarStore};
/// Result returned by a `Schedule::run` call.
#[derive(Debug, Clone, Copy)]
pub struct ScheduleReport {
pub iterations: usize,
pub final_step: (f64, f64),
pub converged: bool,
}
/// Drives factor propagation to convergence.
pub trait Schedule {
fn run(&self, factors: &mut [BuiltinFactor], vars: &mut VarStore) -> ScheduleReport;
}
/// Default schedule: sweep forward then backward until step ≤ eps or iter == max.
///
/// Matches the existing `Game::likelihoods` loop bit-for-bit when given the
/// same factor layout (TeamSums first, then alternating RankDiff/Trunc pairs).
#[derive(Debug, Clone, Copy)]
pub struct EpsilonOrMax {
pub eps: f64,
pub max: usize,
}
impl Default for EpsilonOrMax {
fn default() -> Self {
// Matches today's hard-coded tolerance and iteration cap.
Self { eps: 1e-6, max: 10 }
}
}
impl Schedule for EpsilonOrMax {
fn run(&self, factors: &mut [BuiltinFactor], vars: &mut VarStore) -> ScheduleReport {
// Partition: leading run of TeamSum factors run exactly once (setup).
let n_setup = factors
.iter()
.position(|f| !matches!(f, BuiltinFactor::TeamSum(_)))
.unwrap_or(factors.len());
for f in factors[..n_setup].iter_mut() {
f.propagate(vars);
}
let mut iterations = 0;
let mut final_step = (f64::INFINITY, f64::INFINITY);
let mut converged = false;
if n_setup < factors.len() {
for _ in 0..self.max {
let mut step = (0.0_f64, 0.0_f64);
// Forward sweep over iterating factors.
for f in factors[n_setup..].iter_mut() {
let d = f.propagate(vars);
step.0 = step.0.max(d.0);
step.1 = step.1.max(d.1);
}
// Backward sweep.
for f in factors[n_setup..].iter_mut().rev() {
let d = f.propagate(vars);
step.0 = step.0.max(d.0);
step.1 = step.1.max(d.1);
}
iterations += 1;
final_step = step;
if step.0 <= self.eps && step.1 <= self.eps {
converged = true;
break;
}
}
}
ScheduleReport {
iterations,
final_step,
converged,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{N_INF, factor::team_sum::TeamSumFactor, gaussian::Gaussian};
#[test]
fn schedule_runs_setup_factors_once() {
// Single TeamSum factor; schedule should propagate it exactly once and report 0 iterations.
let mut vars = VarStore::new();
let out = vars.alloc(N_INF);
let mut factors = vec![BuiltinFactor::TeamSum(TeamSumFactor {
inputs: vec![(Gaussian::from_ms(5.0, 1.0), 1.0)],
out,
})];
let schedule = EpsilonOrMax::default();
let report = schedule.run(&mut factors, &mut vars);
assert_eq!(report.iterations, 0);
// The team-perf var should hold the sum.
let result = vars.get(out);
assert!((result.mu() - 5.0).abs() < 1e-12);
}
#[test]
    fn report_counts_zero_iterations_when_no_iterating_factors() {
        // No iterating factors → the loop never runs: 0 iterations, `converged` stays false.
let mut vars = VarStore::new();
let out = vars.alloc(N_INF);
let mut factors = vec![BuiltinFactor::TeamSum(TeamSumFactor {
inputs: vec![(Gaussian::from_ms(0.0, 1.0), 1.0)],
out,
})];
let report = EpsilonOrMax::default().run(&mut factors, &mut vars);
        assert_eq!(report.iterations, 0);
        assert!(!report.converged);
}
}
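The `EpsilonOrMax` loop above can be sketched standalone: one forward and one backward sweep per iteration, tracking the largest per-factor step, stopping once the step drops below `eps` or after `max` iterations. The "factors" here are just damped updates toward fixed targets — stand-ins for message propagation, not the crate's math:

```rust
// Toy model of the EpsilonOrMax schedule's iteration structure.
fn run(values: &mut [f64], targets: &[f64], eps: f64, max: usize) -> (usize, bool) {
    let mut iterations = 0;
    let mut converged = false;
    for _ in 0..max {
        let mut step: f64 = 0.0;
        // Forward sweep: each "factor" moves its value halfway to its target.
        for i in 0..values.len() {
            let d = (targets[i] - values[i]) * 0.5;
            values[i] += d;
            step = step.max(d.abs());
        }
        // Backward sweep over the same factors in reverse order.
        for i in (0..values.len()).rev() {
            let d = (targets[i] - values[i]) * 0.5;
            values[i] += d;
            step = step.max(d.abs());
        }
        iterations += 1;
        if step <= eps {
            converged = true;
            break;
        }
    }
    (iterations, converged)
}

fn main() {
    let mut v = vec![0.0, 0.0];
    let (iters, converged) = run(&mut v, &[1.0, 2.0], 1e-6, 50);
    assert!(converged);
    assert!(iters < 50);
    assert!((v[0] - 1.0).abs() < 1e-5);
    println!("converged after {iters} iterations");
}
```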


@@ -0,0 +1,127 @@
use crate::{Index, competitor::Competitor, drift::Drift, time::Time};
/// Dense Vec-backed store for competitor state in History.
///
/// Indexed directly by Index.0, eliminating HashMap hashing in the
/// forward/backward sweep. Uses `Vec<Option<Competitor<T, D>>>` so slots can be
/// absent without an explicit present mask.
#[derive(Debug)]
pub struct CompetitorStore<T: Time = i64, D: Drift<T> = crate::drift::ConstantDrift> {
competitors: Vec<Option<Competitor<T, D>>>,
n_present: usize,
}
impl<T: Time, D: Drift<T>> Default for CompetitorStore<T, D> {
fn default() -> Self {
Self {
competitors: Vec::new(),
n_present: 0,
}
}
}
impl<T: Time, D: Drift<T>> CompetitorStore<T, D> {
pub fn new() -> Self {
Self::default()
}
fn ensure_capacity(&mut self, idx: usize) {
if idx >= self.competitors.len() {
self.competitors.resize_with(idx + 1, || None);
}
}
pub fn insert(&mut self, idx: Index, competitor: Competitor<T, D>) {
self.ensure_capacity(idx.0);
if self.competitors[idx.0].is_none() {
self.n_present += 1;
}
self.competitors[idx.0] = Some(competitor);
}
pub fn get(&self, idx: Index) -> Option<&Competitor<T, D>> {
self.competitors.get(idx.0).and_then(|slot| slot.as_ref())
}
pub fn get_mut(&mut self, idx: Index) -> Option<&mut Competitor<T, D>> {
self.competitors
.get_mut(idx.0)
.and_then(|slot| slot.as_mut())
}
pub fn contains(&self, idx: Index) -> bool {
self.get(idx).is_some()
}
pub fn len(&self) -> usize {
self.n_present
}
pub fn is_empty(&self) -> bool {
self.n_present == 0
}
pub fn iter(&self) -> impl Iterator<Item = (Index, &Competitor<T, D>)> {
self.competitors
.iter()
.enumerate()
.filter_map(|(i, slot)| slot.as_ref().map(|a| (Index(i), a)))
}
pub fn iter_mut(&mut self) -> impl Iterator<Item = (Index, &mut Competitor<T, D>)> {
self.competitors
.iter_mut()
.enumerate()
.filter_map(|(i, slot)| slot.as_mut().map(|a| (Index(i), a)))
}
pub fn values_mut(&mut self) -> impl Iterator<Item = &mut Competitor<T, D>> {
self.competitors.iter_mut().filter_map(|s| s.as_mut())
}
}
impl<T: Time, D: Drift<T>> std::ops::Index<Index> for CompetitorStore<T, D> {
type Output = Competitor<T, D>;
fn index(&self, idx: Index) -> &Competitor<T, D> {
self.get(idx).expect("competitor not found at index")
}
}
impl<T: Time, D: Drift<T>> std::ops::IndexMut<Index> for CompetitorStore<T, D> {
fn index_mut(&mut self, idx: Index) -> &mut Competitor<T, D> {
self.get_mut(idx).expect("competitor not found at index")
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{competitor::Competitor, drift::ConstantDrift};
#[test]
fn insert_then_get() {
let mut store: CompetitorStore<i64, ConstantDrift> = CompetitorStore::new();
let idx = Index(7);
store.insert(idx, Competitor::default());
assert!(store.contains(idx));
assert_eq!(store.len(), 1);
assert!(store.get(idx).is_some());
}
#[test]
fn iter_in_index_order() {
let mut store: CompetitorStore<i64, ConstantDrift> = CompetitorStore::new();
store.insert(Index(2), Competitor::default());
store.insert(Index(0), Competitor::default());
store.insert(Index(5), Competitor::default());
let keys: Vec<Index> = store.iter().map(|(i, _)| i).collect();
assert_eq!(keys, vec![Index(0), Index(2), Index(5)]);
}
#[test]
fn index_operator_works() {
let mut store: CompetitorStore<i64, ConstantDrift> = CompetitorStore::new();
store.insert(Index(3), Competitor::default());
let _ = &store[Index(3)];
}
}

src/storage/mod.rs Normal file

@@ -0,0 +1,5 @@
mod competitor_store;
mod skill_store;
pub use competitor_store::CompetitorStore;
pub(crate) use skill_store::SkillStore;

src/storage/skill_store.rs Normal file

@@ -0,0 +1,130 @@
use crate::{Index, time_slice::Skill};
/// Dense Vec-backed store for per-agent skill state within a TimeSlice.
///
/// Indexed directly by Index.0, eliminating HashMap hashing in the inner
/// convergence loop. Uses a parallel `present` mask so iteration skips
/// absent slots without incurring per-slot Option overhead in the hot path.
#[derive(Debug, Default)]
pub struct SkillStore {
skills: Vec<Skill>,
present: Vec<bool>,
n_present: usize,
}
impl SkillStore {
pub fn new() -> Self {
Self::default()
}
fn ensure_capacity(&mut self, idx: usize) {
if idx >= self.skills.len() {
self.skills.resize_with(idx + 1, Skill::default);
self.present.resize(idx + 1, false);
}
}
pub fn insert(&mut self, idx: Index, skill: Skill) {
self.ensure_capacity(idx.0);
if !self.present[idx.0] {
self.n_present += 1;
}
self.skills[idx.0] = skill;
self.present[idx.0] = true;
}
pub fn get(&self, idx: Index) -> Option<&Skill> {
if idx.0 < self.present.len() && self.present[idx.0] {
Some(&self.skills[idx.0])
} else {
None
}
}
pub fn get_mut(&mut self, idx: Index) -> Option<&mut Skill> {
if idx.0 < self.present.len() && self.present[idx.0] {
Some(&mut self.skills[idx.0])
} else {
None
}
}
#[allow(dead_code)]
pub fn contains(&self, idx: Index) -> bool {
idx.0 < self.present.len() && self.present[idx.0]
}
#[allow(dead_code)]
pub fn len(&self) -> usize {
self.n_present
}
#[allow(dead_code)]
pub fn is_empty(&self) -> bool {
self.n_present == 0
}
pub fn iter(&self) -> impl Iterator<Item = (Index, &Skill)> {
self.present.iter().enumerate().filter_map(|(i, &p)| {
if p {
Some((Index(i), &self.skills[i]))
} else {
None
}
})
}
pub fn iter_mut(&mut self) -> impl Iterator<Item = (Index, &mut Skill)> {
self.skills
.iter_mut()
.zip(self.present.iter())
.enumerate()
.filter_map(|(i, (s, &p))| if p { Some((Index(i), s)) } else { None })
}
pub fn keys(&self) -> impl Iterator<Item = Index> + '_ {
self.present
.iter()
.enumerate()
.filter_map(|(i, &p)| if p { Some(Index(i)) } else { None })
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn insert_then_get() {
let mut store = SkillStore::new();
let idx = Index(3);
store.insert(idx, Skill::default());
assert!(store.contains(idx));
assert_eq!(store.len(), 1);
assert!(store.get(idx).is_some());
}
#[test]
fn missing_returns_none() {
let store = SkillStore::new();
assert!(store.get(Index(0)).is_none());
assert!(!store.contains(Index(42)));
}
#[test]
fn iter_skips_absent_slots() {
let mut store = SkillStore::new();
store.insert(Index(0), Skill::default());
store.insert(Index(5), Skill::default());
let keys: Vec<Index> = store.keys().collect();
assert_eq!(keys, vec![Index(0), Index(5)]);
}
#[test]
fn double_insert_does_not_double_count() {
let mut store = SkillStore::new();
store.insert(Index(2), Skill::default());
store.insert(Index(2), Skill::default());
assert_eq!(store.len(), 1);
}
}
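Both stores share one pattern: dense `Vec` storage indexed by `Index.0`, with presence tracked either inline (`Vec<Option<_>>` in `CompetitorStore`) or via a parallel mask as here. A minimal standalone sketch of the mask variant — names are illustrative, not the crate's API:

```rust
// Dense store with a parallel presence mask: the hot path indexes `slots`
// directly, and iteration consults only the `present` bitmask.
struct MaskedStore<V> {
    slots: Vec<V>,
    present: Vec<bool>,
}

impl<V: Default> MaskedStore<V> {
    fn new() -> Self {
        Self { slots: Vec::new(), present: Vec::new() }
    }
    fn insert(&mut self, idx: usize, v: V) {
        if idx >= self.slots.len() {
            self.slots.resize_with(idx + 1, V::default);
            self.present.resize(idx + 1, false);
        }
        self.slots[idx] = v;
        self.present[idx] = true;
    }
    fn get(&self, idx: usize) -> Option<&V> {
        if self.present.get(idx).copied().unwrap_or(false) {
            Some(&self.slots[idx])
        } else {
            None
        }
    }
    // Iteration skips absent slots by consulting the mask, never the values.
    fn keys(&self) -> impl Iterator<Item = usize> + '_ {
        self.present.iter().enumerate().filter_map(|(i, &p)| p.then_some(i))
    }
}

fn main() {
    let mut s: MaskedStore<f64> = MaskedStore::new();
    s.insert(5, 1.5);
    s.insert(0, 0.5);
    assert_eq!(s.get(5), Some(&1.5));
    assert!(s.get(3).is_none()); // allocated slot, but never inserted
    assert_eq!(s.keys().collect::<Vec<_>>(), vec![0, 5]);
    println!("ok");
}
```

The tradeoff, as the doc comments note: `Option` keeps presence and value together, while the mask keeps `slots` a plain `Vec<V>` for the inner convergence loop.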

src/time.rs Normal file

@@ -0,0 +1,54 @@
//! Generic time axis for `History`.
//!
//! Users pick the `Time` type based on their domain: `Untimed` when no
//! time axis is meaningful, `i64` for integer day/second timestamps.
//! Additional impls can be added behind feature flags.
/// A timestamp on the global ordering axis.
///
/// Must be `Ord + Copy` so slices can sort events, and `'static` so
/// `History` can store it by value without lifetimes.
pub trait Time: Copy + Ord + 'static {
/// How much time elapsed between `self` and `later`.
///
/// Used by `Drift<T>::variance_delta` to compute skill drift. Returning
/// zero means no drift accumulates between the two points. Return value
/// must be non-negative for `self <= later`.
fn elapsed_to(&self, later: &Self) -> i64;
}
/// Zero-sized type representing "no time axis."
///
/// Used as the default `Time` when events are unordered. Elapsed is always 0,
/// so no drift accumulates across slices.
#[derive(Copy, Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct Untimed;
impl Time for Untimed {
fn elapsed_to(&self, _later: &Self) -> i64 {
0
}
}
impl Time for i64 {
fn elapsed_to(&self, later: &Self) -> i64 {
later - self
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn untimed_elapsed_is_zero() {
assert_eq!(Untimed.elapsed_to(&Untimed), 0);
}
#[test]
fn i64_elapsed_is_difference() {
assert_eq!(5i64.elapsed_to(&10), 5);
assert_eq!(10i64.elapsed_to(&5), -5);
assert_eq!(0i64.elapsed_to(&0), 0);
}
}
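How `elapsed_to` feeds drift can be sketched standalone. The `gamma^2 * elapsed` variance formula below is an assumption for illustration — `ConstantDrift`'s exact rule may differ — but it shows the contract: `Untimed` always reports zero elapsed, so no drift ever accumulates.

```rust
// Standalone re-statement of the Time trait from this file, plus a
// hypothetical drift rule to show how elapsed time drives variance growth.
trait Time: Copy {
    fn elapsed_to(&self, later: &Self) -> i64;
}

#[derive(Copy, Clone)]
struct Untimed;

impl Time for Untimed {
    fn elapsed_to(&self, _later: &Self) -> i64 { 0 }
}

impl Time for i64 {
    fn elapsed_to(&self, later: &Self) -> i64 { later - self }
}

// Illustrative drift rule (assumption, not the crate's ConstantDrift):
// variance added grows linearly with non-negative elapsed time.
fn drift_variance<T: Time>(gamma: f64, last: T, now: T) -> f64 {
    gamma * gamma * last.elapsed_to(&now).max(0) as f64
}

fn main() {
    // With i64 timestamps, variance grows with the gap between events.
    assert!(drift_variance(0.1, 0i64, 10i64) > 0.0);
    // With Untimed, elapsed is always zero, so no drift accumulates.
    assert_eq!(drift_variance(0.1, Untimed, Untimed), 0.0);
    println!("ok");
}
```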


@@ -1,7 +1,18 @@
+//! A single time step's worth of events.
+//!
+//! Renamed from `Batch` in T2.
 use std::collections::HashMap;
 use crate::{
-    Index, N_INF, agent::Agent, drift::Drift, game::Game, gaussian::Gaussian, player::Player,
+    Index, N_INF,
+    arena::ScratchArena,
+    drift::Drift,
+    game::Game,
+    gaussian::Gaussian,
+    rating::Rating,
+    storage::{CompetitorStore, SkillStore},
+    time::Time,
     tuple_gt, tuple_max,
 };
@@ -39,22 +50,22 @@ struct Item {
 }
 impl Item {
-    fn within_prior<D: Drift>(
+    fn within_prior<T: Time, D: Drift<T>>(
         &self,
         online: bool,
         forward: bool,
-        skills: &HashMap<Index, Skill>,
-        agents: &HashMap<Index, Agent<D>>,
-    ) -> Player<D> {
-        let r = &agents[&self.agent].player;
-        let skill = &skills[&self.agent];
+        skills: &SkillStore,
+        agents: &CompetitorStore<T, D>,
+    ) -> Rating<T, D> {
+        let r = &agents[self.agent].rating;
+        let skill = skills.get(self.agent).unwrap();
         if online {
-            Player::new(skill.online, r.beta, r.drift)
+            Rating::new(skill.online, r.beta, r.drift)
         } else if forward {
-            Player::new(skill.forward, r.beta, r.drift)
+            Rating::new(skill.forward, r.beta, r.drift)
         } else {
-            Player::new(skill.posterior() / self.likelihood, r.beta, r.drift)
+            Rating::new(skill.posterior() / self.likelihood, r.beta, r.drift)
         }
     }
 }
@@ -80,13 +91,13 @@ impl Event {
             .collect::<Vec<_>>()
     }
-    pub(crate) fn within_priors<D: Drift>(
+    pub(crate) fn within_priors<T: Time, D: Drift<T>>(
         &self,
         online: bool,
         forward: bool,
-        skills: &HashMap<Index, Skill>,
-        agents: &HashMap<Index, Agent<D>>,
-    ) -> Vec<Vec<Player<D>>> {
+        skills: &SkillStore,
+        agents: &CompetitorStore<T, D>,
+    ) -> Vec<Vec<Rating<T, D>>> {
         self.teams
             .iter()
             .map(|team| {
@@ -100,29 +111,31 @@ impl Event {
 }
 #[derive(Debug)]
-pub struct Batch {
+pub struct TimeSlice<T: Time = i64> {
     pub(crate) events: Vec<Event>,
-    pub(crate) skills: HashMap<Index, Skill>,
-    pub(crate) time: i64,
+    pub(crate) skills: SkillStore,
+    pub(crate) time: T,
     p_draw: f64,
+    arena: ScratchArena,
 }
-impl Batch {
-    pub fn new(time: i64, p_draw: f64) -> Self {
+impl<T: Time> TimeSlice<T> {
+    pub fn new(time: T, p_draw: f64) -> Self {
         Self {
             events: Vec::new(),
-            skills: HashMap::new(),
+            skills: SkillStore::new(),
             time,
             p_draw,
+            arena: ScratchArena::new(),
         }
     }
-    pub fn add_events<D: Drift>(
+    pub fn add_events<D: Drift<T>>(
         &mut self,
         composition: Vec<Vec<Vec<Index>>>,
         results: Vec<Vec<f64>>,
         weights: Vec<Vec<Vec<f64>>>,
-        agents: &HashMap<Index, Agent<D>>,
+        agents: &CompetitorStore<T, D>,
     ) {
         let mut unique = Vec::with_capacity(10);
@@ -137,16 +150,16 @@ impl Batch {
             });
             for idx in this_agent {
-                let elapsed = compute_elapsed(agents[&idx].last_time, self.time);
-                if let Some(skill) = self.skills.get_mut(idx) {
+                let elapsed = compute_elapsed(agents[*idx].last_time.as_ref(), &self.time);
+                if let Some(skill) = self.skills.get_mut(*idx) {
                     skill.elapsed = elapsed;
-                    skill.forward = agents[&idx].receive(elapsed);
+                    skill.forward = agents[*idx].receive(&self.time);
                 } else {
                     self.skills.insert(
                         *idx,
                         Skill {
-                            forward: agents[&idx].receive(elapsed),
+                            forward: agents[*idx].receive(&self.time),
                             elapsed,
                             ..Default::default()
                         },
@@ -204,23 +217,28 @@ impl Batch {
     pub(crate) fn posteriors(&self) -> HashMap<Index, Gaussian> {
         self.skills
             .iter()
-            .map(|(&idx, skill)| (idx, skill.posterior()))
+            .map(|(idx, skill)| (idx, skill.posterior()))
             .collect::<HashMap<_, _>>()
     }
-    pub fn iteration<D: Drift>(&mut self, from: usize, agents: &HashMap<Index, Agent<D>>) {
+    pub fn iteration<D: Drift<T>>(&mut self, from: usize, agents: &CompetitorStore<T, D>) {
         for event in self.events.iter_mut().skip(from) {
             let teams = event.within_priors(false, false, &self.skills, agents);
             let result = event.outputs();
-            let g = Game::new(teams, &result, &event.weights, self.p_draw);
+            let g = Game::ranked_with_arena(
+                teams,
+                &result,
+                &event.weights,
+                self.p_draw,
+                &mut self.arena,
+            );
             for (t, team) in event.teams.iter_mut().enumerate() {
                 for (i, item) in team.items.iter_mut().enumerate() {
-                    self.skills.get_mut(&item.agent).unwrap().likelihood =
-                        (self.skills[&item.agent].likelihood / item.likelihood)
-                            * g.likelihoods[t][i];
+                    let old_likelihood = self.skills.get(item.agent).unwrap().likelihood;
+                    let new_likelihood = (old_likelihood / item.likelihood) * g.likelihoods[t][i];
+                    self.skills.get_mut(item.agent).unwrap().likelihood = new_likelihood;
                     item.likelihood = g.likelihoods[t][i];
                 }
             }
@@ -230,7 +248,7 @@ impl Batch {
     }
     #[allow(dead_code)]
-    pub(crate) fn convergence<D: Drift>(&mut self, agents: &HashMap<Index, Agent<D>>) -> usize {
+    pub(crate) fn convergence<D: Drift<T>>(&mut self, agents: &CompetitorStore<T, D>) -> usize {
         let epsilon = 1e-6;
         let iterations = 20;
@@ -255,56 +273,60 @@ impl Batch {
     }
     pub(crate) fn forward_prior_out(&self, agent: &Index) -> Gaussian {
-        let skill = &self.skills[agent];
+        let skill = self.skills.get(*agent).unwrap();
         skill.forward * skill.likelihood
     }
-    pub(crate) fn backward_prior_out<D: Drift>(
+    pub(crate) fn backward_prior_out<D: Drift<T>>(
         &self,
         agent: &Index,
-        agents: &HashMap<Index, Agent<D>>,
+        agents: &CompetitorStore<T, D>,
     ) -> Gaussian {
-        let skill = &self.skills[agent];
+        let skill = self.skills.get(*agent).unwrap();
         let n = skill.likelihood * skill.backward;
-        n.forget(agents[agent].player.drift.variance_delta(skill.elapsed))
+        n.forget(
+            agents[*agent]
+                .rating
+                .drift
+                .variance_for_elapsed(skill.elapsed),
+        )
     }
-    pub(crate) fn new_backward_info<D: Drift>(&mut self, agents: &HashMap<Index, Agent<D>>) {
+    pub(crate) fn new_backward_info<D: Drift<T>>(&mut self, agents: &CompetitorStore<T, D>) {
         for (agent, skill) in self.skills.iter_mut() {
             skill.backward = agents[agent].message;
         }
         self.iteration(0, agents);
     }
-    pub(crate) fn new_forward_info<D: Drift>(&mut self, agents: &HashMap<Index, Agent<D>>) {
+    pub(crate) fn new_forward_info<D: Drift<T>>(&mut self, agents: &CompetitorStore<T, D>) {
         for (agent, skill) in self.skills.iter_mut() {
-            skill.forward = agents[agent].receive(skill.elapsed);
+            skill.forward = agents[agent].receive_for_elapsed(skill.elapsed);
         }
         self.iteration(0, agents);
     }
-    pub(crate) fn log_evidence<D: Drift>(
+    pub(crate) fn log_evidence<D: Drift<T>>(
         &self,
         online: bool,
         targets: &[Index],
         forward: bool,
-        agents: &HashMap<Index, Agent<D>>,
+        agents: &CompetitorStore<T, D>,
     ) -> f64 {
+        // log_evidence is infrequent; a local arena avoids needing &mut self.
+        let mut arena = ScratchArena::new();
         if targets.is_empty() {
             if online || forward {
                 self.events
                     .iter()
-                    .enumerate()
-                    .map(|(_, event)| {
-                        Game::new(
+                    .map(|event| {
+                        Game::ranked_with_arena(
                             event.within_priors(online, forward, &self.skills, agents),
                             &event.outputs(),
                             &event.weights,
                             self.p_draw,
+                            &mut arena,
                         )
                         .evidence
                         .ln()
@@ -325,11 +347,12 @@ impl Batch {
                             .any(|item| targets.contains(&item.agent))
                     })
                     .map(|(_, event)| {
-                        Game::new(
+                        Game::ranked_with_arena(
                             event.within_priors(online, forward, &self.skills, agents),
                             &event.outputs(),
                             &event.weights,
                             self.p_draw,
+                            &mut arena,
                         )
                         .evidence
                         .ln()
@@ -377,14 +400,8 @@ impl Batch {
     }
 }
-pub(crate) fn compute_elapsed(last_time: i64, actual_time: i64) -> i64 {
-    if last_time == i64::MIN {
-        0
-    } else if last_time == i64::MAX {
-        1
-    } else {
-        actual_time - last_time
-    }
+pub(crate) fn compute_elapsed<T: Time>(last: Option<&T>, current: &T) -> i64 {
+    last.map(|l| l.elapsed_to(current).max(0)).unwrap_or(0)
 }
 #[cfg(test)]
@@ -392,11 +409,14 @@ mod tests {
     use approx::assert_ulps_eq;
     use super::*;
-    use crate::{IndexMap, agent::Agent, drift::ConstantDrift, player::Player};
+    use crate::{
+        KeyTable, competitor::Competitor, drift::ConstantDrift, rating::Rating,
+        storage::CompetitorStore,
+    };
     #[test]
     fn test_one_event_each() {
-        let mut index_map = IndexMap::new();
+        let mut index_map = KeyTable::new();
         let a = index_map.get_or_create("a");
         let b = index_map.get_or_create("b");
@@ -405,13 +425,13 @@ mod tests {
         let e = index_map.get_or_create("e");
         let f = index_map.get_or_create("f");
-        let mut agents = HashMap::new();
+        let mut agents: CompetitorStore<i64, ConstantDrift> = CompetitorStore::new();
         for agent in [a, b, c, d, e, f] {
             agents.insert(
                 agent,
-                Agent {
-                    player: Player::new(
+                Competitor {
+                    rating: Rating::new(
                         Gaussian::from_ms(25.0, 25.0 / 3.0),
                         25.0 / 6.0,
                         ConstantDrift(25.0 / 300.0),
@@ -421,9 +441,9 @@ mod tests {
             );
         }
-        let mut batch = Batch::new(0, 0.0);
-        batch.add_events(
+        let mut time_slice = TimeSlice::new(0i64, 0.0);
+        time_slice.add_events(
             vec![
                 vec![vec![a], vec![b]],
                 vec![vec![c], vec![d]],
@@ -434,7 +454,7 @@ mod tests {
             &agents,
         );
-        let post = batch.posteriors();
+        let post = time_slice.posteriors();
         assert_ulps_eq!(
             post[&a],
@@ -467,12 +487,12 @@ mod tests {
             epsilon = 1e-6
         );
-        assert_eq!(batch.convergence(&agents), 1);
+        assert_eq!(time_slice.convergence(&agents), 1);
     }
     #[test]
     fn test_same_strength() {
-        let mut index_map = IndexMap::new();
+        let mut index_map = KeyTable::new();
         let a = index_map.get_or_create("a");
         let b = index_map.get_or_create("b");
@@ -481,13 +501,13 @@ mod tests {
         let e = index_map.get_or_create("e");
         let f = index_map.get_or_create("f");
-        let mut agents = HashMap::new();
+        let mut agents: CompetitorStore<i64, ConstantDrift> = CompetitorStore::new();
         for agent in [a, b, c, d, e, f] {
             agents.insert(
                 agent,
-                Agent {
-                    player: Player::new(
+                Competitor {
+                    rating: Rating::new(
                         Gaussian::from_ms(25.0, 25.0 / 3.0),
                         25.0 / 6.0,
                         ConstantDrift(25.0 / 300.0),
@@ -497,9 +517,9 @@ mod tests {
             );
         }
-        let mut batch = Batch::new(0, 0.0);
-        batch.add_events(
+        let mut time_slice = TimeSlice::new(0i64, 0.0);
+        time_slice.add_events(
             vec![
                 vec![vec![a], vec![b]],
                 vec![vec![a], vec![c]],
@@ -510,7 +530,7 @@ mod tests {
             &agents,
         );
-        let post = batch.posteriors();
+        let post = time_slice.posteriors();
         assert_ulps_eq!(
             post[&a],
@@ -528,9 +548,9 @@ mod tests {
             epsilon = 1e-6
         );
-        assert!(batch.convergence(&agents) > 1);
-        let post = batch.posteriors();
+        assert!(time_slice.convergence(&agents) > 1);
+        let post = time_slice.posteriors();
         assert_ulps_eq!(
             post[&a],
@@ -551,7 +571,7 @@ mod tests {
     #[test]
     fn test_add_events() {
-        let mut index_map = IndexMap::new();
+        let mut index_map = KeyTable::new();
         let a = index_map.get_or_create("a");
         let b = index_map.get_or_create("b");
@@ -560,13 +580,13 @@ mod tests {
         let e = index_map.get_or_create("e");
         let f = index_map.get_or_create("f");
-        let mut agents = HashMap::new();
+        let mut agents: CompetitorStore<i64, ConstantDrift> = CompetitorStore::new();
         for agent in [a, b, c, d, e, f] {
             agents.insert(
                 agent,
-                Agent {
-                    player: Player::new(
+                Competitor {
+                    rating: Rating::new(
                         Gaussian::from_ms(25.0, 25.0 / 3.0),
                         25.0 / 6.0,
                         ConstantDrift(25.0 / 300.0),
@@ -576,9 +596,9 @@ mod tests {
             );
         }
-        let mut batch = Batch::new(0, 0.0);
-        batch.add_events(
+        let mut time_slice = TimeSlice::new(0i64, 0.0);
+        time_slice.add_events(
             vec![
                 vec![vec![a], vec![b]],
                 vec![vec![a], vec![c]],
@@ -589,9 +609,9 @@ mod tests {
             &agents,
         );
-        batch.convergence(&agents);
-        let post = batch.posteriors();
+        time_slice.convergence(&agents);
+        let post = time_slice.posteriors();
         assert_ulps_eq!(
             post[&a],
@@ -609,7 +629,7 @@ mod tests {
             epsilon = 1e-6
         );
-        batch.add_events(
+        time_slice.add_events(
             vec![
                 vec![vec![a], vec![b]],
                 vec![vec![a], vec![c]],
@@ -620,11 +640,11 @@ mod tests {
             &agents,
         );
-        assert_eq!(batch.events.len(), 6);
-        batch.convergence(&agents);
-        let post = batch.posteriors();
+        assert_eq!(time_slice.events.len(), 6);
+        time_slice.convergence(&agents);
+        let post = time_slice.posteriors();
         assert_ulps_eq!(
             post[&a],
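The `compute_elapsed` change in this diff is worth a standalone look: the old `i64::MIN` / `i64::MAX` sentinel encoding is replaced by `Option`, and out-of-order timestamps clamp to zero elapsed rather than going negative. A sketch of the new semantics, specialized to `i64` for brevity:

```rust
// New-style compute_elapsed: Option instead of sentinels, clamped at zero.
fn compute_elapsed(last: Option<i64>, current: i64) -> i64 {
    last.map(|l| (current - l).max(0)).unwrap_or(0)
}

fn main() {
    assert_eq!(compute_elapsed(None, 100), 0); // first appearance: no drift
    assert_eq!(compute_elapsed(Some(90), 100), 10);
    assert_eq!(compute_elapsed(Some(110), 100), 0); // out-of-order clamps to 0
    println!("ok");
}
```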

tests/api_shape.rs Normal file

@@ -0,0 +1,225 @@
//! Tests for the new T2 public API surface: typed add_events(iter) and the
//! fluent event builder (added in Task 16).
use smallvec::smallvec;
use trueskill_tt::{ConstantDrift, ConvergenceOptions, Event, History, Member, Outcome, Team};
#[test]
fn add_events_bulk_via_iter() {
let mut h = History::builder()
.mu(0.0)
.sigma(2.0)
.beta(1.0)
.p_draw(0.0)
.drift(ConstantDrift(0.0))
.convergence(ConvergenceOptions {
max_iter: 30,
epsilon: 1e-6,
})
.build();
let events: Vec<Event<i64, &'static str>> = vec![
Event {
time: 1,
teams: smallvec![
Team::with_members([Member::new("a")]),
Team::with_members([Member::new("b")]),
],
outcome: Outcome::winner(0, 2),
},
Event {
time: 2,
teams: smallvec![
Team::with_members([Member::new("b")]),
Team::with_members([Member::new("c")]),
],
outcome: Outcome::winner(0, 2),
},
];
h.add_events(events).unwrap();
let report = h.converge().unwrap();
assert!(report.converged);
assert!(h.lookup(&"a").is_some());
assert!(h.lookup(&"b").is_some());
assert!(h.lookup(&"c").is_some());
}
#[test]
fn add_events_draw() {
let mut h = History::builder()
.mu(25.0)
.sigma(25.0 / 3.0)
.beta(25.0 / 6.0)
.p_draw(0.25)
.drift(ConstantDrift(25.0 / 300.0))
.build();
let events: Vec<Event<i64, &'static str>> = vec![Event {
time: 1,
teams: smallvec![
Team::with_members([Member::new("alice")]),
Team::with_members([Member::new("bob")]),
],
outcome: Outcome::draw(2),
}];
h.add_events(events).unwrap();
h.converge().unwrap();
}
#[test]
fn add_events_rejects_mismatched_outcome_ranks() {
use trueskill_tt::InferenceError;
let mut h: History = History::builder().build();
let events: Vec<Event<i64, &'static str>> = vec![Event {
time: 1,
teams: smallvec![
Team::with_members([Member::new("a")]),
Team::with_members([Member::new("b")]),
],
outcome: Outcome::ranking([0, 1, 2]), // 3 ranks but 2 teams
}];
    let err = h.add_events(events).unwrap_err();
    assert!(matches!(err, InferenceError::MismatchedShape { .. }));
}

#[test]
fn fluent_event_builder_basic() {
    let mut h = History::builder()
        .mu(25.0)
        .sigma(25.0 / 3.0)
        .beta(25.0 / 6.0)
        .p_draw(0.0)
        .build();
    h.event(1)
        .team(["alice", "bob"])
        .weights([1.0, 0.7])
        .team(["carol"])
        .ranking([1, 0])
        .commit()
        .unwrap();
    let report = h.converge().unwrap();
    assert!(report.converged);
    assert!(h.lookup(&"alice").is_some());
    assert!(h.lookup(&"bob").is_some());
    assert!(h.lookup(&"carol").is_some());
}

#[test]
fn fluent_event_builder_winner_convenience() {
    let mut h = History::builder()
        .mu(25.0)
        .sigma(25.0 / 3.0)
        .beta(25.0 / 6.0)
        .p_draw(0.0)
        .build();
    h.event(1)
        .team(["alice"])
        .team(["bob"])
        .winner(0)
        .commit()
        .unwrap();
    h.converge().unwrap();
}

#[test]
fn fluent_event_builder_draw() {
    let mut h = History::builder()
        .mu(25.0)
        .sigma(25.0 / 3.0)
        .beta(25.0 / 6.0)
        .p_draw(0.25)
        .build();
    h.event(1)
        .team(["alice"])
        .team(["bob"])
        .draw()
        .commit()
        .unwrap();
    h.converge().unwrap();
}

#[test]
fn current_skill_and_learning_curve() {
    use trueskill_tt::History;
    let mut h = History::builder()
        .mu(25.0)
        .sigma(25.0 / 3.0)
        .beta(25.0 / 6.0)
        .p_draw(0.0)
        .build();
    h.record_winner(&"a", &"b", 1).unwrap();
    h.record_winner(&"a", &"b", 2).unwrap();
    h.converge().unwrap();
    let a = h.current_skill(&"a").unwrap();
    assert!(a.mu() > 25.0);
    let b = h.current_skill(&"b").unwrap();
    assert!(b.mu() < 25.0);
    let a_curve = h.learning_curve(&"a");
    assert_eq!(a_curve.len(), 2);
    assert_eq!(a_curve[0].0, 1);
    assert_eq!(a_curve[1].0, 2);
    let all = h.learning_curves();
    assert_eq!(all.len(), 2);
    assert!(all.contains_key("a"));
    assert!(all.contains_key("b"));
}

#[test]
fn log_evidence_total_vs_subset() {
    use trueskill_tt::{ConstantDrift, History};
    let mut h = History::builder()
        .mu(0.0)
        .sigma(6.0)
        .beta(1.0)
        .p_draw(0.0)
        .drift(ConstantDrift(0.0))
        .build();
    h.record_winner(&"a", &"b", 1).unwrap();
    h.record_winner(&"b", &"a", 2).unwrap();
    let total = h.log_evidence();
    let a_only = h.log_evidence_for(&[&"a"]);
    assert!(total.is_finite());
    assert!(a_only.is_finite());
}

#[test]
fn predict_quality_two_teams() {
    use trueskill_tt::History;
    let mut h = History::builder()
        .mu(25.0)
        .sigma(25.0 / 3.0)
        .beta(25.0 / 6.0)
        .p_draw(0.0)
        .build();
    h.record_winner(&"a", &"b", 1).unwrap();
    h.converge().unwrap();
    let q = h.predict_quality(&[&[&"a"], &[&"b"]]);
    assert!(q > 0.0 && q <= 1.0);
}

#[test]
fn predict_outcome_two_teams_sums_to_one() {
    use trueskill_tt::History;
    let mut h = History::builder()
        .mu(25.0)
        .sigma(25.0 / 3.0)
        .beta(25.0 / 6.0)
        .p_draw(0.0)
        .build();
    h.record_winner(&"a", &"b", 1).unwrap();
    h.converge().unwrap();
    let p = h.predict_outcome(&[&[&"a"], &[&"b"]]);
    assert_eq!(p.len(), 2);
    assert!((p[0] + p[1] - 1.0).abs() < 1e-9);
    assert!(p[0] > p[1]);
}

tests/equivalence.rs Normal file

@@ -0,0 +1,61 @@
//! Equivalence tests: every historical golden from the pre-T2 tests is
//! reproduced here at the integration level via the new public API.
//!
//! The in-crate tests in `src/history.rs::tests` and
//! `src/time_slice.rs::tests` are the primary regression net for numerical
//! behavior. This file provides Game-level goldens that stand alone and are
//! more naturally expressed as integration tests.

use approx::assert_ulps_eq;
use trueskill_tt::{ConstantDrift, Game, GameOptions, Gaussian, Outcome, Rating};

type R = Rating<i64, ConstantDrift>;

fn ts_rating(mu: f64, sigma: f64, beta: f64, gamma: f64) -> R {
    R::new(Gaussian::from_ms(mu, sigma), beta, ConstantDrift(gamma))
}

#[test]
fn game_1v1_golden_matches_historical() {
    let a = ts_rating(25.0, 25.0 / 3.0, 25.0 / 6.0, 25.0 / 300.0);
    let b = ts_rating(25.0, 25.0 / 3.0, 25.0 / 6.0, 25.0 / 300.0);
    let (a_post, b_post) = Game::<i64, _>::one_v_one(&a, &b, Outcome::winner(0, 2)).unwrap();
    // Historical golden from pre-T2 test_1vs1 (team 0 wins):
    assert_ulps_eq!(
        a_post,
        Gaussian::from_ms(29.205220, 7.194481),
        epsilon = 1e-6
    );
    assert_ulps_eq!(
        b_post,
        Gaussian::from_ms(20.794779, 7.194481),
        epsilon = 1e-6
    );
}

#[test]
fn game_1v1_draw_golden() {
    let a = ts_rating(25.0, 25.0 / 3.0, 25.0 / 6.0, 25.0 / 300.0);
    let b = ts_rating(25.0, 25.0 / 3.0, 25.0 / 6.0, 25.0 / 300.0);
    let g = Game::<i64, _>::ranked(
        &[&[a], &[b]],
        Outcome::draw(2),
        &GameOptions {
            p_draw: 0.25,
            convergence: Default::default(),
        },
    )
    .unwrap();
    let p = g.posteriors();
    // Historical golden from pre-T2 test_1vs1_draw:
    assert_ulps_eq!(
        p[0][0],
        Gaussian::from_ms(24.999999, 6.469480),
        epsilon = 1e-6
    );
    assert_ulps_eq!(
        p[1][0],
        Gaussian::from_ms(24.999999, 6.469480),
        epsilon = 1e-6
    );
}

tests/game.rs Normal file

@@ -0,0 +1,96 @@
use trueskill_tt::{
ConstantDrift, ConvergenceOptions, Game, GameOptions, Gaussian, InferenceError, Outcome, Rating,
};
type R = Rating<i64, ConstantDrift>;
fn default_rating() -> R {
R::new(
Gaussian::from_ms(25.0, 25.0 / 3.0),
25.0 / 6.0,
ConstantDrift(25.0 / 300.0),
)
}
#[test]
fn game_ranked_1v1_golden() {
let a = default_rating();
let b = default_rating();
let g = Game::<i64, _>::ranked(
&[&[a], &[b]],
Outcome::winner(0, 2),
&GameOptions::default(),
)
.unwrap();
let p = g.posteriors();
assert!(p[0][0].mu() > 25.0);
assert!(p[1][0].mu() < 25.0);
assert!((p[0][0].sigma() - p[1][0].sigma()).abs() < 1e-6);
}
#[test]
fn game_one_v_one_shortcut() {
let a = default_rating();
let b = default_rating();
let (a_post, b_post) = Game::<i64, _>::one_v_one(&a, &b, Outcome::winner(0, 2)).unwrap();
assert!(a_post.mu() > 25.0);
assert!(b_post.mu() < 25.0);
}
#[test]
fn game_ranked_rejects_bad_p_draw() {
let a = R::new(Gaussian::default(), 1.0, ConstantDrift(0.0));
let err = Game::<i64, _>::ranked(
&[&[a], &[a]],
Outcome::winner(0, 2),
&GameOptions {
p_draw: 1.5,
convergence: ConvergenceOptions::default(),
},
)
.unwrap_err();
assert!(matches!(err, InferenceError::InvalidProbability { .. }));
}
#[test]
fn game_ranked_rejects_mismatched_ranks() {
let a = R::new(Gaussian::default(), 1.0, ConstantDrift(0.0));
let err = Game::<i64, _>::ranked(
&[&[a], &[a]],
Outcome::ranking([0, 1, 2]),
&GameOptions::default(),
)
.unwrap_err();
assert!(matches!(err, InferenceError::MismatchedShape { .. }));
}
#[test]
fn game_free_for_all_three_players() {
let a = default_rating();
let b = default_rating();
let c = default_rating();
let g = Game::<i64, _>::free_for_all(
&[&a, &b, &c],
Outcome::ranking([0, 1, 2]),
&GameOptions::default(),
)
.unwrap();
let p = g.posteriors();
assert_eq!(p.len(), 3);
assert!(p[0][0].mu() > p[1][0].mu());
assert!(p[1][0].mu() > p[2][0].mu());
}
#[test]
fn game_log_evidence_is_finite() {
let a = default_rating();
let b = default_rating();
let g = Game::<i64, _>::ranked(
&[&[a], &[b]],
Outcome::winner(0, 2),
&GameOptions::default(),
)
.unwrap();
assert!(g.log_evidence().is_finite());
assert!(g.log_evidence() < 0.0);
}

tests/record_winner.rs Normal file

@@ -0,0 +1,54 @@
use trueskill_tt::{ConstantDrift, ConvergenceOptions, History};
#[test]
fn record_winner_builds_history() {
let mut h = History::builder()
.mu(25.0)
.sigma(25.0 / 3.0)
.beta(25.0 / 6.0)
.drift(ConstantDrift(25.0 / 300.0))
.convergence(ConvergenceOptions {
max_iter: 30,
epsilon: 1e-6,
})
.build();
h.record_winner(&"alice", &"bob", 1).unwrap();
h.converge().unwrap();
let a_idx = h.lookup(&"alice").unwrap();
let b_idx = h.lookup(&"bob").unwrap();
assert_ne!(a_idx, b_idx);
}
#[test]
fn intern_is_idempotent() {
let mut h: History = History::builder().build();
let a1 = h.intern(&"alice");
let a2 = h.intern(&"alice");
assert_eq!(a1, a2);
}
#[test]
fn lookup_returns_none_for_missing() {
let h: History = History::builder().build();
assert!(h.lookup(&"nobody").is_none());
}
#[test]
fn record_draw_with_p_draw_set() {
let mut h = History::builder()
.mu(25.0)
.sigma(25.0 / 3.0)
.beta(25.0 / 6.0)
.drift(ConstantDrift(25.0 / 300.0))
.p_draw(0.25)
.build();
h.record_draw(&"alice", &"bob", 1).unwrap();
h.converge().unwrap();
assert!(h.lookup(&"alice").is_some());
assert!(h.lookup(&"bob").is_some());
}