Compare commits: dc47964310...main (7 commits)

| SHA1 |
|---|
| 6bf3e7e294 |
| d2aab82c1e |
| a14df02089 |
| 0d266b4428 |
| a4b4e5e8fa |
| 04d5478ee4 |
| 480467ac32 |
CHANGELOG.md (new file, 234 lines)
@@ -0,0 +1,234 @@
# Changelog

All notable changes to this project will be documented in this file.

## Unreleased — T3 concurrency

Adds rayon-backed parallel paths per Section 6 of
`docs/superpowers/specs/2026-04-23-trueskill-engine-redesign-design.md`.

### Breaking

- `Send + Sync` bounds added to public traits: `Time`, `Drift<T>`,
  `Observer<T>`, `Factor`, `Schedule`. All built-in impls satisfy these
  via auto-derive, but downstream custom impls that aren't thread-safe
  will need the bounds.

### New

- Opt-in `rayon` cargo feature. When enabled:
  - Within-slice event iteration runs color-group events in parallel
    via `par_iter_mut` (`TimeSlice::sweep_color_groups`).
  - `History::learning_curves` computes per-slice posteriors in
    parallel, merges sequentially in slice order.
  - `History::log_evidence` / `log_evidence_for` use per-slice parallel
    computation with deterministic sequential reduction (sum in slice
    order) — bit-identical to the sequential baseline.
- `ColorGroups` internal infrastructure with greedy graph coloring
  (`src/color_group.rs`). Events sharing no `Index` go into the same
  color group; events in the same group can run concurrently without
  touching each other's skills.
- `tests/determinism.rs` asserts bit-identical posteriors across
  `RAYON_NUM_THREADS={1, 2, 4, 8}`.
- `benches/history_converge.rs` measures end-to-end convergence on
  three workload shapes.
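The greedy coloring can be sketched as follows. This is a hedged, self-contained illustration, not the actual `src/color_group.rs` code; `color_events` and the `Vec<u32>` index sets are hypothetical stand-ins. Two events conflict when they share a competitor `Index`; each event takes the smallest color not already used by an earlier conflicting event, so every color class is a set of pairwise-disjoint events.

```rust
use std::collections::HashSet;

// Each event is represented by the set of competitor indices it touches.
// Returns one color per event; events with the same color share no index.
fn color_events(events: &[Vec<u32>]) -> Vec<usize> {
    let mut colors = vec![0usize; events.len()];
    for i in 0..events.len() {
        // Collect colors already taken by earlier events that conflict with event i.
        let mut used = HashSet::new();
        for j in 0..i {
            if events[j].iter().any(|ix| events[i].contains(ix)) {
                used.insert(colors[j]);
            }
        }
        // Greedy: smallest color not in `used`.
        let mut c = 0;
        while used.contains(&c) {
            c += 1;
        }
        colors[i] = c;
    }
    colors
}

fn main() {
    // Events over competitor indices: {0,1} and {2,3} are disjoint,
    // {0,2} conflicts with both.
    let events = vec![vec![0, 1], vec![2, 3], vec![0, 2]];
    assert_eq!(color_events(&events), vec![0, 0, 1]);
}
```

With this invariant, each color class can be swept with `par_iter_mut` while the classes themselves run one after another.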

### Performance notes

- Default build (no rayon): `Batch::iteration` 23.23 µs — no regression
  vs T2.
- With `--features rayon`:
  - 500 events / 100 competitors / 10 per slice: 1.0× speedup.
  - 2000 events / 200 competitors / 20 per slice: 1.0× speedup.
  - 5000 events in one slice / 50k competitors: **1.3× speedup.**
- The spec targeted >2× speedup on 8-core offline converge. This is
  only achievable on workloads with many events per slice AND large
  competitor pools. **Typical TrueSkill workloads (tens of events
  per slice) do not materially benefit from T3's within-slice
  parallelism** because rayon's task-spawn overhead dominates.
- Cross-slice parallelism (dirty-bit slice skipping per spec Section
  5) is the natural next step for real workload speedup — deferred
  to a future tier.

### Internals

- The parallel path uses an `unsafe` block to concurrently write to
  `SkillStore` from color-group-disjoint events. Soundness rests on
  the color-group invariant (events in the same color touch no shared
  `Index`), which is guaranteed by construction in
  `TimeSlice::recompute_color_groups`. The sequential path is unchanged.
- `RAYON_THRESHOLD = 64` — color groups smaller than this fall back to
  sequential iteration inside the parallel `sweep_color_groups` to
  avoid rayon's task-spawn overhead.
- Thread-local `ScratchArena` per rayon worker thread.
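The "bit-identical" guarantee above is only possible because the reduction order is pinned: floating-point addition is not associative, so an unordered parallel sum can change the result. A minimal stand-alone demonstration (not crate code):

```rust
fn main() {
    let xs = [1e16_f64, 1.0, -1e16, 1.0];

    // Sequential left-to-right sum, as in the ordered per-slice reduction.
    let seq: f64 = xs.iter().sum();

    // A different association, as an unordered parallel reduce might produce.
    let reassoc = (xs[0] + xs[2]) + (xs[1] + xs[3]);

    // 1e16 + 1.0 rounds back to 1e16 (the 1.0 is below one ulp), so the
    // first 1.0 is lost in the sequential order but kept in the other grouping.
    assert_eq!(seq, 1.0);
    assert_eq!(reassoc, 2.0);
}
```

Hence the design: compute per-slice values in parallel, then always sum them in slice order.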

## Unreleased — T2 new API surface

Breaking: every renamed type and the new public API land together per
`docs/superpowers/specs/2026-04-23-trueskill-engine-redesign-design.md`
Section 7 "T2".

### Breaking renames

- `Batch` → `TimeSlice`
- `Player` → `Rating` (and the `.player` field on `Competitor` is now `.rating`)
- `Agent` → `Competitor`
- `IndexMap` → `KeyTable`
- `History` field `.batches` → `.time_slices`

### New types

- `Time` trait with `Untimed` ZST and `i64` impls (generic time axis).
- `Drift<T: Time>` — generified from the old `Drift` trait.
- `Event<T, K>`, `Team<K>`, `Member<K>` — typed bulk-ingest event shape.
- `Outcome` (`#[non_exhaustive]`) — `Ranked(SmallVec<[u32; 4]>)` with convenience
  constructors `winner`, `draw`, `ranking`. `Scored` lands in T4.
- `Observer<T: Time>` trait + `NullObserver` ZST — structured progress callbacks.
- `ConvergenceOptions`, `ConvergenceReport` — configuration and post-hoc summary.
- `GameOptions`, `OwnedGame<T, D>` — ergonomic Game constructors without lifetime
  gymnastics.
- `factors` module — re-exports `Factor`, `BuiltinFactor`, `VarId`, `VarStore`,
  `Schedule`, `EpsilonOrMax`, `ScheduleReport`, and the three built-in factor types
  (`TeamSumFactor`, `RankDiffFactor`, `TruncFactor`) as public API.
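To illustrate what the `winner` / `draw` constructors encode, here is a hypothetical mock. The rank convention below (rank 0 for the best-placed team, equal ranks for a draw) is an assumption for illustration only; the real `Outcome` wraps a `SmallVec<[u32; 4]>` and is `#[non_exhaustive]`.

```rust
// Hypothetical stand-in for the rank payload of `Outcome::Ranked`.
#[derive(Debug, PartialEq)]
struct Ranked(Vec<u32>);

// Mock of `winner(i, n)`: team `i` gets rank 0, every other team rank 1.
// (Assumed semantics, not the crate's verbatim implementation.)
fn winner(i: u32, n: u32) -> Ranked {
    Ranked((0..n).map(|t| if t == i { 0 } else { 1 }).collect())
}

// Mock of `draw(n)`: all `n` teams share the same rank.
fn draw(n: u32) -> Ranked {
    Ranked(vec![0; n as usize])
}

fn main() {
    assert_eq!(winner(1, 2), Ranked(vec![1, 0])); // second team won
    assert_eq!(draw(3), Ranked(vec![0, 0, 0]));   // three-way draw
}
```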
### New `History` API

- Three-tier ingestion:
  - Tier 1 (bulk): `add_events<I: IntoIterator<Item = Event<T, K>>>(events) -> Result`
  - Tier 2 (one-off): `record_winner(&K, &K, T)`, `record_draw(&K, &K, T)`
  - Tier 3 (fluent): `event(T).team([...]).weights([...]).ranking([...]).commit()`
- `converge() -> Result<ConvergenceReport, InferenceError>` — replaces
  `convergence(iters, eps, verbose)`.
- `current_skill(&K)`, `learning_curve(&K)`, `learning_curves()` (now keyed on `K`).
- `log_evidence()` zero-arg, `log_evidence_for(&[&K])`.
- `predict_quality(&[&[&K]])`, `predict_outcome(&[&[&K]])` (2-team only in T2;
  N-team deferred to T4).
- `intern(&Q)` / `lookup(&Q)` expose the internal `KeyTable<K>` for power users.
- `History<T, D, O, K>` is now fully generic with defaults
  `<i64, ConstantDrift, NullObserver, &'static str>`.

### New `Game` API

- `Game::ranked(&[&[Rating]], Outcome, &GameOptions) -> Result<OwnedGame, _>`.
- `Game::one_v_one(&Rating, &Rating, Outcome) -> Result<(Gaussian, Gaussian), _>`.
- `Game::free_for_all(&[&Rating], Outcome, &GameOptions) -> Result<OwnedGame, _>`.
- `Game::custom(...)` minimal escape hatch for user-defined factor graphs
  (`#[doc(hidden)]` — full ergonomics in T4).
- `Game::log_evidence()` and `OwnedGame::log_evidence()` accessors.

### Errors

- `InferenceError` now carries `MismatchedShape { kind, expected, got }`,
  `InvalidProbability { value }`, `ConvergenceFailed { last_step, iterations }`,
  and `NegativePrecision { pi }`. Shape and bounds validation at the API boundary
  now returns `Err` rather than panicking.

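A sketch of how these variants might look and be consumed at an API boundary. The enum mirrors the fields listed above, but `validate_p_draw` is a hypothetical helper for illustration, not crate API:

```rust
// Mirror of the variants named in the changelog entry (shape assumed).
#[allow(dead_code)]
#[derive(Debug)]
enum InferenceError {
    MismatchedShape { kind: &'static str, expected: usize, got: usize },
    InvalidProbability { value: f64 },
    ConvergenceFailed { last_step: f64, iterations: usize },
    NegativePrecision { pi: f64 },
}

// Hypothetical boundary check: bounds validation returns Err instead of panicking.
fn validate_p_draw(p: f64) -> Result<f64, InferenceError> {
    if (0.0..1.0).contains(&p) {
        Ok(p)
    } else {
        Err(InferenceError::InvalidProbability { value: p })
    }
}

fn main() {
    assert!(validate_p_draw(0.25).is_ok());
    assert!(matches!(
        validate_p_draw(1.5),
        Err(InferenceError::InvalidProbability { value }) if value == 1.5
    ));
}
```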
### Removed (breaking)

- `History::convergence(iters, eps, verbose)` — use `converge()`.
- `HistoryBuilder::gamma(f64)` — use `.drift(ConstantDrift(g))`.
- `HistoryBuilder::time(bool)` and `History.time: bool` — use the `Time` type parameter.
- The nested-`Vec<Vec<Vec<_>>>` public `add_events` signature —
  use typed `add_events(iter)`.
- `learning_curves_by_index()` — use `learning_curves()`.

### Performance

`Batch::iteration` bench: **21.36 µs** (T1 was 22.88 µs on the same hardware), a
~7% improvement, likely because the typed path is slightly more direct. Gaussian
operations unchanged.

### Notes

- `Untimed::elapsed_to` always returns 0 — a **behavior change** from the old
  `time=false` mode, which implicitly generated `elapsed=1` per event via an
  `i64::MAX` sentinel in `Agent.last_time`. Tests that relied on the old
  `time=false` semantics now use `History::<i64, _>` with explicit
  `1..=n` timestamps.

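The new semantics can be sketched with a minimal trait of the same shape (the crate's actual `Time` trait may differ; this is an illustrative assumption): `Untimed` collapses every gap to zero, so no drift is applied between events, while `i64` timestamps yield real elapsed time.

```rust
// Illustrative shape of a generic time axis.
trait Time: Copy {
    fn elapsed_to(self, later: Self) -> i64;
}

// Zero-sized "no time" marker: all gaps are zero, so drift never applies.
#[derive(Clone, Copy)]
struct Untimed;

impl Time for Untimed {
    fn elapsed_to(self, _later: Self) -> i64 {
        0
    }
}

// Real timestamps: elapsed time is an ordinary difference.
impl Time for i64 {
    fn elapsed_to(self, later: i64) -> i64 {
        later - self
    }
}

fn main() {
    assert_eq!(Untimed.elapsed_to(Untimed), 0);
    assert_eq!(3i64.elapsed_to(7), 4);
}
```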
## 0.1.0 - 2026-04-23

### Features

- feat: added a Drift trait and a "default" ConstantDrift implementation

### Miscellaneous Tasks

- chore: added cliff.toml, release.toml and rustfmt.toml
- chore: clean up

### Other (unconventional)

- Initial commit.
- Begin working on batch.
- Passing tests for Batch
- Working on History struct. First test is passing.
- More tests passing for History
- Added more functions to History
- Remove Display impl, better to use Debug
- Use flatten instead of flat_map
- Handle case where there is no time
- It works, or so it seems
- Use PlayerIndex instead of String
- Inline a lot of functions
- Refactor some code
- Refactor some stuff
- Port from julia version instead
- More things, better things, awesome
- More tests, more code
- More things, more tests
- Fix tests
- More tests
- More tests
- Added builder for History, and start migrating tests to use builder instead.
- Update test to use builder
- Remove unused code
- Use an Index struct instead of str and String for player id
- Update example so now it works, and that's, well, good
- Update test to use assert_ulps_eq
- Fixed test
- Change time to use i64 instead of u64
- Small change
- Clean up example
- Update crates and added methods to get a key or all keys in an IndexMap
- Added a get function to IndexMap
- Agents don't have to be behind a mutable reference in within_prior
- Agents don't have to be behind a mutable reference in within_priors
- Refactor so we can see if there is any way to improve the performance
- Fix clippy warning
- More refactoring
- Remove warnings and refactor some code
- Added benchmark for Batch
- Added default implementation for TeamMessage
- Remove unused mut reference
- Make it more rusty
- More rustifying
- Small refactor
- Rename d to diff, and t to team
- Added more links to readme
- Fix broken link in README
- Update crates
- Clean up
- Dry my eyes
- Remove unnecessary allocations
- Fix clippy warning
- Refactor history
- Rename variables
- Move stuff around
- Added quality function
- Make quality a free standing function instead
- Improve performance
- Change assert to debug_assert
- Added todo to readme, and documentation for quality function
- Basic test for quality
- Ignore temp folder
- Update edition
- Small changes for new 2024 edition
- remove notepad
- added benchmark

### Styling

- style: cargo fmt

<!-- generated by git-cliff -->
Cargo.toml (10 lines changed)
@@ -14,8 +14,18 @@ harness = false
name = "gaussian"
harness = false

[[bench]]
name = "history_converge"
harness = false

[dependencies]
approx = { version = "0.5.1", optional = true }
rayon = { version = "1", optional = true }
smallvec = "1"

[features]
approx = ["dep:approx"]
rayon = ["dep:rayon"]

[dev-dependencies]
criterion = "0.5"
Justfile (new file, 10 lines)
@@ -0,0 +1,10 @@
alias b := bench

store:
    cargo bench -- --save-baseline base

bench:
    cargo bench -- --baseline base

flame:
    cargo flamegraph --root --example atp
benches/baseline.txt (new file, 132 lines)
@@ -0,0 +1,132 @@
# Baseline numbers captured before T0 changes
# Hardware: lrrr.local / Apple M5 Pro
# Date: 2026-04-24

Batch::iteration            29.840 µs
Gaussian::add               219.58 ps
Gaussian::sub               219.41 ps
Gaussian::mul               1.568 ns   ← hot path; target ≥1.5× improvement
Gaussian::div               1.572 ns   ← hot path; target ≥1.5× improvement
Gaussian::pi                262.89 ps
Gaussian::tau               262.47 ps
Gaussian::pi_tau_combined   219.40 ps

# After T0 (2026-04-24, same hardware)

Batch::iteration            21.253 µs (1.40× — below 3× target; see post-mortem)
Gaussian::add               218.62 ps (1.00× — unchanged, Add/Sub use moment form)
Gaussian::sub               220.15 ps (1.00×)
Gaussian::mul               218.69 ps (7.17× — nat-param: now two f64 adds, no sqrt)
Gaussian::div               218.64 ps (7.19× — nat-param: now two f64 subs, no sqrt)
Gaussian::pi                263.19 ps (1.00× — now a field read, same cost)
Gaussian::tau               263.51 ps (1.00× — now a field read, same cost)
Gaussian::pi_tau_combined   219.13 ps (1.00×)
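The ~7× mul/div rows follow directly from the natural-parameter representation. A small self-contained sketch; this mock `Gaussian` is illustrative, not the crate's actual type:

```rust
// Natural parameters: pi = 1/sigma^2 (precision), tau = mu/sigma^2.
// Products and quotients of Gaussian densities then reduce to
// adding/subtracting these two fields — no sqrt, no division by sigma.
#[derive(Clone, Copy, Debug)]
struct Gaussian {
    pi: f64,
    tau: f64,
}

impl Gaussian {
    fn from_ms(mu: f64, sigma: f64) -> Self {
        let pi = 1.0 / (sigma * sigma);
        Self { pi, tau: mu * pi }
    }
    fn mul(self, o: Self) -> Self {
        // Two f64 adds.
        Self { pi: self.pi + o.pi, tau: self.tau + o.tau }
    }
    fn div(self, o: Self) -> Self {
        // Two f64 subs.
        Self { pi: self.pi - o.pi, tau: self.tau - o.tau }
    }
    fn mu(self) -> f64 {
        self.tau / self.pi
    }
}

fn main() {
    let g1 = Gaussian::from_ms(25.0, 25.0 / 3.0);
    let g2 = Gaussian::from_ms(0.0, 1.0);
    // Multiplying then dividing by the same density recovers g1 (up to rounding).
    let roundtrip = g1.mul(g2).div(g2);
    assert!((roundtrip.mu() - 25.0).abs() < 1e-9);
}
```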

# Post-mortem: Batch::iteration 1.40× vs. 3× target
#
# Root cause: the bench has 100 tiny 2-team events. Each event still allocates
# ~10 Vecs per iteration (down from ~18). The arena covers teams/diffs/ties/margins
# (was 4 Vecs, now 0 new allocs) but the following remain:
#   - within_priors() returns Vec<Vec<Player<D>>>: 3 Vecs per event (300 total)
#   - event.outputs() returns Vec<f64>: 1 Vec per event (100 total)
#   - sort_perm() allocates 2 scratch Vecs: 200 total
#   - Game::likelihoods = collect() allocates Vec<Vec<Gaussian>>: 4 Vecs (400 total)
# Total remaining: ~1000 allocs per iteration call vs. ~1800 before (a 44% reduction).
#
# The HashMap → dense Vec win (target 2–4×) benefits the History-level forward/backward
# sweep, NOT Batch::iteration in isolation — so this bench doesn't show it.
#
# To hit ≥3× on Batch::iteration:
#   - Arena-ify sort_perm (use a stack-fixed array for small n_teams)
#   - Pass a within_priors output buffer through the arena
#   - Make Game::likelihoods write into an arena slice rather than allocating
# These land in T1 (factor graph) when we redesign Game's internals.

# After T1 (2026-04-24, same hardware)

Batch::iteration            23.010 µs (1.08× vs T0 21.253 µs — slight regression)
Gaussian::add               231.23 ps (unchanged)
Gaussian::sub               235.38 ps (unchanged)
Gaussian::mul               234.55 ps (unchanged — nat-param storage)
Gaussian::div               233.27 ps (unchanged)
Gaussian::pi                272.68 ps (unchanged)
Gaussian::tau               272.73 ps (unchanged)
Gaussian::pi_tau_combined   234.xx ps (unchanged)

# Notes:
# - Batch::iteration 23.0 µs vs target ≤ 21.5 µs (8% above target).
#   Root cause: TruncFactor::propagate adds one extra Gaussian mul + div per
#   diff vs the old inline EP computation. The trunc Vec is still a fresh
#   per-game allocation (the borrow checker prevents putting it in the arena
#   alongside vars). These are addressable in T2.
# - arena.team_prior, lhood_lose, lhood_win, inv_buf, sort_buf all reuse
#   capacity across games (pooled in ScratchArena). sort_perm() allocation
#   eliminated. message.rs deleted.
# - Gaussian operations unchanged vs T0.
# - All 53 tests pass. Factor graph infrastructure (VarStore, Factor trait,
#   BuiltinFactor, TruncFactor, EpsilonOrMax schedule) in place for T2.

# After T2 (2026-04-24, same hardware)

Batch::iteration            21.36 µs (1.07× vs T1 22.88 µs — 7% improvement)
Gaussian::add               218.97 ps (unchanged)
Gaussian::sub               218.58 ps (unchanged)
Gaussian::mul               218.59 ps (unchanged)
Gaussian::div               218.57 ps (unchanged)
Gaussian::pi                264.20 ps (unchanged)
Gaussian::tau               260.80 ps (unchanged)

# Notes:
# - API-only tier; hot inference path unchanged. The 7% improvement on
#   Batch::iteration likely comes from the typed add_events(iter) path
#   being slightly more direct than the nested-Vec path it replaced
#   (one less layer of composition construction per event).
# - Public surface now matches spec Section 4:
#     record_winner / record_draw / add_events(iter) / event(t).team().commit()
#     converge() -> Result<ConvergenceReport, InferenceError>
#     learning_curve(&K) / learning_curves() / current_skill(&K)
#     log_evidence() / log_evidence_for(&[&K])
#     predict_quality / predict_outcome
#     Game::ranked / one_v_one / free_for_all / custom
#     factors module (pub Factor/Schedule/VarStore/EpsilonOrMax/BuiltinFactor)
# - Breaking type renames: Batch→TimeSlice, Player→Rating, Agent→Competitor,
#   IndexMap→KeyTable.
# - Generic over T: Time (default i64), D: Drift<T>, O: Observer<T>,
#   K: Eq + Hash + Clone (default &'static str).
# - Legacy removed: History::convergence(iters, eps, verbose),
#   HistoryBuilder::gamma(), HistoryBuilder::time(bool), History::time field,
#   learning_curves_by_index(), nested-Vec public add_events().
# - 90 tests green: 68 lib + 10 api_shape + 6 game + 4 record_winner +
#   2 equivalence.

# After T3 (2026-04-24, same hardware)

Batch::iteration (seq, no rayon)       23.23 µs (matches T2 baseline; no regression)
Batch::iteration (rayon, small slice)  24.57 µs (within noise; small workloads pay rayon overhead)
Gaussian::add                          236.62 ps (unchanged)
Gaussian::sub                          236.43 ps (unchanged)
Gaussian::mul                          237.05 ps (unchanged)
Gaussian::div                          236.07 ps (unchanged)

# End-to-end history_converge benchmark (Apple M5 Pro, RAYON_NUM_THREADS=auto):
#   workload                                  seq        rayon      speedup
#   500 events, 100 competitors, 10/slice     4.03 ms    4.24 ms    1.0x
#   2000 events, 200 competitors, 20/slice    20.18 ms   19.82 ms   1.0x
#   5000 events, 50000 competitors, 1 slice   11.88 ms   9.10 ms    1.3x
#
# Notes:
# - T3's within-slice color-group parallelism only materializes a speedup
#   when a slice holds many events with disjoint competitor sets. Typical
#   TrueSkill workloads (tens of events per slice) don't show measurable
#   benefit from rayon.
# - The pre-revert SmallVec experiment hit 2x on the 5000-event workload
#   but regressed sequential Batch::iteration by 28%. The tradeoff wasn't
#   worth it for typical workloads — the SmallVec<[_; 8]> inline size (1 KB per
#   Game struct) hurt cache locality on the hot path.
# - Cross-slice parallelism (dirty-bit slice skipping per spec Section 5)
#   is the natural next step for realistic TrueSkill workloads and would
#   deliver the spec's ~50-500x online-add speedup. Deferred to T4+.
# - Determinism verified: tests/determinism.rs asserts bit-identical
#   posteriors across RAYON_NUM_THREADS={1, 2, 4, 8}.
# - Send + Sync bounds added on Time, Drift<T>, Observer<T>, Factor, Schedule.
# - Rayon is opt-in via `--features rayon`. Default build is unchanged from T2.
@@ -1,45 +1,27 @@
 use std::collections::HashMap;

 use criterion::{Criterion, criterion_group, criterion_main};
 use trueskill_tt::{
-    BETA, GAMMA, IndexMap, MU, P_DRAW, SIGMA, agent::Agent, batch::Batch, drift::ConstantDrift,
-    gaussian::Gaussian, player::Player,
+    BETA, Competitor, GAMMA, KeyTable, MU, P_DRAW, Rating, SIGMA, TimeSlice, drift::ConstantDrift,
+    gaussian::Gaussian, storage::CompetitorStore,
 };

 fn criterion_benchmark(criterion: &mut Criterion) {
-    let mut index = IndexMap::new();
+    let mut index_map = KeyTable::new();

-    let a = index.get_or_create("a");
-    let b = index.get_or_create("b");
-    let c = index.get_or_create("c");
+    let a = index_map.get_or_create("a");
+    let b = index_map.get_or_create("b");
+    let c = index_map.get_or_create("c");

-    let agents = {
-        let mut map = HashMap::new();
+    let mut agents: CompetitorStore<i64, ConstantDrift> = CompetitorStore::new();

-        map.insert(
-            a,
-            Agent {
-                player: Player::new(Gaussian::from_ms(MU, SIGMA), BETA, ConstantDrift(GAMMA)),
+    for agent in [a, b, c] {
+        agents.insert(
+            agent,
+            Competitor {
+                rating: Rating::new(Gaussian::from_ms(MU, SIGMA), BETA, ConstantDrift(GAMMA)),
                 ..Default::default()
             },
         );
-        map.insert(
-            b,
-            Agent {
-                player: Player::new(Gaussian::from_ms(MU, SIGMA), BETA, ConstantDrift(GAMMA)),
-                ..Default::default()
-            },
-        );
-        map.insert(
-            c,
-            Agent {
-                player: Player::new(Gaussian::from_ms(MU, SIGMA), BETA, ConstantDrift(GAMMA)),
-                ..Default::default()
-            },
-        );
-
-        map
-    };
+    }

     let mut composition = Vec::new();
     let mut results = Vec::new();
@@ -51,11 +33,11 @@ fn criterion_benchmark(criterion: &mut Criterion) {
         weights.push(vec![vec![1.0], vec![1.0]]);
     }

-    let mut batch = Batch::new(1, P_DRAW);
-    batch.add_events(composition, results, weights, &agents);
+    let mut time_slice = TimeSlice::new(1, P_DRAW);
+    time_slice.add_events(composition, results, weights, &agents);

     criterion.bench_function("Batch::iteration", |b| {
-        b.iter(|| batch.iteration(0, &agents))
+        b.iter(|| time_slice.iteration(0, &agents))
     });
 }
@@ -1,4 +1,4 @@
-use criterion::{criterion_group, criterion_main, Criterion};
+use criterion::{Criterion, criterion_group, criterion_main};
 use trueskill_tt::gaussian::Gaussian;

 fn benchmark_gaussian_arithmetic(criterion: &mut Criterion) {
@@ -6,36 +6,39 @@ fn benchmark_gaussian_arithmetic(criterion: &mut Criterion) {
     let g1 = Gaussian::from_ms(25.0, 25.0 / 3.0);
     let g2 = Gaussian::from_ms(0.0, 1.0);
     let g3 = Gaussian::from_ms(1.0, 1.0);

     // Benchmark addition
     criterion.bench_function("Gaussian::add", |bencher| {
         bencher.iter(|| g1 + g2);
     });

     // Benchmark subtraction
     criterion.bench_function("Gaussian::sub", |bencher| {
         bencher.iter(|| g1 - g3);
     });

     // Benchmark multiplication
     criterion.bench_function("Gaussian::mul", |bencher| {
         bencher.iter(|| g1 * g2);
     });

     // Benchmark division
+    // NOTE: numerator must have higher precision (smaller sigma) than the
+    // denominator in this representation; g2 (sigma=1) / g1 (sigma=8.33) is
+    // well-defined, whereas g1 / g2 underflows and panics in mu_sigma.
     criterion.bench_function("Gaussian::div", |bencher| {
-        bencher.iter(|| g1 / g2);
+        bencher.iter(|| g2 / g1);
     });

     // Benchmark natural parameter conversions
     criterion.bench_function("Gaussian::pi", |bencher| {
         bencher.iter(|| g1.pi());
     });

     criterion.bench_function("Gaussian::tau", |bencher| {
         bencher.iter(|| g1.tau());
     });

     // Benchmark combined pi/tau operations (used in mul/div)
     criterion.bench_function("Gaussian::pi_tau_combined", |bencher| {
         bencher.iter(|| {
benches/history_converge.rs (new file, 116 lines)
@@ -0,0 +1,116 @@
//! End-to-end History::converge benchmark.
//!
//! Workload shapes designed to expose rayon's within-slice color-group
//! parallelism. Events in the same color group are processed in parallel
//! via direct-write with disjoint index sets (no data races). Color groups
//! smaller than a threshold fall back to the sequential path to avoid
//! rayon overhead on small workloads.
//!
//! On Apple M5 Pro, the P-core count (6) is the optimal thread count.
//! The rayon thread pool is initialised to `min(P-cores, available)` to
//! avoid scheduling onto the slower E-cores.
//!
//! ## Results (Apple M5 Pro, 2026-04-24, after SmallVec revert)
//!
//! | Workload                                      | Sequential | Parallel | Speedup |
//! |-----------------------------------------------|-----------:|---------:|--------:|
//! | History::converge/500x100@10perslice          | 4.03 ms    | 4.24 ms  | 1.0×    |
//! | History::converge/2000x200@20perslice         | 20.18 ms   | 19.82 ms | 1.0×    |
//! | History::converge/1v1-5000x50000@5000perslice | 11.88 ms   | 9.10 ms  | 1.3×    |
//!
//! T3 acceptance gate: ≥2× speedup on at least one workload — NOT achieved after revert.
//! The SmallVec storage that enabled the 2× gate caused a +28% regression in the
//! sequential Batch::iteration benchmark and was reverted. Small workloads still fall
//! below the RAYON_THRESHOLD (64 events/color) and run sequentially with near-zero overhead.

use criterion::{BatchSize, Criterion, criterion_group, criterion_main};
use smallvec::smallvec;
use trueskill_tt::{
    ConstantDrift, ConvergenceOptions, Event, History, Member, NullObserver, Outcome, Team,
};

fn build_history_1v1(
    n_events: usize,
    n_competitors: usize,
    events_per_slice: usize,
    seed: u64,
) -> History<i64, ConstantDrift, NullObserver, String> {
    // Minimal LCG so the benchmark stays deterministic without a rand dependency.
    let mut rng = seed;
    let mut next = || {
        rng = rng
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        rng
    };

    let mut h = History::<i64, _, _, String>::builder_with_key()
        .mu(25.0)
        .sigma(25.0 / 3.0)
        .beta(25.0 / 6.0)
        .drift(ConstantDrift(25.0 / 300.0))
        .convergence(ConvergenceOptions {
            max_iter: 30,
            epsilon: 1e-6,
        })
        .build();

    let mut events: Vec<Event<i64, String>> = Vec::with_capacity(n_events);
    for ev_i in 0..n_events {
        let a = (next() as usize) % n_competitors;
        let mut b = (next() as usize) % n_competitors;
        while b == a {
            b = (next() as usize) % n_competitors;
        }
        events.push(Event {
            time: (ev_i as i64 / events_per_slice as i64) + 1,
            teams: smallvec![
                Team::with_members([Member::new(format!("p{a}"))]),
                Team::with_members([Member::new(format!("p{b}"))]),
            ],
            outcome: Outcome::winner((next() % 2) as u32, 2),
        });
    }
    h.add_events(events).unwrap();
    h
}

fn bench_converge(c: &mut Criterion) {
    // Two representative workloads with small per-slice event counts;
    // they fall below RAYON_THRESHOLD, so the sequential path runs with
    // near-zero overhead.
    c.bench_function("History::converge/500x100@10perslice", |b| {
        b.iter_batched(
            || build_history_1v1(500, 100, 10, 42),
            |mut h| {
                h.converge().unwrap();
            },
            BatchSize::SmallInput,
        );
    });

    c.bench_function("History::converge/2000x200@20perslice", |b| {
        b.iter_batched(
            || build_history_1v1(2000, 200, 20, 42),
            |mut h| {
                h.converge().unwrap();
            },
            BatchSize::SmallInput,
        );
    });

    // Large single-slice workload: 5000 events, 50000 competitors.
    // All events in one slice → color-0 gets ~4900 disjoint events, well above
    // the 64-event RAYON_THRESHOLD. 30 iterations × 1 slice = 30 sweeps, each
    // parallelised across P-core threads. Shows the largest speedup
    // (1.3× after the SmallVec revert).
    c.bench_function("History::converge/1v1-5000x50000@5000perslice", |b| {
        b.iter_batched(
            || build_history_1v1(5000, 50000, 5000, 42),
            |mut h| {
                h.converge().unwrap();
            },
            BatchSize::SmallInput,
        );
    });
}

criterion_group!(benches, bench_converge);
criterion_main!(benches);
cliff.toml (new file, 65 lines)
@@ -0,0 +1,65 @@
# git-cliff ~ configuration file
# https://git-cliff.org/docs/configuration

[changelog]
# A Tera template to be rendered as the changelog's header.
# See https://keats.github.io/tera/docs/#introduction
header = """
# Changelog\n
All notable changes to this project will be documented in this file.\n
"""
# A Tera template to be rendered for each release in the changelog.
# See https://keats.github.io/tera/docs/#introduction
body = """
{% if version %}\
## {{ version | trim_start_matches(pat="v") }} - {{ timestamp | date(format="%Y-%m-%d") }}
{% else %}\
## Unreleased
{% endif %}\
{% for group, commits in commits | group_by(attribute="group") %}
### {{ group | upper_first }}
{% for commit in commits %}
- {{ commit.message | split(pat="\n") | first | trim_end }}\
{% endfor %}
{% endfor %}\n
"""
# A Tera template to be rendered as the changelog's footer.
# See https://keats.github.io/tera/docs/#introduction
footer = """
<!-- generated by git-cliff -->
"""
# Remove leading and trailing whitespace from the changelog's body.
trim = true

[git]
# Parse commits according to the conventional commits specification.
# See https://www.conventionalcommits.org
conventional_commits = false
# Exclude commits that do not match the conventional commits specification.
filter_unconventional = false
# Split commits on newlines, treating each line as an individual commit.
split_commits = false
# An array of regex-based parsers for extracting data from the commit message.
# Assigns commits to groups.
# Optionally sets the commit's scope and can decide to exclude commits from further processing.
commit_parsers = [
    { message = "^feat", group = "Features" },
    { message = "^fix", group = "Bug Fixes" },
    { message = "^doc", group = "Documentation" },
    { message = "^perf", group = "Performance" },
    { message = "^refactor", group = "Refactor" },
    { message = "^style", group = "Styling" },
    { message = "^test", group = "Testing" },
    { message = "^chore\\(release\\): prepare for", skip = true },
    { message = "^chore", group = "Miscellaneous Tasks" },
    { body = ".*security", group = "Security" },
    { body = ".*", group = "Other (unconventional)" },
]
# Exclude commits that are not matched by any commit parser.
filter_commits = false
# Order releases topologically instead of chronologically.
topo_order = false
# Order of commits in each group/release within the changelog.
# Allowed values: newest, oldest
sort_commits = "oldest"
docs/superpowers/plans/2026-04-23-t0-numerical-parity.md (new file, 1544 lines): diff suppressed because it is too large
docs/superpowers/plans/2026-04-24-t1-factor-graph.md (new file, 1658 lines): diff suppressed because it is too large
docs/superpowers/plans/2026-04-24-t2-new-api-surface.md (new file, 2757 lines): diff suppressed because it is too large
docs/superpowers/plans/2026-04-24-t3-concurrency.md (new file, 1249 lines): diff suppressed because it is too large
@@ -0,0 +1,619 @@
# TrueSkill-TT Engine Redesign — Design

**Date:** 2026-04-23
**Status:** Approved (pending implementation plan)

## Summary

Comprehensive redesign of the TrueSkill-TT engine targeting four orthogonal goals:

1. **Performance** — substantially faster offline convergence and incremental online updates.
2. **Accuracy and richer match formats** — support for score margins, free-for-all with partial orders, correlated skills.
3. **Better convergence** — replace ad-hoc capped iteration with a pluggable `Schedule` trait covering all three nested loops.
4. **Better API surface** — typed event description, observer-based progress reporting, generic time axis, structured errors, ergonomic builders.

The design is comprehensive (Approach 1 of three considered) but delivered in five tiers so each step is independently shippable and validated by benchmarks.
## Goals & non-goals

**Goals**

- 10–30× speedup on the offline convergence path for representative workloads (1000+ players, 1000+ events, 30 iterations)
- Order-of-magnitude speedup on incremental "add a single event" workloads
- Pluggable factor graph allowing new factor types without engine changes
- Optional Rayon-backed parallelism on top of `Send + Sync`-correct internals
- Typed, ergonomic public API; replace nested `Vec<Vec<Vec<_>>>` shapes with `Event<T, K>` / `Team<K>` / `Member<K>`
- Generic time axis: `Untimed`, `i64`, or user-supplied
- Observer-based progress reporting instead of `verbose: bool` + `println!`
- Structured `Result<_, InferenceError>` at API boundaries

**Non-goals**

- WebAssembly support is not a goal; we may break it if a crate or feature requires it.
- No GPU offload.
- No `no_std` support.
- No persistent format / serde — a possible future feature.
- No replacement of the Gaussian/EP approximation itself in this design (the underlying inference math stays the same; we change layout, dispatch, scheduling, and API around it).

## Workload assumptions

Baseline workload that drives performance decisions:

- ~1000+ players
- ~1000+ events total
- ~50–60 events per time slice (per day)
- Both online (incremental adds) and offline (full convergence) are common
- Offline convergence runs frequently
## Section 1 — Core types & traits

The foundation everything else builds on.

### `Gaussian` — natural-parameter storage

Switch storage from `(mu, sigma)` to natural parameters `(pi, tau)`, where `pi = sigma⁻²` and `tau = mu · pi`. Multiplication and division dominate the hot path; in natural parameters they are direct adds/subtracts of the components, with no `sqrt`. Reads of `mu`/`sigma` become accessor methods (`tau / pi`, `1.0 / pi.sqrt()`). The trade-off is correct because reads are vanishingly rare compared to writes in EP.

```rust
pub struct Gaussian { pi: f64, tau: f64 }
pub const UNIFORM: Gaussian = Gaussian { pi: 0.0, tau: 0.0 }; // replaces N_INF
```
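As a concrete illustration (a sketch under the definitions above, not the crate's actual implementation), the product and quotient of Gaussian densities in natural parameters reduce to component-wise adds and subtracts:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
pub struct Gaussian { pi: f64, tau: f64 }

impl Gaussian {
    pub fn from_mu_sigma(mu: f64, sigma: f64) -> Self {
        let pi = 1.0 / (sigma * sigma); // precision = sigma⁻²
        Gaussian { pi, tau: mu * pi }   // precision-adjusted mean
    }
    // Reads recover (mu, sigma); only reads pay the division/sqrt.
    pub fn mu(&self) -> f64 { self.tau / self.pi }
    pub fn sigma(&self) -> f64 { 1.0 / self.pi.sqrt() }

    /// Product of two Gaussian densities: precisions and precision-means add.
    pub fn mul(self, other: Gaussian) -> Gaussian {
        Gaussian { pi: self.pi + other.pi, tau: self.tau + other.tau }
    }
    /// Quotient (the EP cavity computation): component-wise subtract.
    pub fn div(self, other: Gaussian) -> Gaussian {
        Gaussian { pi: self.pi - other.pi, tau: self.tau - other.tau }
    }
}

/// Uniform (improper) Gaussian: the multiplicative identity.
pub const UNIFORM: Gaussian = Gaussian { pi: 0.0, tau: 0.0 };
```

Note that `div` exactly inverts `mul`, which is why the cavity computation in EP is cheap in this representation.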
### `Time` trait

Replaces the bare `i64` time field. Keeps `History` parametric.

```rust
pub trait Time: Copy + Ord + Send + Sync + 'static {
    fn elapsed_to(&self, later: &Self) -> i64;
}
pub struct Untimed; // ZST for the no-time-axis case
impl Time for Untimed { fn elapsed_to(&self, _: &Self) -> i64 { 0 } }
impl Time for i64 { fn elapsed_to(&self, later: &Self) -> i64 { later - self } }
// Optional impls behind feature flags: time::OffsetDateTime, chrono types
```
### `Drift<T>` trait

Generic over `T: Time` so seasonal/calendar-aware drift is possible without going through `i64`.

```rust
pub trait Drift<T: Time>: Copy + Send + Sync {
    fn variance_delta(&self, from: &T, to: &T) -> f64;
}
```

`ConstantDrift(f64)` impl: `from.elapsed_to(to) as f64 * gamma * gamma`.
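A minimal sketch of how `ConstantDrift` could satisfy the trait under the formula above (the `Send + Sync` bounds and `Untimed` impl are omitted here for brevity):

```rust
pub trait Time: Copy + Ord {
    fn elapsed_to(&self, later: &Self) -> i64;
}
impl Time for i64 {
    fn elapsed_to(&self, later: &Self) -> i64 { later - self }
}

pub trait Drift<T: Time>: Copy {
    /// Skill variance added between two points on the time axis.
    fn variance_delta(&self, from: &T, to: &T) -> f64;
}

/// Adds gamma² of variance per unit of elapsed time.
#[derive(Clone, Copy)]
pub struct ConstantDrift(pub f64);

impl<T: Time> Drift<T> for ConstantDrift {
    fn variance_delta(&self, from: &T, to: &T) -> f64 {
        from.elapsed_to(to) as f64 * self.0 * self.0
    }
}
```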
### `Index` and `KeyTable<K>`

`Index(usize)` is the handle into dense per-`History` `Vec` storage. It is public, but intended for power users on hot paths who want to skip the `KeyTable` lookup; the casual API takes `&K`. `KeyTable<K>` (renamed from `IndexMap` to avoid colliding with the `indexmap` crate's type) maps user keys → `Index`.
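A minimal sketch of what `KeyTable<K>` might look like (the `get_or_intern`/`lookup` method names and the internal layout are assumptions for illustration, not the crate's actual code):

```rust
use std::collections::HashMap;
use std::hash::Hash;

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct Index(pub usize);

/// Maps arbitrary user keys to dense `Index` handles.
pub struct KeyTable<K> {
    forward: HashMap<K, Index>,
    reverse: Vec<K>, // reverse[i.0] recovers the key for Index i
}

impl<K: Eq + Hash + Clone> KeyTable<K> {
    pub fn new() -> Self {
        Self { forward: HashMap::new(), reverse: Vec::new() }
    }

    /// Returns the existing Index for `key`, or interns a fresh one.
    pub fn get_or_intern(&mut self, key: &K) -> Index {
        if let Some(&idx) = self.forward.get(key) {
            return idx;
        }
        let idx = Index(self.reverse.len());
        self.forward.insert(key.clone(), idx);
        self.reverse.push(key.clone());
        idx
    }

    pub fn lookup(&self, key: &K) -> Option<Index> {
        self.forward.get(key).copied()
    }
}
```

Because indices are handed out densely from zero, every engine-side `HashMap<Index, T>` can become a flat `Vec<T>` (Section 3).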
### `Observer` trait

Replaces `verbose: bool` + `println!`. Default no-op methods; the user overrides only what they need.

```rust
pub trait Observer<T: Time>: Send + Sync {
    fn on_iteration_end(&self, _iter: usize, _max_step: (f64, f64)) {}
    fn on_batch_processed(&self, _time: &T, _idx: usize, _n_events: usize) {}
    fn on_converged(&self, _iters: usize, _final_step: (f64, f64)) {}
}
pub struct NullObserver;
impl<T: Time> Observer<T> for NullObserver {}
```

### Trade-offs

- `Gaussian` natural-param representation: anyone reading `mu`/`sigma` in a hot loop pays a sqrt — but that's the right trade, since hot reads are rare.
- `Time` as a trait (not an enum) keeps it open-ended at zero runtime cost; the default `History<i64, _>` keeps call sites familiar.
- `Observer` is a trait (not a closure) so different sites can have different signatures without losing type safety. `NullObserver` is a ZST.
## Section 2 — Factor graph architecture

The current `Game::likelihoods` is a hand-rolled, hard-coded graph. To unlock richer formats and let us experiment with EP schedules, the graph itself becomes a data structure.

### Variable / Factor model

Variables hold their current Gaussian marginal. Factors hold their outgoing messages to each connected variable and do the local computation. Standard EP: a factor's update is "divide marginal by old outgoing message → cavity → apply local approximation → multiply marginal by new outgoing message."

```rust
pub trait Factor: Send + Sync {
    fn variables(&self) -> &[VarId];
    fn propagate(&mut self, vars: &mut VarStore) -> (f64, f64); // returns max delta
    fn log_evidence(&self, _vars: &VarStore) -> f64 { 0.0 }
}
```
### Built-in factor catalog

| Factor | Purpose | Status |
|---|---|---|
| `PerformanceFactor` | skill → performance (add β² noise, optional weight) | replaces inline `performance() * weight` |
| `TeamSumFactor` | weighted sum of player perfs → team perf | replaces inline `fold` |
| `RankDiffFactor` | (team_a perf) − (team_b perf) → diff var | currently `team[e].posterior_win() − team[e+1].posterior_lose()` |
| `TruncFactor` | EP truncation: `P(diff > margin)` or `P(|diff| < margin)` for draws | wraps current `v_w` / `approx` |
| `MarginFactor` *(future)* | use observed score margin as soft evidence | enables richer match formats |
| `SynergyFactor` *(future)* | couples teammates' skills | enables different topology |
| `ScoreFactor` *(future)* | continuous outcome (e.g., points scored) | enables score-based outcomes |

The first four together exactly reproduce today's algorithm. The last three are extension slots.
### Game = factor graph + schedule

```rust
pub struct Game<S: Schedule = DefaultSchedule> {
    vars: VarStore,      // SoA: Vec<Gaussian> marginals
    factors: FactorList, // enum dispatch over BuiltinFactor (see Open Questions)
    schedule: S,
}
```

Lean toward **enum dispatch** (`enum BuiltinFactor { Perf(...), Sum(...), RankDiff(...), Trunc(...), ... }`) over `Box<dyn Factor>` for the built-ins:

- avoids per-message vtable overhead in the hottest loop
- keeps factor data inline (no heap indirection)
- still allows user-defined factors via a `BuiltinFactor::Custom(Box<dyn Factor>)` variant
### Schedule trait

Controls iteration order and stopping. The default matches current behavior (sweep forward, then backward, until ε or max iterations). Pluggable so we can later try damped EP or junction-tree schedules.

### High-level constructors

```rust
Game::ranked(teams, results, options) // dominant case
Game::free_for_all(players, ranking)  // FFA with possible ties
Game::custom(builder)                 // power users build their own graph
```

`GameOptions` carries the iteration cap, epsilon, p_draw, and approximation choice. Today these are scattered between method args and module constants.

### Trade-offs

- Enum dispatch over trait objects for built-ins; richer factors drop in as new enum variants.
- Variables and factor messages are stored as `Vec<Gaussian>` indexed by `VarId` / edge slot — flat and cache-friendly.
- `Schedule` is a generic parameter (zero-cost); most users get the default, and experimentation stays open.

### Open question

Whether `enum BuiltinFactor` will feel too closed-world. The `Custom(Box<dyn Factor>)` escape hatch helps, but inner-loop performance for user factors will be slower. Acceptable for now; flagged for revisit if it becomes a problem.
## Section 3 — Storage layout (SoA + arenas)

### Dense Vec keyed by `Index`

Every `HashMap<Index, T>` becomes a `Vec<T>` (or `Vec<Option<T>>` for sparse data) indexed directly by `Index.0`. The public-facing `KeyTable<K>` continues to map arbitrary keys → `Index`.

### SoA at hot layers, AoS at boundaries

The `Skill` struct stays as a public type for the API (returned from `learning_curves`, etc.), but inside `TimeSlice` we lay it out column-wise:

```rust
struct TimeSliceSkills {
    forward: Vec<Gaussian>, // [n_agents]
    backward: Vec<Gaussian>,
    likelihood: Vec<Gaussian>,
    online: Vec<Gaussian>,
    elapsed: Vec<i64>,
    present: Vec<bool>,
}
```

Within a slice, the inner loops touch one column repeatedly across many events — keeping the column contiguous improves cache utilization and makes the eventual SIMD step (Section 6) straightforward.

`Gaussian` itself stays a single 16-byte struct in the `Vec<Gaussian>`. Splitting it into two parallel `Vec<f64>`s wins for pure SIMD over thousands of Gaussians but loses for the random-access patterns dominant in EP. Revisit if benchmarks demand it.
### Arena allocator inside `Game`

Replace per-event allocations with a `ScratchArena` reused across calls.

```rust
pub struct ScratchArena {
    var_buf: Vec<Gaussian>,
    factor_buf: Vec<Gaussian>, // edge messages
    bool_buf: Vec<bool>,
    f64_buf: Vec<f64>,
}
impl ScratchArena {
    fn reset(&mut self); // sets len=0, keeps capacity
    fn alloc_vars(&mut self, n: usize) -> &mut [Gaussian];
}
```

`TimeSlice` owns one `ScratchArena`; each event borrows it for the duration of its `Game` construction and inference. For the parallel-slice story (Section 6), each Rayon task gets its own arena.
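To make the reuse pattern concrete, here is a self-contained sketch of one arena buffer (using `f64` slots in place of `Gaussian` so it stands alone; the point is that `reset` keeps capacity, so steady-state calls allocate nothing):

```rust
/// Reusable scratch buffer: lengths are cleared between events,
/// but the backing allocation is kept.
#[derive(Default)]
pub struct ScratchArena {
    var_buf: Vec<f64>, // stands in for Vec<Gaussian> in this sketch
}

impl ScratchArena {
    /// Clear the length but keep the allocation.
    pub fn reset(&mut self) {
        self.var_buf.clear();
    }

    /// Hand out `n` zeroed slots from the reused buffer.
    /// After warm-up, `resize` within existing capacity does not allocate.
    pub fn alloc_vars(&mut self, n: usize) -> &mut [f64] {
        let start = self.var_buf.len();
        self.var_buf.resize(start + n, 0.0);
        &mut self.var_buf[start..]
    }
}
```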
### Per-event storage layout

Inside a `TimeSlice`, each event is stored column-wise as well, with `Item` inlined into team-level parallel arrays:

```rust
struct EventStorage {
    teams: SmallVec<[TeamStorage; 4]>,
    outcome: Outcome,
    weights: SmallVec<[SmallVec<[f64; 4]>; 4]>,
    evidence: f64,
}
struct TeamStorage {
    competitors: SmallVec<[Index; 4]>,      // who's on the team
    edge_messages: SmallVec<[Gaussian; 4]>, // outgoing message per slot
    output: f64,
}
```

Iteration over `(competitor, edge_message)` pairs zips two slices — no per-element struct.
### SmallVec for typical shapes

Teams typically have ≤ ~5 players and games ≤ ~8 teams. `SmallVec<[T; 8]>` for an event's team list and `SmallVec<[T; 4]>` for team rosters keeps the common case allocation-free.

### Trade-offs

- Dense `Vec<T>` keyed by `Index` is faster but means agent removal needs tombstones (or just leaves slots present-but-inactive). Acceptable: TrueSkill histories rarely remove players.
- SoA at the `TimeSlice` level only, not at the `History` level. `History` keeps `Vec<TimeSlice>` because slices are heterogeneous in size.
- One `ScratchArena` per `TimeSlice` keeps the lifetime story simple.

### Open question

The `TimeSliceSkills` sketch above uses (b) **dense + present mask**: one slot per agent in the history, indexed directly by `Index`, with a `present: Vec<bool>` mask for batches the agent didn't participate in. The alternative is (a) **sparse columnar**: a `Vec<Index>` of present agents and parallel `Vec<Gaussian>` columns of length `n_present`, with a separate lookup (binary search or an auxiliary table) to find a given `Index`'s slot.

(b) gives O(1) lookup and SIMD-friendly columns but wastes memory for sparsely populated slices. (a) is leaner per slice but pays a per-lookup cost in the inner loop. Bench both during T0 and pick. Default proposal: (b), since modern systems are memory-rich and the parallelism story is cleaner.
## Section 4 — API surface

### Typed event description

```rust
pub struct Event<T: Time, K> {
    pub time: T,
    pub teams: SmallVec<[Team<K>; 4]>,
    pub outcome: Outcome,
}

pub struct Team<K> {
    pub members: SmallVec<[Member<K>; 4]>,
}

pub struct Member<K> {
    pub key: K,
    pub weight: f64,           // default 1.0
    pub prior: Option<Rating>, // per-event override
}

pub enum Outcome {
    Ranked(SmallVec<[u32; 4]>), // rank per team; equal ranks = tie
    Scored(SmallVec<[f64; 4]>), // continuous score per team (engages MarginFactor)
}
```
`Outcome::winner(0)`, `Outcome::draw()`, and `Outcome::ranking([0, 1, 2])` are convenience constructors.
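A sketch of how those constructors could desugar to `Ranked` vectors (assumptions: `Vec` stands in for `SmallVec`, and a team-count argument is added so the sketch can build full rank vectors — the design shows `winner(0)` without it):

```rust
#[derive(Clone, PartialEq, Debug)]
pub enum Outcome {
    Ranked(Vec<u32>), // rank per team; equal ranks = tie
}

impl Outcome {
    /// Team `winner` gets rank 0; every other team gets rank 1.
    /// (Team count is an assumption of this sketch.)
    pub fn winner(winner: usize, n_teams: usize) -> Outcome {
        Outcome::Ranked(
            (0..n_teams).map(|t| if t == winner { 0 } else { 1 }).collect(),
        )
    }

    /// All teams share rank 0 — equal ranks encode the tie.
    pub fn draw(n_teams: usize) -> Outcome {
        Outcome::Ranked(vec![0; n_teams])
    }

    /// Explicit rank per team, for FFA and partial orders.
    pub fn ranking(ranks: impl Into<Vec<u32>>) -> Outcome {
        Outcome::Ranked(ranks.into())
    }
}
```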
### Builders

```rust
let mut history = History::<i64, _>::builder()
    .mu(25.0).sigma(25.0 / 3.0).beta(25.0 / 6.0)
    .drift(ConstantDrift(0.03))
    .p_draw(0.10)
    .convergence(ConvergenceOptions { max_iter: 30, epsilon: 1e-6 })
    .observer(LogObserver::default())
    .build();
```

For the no-time case, name `Untimed` in the type parameter:

```rust
let mut history = History::<Untimed, _>::builder().build();
```
### Three-tier event ingestion

```rust
// 1. Bulk ingestion (high-throughput path)
history.add_events(events_iter)?;

// 2. One-off match (very common in practice)
history.record_winner("alice", "bob", time)?;
history.record_draw("alice", "bob", time)?;

// 3. Builder for irregular shapes
history.event(time)
    .team(["alice", "bob"]).weights([1.0, 0.7])
    .team(["carol"])
    .ranking([1, 0])
    .commit()?;
```
### Convergence & queries

```rust
let report: ConvergenceReport = history.converge()?;

let curve: Vec<(i64, Gaussian)> = history.learning_curve(&"alice");
let all = history.learning_curves();       // HashMap<&K, Vec<(T, Gaussian)>>
let now = history.current_skill(&"alice"); // Option<Gaussian>

let ev = history.log_evidence();
let ev_for = history.log_evidence_for(&["alice", "bob"]);

let q = history.predict_quality(&[&["alice"], &["bob"]]);
let p_win = history.predict_outcome(&[&["alice"], &["bob"]]);
```
### Standalone Game

```rust
let g = Game::ranked(&[&[alice], &[bob]], Outcome::winner(0), &options);
let post = g.posteriors();

// Convenience
let (a, b) = Game::one_v_one(&alice, &bob, Outcome::winner(0));
```
### Errors

Replace `debug_assert!`/`panic!` at the API boundary with `Result`.

```rust
pub enum InferenceError {
    MismatchedShape { kind: &'static str, expected: usize, got: usize },
    InvalidProbability { value: f64 },
    ConvergenceFailed { last_step: (f64, f64), iterations: usize },
    NegativePrecision { pi: f64 },
}
```

Hot inner loops still use `debug_assert!` for invariants the API has already enforced.

### Trade-offs

- Generic over the user's `K`; the engine works in `Index`. Public outputs use `&K`.
- `SmallVec` everywhere on the event-description path.
- A three-tier API so casual users don't drown in types and bulk users still get throughput.
- The `Outcome` enum replaces the "lower number wins" `&[f64]` convention.

### Open question

Whether to expose `Index` directly to users via an `intern_key(&K) -> Index` method, letting hot-path callers skip the `KeyTable` lookup on every call. Recommendation: yes — a public `Index` handle plus `history.lookup<Q: Borrow<K>>(&Q) -> Option<Index>`. The casual API still takes `&K` everywhere; power users can promote to `Index` when profiling demands it.
## Section 4½ — Naming pass

| Current | New | Rationale |
|---|---|---|
| `History` | `History` (kept) | Matches upstream; reads cleanly. |
| `Batch` | `TimeSlice` | Says what it is: every event sharing one timestamp. |
| `Player` | `Rating` | The struct holds prior/beta/drift — that's a rating configuration. Resolves the `Player`/`Agent` confusion. |
| `Agent` | `Competitor` | Holds dynamic state for someone competing in the history; fits the domain. |
| `Skill` | `Skill` (kept) | Per-time-slice skill estimate; clearer than `BatchSkill`. |
| `Item` | inlined into `TeamStorage` columns (engine) / `Member<K>` (public) | Eliminates the per-element struct in the hot path; gives API users a clear "team member" name. |
| `Game` | `Game` (kept) | `Match` collides with Rust's `match`. |
| `Index` | `Index` (kept) | Internal handle. |
| `IndexMap` | `KeyTable` | Avoids confusion with the `indexmap` crate. |
## Section 5 — Convergence & message scheduling

### Three nested loops, one mechanism

The system has three nested convergence loops:

1. Within-game: EP sweeps over the factor graph
2. Within-time-slice: re-running games as inputs change
3. Cross-history: a forward pass then a backward pass over all slices

All three implement `Workload`; one `Schedule` impl drives all of them.

```rust
pub trait Schedule {
    fn run<W: Workload>(&self, workload: &mut W) -> ScheduleReport;
}

pub trait Workload {
    fn step(&mut self) -> (f64, f64);
    fn snapshot_evidence(&self) -> f64 { 0.0 }
}

pub struct ScheduleReport {
    pub iterations: usize,
    pub final_step: (f64, f64),
    pub converged: bool,
}
```
### Built-in schedules

| Schedule | Behavior | Use |
|---|---|---|
| `EpsilonOrMax { eps, max }` | Default. Sweep until `(dpi, dtau) ≤ eps` or `max` iterations. | All three loops. Replicates current behavior. |
| `Damped { eps, max, alpha }` | Same, but writes `α·new + (1−α)·old`. | Stuck oscillations. |
| `Residual { eps, max }` | Priority queue: re-update the factor with the largest pending delta first. | Faster convergence on uneven graphs. |
| `OneShot` | Exactly one pass, no convergence check. | Online incremental adds. |
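A minimal sketch of the default `EpsilonOrMax` loop driving a `Workload` (simplified relative to the design: a single tolerance for both deltas, and a concrete method instead of the generic `Schedule` trait):

```rust
pub trait Workload {
    /// One full sweep; returns the largest (|Δpi|, |Δtau|) written.
    fn step(&mut self) -> (f64, f64);
}

pub struct ScheduleReport {
    pub iterations: usize,
    pub final_step: (f64, f64),
    pub converged: bool,
}

pub struct EpsilonOrMax { pub eps: f64, pub max: usize }

impl EpsilonOrMax {
    pub fn run<W: Workload>(&self, workload: &mut W) -> ScheduleReport {
        let mut last = (f64::INFINITY, f64::INFINITY);
        for i in 1..=self.max {
            last = workload.step();
            // Stop as soon as both natural-parameter deltas are small enough.
            if last.0 <= self.eps && last.1 <= self.eps {
                return ScheduleReport { iterations: i, final_step: last, converged: true };
            }
        }
        ScheduleReport { iterations: self.max, final_step: last, converged: false }
    }
}
```

The same driver runs the within-game, within-slice, and cross-history loops; only the `Workload::step` body differs.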
### Stopping in natural-param space

Switch from `(|Δmu|, |Δsigma|) ≤ epsilon` to `(|Δpi|, |Δtau|) ≤ (eps_pi, eps_tau)`:

- `mu` and `sigma` are on different scales; one tolerance is wrong for both
- We store in natural parameters anyway — checking convergence in mu/sigma would pay needless sqrts
- The natural-param delta is the natural geometry of the EP fixed point

`EpsilonOrMax::default()` exposes a single `epsilon` for simplicity; an advanced constructor exposes both tolerances.
### Within-game improvements

- Replace the hard cap of 10 iterations with `GameOptions::schedule`, propagating the `ScheduleReport` upward
- Fast path: graphs with no diff chain (1v1 where one iteration suffices) skip the loop entirely
- FFA / many-team rankings benefit from `Residual`; opt-in

### Within-slice and cross-history improvements

- **No more old/new HashMap snapshotting**: track deltas inline as we write under SoA
- **Per-slice dirty bits**: a `TimeSlice` whose neighbor messages haven't changed since its last full sweep doesn't need to re-run. Track `time_slice.dirty` and skip clean slices during the cross-history sweep. Big win for online adds (the locality case).
### `ConvergenceReport`

```rust
pub struct ConvergenceReport {
    pub iterations: usize,
    pub final_step: (f64, f64),
    pub log_evidence: f64,
    pub converged: bool,
    pub per_iteration_time: SmallVec<[Duration; 32]>,
    pub batches_skipped: usize,
}
```

`Observer` continues to receive per-iteration callbacks for live UI; `ConvergenceReport` is the post-hoc summary.

### Trade-offs

- One `Schedule` trait shared across loops — fewer concepts, more composable.
- Convergence checks in natural-param space — a slightly different exact threshold than today; test epsilons are re-tuned mechanically.
- Dirty-bit skipping changes iteration order vs. today; the fixed point is the same, but iteration counts may shift downward.
- `Residual` and `Damped` are opt-in; the default behavior closely matches today's.

### Open question

Whether `Schedule::run` should take an optional `Observer` reference. Recommendation: observation lives at a higher layer (`History::converge` calls observer hooks; `Schedule` is purely the loop driver).
## Section 6 — Concurrency & parallelism

### What's parallelizable

| Operation | Parallelism | Strategy |
|---|---|---|
| `History::converge()` (full forward+backward) | Sequential across slices | Within each slice: color-group events in parallel via Rayon |
| `History::add_events(...)` | Sequential append, but ingestion of typed events into `EventStorage` parallelizes trivially | n/a |
| `History::learning_curves()` | Per-key parallel | `into_par_iter()` |
| `History::log_evidence_for(targets)` | Per-batch parallel, reduce sum | `par_iter().map(...).sum()` |
| `Game` inference | Sequential | n/a (too small to amortize Rayon overhead) |
### Within-slice color-group parallelism

When events are added to a slice, partition them into color groups such that events in the same color touch no shared `Index`. Within a color, run events in parallel via Rayon; across colors, run sequentially. This preserves asynchronous-EP semantics exactly.
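A sketch of the greedy partitioning (events are reduced to plain lists of competitor indices here; the engine would work over `Index` values, and a production version would use an adjacency structure rather than the quadratic scan):

```rust
use std::collections::HashSet;

/// Greedily assign each event the smallest color not used by any earlier
/// event it shares a competitor index with. Events within one color are
/// mutually disjoint, so they can run in parallel without touching
/// each other's skills.
pub fn color_events(events: &[Vec<usize>]) -> Vec<usize> {
    let mut colors = vec![0usize; events.len()];
    for i in 0..events.len() {
        let mine: HashSet<usize> = events[i].iter().copied().collect();
        let mut used = HashSet::new();
        for j in 0..i {
            if events[j].iter().any(|ix| mine.contains(ix)) {
                used.insert(colors[j]); // conflicting event's color is taken
            }
        }
        let mut c = 0;
        while used.contains(&c) { c += 1; }
        colors[i] = c;
    }
    colors
}
```

Because the assignment depends only on event order, the partition — and therefore the parallel execution grouping — is deterministic across thread counts.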
Alternative: synchronous EP with a snapshot. All events read from a frozen skill snapshot and write deltas to thread-local buffers, which a barrier then merges. Trivially parallel but weaker per-iteration convergence — it needs damping. Available as a `Schedule` impl, opt-in.
### `Send + Sync` requirements

All public traits (`Time`, `Drift`, `Observer`, `Factor`, `Schedule`) require `Send + Sync`. `Observer` impls must be thread-safe (they are called from arbitrary worker threads).

### Rayon as a default-on feature

`rayon` is a default-on cargo feature; with `default-features = false`, the parallel paths fall back to sequential iterators behind `cfg(feature = "rayon")`.
### Expected speedup ballpark

For 1000 players, 60 events/slice × 1000 slices, 30 convergence iterations:

| Source | Estimated speedup vs. today |
|---|---|
| `HashMap` → dense `Vec` | 2–4× |
| Natural-param `Gaussian`, no-sqrt mul/div | 1.5–2× |
| Pre-allocated `ScratchArena` | 1.2–1.5× |
| Color-group parallel events in slice (8 cores) | 2–4× |
| Dirty-bit slice skipping (online add case) | 5–50× |
| **Combined (offline converge)** | ~10–30× |
| **Combined (online add)** | ~50–500×, depending on locality |

These are pre-implementation estimates. Each tier is validated with criterion benchmarks.
### Trade-offs

- Color-group parallelism requires up-front graph coloring at ingestion. Cost: linear in events, run once per `add_events`. Cheap.
- Default = asynchronous EP (preserves current semantics). Synchronous EP is opt-in only.
- The cross-slice sweep stays sequential; no speculative parallel sweeps.
- Rayon is default-on but feature-gated.

### Open question

Whether to expose color-group partitioning to users. Recommendation: hidden by default, with an escape hatch via `add_events_with_partition(...)` for power users who already know their events' independence.
## Section 7 — Migration, testing, and delivery plan

The crate is unreleased, so version-bump ceremony doesn't apply. Tiers are a sequencing of work and milestones, not releases.

### Tier sequence

**T0 — Numerical parity (no API change)**

Internal-only. Public surface unchanged.

- Switch `Gaussian` storage to natural parameters `(pi, tau)`. `mu()`/`sigma()` become accessors.
- Replace `HashMap<Index, _>` with a dense `Vec<_>` keyed by `Index.0` everywhere.
- Introduce `ScratchArena` inside `Batch` so `Game::new` stops allocating per event.
- Drop the `panic!` in `mu_sigma`; return a `Result` propagated upward.

**Acceptance:** existing test suite passes (bit-equal where possible, ULP-bounded where natural-param arithmetic shifts a rounding); `cargo bench` shows a ≥3× win on the `batch` benchmark; no API breakage.
**T1 — Factor graph machinery (internal-only)**

- Introduce `Factor`, `VarStore`, `Schedule` as `pub(crate)` types.
- Re-implement `Game::likelihoods()` on top of `BuiltinFactor::{Perf, TeamSum, RankDiff, Trunc}` driven by `EpsilonOrMax`.
- Replace within-game iteration tracking with `ScheduleReport`.

**Acceptance:** existing test suite passes (ULP-bounded); within-game iteration counts unchanged; benchmarks ≥ T0.

**T2 — New API surface (breaking)**

All renames and the new public API land together. No half-renamed intermediate state.

- New types: `Rating`, `TimeSlice`, `Competitor`, `Member<K>`, `Outcome`, `Event<T, K>`, `KeyTable<K>`.
- `Time` trait introduced; `History<T: Time, D: Drift<T>>` is generic.
- Three-tier API surface: `record_winner`, `event(...).team(...).commit()`, bulk `add_events(iter)`.
- `Observer` trait + `ConvergenceReport`; `verbose: bool` deleted.
- `panic!`/`debug_assert!` at the API boundary become `Result<_, InferenceError>`.
- Promote `Factor`/`Schedule`/`VarStore` to `pub` under a `factors` module.

**Acceptance:** full test suite rewritten in the new API; equivalence tests prove identical posteriors vs. the old API on the same inputs.
**T3 — Concurrency**

- `Send + Sync` audit and bounds on all public traits.
- Color-group partitioning at `TimeSlice` ingestion.
- `rayon` as a default-on feature with a `#[cfg(feature = "rayon")]` fallback.
- Parallel paths: within-slice color groups, `learning_curves`, `log_evidence_for`.

**Acceptance:** deterministic posteriors across `RAYON_NUM_THREADS={1,2,4,8}`; benchmarks show >2× on 8 cores for offline convergence.

**T4 — Richer factor types & schedules**

Each is shipped independently after T3.

- `MarginFactor` → enables `Outcome::Scored`.
- `Damped` and `Residual` schedules.
- `SynergyFactor`, `ScoreFactor` → same pattern when wanted.

Each comes with its own benchmark and a worked example in `examples/`.
### Testing strategy

| Layer | Approach |
|---|---|
| **Numerical correctness** | Keep the existing hardcoded golden values from `test_1vs1`, `test_1vs1_draw`, `test_2vs1vs2_mixed`, etc. unchanged through T0–T1. They are a regression net against the original Python port. |
| **API parity** | T2 adds an `equivalence` test module that runs identical inputs through old vs. new construction and compares posteriors within ULPs. |
| **Property tests** | Add `proptest` for: factor-graph fixed-point invariance under message order, `Outcome` round-trips, `Gaussian` mul/div associativity in natural params, schedule convergence regardless of starting state. |
| **Determinism** | T3 adds tests that run identical input across multiple Rayon thread counts and assert identical posteriors. |
| **Benchmark gates** | Each tier has a "must not regress" gate vs. the previous tier on the existing `batch` and `gaussian` criterion suites. T0 must beat baseline by ≥3×; T1 ≥ T0; etc. |
### Risk management

- **T0 risk: rounding drift in tests.** Mitigation: where natural-param arithmetic legitimately changes the last ULPs, update the goldens *and* simultaneously add a parity test against a snapshot taken from baseline, to prove the difference is bounded.
- **T2 risk: API design mistakes.** Mitigation: review the spec and a worked example before implementing; iterate on feedback.
- **T3 risk: subtle race conditions in color-group partitioning.** Mitigation: `loom` tests for the merge step; deterministic-output assertions across thread counts.
- **Cross-tier risk: scope creep.** Each tier has a closed checklist; new ideas go to the next tier's wishlist.

### What we're explicitly *not* doing

- No GPU offload.
- No `no_std` support.
- No serde / persistence in this design.
- No incremental online API beyond `record_winner` / `add_events`.
## Open questions summary

Collected here for the review pass:

1. **`enum BuiltinFactor` extensibility** — may feel too closed-world; revisit if user-defined factors via `Custom(Box<dyn Factor>)` become common.
2. **Sparse vs. dense per-slice skill storage** — default to dense + `present` mask; sparse columnar is the alternative. Decided by T0 benchmarks.
3. **`Index` exposure for hot paths** — expose `intern_key`/`lookup` so power users can promote `&K` to `Index` and skip the `KeyTable` lookup; the casual API still takes `&K` everywhere.
4. **`Schedule::run` and observer wiring** — observation stays at the higher layer (`History::converge` calls observer hooks; `Schedule` is purely the loop driver).
5. **Color-group partition exposure** — hidden by default, with an escape hatch via `add_events_with_partition(...)`.
113 examples/atp.rs
@@ -1,50 +1,61 @@
 use plotters::prelude::*;
+use smallvec::smallvec;
 use time::{Date, Month};
-use trueskill_tt::{History, IndexMap};
+use trueskill_tt::{Event, History, Member, Outcome, Team, drift::ConstantDrift};
 
 fn main() {
     let mut csv = csv::Reader::open("examples/atp.csv").unwrap();
 
-    let mut composition = Vec::new();
-    let mut results = Vec::new();
-    let mut times = Vec::new();
-
     let from = Date::from_calendar_date(1900, Month::January, 1).unwrap();
     let time_format = time::format_description::parse("[year]-[month]-[day]").unwrap();
 
-    let mut index_map = IndexMap::new();
+    let mut events: Vec<Event<i64, String>> = Vec::new();
 
     for row in csv.records() {
-        if &row["double"] == "t" {
-            let w1_id = index_map.get_or_create(&row["w1_id"]);
-            let w2_id = index_map.get_or_create(&row["w2_id"]);
-
-            let l1_id = index_map.get_or_create(&row["l1_id"]);
-            let l2_id = index_map.get_or_create(&row["l2_id"]);
-
-            composition.push(vec![vec![w1_id, w2_id], vec![l1_id, l2_id]]);
-        } else {
-            let w1_id = index_map.get_or_create(&row["w1_id"]);
-
-            let l1_id = index_map.get_or_create(&row["l1_id"]);
-
-            composition.push(vec![vec![w1_id], vec![l1_id]]);
-        }
-
-        results.push(vec![1.0, 0.0]);
-
         let date = Date::parse(&row["time_start"], &time_format).unwrap();
+        let time = (date - from).whole_days();
 
-        times.push((date - from).whole_days());
+        if &row["double"] == "t" {
+            events.push(Event {
+                time,
+                teams: smallvec![
+                    Team::with_members([
+                        Member::new(row["w1_id"].to_owned()),
+                        Member::new(row["w2_id"].to_owned()),
+                    ]),
+                    Team::with_members([
+                        Member::new(row["l1_id"].to_owned()),
+                        Member::new(row["l2_id"].to_owned()),
+                    ]),
+                ],
+                outcome: Outcome::winner(0, 2),
+            });
+        } else {
+            events.push(Event {
+                time,
+                teams: smallvec![
+                    Team::with_members([Member::new(row["w1_id"].to_owned())]),
+                    Team::with_members([Member::new(row["l1_id"].to_owned())]),
+                ],
+                outcome: Outcome::winner(0, 2),
+            });
+        }
     }
 
-    let mut hist = History::builder().sigma(1.6).gamma(0.036).build();
+    let mut hist: History<i64, _, _, String> = History::builder_with_key()
+        .sigma(1.6)
+        .drift(ConstantDrift(0.036))
+        .convergence(trueskill_tt::ConvergenceOptions {
+            max_iter: 10,
+            epsilon: 0.01,
+        })
+        .build();
 
-    hist.add_events(composition, results, times, vec![]);
-    hist.convergence(10, 0.01, true);
+    hist.add_events(events).unwrap();
+    hist.converge().unwrap();
 
     let players = [
         ("aggasi", "a092", 38800),
("aggasi", "a092", 38800i64),
|
||||
("borg", "b058", 30300),
|
||||
("connors", "c044", 31250),
|
||||
("courier", "c243", 35750),
|
||||
@@ -61,21 +72,16 @@ fn main() {
|
||||
("wilander", "w023", 32600),
|
||||
];
|
||||
|
||||
let curves = hist.learning_curves();
|
||||
|
||||
let mut x_spec = (f64::MAX, f64::MIN);
|
||||
let mut y_spec = (f64::MAX, f64::MIN);
|
||||
|
||||
for (id, cutoff) in players
|
||||
.iter()
|
||||
.map(|&(_, id, cutoff)| (index_map.get_or_create(id), cutoff))
|
||||
{
|
||||
for (ts, gs) in &curves[&id] {
|
||||
if *ts >= cutoff {
|
||||
for &(_, id, cutoff) in &players {
|
||||
for (ts, gs) in hist.learning_curve(id) {
|
||||
if ts >= cutoff {
|
||||
continue;
|
||||
}
|
||||
|
||||
let ts = *ts as f64;
|
||||
let ts = ts as f64;
|
||||
|
||||
if ts < x_spec.0 {
|
||||
x_spec.0 = ts;
|
||||
@@ -85,8 +91,8 @@ fn main() {
|
||||
x_spec.1 = ts;
|
||||
}
|
||||
|
||||
let upper = gs.mu + gs.sigma;
|
||||
let lower = gs.mu - gs.sigma;
|
||||
let upper = gs.mu() + gs.sigma();
|
||||
let lower = gs.mu() - gs.sigma();
|
||||
|
||||
if lower < y_spec.0 {
|
||||
y_spec.0 = lower;
|
||||
@@ -111,24 +117,19 @@ fn main() {
|
||||
|
||||
chart.configure_mesh().draw().unwrap();
|
||||
|
||||
for (idx, (player, id, cutoff)) in players
|
||||
.iter()
|
||||
.map(|&(player, id, cutoff)| (player, index_map.get_or_create(id), cutoff))
|
||||
.enumerate()
|
||||
{
|
||||
for (idx, &(player, id, cutoff)) in players.iter().enumerate() {
|
||||
let mut data = Vec::new();
|
||||
let mut upper = Vec::new();
|
||||
let mut lower = Vec::new();
|
||||
|
||||
for (ts, gs) in curves[&id].iter() {
|
||||
if *ts >= cutoff {
|
||||
for (ts, gs) in hist.learning_curve(id) {
|
||||
if ts >= cutoff {
|
||||
continue;
|
||||
}
|
||||
|
||||
data.push((*ts as f64, gs.mu));
|
||||
|
||||
upper.push((*ts as f64, gs.mu + gs.sigma));
|
||||
lower.push((*ts as f64, gs.mu - gs.sigma));
|
||||
data.push((ts as f64, gs.mu()));
|
||||
upper.push((ts as f64, gs.mu() + gs.sigma()));
|
||||
lower.push((ts as f64, gs.mu() - gs.sigma()));
|
||||
}
|
||||
|
||||
let color = Palette99::pick(idx);
|
||||
@@ -159,10 +160,12 @@ fn main() {
|
||||
}
|
||||
|
||||
mod csv {
|
||||
use std::fs::File;
|
||||
use std::io::{self, BufRead, BufReader, Lines};
|
||||
use std::ops;
|
||||
use std::path::Path;
|
||||
use std::{
|
||||
fs::File,
|
||||
io::{self, BufRead, BufReader, Lines},
|
||||
ops,
|
||||
path::Path,
|
||||
};
|
||||
|
||||
pub struct Reader {
|
||||
header_map: Vec<String>,
|
||||
|
||||
64 graph.d2

@@ -1,64 +0,0 @@
```d2
vars: {
  d2-config: {
    layout-engine: elk
    # Terminal theme code
    theme-id: 300
  }
}

History: {
  shape: class

  agents: "HashMap<Index, Agent>"
  batches: "Vec<Batch>"
}

Batch: {
  shape: class

  skills: "HashMap<Index, Skill>"
  events: "Vec<Event>"
  time: "i64"
  p_draw: "f64"
}

Event: {
  shape: class

  teams: "Vec<Team>"
  weights: "Vec<Vec<f64>>"
  evidence: "f64"
}

Team: {
  shape: class

  items: "Vec<Item>"
  output: "f64"
}

Item: {
  shape: class

  agent: "Index"
  likelihood: "Gaussian"
}

Skill: {
  shape: class

  forward: "Gaussian"
  backward: "Gaussian"
  likelihood: "Gaussian"
  elapsed: "i64"
  online: "Gaussian"
}

History -> Batch

Batch -> Skill
Batch -> Event

Event -> Team

Team -> Item
```
2 release.toml Normal file

@@ -0,0 +1,2 @@
```toml
publish = false
pre-release-hook = ["sh", "-c", "git cliff -o ../CHANGELOG.md --tag {{version}} && git add CHANGELOG.md"]
```
2 rustfmt.toml Normal file

@@ -0,0 +1,2 @@
```toml
imports_granularity = "Crate"
group_imports = "StdExternalCrate"
```
47 src/agent.rs

@@ -1,47 +0,0 @@
```rust
use crate::{
    N_INF,
    drift::{ConstantDrift, Drift},
    gaussian::Gaussian,
    player::Player,
};

#[derive(Debug)]
pub struct Agent<D: Drift = ConstantDrift> {
    pub player: Player<D>,
    pub message: Gaussian,
    pub last_time: i64,
}

impl<D: Drift> Agent<D> {
    pub(crate) fn receive(&self, elapsed: i64) -> Gaussian {
        if self.message != N_INF {
            self.message
                .forget(self.player.drift.variance_delta(elapsed))
        } else {
            self.player.prior
        }
    }
}

impl Default for Agent<ConstantDrift> {
    fn default() -> Self {
        Self {
            player: Player::default(),
            message: N_INF,
            last_time: i64::MIN,
        }
    }
}

pub(crate) fn clean<'a, D: Drift + 'a, A: Iterator<Item = &'a mut Agent<D>>>(
    agents: A,
    last_time: bool,
) {
    for a in agents {
        a.message = N_INF;

        if last_time {
            a.last_time = i64::MIN;
        }
    }
}
```
```diff
@@ -10,8 +10,8 @@ impl AbsDiffEq for Gaussian {
     }

     fn abs_diff_eq(&self, other: &Self, epsilon: Self::Epsilon) -> bool {
-        f64::abs_diff_eq(&self.mu, &other.mu, epsilon)
-            && f64::abs_diff_eq(&self.sigma, &other.sigma, epsilon)
+        f64::abs_diff_eq(&self.mu(), &other.mu(), epsilon)
+            && f64::abs_diff_eq(&self.sigma(), &other.sigma(), epsilon)
     }
 }

@@ -26,8 +26,8 @@ impl RelativeEq for Gaussian {
         epsilon: Self::Epsilon,
         max_relative: Self::Epsilon,
     ) -> bool {
-        f64::relative_eq(&self.mu, &other.mu, epsilon, max_relative)
-            && f64::relative_eq(&self.sigma, &other.sigma, epsilon, max_relative)
+        f64::relative_eq(&self.mu(), &other.mu(), epsilon, max_relative)
+            && f64::relative_eq(&self.sigma(), &other.sigma(), epsilon, max_relative)
     }
 }

@@ -37,7 +37,7 @@ impl UlpsEq for Gaussian {
     }

     fn ulps_eq(&self, other: &Self, epsilon: Self::Epsilon, max_ulps: u32) -> bool {
-        f64::ulps_eq(&self.mu, &other.mu, epsilon, max_ulps)
-            && f64::ulps_eq(&self.sigma, &other.sigma, epsilon, max_ulps)
+        f64::ulps_eq(&self.mu(), &other.mu(), epsilon, max_ulps)
+            && f64::ulps_eq(&self.sigma(), &other.sigma(), epsilon, max_ulps)
     }
 }
```
56 src/arena.rs Normal file

@@ -0,0 +1,56 @@
```rust
use crate::{factor::VarStore, gaussian::Gaussian};

/// Reusable scratch buffers for `Game::likelihoods`.
///
/// A `TimeSlice` owns one arena; all events in the slice share it across
/// the convergence iterations. All Vecs are cleared (not dropped) on
/// `reset()` so their heap capacity is reused across games.
#[derive(Debug, Default)]
pub struct ScratchArena {
    pub(crate) vars: VarStore,
    pub(crate) sort_buf: Vec<usize>,
    pub(crate) inv_buf: Vec<usize>,
    pub(crate) team_prior: Vec<Gaussian>,
    pub(crate) lhood_lose: Vec<Gaussian>,
    pub(crate) lhood_win: Vec<Gaussian>,
}

impl ScratchArena {
    pub fn new() -> Self {
        Self::default()
    }

    #[inline]
    pub(crate) fn reset(&mut self) {
        self.vars.clear();
        self.sort_buf.clear();
        self.inv_buf.clear();
        self.team_prior.clear();
        self.lhood_lose.clear();
        self.lhood_win.clear();
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::{N_INF, gaussian::Gaussian};

    #[test]
    fn reset_keeps_capacity() {
        let mut arena = ScratchArena::new();
        arena.vars.alloc(N_INF);
        arena.sort_buf.push(42);
        arena.team_prior.push(Gaussian::from_ms(0.0, 1.0));
        let var_cap = arena.vars.marginals.capacity();
        let sort_cap = arena.sort_buf.capacity();
        let prior_cap = arena.team_prior.capacity();
        arena.reset();
        assert_eq!(arena.vars.len(), 0);
        assert_eq!(arena.sort_buf.len(), 0);
        assert_eq!(arena.team_prior.len(), 0);
        assert_eq!(arena.vars.marginals.capacity(), var_cap);
        assert_eq!(arena.sort_buf.capacity(), sort_cap);
        assert_eq!(arena.team_prior.capacity(), prior_cap);
    }
}
```
646 src/batch.rs

@@ -1,646 +0,0 @@
```rust
use std::collections::HashMap;

use crate::{
    Index, N_INF, agent::Agent, drift::Drift, game::Game, gaussian::Gaussian, player::Player,
    tuple_gt, tuple_max,
};

#[derive(Debug)]
pub(crate) struct Skill {
    pub(crate) forward: Gaussian,
    backward: Gaussian,
    likelihood: Gaussian,
    pub(crate) elapsed: i64,
    pub(crate) online: Gaussian,
}

impl Skill {
    pub(crate) fn posterior(&self) -> Gaussian {
        self.likelihood * self.backward * self.forward
    }
}

impl Default for Skill {
    fn default() -> Self {
        Self {
            forward: N_INF,
            backward: N_INF,
            likelihood: N_INF,
            elapsed: 0,
            online: N_INF,
        }
    }
}

#[derive(Debug)]
struct Item {
    agent: Index,
    likelihood: Gaussian,
}

impl Item {
    fn within_prior<D: Drift>(
        &self,
        online: bool,
        forward: bool,
        skills: &HashMap<Index, Skill>,
        agents: &HashMap<Index, Agent<D>>,
    ) -> Player<D> {
        let r = &agents[&self.agent].player;
        let skill = &skills[&self.agent];

        if online {
            Player::new(skill.online, r.beta, r.drift)
        } else if forward {
            Player::new(skill.forward, r.beta, r.drift)
        } else {
            Player::new(skill.posterior() / self.likelihood, r.beta, r.drift)
        }
    }
}

#[derive(Debug)]
struct Team {
    items: Vec<Item>,
    output: f64,
}

#[derive(Debug)]
pub(crate) struct Event {
    teams: Vec<Team>,
    evidence: f64,
    weights: Vec<Vec<f64>>,
}

impl Event {
    fn outputs(&self) -> Vec<f64> {
        self.teams
            .iter()
            .map(|team| team.output)
            .collect::<Vec<_>>()
    }

    pub(crate) fn within_priors<D: Drift>(
        &self,
        online: bool,
        forward: bool,
        skills: &HashMap<Index, Skill>,
        agents: &HashMap<Index, Agent<D>>,
    ) -> Vec<Vec<Player<D>>> {
        self.teams
            .iter()
            .map(|team| {
                team.items
                    .iter()
                    .map(|item| item.within_prior(online, forward, skills, agents))
                    .collect::<Vec<_>>()
            })
            .collect::<Vec<_>>()
    }
}

#[derive(Debug)]
pub struct Batch {
    pub(crate) events: Vec<Event>,
    pub(crate) skills: HashMap<Index, Skill>,
    pub(crate) time: i64,
    p_draw: f64,
}

impl Batch {
    pub fn new(time: i64, p_draw: f64) -> Self {
        Self {
            events: Vec::new(),
            skills: HashMap::new(),
            time,
            p_draw,
        }
    }

    pub fn add_events<D: Drift>(
        &mut self,
        composition: Vec<Vec<Vec<Index>>>,
        results: Vec<Vec<f64>>,
        weights: Vec<Vec<Vec<f64>>>,
        agents: &HashMap<Index, Agent<D>>,
    ) {
        let mut unique = Vec::with_capacity(10);

        let this_agent = composition.iter().flatten().flatten().filter(|idx| {
            if !unique.contains(idx) {
                unique.push(*idx);

                return true;
            }

            false
        });

        for idx in this_agent {
            let elapsed = compute_elapsed(agents[&idx].last_time, self.time);

            if let Some(skill) = self.skills.get_mut(idx) {
                skill.elapsed = elapsed;
                skill.forward = agents[&idx].receive(elapsed);
            } else {
                self.skills.insert(
                    *idx,
                    Skill {
                        forward: agents[&idx].receive(elapsed),
                        elapsed,
                        ..Default::default()
                    },
                );
            }
        }

        let events = composition.iter().enumerate().map(|(e, event)| {
            let teams = event
                .iter()
                .enumerate()
                .map(|(t, team)| {
                    let items = team
                        .iter()
                        .map(|&agent| Item {
                            agent,
                            likelihood: N_INF,
                        })
                        .collect::<Vec<_>>();

                    Team {
                        items,
                        output: if results.is_empty() {
                            (event.len() - (t + 1)) as f64
                        } else {
                            results[e][t]
                        },
                    }
                })
                .collect::<Vec<_>>();

            let weights = if weights.is_empty() {
                teams
                    .iter()
                    .map(|team| vec![1.0; team.items.len()])
                    .collect::<Vec<_>>()
            } else {
                weights[e].clone()
            };

            Event {
                teams,
                evidence: 0.0,
                weights,
            }
        });

        let from = self.events.len();

        self.events.extend(events);

        self.iteration(from, agents);
    }

    pub(crate) fn posteriors(&self) -> HashMap<Index, Gaussian> {
        self.skills
            .iter()
            .map(|(&idx, skill)| (idx, skill.posterior()))
            .collect::<HashMap<_, _>>()
    }

    pub fn iteration<D: Drift>(&mut self, from: usize, agents: &HashMap<Index, Agent<D>>) {
        for event in self.events.iter_mut().skip(from) {
            let teams = event.within_priors(false, false, &self.skills, agents);
            let result = event.outputs();

            let g = Game::new(teams, &result, &event.weights, self.p_draw);

            for (t, team) in event.teams.iter_mut().enumerate() {
                for (i, item) in team.items.iter_mut().enumerate() {
                    self.skills.get_mut(&item.agent).unwrap().likelihood =
                        (self.skills[&item.agent].likelihood / item.likelihood)
                            * g.likelihoods[t][i];

                    item.likelihood = g.likelihoods[t][i];
                }
            }

            event.evidence = g.evidence;
        }
    }

    #[allow(dead_code)]
    pub(crate) fn convergence<D: Drift>(&mut self, agents: &HashMap<Index, Agent<D>>) -> usize {
        let epsilon = 1e-6;
        let iterations = 20;

        let mut step = (f64::INFINITY, f64::INFINITY);
        let mut i = 0;

        while tuple_gt(step, epsilon) && i < iterations {
            let old = self.posteriors();

            self.iteration(0, agents);

            let new = self.posteriors();

            step = old.iter().fold((0.0, 0.0), |step, (a, old)| {
                tuple_max(step, old.delta(new[a]))
            });

            i += 1;
        }

        i
    }

    pub(crate) fn forward_prior_out(&self, agent: &Index) -> Gaussian {
        let skill = &self.skills[agent];

        skill.forward * skill.likelihood
    }

    pub(crate) fn backward_prior_out<D: Drift>(
        &self,
        agent: &Index,
        agents: &HashMap<Index, Agent<D>>,
    ) -> Gaussian {
        let skill = &self.skills[agent];
        let n = skill.likelihood * skill.backward;

        n.forget(agents[agent].player.drift.variance_delta(skill.elapsed))
    }

    pub(crate) fn new_backward_info<D: Drift>(&mut self, agents: &HashMap<Index, Agent<D>>) {
        for (agent, skill) in self.skills.iter_mut() {
            skill.backward = agents[agent].message;
        }

        self.iteration(0, agents);
    }

    pub(crate) fn new_forward_info<D: Drift>(&mut self, agents: &HashMap<Index, Agent<D>>) {
        for (agent, skill) in self.skills.iter_mut() {
            skill.forward = agents[agent].receive(skill.elapsed);
        }

        self.iteration(0, agents);
    }

    pub(crate) fn log_evidence<D: Drift>(
        &self,
        online: bool,
        targets: &[Index],
        forward: bool,
        agents: &HashMap<Index, Agent<D>>,
    ) -> f64 {
        if targets.is_empty() {
            if online || forward {
                self.events
                    .iter()
                    .enumerate()
                    .map(|(_, event)| {
                        Game::new(
                            event.within_priors(online, forward, &self.skills, agents),
                            &event.outputs(),
                            &event.weights,
                            self.p_draw,
                        )
                        .evidence
                        .ln()
                    })
                    .sum()
            } else {
                self.events.iter().map(|event| event.evidence.ln()).sum()
            }
        } else if online || forward {
            self.events
                .iter()
                .enumerate()
                .filter(|(_, event)| {
                    event
                        .teams
                        .iter()
                        .flat_map(|team| &team.items)
                        .any(|item| targets.contains(&item.agent))
                })
                .map(|(_, event)| {
                    Game::new(
                        event.within_priors(online, forward, &self.skills, agents),
                        &event.outputs(),
                        &event.weights,
                        self.p_draw,
                    )
                    .evidence
                    .ln()
                })
                .sum()
        } else {
            self.events
                .iter()
                .filter(|event| {
                    event
                        .teams
                        .iter()
                        .flat_map(|team| &team.items)
                        .any(|item| targets.contains(&item.agent))
                })
                .map(|event| event.evidence.ln())
                .sum()
        }
    }

    pub fn get_composition(&self) -> Vec<Vec<Vec<Index>>> {
        self.events
            .iter()
            .map(|event| {
                event
                    .teams
                    .iter()
                    .map(|team| team.items.iter().map(|item| item.agent).collect::<Vec<_>>())
                    .collect::<Vec<_>>()
            })
            .collect::<Vec<_>>()
    }

    pub fn get_results(&self) -> Vec<Vec<f64>> {
        self.events
            .iter()
            .map(|event| {
                event
                    .teams
                    .iter()
                    .map(|team| team.output)
                    .collect::<Vec<_>>()
            })
            .collect::<Vec<_>>()
    }
}

pub(crate) fn compute_elapsed(last_time: i64, actual_time: i64) -> i64 {
    if last_time == i64::MIN {
        0
    } else if last_time == i64::MAX {
        1
    } else {
        actual_time - last_time
    }
}

#[cfg(test)]
mod tests {
    use approx::assert_ulps_eq;

    use crate::{IndexMap, agent::Agent, drift::ConstantDrift, player::Player};

    use super::*;

    #[test]
    fn test_one_event_each() {
        let mut index_map = IndexMap::new();

        let a = index_map.get_or_create("a");
        let b = index_map.get_or_create("b");
        let c = index_map.get_or_create("c");
        let d = index_map.get_or_create("d");
        let e = index_map.get_or_create("e");
        let f = index_map.get_or_create("f");

        let mut agents = HashMap::new();

        for agent in [a, b, c, d, e, f] {
            agents.insert(
                agent,
                Agent {
                    player: Player::new(
                        Gaussian::from_ms(25.0, 25.0 / 3.0),
                        25.0 / 6.0,
                        ConstantDrift(25.0 / 300.0),
                    ),
                    ..Default::default()
                },
            );
        }

        let mut batch = Batch::new(0, 0.0);

        batch.add_events(
            vec![
                vec![vec![a], vec![b]],
                vec![vec![c], vec![d]],
                vec![vec![e], vec![f]],
            ],
            vec![vec![1.0, 0.0], vec![0.0, 1.0], vec![1.0, 0.0]],
            vec![],
            &agents,
        );

        let post = batch.posteriors();

        assert_ulps_eq!(
            post[&a],
            Gaussian::from_ms(29.205220, 7.194481),
            epsilon = 1e-6
        );
        assert_ulps_eq!(
            post[&b],
            Gaussian::from_ms(20.794779, 7.194481),
            epsilon = 1e-6
        );
        assert_ulps_eq!(
            post[&c],
            Gaussian::from_ms(20.794779, 7.194481),
            epsilon = 1e-6
        );
        assert_ulps_eq!(
            post[&d],
            Gaussian::from_ms(29.205220, 7.194481),
            epsilon = 1e-6
        );
        assert_ulps_eq!(
            post[&e],
            Gaussian::from_ms(29.205220, 7.194481),
            epsilon = 1e-6
        );
        assert_ulps_eq!(
            post[&f],
            Gaussian::from_ms(20.794779, 7.194481),
            epsilon = 1e-6
        );

        assert_eq!(batch.convergence(&agents), 1);
    }

    #[test]
    fn test_same_strength() {
        let mut index_map = IndexMap::new();

        let a = index_map.get_or_create("a");
        let b = index_map.get_or_create("b");
        let c = index_map.get_or_create("c");
        let d = index_map.get_or_create("d");
        let e = index_map.get_or_create("e");
        let f = index_map.get_or_create("f");

        let mut agents = HashMap::new();

        for agent in [a, b, c, d, e, f] {
            agents.insert(
                agent,
                Agent {
                    player: Player::new(
                        Gaussian::from_ms(25.0, 25.0 / 3.0),
                        25.0 / 6.0,
                        ConstantDrift(25.0 / 300.0),
                    ),
                    ..Default::default()
                },
            );
        }

        let mut batch = Batch::new(0, 0.0);

        batch.add_events(
            vec![
                vec![vec![a], vec![b]],
                vec![vec![a], vec![c]],
                vec![vec![b], vec![c]],
            ],
            vec![vec![1.0, 0.0], vec![0.0, 1.0], vec![1.0, 0.0]],
            vec![],
            &agents,
        );

        let post = batch.posteriors();

        assert_ulps_eq!(
            post[&a],
            Gaussian::from_ms(24.960978, 6.298544),
            epsilon = 1e-6
        );
        assert_ulps_eq!(
            post[&b],
            Gaussian::from_ms(27.095590, 6.010330),
            epsilon = 1e-6
        );
        assert_ulps_eq!(
            post[&c],
            Gaussian::from_ms(24.889681, 5.866311),
            epsilon = 1e-6
        );

        assert!(batch.convergence(&agents) > 1);

        let post = batch.posteriors();

        assert_ulps_eq!(
            post[&a],
            Gaussian::from_ms(25.000000, 5.419212),
            epsilon = 1e-6
        );
        assert_ulps_eq!(
            post[&b],
            Gaussian::from_ms(25.000000, 5.419212),
            epsilon = 1e-6
        );
        assert_ulps_eq!(
            post[&c],
            Gaussian::from_ms(25.000000, 5.419212),
            epsilon = 1e-6
        );
    }

    #[test]
    fn test_add_events() {
        let mut index_map = IndexMap::new();

        let a = index_map.get_or_create("a");
        let b = index_map.get_or_create("b");
        let c = index_map.get_or_create("c");
        let d = index_map.get_or_create("d");
        let e = index_map.get_or_create("e");
        let f = index_map.get_or_create("f");

        let mut agents = HashMap::new();

        for agent in [a, b, c, d, e, f] {
            agents.insert(
                agent,
                Agent {
                    player: Player::new(
                        Gaussian::from_ms(25.0, 25.0 / 3.0),
                        25.0 / 6.0,
                        ConstantDrift(25.0 / 300.0),
                    ),
                    ..Default::default()
                },
            );
        }

        let mut batch = Batch::new(0, 0.0);

        batch.add_events(
            vec![
                vec![vec![a], vec![b]],
                vec![vec![a], vec![c]],
                vec![vec![b], vec![c]],
            ],
            vec![vec![1.0, 0.0], vec![0.0, 1.0], vec![1.0, 0.0]],
            vec![],
            &agents,
        );

        batch.convergence(&agents);

        let post = batch.posteriors();

        assert_ulps_eq!(
            post[&a],
            Gaussian::from_ms(25.000000, 5.419212),
            epsilon = 1e-6
        );
        assert_ulps_eq!(
            post[&b],
            Gaussian::from_ms(25.000000, 5.419212),
            epsilon = 1e-6
        );
        assert_ulps_eq!(
            post[&c],
            Gaussian::from_ms(25.000000, 5.419212),
            epsilon = 1e-6
        );

        batch.add_events(
            vec![
                vec![vec![a], vec![b]],
                vec![vec![a], vec![c]],
                vec![vec![b], vec![c]],
            ],
            vec![vec![1.0, 0.0], vec![0.0, 1.0], vec![1.0, 0.0]],
            vec![],
            &agents,
        );

        assert_eq!(batch.events.len(), 6);

        batch.convergence(&agents);

        let post = batch.posteriors();

        assert_ulps_eq!(
            post[&a],
            Gaussian::from_ms(25.000003, 3.880150),
            epsilon = 1e-6
        );
        assert_ulps_eq!(
            post[&b],
            Gaussian::from_ms(25.000003, 3.880150),
            epsilon = 1e-6
        );
        assert_ulps_eq!(
            post[&c],
            Gaussian::from_ms(25.000003, 3.880150),
            epsilon = 1e-6
        );
    }
}
```
158 src/color_group.rs Normal file

@@ -0,0 +1,158 @@
```rust
//! Greedy graph coloring for within-slice event independence.
//!
//! Events sharing no `Index` can be processed in parallel under async-EP
//! semantics. This module partitions a list of events into "colors" such
//! that events of the same color touch disjoint index sets.
//!
//! The algorithm is greedy: for each event in ingestion order, place it in
//! the lowest-numbered color whose existing members share no `Index`. If
//! no existing color accepts the event, open a new color.
//!
//! Complexity: O(n × c × m) where n is events, c is colors (small, ≤ 5 in
//! practice), and m is average team size.

use std::collections::HashSet;

use crate::Index;

/// Partition of event indices into color groups.
///
/// Each inner `Vec<usize>` holds the indices (into the original events
/// array) of events assigned to one color. Colors are iterated in ascending
/// order by convention.
#[derive(Clone, Debug, Default)]
pub(crate) struct ColorGroups {
    pub(crate) groups: Vec<Vec<usize>>,
}

impl ColorGroups {
    #[allow(dead_code)]
    pub(crate) fn new() -> Self {
        Self::default()
    }

    #[allow(dead_code)]
    pub(crate) fn n_colors(&self) -> usize {
        self.groups.len()
    }

    #[allow(dead_code)]
    pub(crate) fn is_empty(&self) -> bool {
        self.groups.is_empty()
    }

    /// Total event count across all colors.
    #[allow(dead_code)]
    pub(crate) fn total_events(&self) -> usize {
        self.groups.iter().map(|g| g.len()).sum()
    }

    /// Contiguous index range for one color after events have been reordered
    /// into color-contiguous positions by `TimeSlice::recompute_color_groups`.
    #[allow(dead_code)]
    pub(crate) fn color_range(&self, color_idx: usize) -> std::ops::Range<usize> {
        let group = &self.groups[color_idx];
        if group.is_empty() {
            return 0..0;
        }
        let start = *group.first().unwrap();
        let end = *group.last().unwrap() + 1;
        start..end
    }
}

/// Compute color groups greedily.
///
/// `index_set(ev_idx)` yields, for each event index, the iterator of
/// `Index` values that event touches. The returned `ColorGroups` has one
/// inner `Vec<usize>` per color, containing event indices in the order
/// they were assigned.
#[allow(dead_code)]
pub(crate) fn color_greedy<I, F>(n_events: usize, index_set: F) -> ColorGroups
where
    F: Fn(usize) -> I,
    I: IntoIterator<Item = Index>,
{
    let mut groups: Vec<Vec<usize>> = Vec::new();
    let mut members: Vec<HashSet<Index>> = Vec::new();

    for ev_idx in 0..n_events {
        let ev_members: HashSet<Index> = index_set(ev_idx).into_iter().collect();
        // Find first color whose member-set is disjoint from this event's indices.
        let chosen = members.iter().position(|m| m.is_disjoint(&ev_members));
        let color_idx = match chosen {
            Some(c) => c,
            None => {
                groups.push(Vec::new());
                members.push(HashSet::new());
                groups.len() - 1
            }
        };
        groups[color_idx].push(ev_idx);
        members[color_idx].extend(ev_members);
    }

    ColorGroups { groups }
}

#[cfg(test)]
mod tests {
    use super::*;

    fn idx(i: usize) -> Index {
        Index::from(i)
    }

    #[test]
    fn single_event_gets_one_color() {
        let cg = color_greedy(1, |_| vec![idx(0), idx(1)]);
        assert_eq!(cg.n_colors(), 1);
        assert_eq!(cg.groups[0], vec![0]);
    }

    #[test]
    fn disjoint_events_share_a_color() {
        let cg = color_greedy(2, |i| match i {
            0 => vec![idx(0), idx(1)],
            1 => vec![idx(2), idx(3)],
            _ => unreachable!(),
        });
        assert_eq!(cg.n_colors(), 1);
        assert_eq!(cg.groups[0], vec![0, 1]);
    }

    #[test]
    fn overlapping_events_need_separate_colors() {
        let cg = color_greedy(2, |i| match i {
            0 => vec![idx(0), idx(1)],
            1 => vec![idx(1), idx(2)],
            _ => unreachable!(),
        });
        assert_eq!(cg.n_colors(), 2);
        assert_eq!(cg.groups[0], vec![0]);
        assert_eq!(cg.groups[1], vec![1]);
    }

    #[test]
    fn three_events_two_colors() {
        // Event 0: {0, 1}; event 1: {2, 3}; event 2: {0, 2}.
        // Greedy: ev0→c0, ev1→c0 (disjoint), ev2 overlaps both→c1.
        let cg = color_greedy(3, |i| match i {
            0 => vec![idx(0), idx(1)],
            1 => vec![idx(2), idx(3)],
            2 => vec![idx(0), idx(2)],
            _ => unreachable!(),
        });
        assert_eq!(cg.n_colors(), 2);
        assert_eq!(cg.groups[0], vec![0, 1]);
        assert_eq!(cg.groups[1], vec![2]);
    }

    #[test]
    fn total_events_counts_correctly() {
        let cg = color_greedy(4, |_| vec![idx(0)]);
        // All events touch index 0 → 4 distinct colors.
        assert_eq!(cg.n_colors(), 4);
        assert_eq!(cg.total_events(), 4);
    }
}
```
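The greedy assignment that `color_greedy` performs can also be sketched as a stand-alone program, using plain `usize` participant ids in place of the crate's `Index` type and `Vec<usize>` in place of its event representation:

```rust
use std::collections::HashSet;

/// Assign each event (a set of participant ids) to the lowest-numbered
/// color whose existing events touch a disjoint id set; open a new color
/// when no existing one accepts it. Mirrors the module's greedy strategy.
fn color_greedy(events: &[Vec<usize>]) -> Vec<Vec<usize>> {
    let mut groups: Vec<Vec<usize>> = Vec::new();
    let mut members: Vec<HashSet<usize>> = Vec::new();

    for (ev_idx, ids) in events.iter().enumerate() {
        let ids: HashSet<usize> = ids.iter().copied().collect();
        let chosen = members.iter().position(|m| m.is_disjoint(&ids));
        let color = match chosen {
            Some(c) => c,
            None => {
                // No color accepts this event: open a new one.
                groups.push(Vec::new());
                members.push(HashSet::new());
                groups.len() - 1
            }
        };
        groups[color].push(ev_idx);
        members[color].extend(ids);
    }

    groups
}

fn main() {
    // Events 0 and 1 share no id, so they land in color 0 together;
    // event 2 overlaps both, forcing a second color.
    let groups = color_greedy(&[vec![0, 1], vec![2, 3], vec![0, 2]]);
    assert_eq!(groups, vec![vec![0, 1], vec![2]]);
    println!("{groups:?}");
}
```

Events within one inner group touch disjoint id sets, which is exactly the property the rayon path needs to sweep a color group with `par_iter_mut` without two events racing on the same skill.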
71 src/competitor.rs Normal file

@@ -0,0 +1,71 @@
```rust
use crate::{
    N_INF,
    drift::{ConstantDrift, Drift},
    gaussian::Gaussian,
    rating::Rating,
    time::Time,
};

/// Per-history, temporal state for someone competing.
///
/// Renamed from `Agent` in T2; the former `.player` field is now
/// `.rating` to match the `Player → Rating` rename.
#[derive(Debug)]
pub struct Competitor<T: Time = i64, D: Drift<T> = ConstantDrift> {
    pub rating: Rating<T, D>,
    pub message: Gaussian,
    pub last_time: Option<T>,
}

impl<T: Time, D: Drift<T>> Competitor<T, D> {
    /// Compute the message received at time `now`, with drift accumulated
    /// from `self.last_time` (if any) to `now`.
    pub(crate) fn receive(&self, now: &T) -> Gaussian {
        if self.message != N_INF {
            let elapsed_variance = match &self.last_time {
                Some(last) => self.rating.drift.variance_delta(last, now),
                None => 0.0,
            };
            self.message.forget(elapsed_variance)
        } else {
            self.rating.prior
        }
    }

    /// Compute the message using a pre-cached elapsed count (in `Time::elapsed_to` units).
    ///
    /// Used in convergence sweeps where the elapsed was cached at slice-construction time
    /// and should not be recomputed from `last_time` (which may have shifted).
    pub(crate) fn receive_for_elapsed(&self, elapsed: i64) -> Gaussian {
        if self.message != N_INF {
            self.message
                .forget(self.rating.drift.variance_for_elapsed(elapsed))
        } else {
            self.rating.prior
        }
    }
}

impl Default for Competitor<i64, ConstantDrift> {
    fn default() -> Self {
        Self {
            rating: Rating::default(),
            message: N_INF,
            last_time: None,
        }
    }
}

pub(crate) fn clean<'a, T, D, C>(competitors: C, last_time: bool)
where
    T: Time + 'a,
    D: Drift<T> + 'a,
    C: Iterator<Item = &'a mut Competitor<T, D>>,
{
    for c in competitors {
        c.message = N_INF;
        if last_time {
            c.last_time = None;
        }
    }
}
```
31 src/convergence.rs Normal file
@@ -0,0 +1,31 @@
//! Convergence configuration and reporting.

use std::time::Duration;

use smallvec::SmallVec;

#[derive(Clone, Copy, Debug)]
pub struct ConvergenceOptions {
    pub max_iter: usize,
    pub epsilon: f64,
}

impl Default for ConvergenceOptions {
    fn default() -> Self {
        Self {
            max_iter: crate::ITERATIONS,
            epsilon: crate::EPSILON,
        }
    }
}

/// Post-hoc summary of a `History::converge` call.
#[derive(Clone, Debug)]
pub struct ConvergenceReport {
    pub iterations: usize,
    pub final_step: (f64, f64),
    pub log_evidence: f64,
    pub converged: bool,
    pub per_iteration_time: SmallVec<[Duration; 32]>,
    pub slices_skipped: usize,
}
32 src/drift.rs
@@ -1,14 +1,36 @@
 use std::fmt::Debug;
 
-pub trait Drift: Copy + Debug {
-    fn variance_delta(&self, elapsed: i64) -> f64;
+use crate::time::Time;
+
+/// Governs how much a competitor's skill can drift between two time points.
+///
+/// Generic over `T: Time` so seasonal or calendar-aware drift is expressible
+/// without going through `i64`.
+pub trait Drift<T: Time>: Copy + Debug + Send + Sync {
+    /// Variance added to the skill prior for elapsed time `from -> to`.
+    ///
+    /// Called with `from <= to`; returning zero means no drift accumulates.
+    fn variance_delta(&self, from: &T, to: &T) -> f64;
+
+    /// Variance added for a pre-computed elapsed count (in the same units as
+    /// `T::elapsed_to`). Used where the elapsed is already cached as `i64`.
+    fn variance_for_elapsed(&self, elapsed: i64) -> f64;
 }
 
+/// Simple constant-per-unit-time drift.
+///
+/// For `Time = i64`: variance added is `(to - from) * gamma^2`.
+/// For `Time = Untimed`: elapsed is always 0, so drift is always 0.
 #[derive(Clone, Copy, Debug)]
 pub struct ConstantDrift(pub f64);
 
-impl Drift for ConstantDrift {
-    fn variance_delta(&self, elapsed: i64) -> f64 {
-        elapsed as f64 * self.0 * self.0
+impl<T: Time> Drift<T> for ConstantDrift {
+    fn variance_delta(&self, from: &T, to: &T) -> f64 {
+        let elapsed = from.elapsed_to(to).max(0) as f64;
+        elapsed * self.0 * self.0
+    }
+
+    fn variance_for_elapsed(&self, elapsed: i64) -> f64 {
+        elapsed.max(0) as f64 * self.0 * self.0
     }
 }
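The arithmetic behind `ConstantDrift` is just `elapsed * gamma²`, which `Competitor::receive` then feeds into `Gaussian::forget`. A minimal standalone sketch of both steps, assuming `forget` widens a Gaussian by adding the drift variance to its own (the function names here are illustrative, not the crate's):

```rust
/// Variance a ConstantDrift(gamma) adds over `elapsed` time units:
/// elapsed * gamma^2, clamped at zero for non-positive elapsed.
fn constant_variance(gamma: f64, elapsed: i64) -> f64 {
    elapsed.max(0) as f64 * gamma * gamma
}

/// "Forgetting": widening a Gaussian's sigma by an added drift variance,
/// assuming sigma'^2 = sigma^2 + added_variance.
fn forget_sigma(sigma: f64, added_variance: f64) -> f64 {
    (sigma * sigma + added_variance).sqrt()
}
```

So with `gamma = 0.1` and 5 elapsed units, the added variance is `0.05`, and a sigma of 3.0 with 16.0 added variance widens to exactly 5.0.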
51 src/error.rs Normal file
@@ -0,0 +1,51 @@
use std::fmt;

#[derive(Debug, Clone, PartialEq)]
pub enum InferenceError {
    /// Expected and actual lengths of some array-shaped input differ.
    MismatchedShape {
        kind: &'static str,
        expected: usize,
        got: usize,
    },
    /// A probability value is outside `[0, 1]`.
    InvalidProbability { value: f64 },
    /// Convergence exceeded `max_iter` without falling below `epsilon`.
    ConvergenceFailed {
        last_step: (f64, f64),
        iterations: usize,
    },
    /// Negative precision: a Gaussian with `pi < 0` slipped into an API call.
    NegativePrecision { pi: f64 },
}

impl fmt::Display for InferenceError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::MismatchedShape {
                kind,
                expected,
                got,
            } => {
                write!(f, "{kind}: expected length {expected}, got {got}")
            }
            Self::InvalidProbability { value } => {
                write!(f, "probability must be in [0, 1]; got {value}")
            }
            Self::ConvergenceFailed {
                last_step,
                iterations,
            } => {
                write!(
                    f,
                    "convergence failed after {iterations} iterations; last step = {last_step:?}"
                )
            }
            Self::NegativePrecision { pi } => {
                write!(f, "precision must be non-negative; got {pi}")
            }
        }
    }
}

impl std::error::Error for InferenceError {}
132 src/event.rs Normal file
@@ -0,0 +1,132 @@
//! Typed event description for bulk ingestion.
//!
//! `Event<T, K>` is the new public event shape (spec Section 4). Replaces
//! the nested `Vec<Vec<Vec<Index>>>`, `Vec<Vec<f64>>`, `Vec<Vec<Vec<f64>>>`
//! that the old `add_events_with_prior` took.

use smallvec::SmallVec;

use crate::{gaussian::Gaussian, outcome::Outcome, time::Time};

/// A single match at time `time` involving some number of teams.
#[derive(Clone, Debug)]
pub struct Event<T: Time, K> {
    pub time: T,
    pub teams: SmallVec<[Team<K>; 4]>,
    pub outcome: Outcome,
}

/// A team: list of members competing together.
#[derive(Clone, Debug)]
pub struct Team<K> {
    pub members: SmallVec<[Member<K>; 4]>,
}

impl<K> Team<K> {
    pub fn new() -> Self {
        Self {
            members: SmallVec::new(),
        }
    }

    pub fn with_members<I: IntoIterator<Item = Member<K>>>(members: I) -> Self {
        Self {
            members: members.into_iter().collect(),
        }
    }
}

impl<K> Default for Team<K> {
    fn default() -> Self {
        Self::new()
    }
}

/// One member of a team, identified by user key `K`.
///
/// `weight` defaults to 1.0; a per-event `prior` can override the competitor's
/// current skill estimate for this event only.
#[derive(Clone, Debug)]
pub struct Member<K> {
    pub key: K,
    pub weight: f64,
    pub prior: Option<Gaussian>,
}

impl<K> Member<K> {
    pub fn new(key: K) -> Self {
        Self {
            key,
            weight: 1.0,
            prior: None,
        }
    }

    pub fn with_weight(mut self, weight: f64) -> Self {
        self.weight = weight;
        self
    }

    pub fn with_prior(mut self, prior: Gaussian) -> Self {
        self.prior = Some(prior);
        self
    }
}

/// Convenience: a member is a user key with default weight 1.0 and no prior.
impl<K> From<K> for Member<K> {
    fn from(key: K) -> Self {
        Self::new(key)
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::Outcome;

    #[test]
    fn member_new_has_unit_weight_no_prior() {
        let m = Member::new("alice");
        assert_eq!(m.key, "alice");
        assert_eq!(m.weight, 1.0);
        assert!(m.prior.is_none());
    }

    #[test]
    fn member_builder_methods_chain() {
        let m = Member::new("alice")
            .with_weight(0.5)
            .with_prior(Gaussian::from_ms(20.0, 5.0));
        assert_eq!(m.weight, 0.5);
        assert!(m.prior.is_some());
    }

    #[test]
    fn member_from_key() {
        let m: Member<&str> = "bob".into();
        assert_eq!(m.key, "bob");
        assert_eq!(m.weight, 1.0);
    }

    #[test]
    fn team_with_members_collects() {
        let t: Team<&str> = Team::with_members([Member::new("a"), Member::new("b")]);
        assert_eq!(t.members.len(), 2);
    }

    #[test]
    fn event_construction() {
        use smallvec::smallvec;
        let e: Event<i64, &str> = Event {
            time: 1,
            teams: smallvec![
                Team::with_members([Member::new("a")]),
                Team::with_members([Member::new("b")]),
            ],
            outcome: Outcome::winner(0, 2),
        };
        assert_eq!(e.teams.len(), 2);
        assert_eq!(e.time, 1);
    }
}
94 src/event_builder.rs Normal file
@@ -0,0 +1,94 @@
use smallvec::SmallVec;

use crate::{
    InferenceError, Outcome,
    drift::Drift,
    event::{Event, Member, Team},
    history::History,
    observer::Observer,
    time::Time,
};

pub struct EventBuilder<'h, T, D, O, K>
where
    T: Time,
    D: Drift<T>,
    O: Observer<T>,
    K: Eq + std::hash::Hash + Clone,
{
    history: &'h mut History<T, D, O, K>,
    event: Event<T, K>,
    current_team_idx: Option<usize>,
}

impl<'h, T, D, O, K> EventBuilder<'h, T, D, O, K>
where
    T: Time,
    D: Drift<T>,
    O: Observer<T>,
    K: Eq + std::hash::Hash + Clone,
{
    pub(crate) fn new(history: &'h mut History<T, D, O, K>, time: T) -> Self {
        Self {
            history,
            event: Event {
                time,
                teams: SmallVec::new(),
                outcome: Outcome::Ranked(SmallVec::new()),
            },
            current_team_idx: None,
        }
    }

    /// Add a team by its member keys (weight 1.0 each, no prior overrides).
    pub fn team<I: IntoIterator<Item = K>>(mut self, keys: I) -> Self {
        let members: SmallVec<[Member<K>; 4]> = keys.into_iter().map(Member::new).collect();
        self.event.teams.push(Team { members });
        self.current_team_idx = Some(self.event.teams.len() - 1);
        self
    }

    /// Set per-member weights for the most recently added team.
    ///
    /// Panics in debug builds if called before `.team(...)` or if the length
    /// doesn't match the team's member count.
    pub fn weights<I: IntoIterator<Item = f64>>(mut self, weights: I) -> Self {
        let idx = self
            .current_team_idx
            .expect(".weights(...) called before any .team(...)");
        let ws: Vec<f64> = weights.into_iter().collect();
        let team = &mut self.event.teams[idx];
        debug_assert_eq!(
            ws.len(),
            team.members.len(),
            "weights length must match team size"
        );
        for (m, w) in team.members.iter_mut().zip(ws) {
            m.weight = w;
        }
        self
    }

    /// Set explicit ranks per team (length must equal number of teams).
    pub fn ranking<I: IntoIterator<Item = u32>>(mut self, ranks: I) -> Self {
        self.event.outcome = Outcome::ranking(ranks);
        self
    }

    /// Mark team `winner_idx` as winner; others tied for last.
    pub fn winner(mut self, winner_idx: u32) -> Self {
        self.event.outcome = Outcome::winner(winner_idx, self.event.teams.len() as u32);
        self
    }

    /// All teams tied.
    pub fn draw(mut self) -> Self {
        self.event.outcome = Outcome::draw(self.event.teams.len() as u32);
        self
    }

    /// Commit the event to the history.
    pub fn commit(self) -> Result<(), InferenceError> {
        self.history.add_events(std::iter::once(self.event))
    }
}
148 src/factor/mod.rs Normal file
@@ -0,0 +1,148 @@
//! Factor graph machinery for within-game inference.

use crate::gaussian::Gaussian;

/// Identifier for a variable in a `VarStore`.
///
/// Variables hold the current Gaussian marginal and are owned by exactly one
/// `VarStore`. `VarId` is meaningful only within its owning store.
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
pub struct VarId(pub u32);

/// Flat storage of variable marginals.
///
/// Variables are allocated by `alloc()` and accessed by `VarId`. The store is
/// reused across `Game::ranked_with_arena` calls (it lives in the `ScratchArena`); call
/// `clear()` before reuse.
#[derive(Debug, Default)]
pub struct VarStore {
    pub(crate) marginals: Vec<Gaussian>,
}

impl VarStore {
    pub fn new() -> Self {
        Self::default()
    }

    pub fn clear(&mut self) {
        self.marginals.clear();
    }

    pub fn len(&self) -> usize {
        self.marginals.len()
    }

    pub fn is_empty(&self) -> bool {
        self.marginals.is_empty()
    }

    pub fn alloc(&mut self, init: Gaussian) -> VarId {
        let id = VarId(self.marginals.len() as u32);
        self.marginals.push(init);
        id
    }

    pub fn get(&self, id: VarId) -> Gaussian {
        self.marginals[id.0 as usize]
    }

    pub fn set(&mut self, id: VarId, g: Gaussian) {
        self.marginals[id.0 as usize] = g;
    }
}

/// A factor in the EP graph.
///
/// Factors hold their own outgoing messages and propagate them by reading
/// connected variable marginals from a `VarStore` and writing back updated
/// marginals.
pub trait Factor: Send + Sync {
    /// Update outgoing messages and write back to the var store.
    ///
    /// Returns the max delta `(|Δmu|, |Δsigma|)` across writes this
    /// propagation. Used by the `Schedule` to detect convergence.
    fn propagate(&mut self, vars: &mut VarStore) -> (f64, f64);

    /// Optional log-evidence contribution. Default 0.0 (no contribution).
    fn log_evidence(&self, _vars: &VarStore) -> f64 {
        0.0
    }
}

/// Enum dispatcher for the built-in factor types.
///
/// Using an enum instead of `Box<dyn Factor>` keeps factor data inline and
/// avoids virtual-call overhead in the hot inference loop.
#[derive(Debug)]
pub enum BuiltinFactor {
    TeamSum(team_sum::TeamSumFactor),
    RankDiff(rank_diff::RankDiffFactor),
    Trunc(trunc::TruncFactor),
}

impl Factor for BuiltinFactor {
    fn propagate(&mut self, vars: &mut VarStore) -> (f64, f64) {
        match self {
            Self::TeamSum(f) => f.propagate(vars),
            Self::RankDiff(f) => f.propagate(vars),
            Self::Trunc(f) => f.propagate(vars),
        }
    }

    fn log_evidence(&self, vars: &VarStore) -> f64 {
        match self {
            Self::Trunc(f) => f.log_evidence(vars),
            _ => 0.0,
        }
    }
}

pub mod rank_diff;
pub mod team_sum;
pub mod trunc;

#[cfg(test)]
mod tests {
    use super::*;
    use crate::N_INF;

    #[test]
    fn alloc_assigns_sequential_ids() {
        let mut store = VarStore::new();
        let a = store.alloc(N_INF);
        let b = store.alloc(N_INF);
        let c = store.alloc(N_INF);
        assert_eq!(a, VarId(0));
        assert_eq!(b, VarId(1));
        assert_eq!(c, VarId(2));
        assert_eq!(store.len(), 3);
    }

    #[test]
    fn get_returns_initial_value() {
        let mut store = VarStore::new();
        let g = Gaussian::from_ms(2.5, 1.0);
        let id = store.alloc(g);
        assert_eq!(store.get(id), g);
    }

    #[test]
    fn set_updates_value() {
        let mut store = VarStore::new();
        let id = store.alloc(N_INF);
        let new = Gaussian::from_ms(3.0, 0.5);
        store.set(id, new);
        assert_eq!(store.get(id), new);
    }

    #[test]
    fn clear_resets_length_keeping_capacity() {
        let mut store = VarStore::new();
        store.alloc(N_INF);
        store.alloc(N_INF);
        let cap = store.marginals.capacity();
        store.clear();
        assert_eq!(store.len(), 0);
        assert_eq!(store.marginals.capacity(), cap);
    }
}
95 src/factor/rank_diff.rs Normal file
@@ -0,0 +1,95 @@
use crate::factor::{Factor, VarId, VarStore};

/// Maintains the constraint `diff = team_a - team_b` between three vars.
///
/// On each propagation:
/// - Reads marginals at `team_a` and `team_b` (which already incorporate any
///   incoming messages from neighboring factors).
/// - Computes `new_diff = team_a - team_b` (variance addition; see Gaussian::Sub).
/// - Writes the new marginal to `diff`.
/// - Returns the delta against the previous diff value.
///
/// This factor does NOT store an outgoing message; the diff variable is
/// effectively replaced on each propagation. The TruncFactor on the same diff
/// var holds the EP-divide message that produces the cavity.
#[derive(Debug)]
pub struct RankDiffFactor {
    pub team_a: VarId,
    pub team_b: VarId,
    pub diff: VarId,
}

impl Factor for RankDiffFactor {
    fn propagate(&mut self, vars: &mut VarStore) -> (f64, f64) {
        let a = vars.get(self.team_a);
        let b = vars.get(self.team_b);
        let new_diff = a - b;
        let old = vars.get(self.diff);
        vars.set(self.diff, new_diff);
        old.delta(new_diff)
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::{N_INF, gaussian::Gaussian};

    #[test]
    fn diff_of_two_known_gaussians() {
        let mut vars = VarStore::new();
        let team_a = vars.alloc(Gaussian::from_ms(25.0, 3.0));
        let team_b = vars.alloc(Gaussian::from_ms(20.0, 4.0));
        let diff = vars.alloc(N_INF);

        let mut f = RankDiffFactor {
            team_a,
            team_b,
            diff,
        };
        f.propagate(&mut vars);

        let result = vars.get(diff);
        // mu = 25 - 20 = 5; var = 9 + 16 = 25; sigma = 5
        assert!((result.mu() - 5.0).abs() < 1e-12);
        assert!((result.sigma() - 5.0).abs() < 1e-12);
    }

    #[test]
    fn delta_zero_on_repeat() {
        let mut vars = VarStore::new();
        let team_a = vars.alloc(Gaussian::from_ms(10.0, 2.0));
        let team_b = vars.alloc(Gaussian::from_ms(8.0, 1.0));
        let diff = vars.alloc(N_INF);

        let mut f = RankDiffFactor {
            team_a,
            team_b,
            diff,
        };
        f.propagate(&mut vars);
        let (dmu, dsig) = f.propagate(&mut vars);
        assert!(dmu < 1e-12);
        assert!(dsig < 1e-12);
    }

    #[test]
    fn delta_reflects_team_change() {
        let mut vars = VarStore::new();
        let team_a = vars.alloc(Gaussian::from_ms(10.0, 1.0));
        let team_b = vars.alloc(Gaussian::from_ms(0.0, 1.0));
        let diff = vars.alloc(N_INF);

        let mut f = RankDiffFactor {
            team_a,
            team_b,
            diff,
        };
        f.propagate(&mut vars);

        // change team_a, repropagate; delta should be positive
        vars.set(team_a, Gaussian::from_ms(15.0, 1.0));
        let (dmu, _dsig) = f.propagate(&mut vars);
        assert!(dmu > 4.0, "expected ~5 delta, got {}", dmu);
    }
}
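The `a - b` in `propagate` is moment arithmetic on independent Gaussians: means subtract, variances add. A sketch on bare `(mu, sigma)` pairs, assuming the crate's `Sub` impl follows this standard rule:

```rust
/// Difference of two independent Gaussians, by moments:
/// mu = mu_a - mu_b, var = var_a + var_b.
fn gaussian_sub(a: (f64, f64), b: (f64, f64)) -> (f64, f64) {
    let (mu_a, sig_a) = a;
    let (mu_b, sig_b) = b;
    (mu_a - mu_b, (sig_a * sig_a + sig_b * sig_b).sqrt())
}
```

For the `diff_of_two_known_gaussians` inputs, N(25, 3²) − N(20, 4²) gives mu 5 and sigma 5, matching the test's expected values.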
98 src/factor/team_sum.rs Normal file
@@ -0,0 +1,98 @@
use crate::{
    N00,
    factor::{Factor, VarId, VarStore},
    gaussian::Gaussian,
};

/// Computes the weighted sum of player performances into a team-perf var.
///
/// Inputs are pre-computed player performance Gaussians (i.e., rating priors
/// already with beta² noise added via `Rating::performance()`). The factor
/// runs once per game and writes the weighted sum to the output var.
#[derive(Debug)]
pub struct TeamSumFactor {
    pub inputs: Vec<(Gaussian, f64)>,
    pub out: VarId,
}

impl Factor for TeamSumFactor {
    fn propagate(&mut self, vars: &mut VarStore) -> (f64, f64) {
        let perf = self.inputs.iter().fold(N00, |acc, (g, w)| acc + (*g * *w));
        let old = vars.get(self.out);
        vars.set(self.out, perf);
        old.delta(perf)
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::N_INF;

    #[test]
    fn single_player_unit_weight() {
        let mut vars = VarStore::new();
        let out = vars.alloc(N_INF);
        let g = Gaussian::from_ms(25.0, 5.0);
        let mut f = TeamSumFactor {
            inputs: vec![(g, 1.0)],
            out,
        };

        f.propagate(&mut vars);
        let result = vars.get(out);
        assert!((result.mu() - 25.0).abs() < 1e-12);
        assert!((result.sigma() - 5.0).abs() < 1e-12);
    }

    #[test]
    fn two_players_summed() {
        let mut vars = VarStore::new();
        let out = vars.alloc(N_INF);
        let g1 = Gaussian::from_ms(20.0, 3.0);
        let g2 = Gaussian::from_ms(30.0, 4.0);
        let mut f = TeamSumFactor {
            inputs: vec![(g1, 1.0), (g2, 1.0)],
            out,
        };

        f.propagate(&mut vars);
        let result = vars.get(out);
        // sum: mu = 20 + 30 = 50, var = 9 + 16 = 25, sigma = 5
        assert!((result.mu() - 50.0).abs() < 1e-12);
        assert!((result.sigma() - 5.0).abs() < 1e-12);
    }

    #[test]
    fn weighted_inputs() {
        let mut vars = VarStore::new();
        let out = vars.alloc(N_INF);
        let g = Gaussian::from_ms(10.0, 2.0);
        let mut f = TeamSumFactor {
            inputs: vec![(g, 2.0)],
            out,
        };

        f.propagate(&mut vars);
        let result = vars.get(out);
        // g * 2.0: mu = 10*2 = 20, sigma = 2*2 = 4
        assert!((result.mu() - 20.0).abs() < 1e-12);
        assert!((result.sigma() - 4.0).abs() < 1e-12);
    }

    #[test]
    fn delta_is_zero_on_repeat_propagate() {
        let mut vars = VarStore::new();
        let out = vars.alloc(N_INF);
        let g = Gaussian::from_ms(5.0, 1.0);
        let mut f = TeamSumFactor {
            inputs: vec![(g, 1.0)],
            out,
        };

        f.propagate(&mut vars);
        let (dmu, dsig) = f.propagate(&mut vars);
        assert!(dmu < 1e-12, "expected ~0 delta on repeat, got {}", dmu);
        assert!(dsig < 1e-12);
    }
}
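The weighted-sum fold combines two moment rules: scaling N(mu, sigma²) by `w` gives N(w·mu, (w·sigma)²), and summing independent Gaussians adds means and variances. A standalone sketch on bare `(mu, sigma)` pairs (assuming the crate's `Mul`/`Add` impls follow these rules):

```rust
/// Weighted sum of independent Gaussians by moments:
/// mu = Σ w_i * mu_i, var = Σ (w_i * sigma_i)^2.
fn weighted_sum(inputs: &[((f64, f64), f64)]) -> (f64, f64) {
    let mu: f64 = inputs.iter().map(|((m, _), w)| m * w).sum();
    let var: f64 = inputs.iter().map(|((_, s), w)| (s * w) * (s * w)).sum();
    (mu, var.sqrt())
}
```

This reproduces the test expectations above: two unit-weight players N(20, 3²) + N(30, 4²) sum to N(50, 5²), and one player N(10, 2²) with weight 2.0 becomes N(20, 4²).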
130 src/factor/trunc.rs Normal file
@@ -0,0 +1,130 @@
use crate::{
    N_INF, approx, cdf,
    factor::{Factor, VarId, VarStore},
    gaussian::Gaussian,
};

/// EP truncation factor on a diff variable.
///
/// Implements the rectified-Gaussian approximation that turns a diff
/// distribution into a "this team rank-beats that team" or "tied" likelihood.
/// Stores its outgoing message to the diff variable so the cavity computation
/// produces the correct EP message on each propagation.
#[derive(Debug)]
pub struct TruncFactor {
    pub diff: VarId,
    pub margin: f64,
    pub tie: bool,
    /// Outgoing message to the diff variable (initial: N_INF, the EP identity).
    pub(crate) msg: Gaussian,
    /// Cached evidence (linear, not log) computed from the cavity on first propagation.
    pub(crate) evidence_cached: Option<f64>,
}

impl TruncFactor {
    pub fn new(diff: VarId, margin: f64, tie: bool) -> Self {
        Self {
            diff,
            margin,
            tie,
            msg: N_INF,
            evidence_cached: None,
        }
    }
}

impl Factor for TruncFactor {
    fn propagate(&mut self, vars: &mut VarStore) -> (f64, f64) {
        let marginal = vars.get(self.diff);
        // Cavity: marginal divided by our outgoing message.
        let cavity = marginal / self.msg;

        // First-time-only: cache the evidence contribution from the cavity.
        if self.evidence_cached.is_none() {
            self.evidence_cached = Some(cavity_evidence(cavity, self.margin, self.tie));
        }

        // Apply the truncation approximation to the cavity.
        let trunc = approx(cavity, self.margin, self.tie);

        // New outgoing message such that cavity * new_msg = trunc.
        let new_msg = trunc / cavity;
        let old_msg = self.msg;
        self.msg = new_msg;

        // Update the marginal: marginal_new = cavity * new_msg = trunc.
        vars.set(self.diff, trunc);

        old_msg.delta(new_msg)
    }

    fn log_evidence(&self, _vars: &VarStore) -> f64 {
        self.evidence_cached.unwrap_or(1.0).ln()
    }
}

/// P(diff > margin) for non-tie, P(|diff| < margin) for tie.
fn cavity_evidence(diff: Gaussian, margin: f64, tie: bool) -> f64 {
    if tie {
        cdf(margin, diff.mu(), diff.sigma()) - cdf(-margin, diff.mu(), diff.sigma())
    } else {
        1.0 - cdf(margin, diff.mu(), diff.sigma())
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::factor::VarStore;

    #[test]
    fn idempotent_after_convergence() {
        // After enough iterations, propagate should return ~0 delta.
        let mut vars = VarStore::new();
        let diff = vars.alloc(Gaussian::from_ms(2.0, 3.0));

        let mut f = TruncFactor::new(diff, 0.0, false);

        // Propagate many times; delta should drop toward 0.
        let mut last = (f64::INFINITY, f64::INFINITY);
        for _ in 0..20 {
            last = f.propagate(&mut vars);
        }
        assert!(last.0 < 1e-10, "expected converged delta, got {}", last.0);
        assert!(last.1 < 1e-10);
    }

    #[test]
    fn evidence_cached_on_first_propagate() {
        let mut vars = VarStore::new();
        let diff = vars.alloc(Gaussian::from_ms(2.0, 3.0));

        let mut f = TruncFactor::new(diff, 0.0, false);
        assert!(f.evidence_cached.is_none());

        f.propagate(&mut vars);
        assert!(f.evidence_cached.is_some());
        let first = f.evidence_cached.unwrap();

        // Evidence should be P(diff > 0) for diff ~ N(2, 9) ≈ 0.748
        assert!(first > 0.7);
        assert!(first < 0.8);

        // Subsequent propagations don't change it.
        f.propagate(&mut vars);
        assert_eq!(f.evidence_cached.unwrap(), first);
    }

    #[test]
    fn tie_evidence_uses_two_sided() {
        let mut vars = VarStore::new();
        let diff = vars.alloc(Gaussian::from_ms(0.0, 2.0));

        let mut f = TruncFactor::new(diff, 1.0, true);
        f.propagate(&mut vars);

        // For diff ~ N(0, 4), tie=true with margin=1: P(-1 < diff < 1) ≈ 0.383
        let ev = f.evidence_cached.unwrap();
        assert!(ev > 0.35 && ev < 0.42);
    }
}
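`cavity_evidence` reduces entirely to the normal CDF Φ. A self-contained sketch of the same quantities, using the Abramowitz–Stegun erf approximation since std Rust has no `erf` (the crate's own `cdf` may be implemented differently; treat the helper names here as illustrative, and the numeric checks as approximate):

```rust
/// erf via Abramowitz–Stegun formula 7.1.26 (|error| < 1.5e-7); erf is odd.
fn erf(z: f64) -> f64 {
    let sign = if z < 0.0 { -1.0 } else { 1.0 };
    let z = z.abs();
    let t = 1.0 / (1.0 + 0.3275911 * z);
    let poly = t
        * (0.254829592
            + t * (-0.284496736 + t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))));
    sign * (1.0 - poly * (-z * z).exp())
}

/// CDF of N(mu, sigma^2) at x.
fn normal_cdf(x: f64, mu: f64, sigma: f64) -> f64 {
    0.5 * (1.0 + erf((x - mu) / (sigma * std::f64::consts::SQRT_2)))
}

/// Mirrors `cavity_evidence`: P(diff > margin), or P(|diff| < margin) for ties.
fn evidence(mu: f64, sigma: f64, margin: f64, tie: bool) -> f64 {
    if tie {
        normal_cdf(margin, mu, sigma) - normal_cdf(-margin, mu, sigma)
    } else {
        1.0 - normal_cdf(margin, mu, sigma)
    }
}
```

This reproduces the numbers quoted in the tests above: for diff ~ N(2, 3²), P(diff > 0) ≈ 0.748; for diff ~ N(0, 2²) with margin 1, P(-1 < diff < 1) ≈ 0.383.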
13 src/factors.rs Normal file
@@ -0,0 +1,13 @@
//! Factor-graph public API.
//!
//! Power users can construct custom factor graphs via `Game::custom` (T2
//! minimal; full ergonomics in T4) and drive them with custom `Schedule`
//! implementations.

pub use crate::{
    factor::{
        BuiltinFactor, Factor, VarId, VarStore, rank_diff::RankDiffFactor, team_sum::TeamSumFactor,
        trunc::TruncFactor,
    },
    schedule::{EpsilonOrMax, Schedule, ScheduleReport},
};
598 src/game.rs
@@ -1,16 +1,85 @@
|
||||
use std::cmp::Ordering;
|
||||
|
||||
use crate::{
|
||||
N_INF, N00, approx, compute_margin,
|
||||
N_INF, N00,
|
||||
arena::ScratchArena,
|
||||
compute_margin,
|
||||
drift::Drift,
|
||||
evidence,
|
||||
factor::{Factor, trunc::TruncFactor},
|
||||
gaussian::Gaussian,
|
||||
message::{DiffMessage, TeamMessage},
|
||||
player::Player,
|
||||
sort_perm, tuple_gt, tuple_max,
|
||||
rating::Rating,
|
||||
time::Time,
|
||||
tuple_gt, tuple_max,
|
||||
};
|
||||
|
||||
#[derive(Clone, Copy, Debug)]
|
||||
pub struct GameOptions {
|
||||
pub p_draw: f64,
|
||||
pub convergence: crate::ConvergenceOptions,
|
||||
}
|
||||
|
||||
impl Default for GameOptions {
|
||||
fn default() -> Self {
|
||||
Self {
|
||||
p_draw: crate::P_DRAW,
|
||||
convergence: crate::ConvergenceOptions::default(),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Owned variant of `Game` returned by public constructors.
|
||||
///
|
||||
/// Unlike `Game<'a, T, D>` (which borrows its result/weights slices from
|
||||
/// History's internal state), `OwnedGame<T, D>` owns its inputs so it can
|
||||
/// be returned freely from public constructors.
|
||||
#[derive(Debug)]
|
||||
pub struct Game<'a, D: Drift> {
|
||||
teams: Vec<Vec<Player<D>>>,
|
||||
#[allow(dead_code)]
|
||||
pub struct OwnedGame<T: Time, D: Drift<T>> {
|
||||
teams: Vec<Vec<Rating<T, D>>>,
|
||||
result: Vec<f64>,
|
||||
weights: Vec<Vec<f64>>,
|
||||
p_draw: f64,
|
||||
pub(crate) likelihoods: Vec<Vec<Gaussian>>,
|
||||
pub(crate) evidence: f64,
|
||||
}
|
||||
|
||||
impl<T: Time, D: Drift<T>> OwnedGame<T, D> {
|
||||
pub(crate) fn new(
|
||||
teams: Vec<Vec<Rating<T, D>>>,
|
||||
result: Vec<f64>,
|
||||
weights: Vec<Vec<f64>>,
|
||||
p_draw: f64,
|
||||
) -> Self {
|
||||
let mut arena = ScratchArena::new();
|
||||
let g = Game::ranked_with_arena(teams.clone(), &result, &weights, p_draw, &mut arena);
|
||||
let likelihoods = g.likelihoods;
|
||||
let evidence = g.evidence;
|
||||
Self {
|
||||
teams,
|
||||
result,
|
||||
weights,
|
||||
p_draw,
|
||||
likelihoods,
|
||||
evidence,
|
||||
}
|
||||
}
|
||||
|
||||
pub fn posteriors(&self) -> Vec<Vec<Gaussian>> {
|
||||
self.likelihoods
|
||||
.iter()
|
||||
.zip(self.teams.iter())
|
||||
.map(|(l, t)| l.iter().zip(t.iter()).map(|(&l, r)| l * r.prior).collect())
|
||||
.collect()
|
||||
}
|
||||
|
||||
pub fn log_evidence(&self) -> f64 {
|
||||
self.evidence.ln()
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug)]
|
||||
pub struct Game<'a, T: Time = i64, D: Drift<T> = crate::drift::ConstantDrift> {
|
||||
teams: Vec<Vec<Rating<T, D>>>,
|
||||
result: &'a [f64],
|
||||
weights: &'a [Vec<f64>],
|
||||
p_draw: f64,
|
||||
@@ -18,18 +87,18 @@ pub struct Game<'a, D: Drift> {
|
||||
pub(crate) evidence: f64,
|
||||
}
|
||||
|
||||
impl<'a, D: Drift> Game<'a, D> {
    pub fn new(
        teams: Vec<Vec<Player<D>>>,
impl<'a, T: Time, D: Drift<T>> Game<'a, T, D> {
    pub(crate) fn ranked_with_arena(
        teams: Vec<Vec<Rating<T, D>>>,
        result: &'a [f64],
        weights: &'a [Vec<f64>],
        p_draw: f64,
        arena: &mut ScratchArena,
    ) -> Self {
        debug_assert!(
            (result.len() == teams.len()),
            result.len() == teams.len(),
            "result must have the same length as teams"
        );

        debug_assert!(
            weights
                .iter()
@@ -37,19 +106,17 @@ impl<'a, D: Drift> Game<'a, D> {
                .all(|(w, t)| w.len() == t.len()),
            "weights must have the same dimensions as teams"
        );

        debug_assert!(
            (0.0..1.0).contains(&p_draw),
            "draw probability.must be >= 0.0 and < 1.0"
            "draw probability must be >= 0.0 and < 1.0"
        );

        debug_assert!(
            p_draw > 0.0 || {
                let mut r = result.to_vec();
                r.sort_unstable_by(|a, b| a.partial_cmp(b).unwrap());
                r.windows(2).all(|w| w[0] != w[1])
            },
            "draw must be > 0.0 if there is teams with draw"
            "draw must be > 0.0 if there are teams with draw"
        );

        let mut this = Self {
@@ -61,124 +128,144 @@ impl<'a, D: Drift> Game<'a, D> {
            evidence: 0.0,
        };

        this.likelihoods();

        this.likelihoods(arena);
        this
    }

    fn likelihoods(&mut self) {
        let o = sort_perm(self.result, true);
    fn likelihoods(&mut self, arena: &mut ScratchArena) {
        arena.reset();

        let mut team = o
            .iter()
            .map(|&e| {
                let performance = self.teams[e]
                    .iter()
                    .zip(self.weights[e].iter())
                    .fold(N00, |p, (player, &weight)| {
                        p + (player.performance() * weight)
                    });
        let n_teams = self.teams.len();

                TeamMessage {
                    prior: performance,
                    ..Default::default()
                }
            })
            .collect::<Vec<_>>();
        // Sort teams by result descending; reuse arena.sort_buf to avoid allocation.
        arena.sort_buf.extend(0..n_teams);
        arena.sort_buf.sort_by(|&i, &j| {
            self.result[j]
                .partial_cmp(&self.result[i])
                .unwrap_or(Ordering::Equal)
        });

        let mut diff = team
            .windows(2)
            .map(|w| DiffMessage {
                prior: w[0].prior - w[1].prior,
                likelihood: N_INF,
            })
            .collect::<Vec<_>>();
        // Team performance priors written into arena buffer (capacity reused across games).
        arena.team_prior.extend(arena.sort_buf.iter().map(|&t| {
            self.teams[t]
                .iter()
                .zip(self.weights[t].iter())
                .fold(N00, |p, (player, &w)| p + (player.performance() * w))
        }));

        let tie = o
            .windows(2)
            .map(|e| self.result[e[0]] == self.result[e[1]])
            .collect::<Vec<_>>();

        let margin = if self.p_draw == 0.0 {
            vec![0.0; o.len() - 1]
        } else {
            o.windows(2)
                .map(|w| {
                    let a: f64 = self.teams[w[0]].iter().map(|a| a.beta.powi(2)).sum();
                    let b: f64 = self.teams[w[1]].iter().map(|a| a.beta.powi(2)).sum();
        let n_diffs = n_teams.saturating_sub(1);

        // One TruncFactor per adjacent sorted-team pair; each owns a diff VarId.
        // trunc stays local (fresh state per game; Vec capacity is typically small).
        let mut trunc: Vec<TruncFactor> = (0..n_diffs)
            .map(|i| {
                let tie = self.result[arena.sort_buf[i]] == self.result[arena.sort_buf[i + 1]];
                let margin = if self.p_draw == 0.0 {
                    0.0
                } else {
                    let a: f64 = self.teams[arena.sort_buf[i]]
                        .iter()
                        .map(|p| p.beta.powi(2))
                        .sum();
                    let b: f64 = self.teams[arena.sort_buf[i + 1]]
                        .iter()
                        .map(|p| p.beta.powi(2))
                        .sum();
                    compute_margin(self.p_draw, (a + b).sqrt())
                })
                .collect::<Vec<_>>()
        };
                };
                let vid = arena.vars.alloc(N_INF);
                TruncFactor::new(vid, margin, tie)
            })
            .collect();

        self.evidence = 1.0;
        // Per-team messages from neighbouring RankDiff factors (replaces TeamMessage).
        arena.lhood_lose.resize(n_teams, N_INF);
        arena.lhood_win.resize(n_teams, N_INF);

        let mut step = (f64::INFINITY, f64::INFINITY);
        let mut iter = 0;

        while tuple_gt(step, 1e-6) && iter < 10 {
            step = (0.0, 0.0);
            step = (0.0_f64, 0.0_f64);

            for e in 0..diff.len() - 1 {
                diff[e].prior = team[e].posterior_win() - team[e + 1].posterior_lose();
            // Forward sweep: diffs 0 .. n_diffs-2 (all but the last).
            for (e, tf) in trunc[..n_diffs.saturating_sub(1)].iter_mut().enumerate() {
                let pw = arena.team_prior[e] * arena.lhood_lose[e];
                let pl = arena.team_prior[e + 1] * arena.lhood_win[e + 1];
                let raw = pw - pl;
                arena.vars.set(tf.diff, raw * tf.msg);
                let d = tf.propagate(&mut arena.vars);
                step = tuple_max(step, d);

                if iter == 0 {
                    self.evidence *= evidence(&diff, &margin, &tie, e);
                }

                diff[e].likelihood = approx(diff[e].prior, margin[e], tie[e]) / diff[e].prior;
                let likelihood_lose = team[e].posterior_win() - diff[e].likelihood;
                step = tuple_max(step, team[e + 1].likelihood_lose.delta(likelihood_lose));
                team[e + 1].likelihood_lose = likelihood_lose;
                let new_ll = pw - tf.msg;
                step = tuple_max(step, arena.lhood_lose[e + 1].delta(new_ll));
                arena.lhood_lose[e + 1] = new_ll;
            }

            for e in (1..diff.len()).rev() {
                diff[e].prior = team[e].posterior_win() - team[e + 1].posterior_lose();
            // Backward sweep: diffs n_diffs-1 .. 1 (reverse, all but the first).
            for (rev_i, tf) in trunc[1..].iter_mut().rev().enumerate() {
                let e = n_diffs - 1 - rev_i;
                let pw = arena.team_prior[e] * arena.lhood_lose[e];
                let pl = arena.team_prior[e + 1] * arena.lhood_win[e + 1];
                let raw = pw - pl;
                arena.vars.set(tf.diff, raw * tf.msg);
                let d = tf.propagate(&mut arena.vars);
                step = tuple_max(step, d);

                if iter == 0 && e == diff.len() - 1 {
                    self.evidence *= evidence(&diff, &margin, &tie, e);
                }

                diff[e].likelihood = approx(diff[e].prior, margin[e], tie[e]) / diff[e].prior;
                let likelihood_win = team[e + 1].posterior_lose() + diff[e].likelihood;
                step = tuple_max(step, team[e].likelihood_win.delta(likelihood_win));
                team[e].likelihood_win = likelihood_win;
                let new_lw = pl + tf.msg;
                step = tuple_max(step, arena.lhood_win[e].delta(new_lw));
                arena.lhood_win[e] = new_lw;
            }

            iter += 1;
        }

        if diff.len() == 1 {
            self.evidence = evidence(&diff, &margin, &tie, 0);

            diff[0].prior = team[0].posterior_win() - team[1].posterior_lose();
            diff[0].likelihood = approx(diff[0].prior, margin[0], tie[0]) / diff[0].prior;
        // Special case: exactly 1 diff (2-team game); loop body was empty.
        if n_diffs == 1 {
            let raw = (arena.team_prior[0] * arena.lhood_lose[0])
                - (arena.team_prior[1] * arena.lhood_win[1]);
            arena.vars.set(trunc[0].diff, raw * trunc[0].msg);
            trunc[0].propagate(&mut arena.vars);
        }

        let t_end = team.len() - 1;
        let d_end = diff.len() - 1;
        // Boundary updates: close the chain at both ends.
        if n_diffs > 0 {
            let pl1 = arena.team_prior[1] * arena.lhood_win[1];
            arena.lhood_win[0] = pl1 + trunc[0].msg;
            let pw_last = arena.team_prior[n_teams - 2] * arena.lhood_lose[n_teams - 2];
            arena.lhood_lose[n_teams - 1] = pw_last - trunc[n_diffs - 1].msg;
        }

        team[0].likelihood_win = team[1].posterior_lose() + diff[0].likelihood;
        team[t_end].likelihood_lose = team[t_end - 1].posterior_win() - diff[d_end].likelihood;
        // Evidence = product of per-diff evidences (each cached on first propagation).
        self.evidence = trunc
            .iter()
            .map(|t| t.evidence_cached.unwrap_or(1.0))
            .product();

        let m_t_ft = o.into_iter().map(|e| team[e].likelihood());
        // Inverse permutation: inv_buf[orig_i] = sorted_i.
        arena.inv_buf.resize(n_teams, 0);
        for (si, &orig_i) in arena.sort_buf.iter().enumerate() {
            arena.inv_buf[orig_i] = si;
        }

        self.likelihoods = self
            .teams
            .iter()
            .zip(self.weights.iter())
            .zip(m_t_ft)
            .map(|((p, w), m)| {
                let performance = p.iter().zip(w.iter()).fold(N00, |p, (player, &weight)| {
                    p + (player.performance() * weight)
                });

                p.iter()
                    .zip(w.iter())
                    .map(|(p, &w)| {
                        ((m - performance.exclude(p.performance() * w)) * (1.0 / w))
                            .forget(p.beta.powi(2))
            .enumerate()
            .map(|(orig_i, (players, weights))| {
                let si = arena.inv_buf[orig_i];
                let m = arena.lhood_win[si] * arena.lhood_lose[si];
                let performance = players
                    .iter()
                    .zip(weights.iter())
                    .fold(N00, |p, (player, &w)| p + (player.performance() * w));
                players
                    .iter()
                    .zip(weights.iter())
                    .map(|(player, &w)| {
                        ((m - performance.exclude(player.performance() * w)) * (1.0 / w))
                            .forget(player.beta.powi(2))
                    })
                    .collect::<Vec<_>>()
            })
@@ -197,31 +284,100 @@ impl<'a, D: Drift> Game<'a, D> {
            })
            .collect::<Vec<_>>()
    }

    pub fn log_evidence(&self) -> f64 {
        self.evidence.ln()
    }
}

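The `sort_buf`/`inv_buf` pair in `likelihoods` above is a standard sort-permutation trick: sort team indices by result descending, then invert the permutation so per-team messages computed in sorted order can be read back in original team order. A standalone sketch (the comparator and inversion loop mirror the diff; the surrounding harness is illustrative only):

```rust
// Standalone sketch of the sort-permutation / inverse-permutation pattern
// used in `likelihoods` above (descending by result score).
use std::cmp::Ordering;

fn main() {
    let result = [0.0, 2.0, 1.0];

    // sort_buf[si] = original index of the team in sorted position si.
    let mut sort_buf: Vec<usize> = (0..result.len()).collect();
    sort_buf.sort_by(|&i, &j| {
        result[j]
            .partial_cmp(&result[i])
            .unwrap_or(Ordering::Equal)
    });
    assert_eq!(sort_buf, vec![1, 2, 0]); // scores 2.0, 1.0, 0.0

    // inv_buf[orig_i] = sorted position of original team orig_i.
    let mut inv_buf = vec![0; result.len()];
    for (si, &orig_i) in sort_buf.iter().enumerate() {
        inv_buf[orig_i] = si;
    }
    assert_eq!(inv_buf, vec![2, 0, 1]);
    println!("ok");
}
```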
impl<T: Time, D: Drift<T>> Game<'_, T, D> {
    pub fn ranked(
        teams: &[&[Rating<T, D>]],
        outcome: crate::Outcome,
        options: &GameOptions,
    ) -> Result<OwnedGame<T, D>, crate::InferenceError> {
        if !(0.0..1.0).contains(&options.p_draw) {
            return Err(crate::InferenceError::InvalidProbability {
                value: options.p_draw,
            });
        }
        if outcome.team_count() != teams.len() {
            return Err(crate::InferenceError::MismatchedShape {
                kind: "outcome ranks vs teams",
                expected: teams.len(),
                got: outcome.team_count(),
            });
        }

        let ranks = outcome.as_ranks();
        let max_rank = ranks.iter().copied().max().unwrap_or(0) as f64;
        let result: Vec<f64> = ranks.iter().map(|&r| max_rank - r as f64).collect();
        let teams_owned: Vec<Vec<Rating<T, D>>> = teams.iter().map(|t| t.to_vec()).collect();
        let weights: Vec<Vec<f64>> = teams.iter().map(|t| vec![1.0; t.len()]).collect();

        Ok(OwnedGame::new(teams_owned, result, weights, options.p_draw))
    }

    pub fn one_v_one(
        a: &Rating<T, D>,
        b: &Rating<T, D>,
        outcome: crate::Outcome,
    ) -> Result<(Gaussian, Gaussian), crate::InferenceError> {
        let game = Self::ranked(&[&[*a], &[*b]], outcome, &GameOptions::default())?;
        let post = game.posteriors();
        Ok((post[0][0], post[1][0]))
    }

    pub fn free_for_all(
        players: &[&Rating<T, D>],
        outcome: crate::Outcome,
        options: &GameOptions,
    ) -> Result<OwnedGame<T, D>, crate::InferenceError> {
        let teams: Vec<Vec<Rating<T, D>>> = players.iter().map(|p| vec![**p]).collect();
        let team_refs: Vec<&[Rating<T, D>]> = teams.iter().map(|t| t.as_slice()).collect();
        Self::ranked(&team_refs, outcome, options)
    }

    #[doc(hidden)]
    pub fn custom<S: crate::factors::Schedule>(
        factors: &mut [crate::factors::BuiltinFactor],
        vars: &mut crate::factors::VarStore,
        schedule: &S,
    ) -> crate::factors::ScheduleReport {
        schedule.run(factors, vars)
    }
}

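`ranked` above converts low-is-better outcome ranks into the high-is-better `result` scores the solver consumes by reflecting each rank around the maximum. That mapping is small enough to check in isolation (standalone sketch; the `Outcome` type itself is not reproduced here):

```rust
// Standalone sketch of the rank-to-score conversion used by `ranked` above:
// ranks are low-is-better (0 = first place), while the solver's `result`
// scores are high-is-better, so each rank is reflected around the maximum.
fn ranks_to_result(ranks: &[u32]) -> Vec<f64> {
    let max_rank = ranks.iter().copied().max().unwrap_or(0) as f64;
    ranks.iter().map(|&r| max_rank - r as f64).collect()
}

fn main() {
    // Teams finishing 1st, 3rd, 2nd (ranks 0, 2, 1) become scores 2, 0, 1.
    assert_eq!(ranks_to_result(&[0, 2, 1]), vec![2.0, 0.0, 1.0]);
    // Tied ranks map to tied scores, which is what the draw handling keys on.
    assert_eq!(ranks_to_result(&[0, 0]), vec![0.0, 0.0]);
    println!("ok");
}
```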
#[cfg(test)]
mod tests {
    use ::approx::assert_ulps_eq;

    use crate::{ConstantDrift, GAMMA, Gaussian, N_INF, Player};

    use super::*;
    use crate::{ConstantDrift, GAMMA, Gaussian, N_INF, Rating, arena::ScratchArena};

    type R = Rating<i64, ConstantDrift>;

    #[test]
    fn test_1vs1() {
        let t_a = Player::new(
        let t_a = R::new(
            Gaussian::from_ms(25.0, 25.0 / 3.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );
        let t_b = Player::new(
        let t_b = R::new(
            Gaussian::from_ms(25.0, 25.0 / 3.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );

        let w = [vec![1.0], vec![1.0]];
        let g = Game::new(vec![vec![t_a], vec![t_b]], &[0.0, 1.0], &w, 0.0);
        let g = Game::ranked_with_arena(
            vec![vec![t_a], vec![t_b]],
            &[0.0, 1.0],
            &w,
            0.0,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();

        let a = p[0][0];
@@ -230,19 +386,25 @@ mod tests {
        assert_ulps_eq!(a, Gaussian::from_ms(20.794779, 7.194481), epsilon = 1e-6);
        assert_ulps_eq!(b, Gaussian::from_ms(29.205220, 7.194481), epsilon = 1e-6);

        let t_a = Player::new(
        let t_a = R::new(
            Gaussian::from_ms(29.0, 1.0),
            25.0 / 6.0,
            ConstantDrift(GAMMA),
        );
        let t_b = Player::new(
        let t_b = R::new(
            Gaussian::from_ms(25.0, 25.0 / 3.0),
            25.0 / 6.0,
            ConstantDrift(GAMMA),
        );

        let w = [vec![1.0], vec![1.0]];
        let g = Game::new(vec![vec![t_a], vec![t_b]], &[0.0, 1.0], &w, 0.0);
        let g = Game::ranked_with_arena(
            vec![vec![t_a], vec![t_b]],
            &[0.0, 1.0],
            &w,
            0.0,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();

        let a = p[0][0];
@@ -251,11 +413,17 @@ mod tests {
        assert_ulps_eq!(a, Gaussian::from_ms(28.896475, 0.996604), epsilon = 1e-6);
        assert_ulps_eq!(b, Gaussian::from_ms(32.189211, 6.062063), epsilon = 1e-6);

        let t_a = Player::new(Gaussian::from_ms(1.139, 0.531), 1.0, ConstantDrift(0.2125));
        let t_b = Player::new(Gaussian::from_ms(15.568, 0.51), 1.0, ConstantDrift(0.2125));
        let t_a = R::new(Gaussian::from_ms(1.139, 0.531), 1.0, ConstantDrift(0.2125));
        let t_b = R::new(Gaussian::from_ms(15.568, 0.51), 1.0, ConstantDrift(0.2125));

        let w = [vec![1.0], vec![1.0]];
        let g = Game::new(vec![vec![t_a], vec![t_b]], &[0.0, 1.0], &w, 0.0);
        let g = Game::ranked_with_arena(
            vec![vec![t_a], vec![t_b]],
            &[0.0, 1.0],
            &w,
            0.0,
            &mut ScratchArena::new(),
        );

        assert_eq!(g.likelihoods[0][0], N_INF);
        assert_eq!(g.likelihoods[1][0], N_INF);
@@ -264,17 +432,17 @@ mod tests {
    #[test]
    fn test_1vs1vs1() {
        let teams = vec![
            vec![Player::new(
            vec![R::new(
                Gaussian::from_ms(25.0, 25.0 / 3.0),
                25.0 / 6.0,
                ConstantDrift(25.0 / 300.0),
            )],
            vec![Player::new(
            vec![R::new(
                Gaussian::from_ms(25.0, 25.0 / 3.0),
                25.0 / 6.0,
                ConstantDrift(25.0 / 300.0),
            )],
            vec![Player::new(
            vec![R::new(
                Gaussian::from_ms(25.0, 25.0 / 3.0),
                25.0 / 6.0,
                ConstantDrift(25.0 / 300.0),
@@ -282,7 +450,13 @@ mod tests {
        ];

        let w = [vec![1.0], vec![1.0], vec![1.0]];
        let g = Game::new(teams.clone(), &[1.0, 2.0, 0.0], &w, 0.0);
        let g = Game::ranked_with_arena(
            teams.clone(),
            &[1.0, 2.0, 0.0],
            &w,
            0.0,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();

        let a = p[0][0];
@@ -292,7 +466,13 @@ mod tests {
        assert_ulps_eq!(b, Gaussian::from_ms(31.311358, 6.698818), epsilon = 1e-6);

        let w = [vec![1.0], vec![1.0], vec![1.0]];
        let g = Game::new(teams.clone(), &[2.0, 1.0, 0.0], &w, 0.0);
        let g = Game::ranked_with_arena(
            teams.clone(),
            &[2.0, 1.0, 0.0],
            &w,
            0.0,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();

        let a = p[0][0];
@@ -302,33 +482,40 @@ mod tests {
        assert_ulps_eq!(b, Gaussian::from_ms(25.000000, 6.238469), epsilon = 1e-6);

        let w = [vec![1.0], vec![1.0], vec![1.0]];
        let g = Game::new(teams, &[1.0, 2.0, 0.0], &w, 0.5);
        let g = Game::ranked_with_arena(teams, &[1.0, 2.0, 0.0], &w, 0.5, &mut ScratchArena::new());
        let p = g.posteriors();

        let a = p[0][0];
        let b = p[1][0];
        let c = p[2][0];

        assert_ulps_eq!(a, Gaussian::from_ms(24.999999, 6.092561), epsilon = 1e-6);
        // T1 ULP shift: mu rounds to 25.0 (was 24.999999) under natural-parameter storage.
        assert_ulps_eq!(a, Gaussian::from_ms(25.0, 6.092561), epsilon = 1e-6);
        assert_ulps_eq!(b, Gaussian::from_ms(33.379314, 6.483575), epsilon = 1e-6);
        assert_ulps_eq!(c, Gaussian::from_ms(16.620685, 6.483575), epsilon = 1e-6);
    }

    #[test]
    fn test_1vs1_draw() {
        let t_a = Player::new(
        let t_a = R::new(
            Gaussian::from_ms(25.0, 25.0 / 3.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );
        let t_b = Player::new(
        let t_b = R::new(
            Gaussian::from_ms(25.0, 25.0 / 3.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );

        let w = [vec![1.0], vec![1.0]];
        let g = Game::new(vec![vec![t_a], vec![t_b]], &[0.0, 0.0], &w, 0.25);
        let g = Game::ranked_with_arena(
            vec![vec![t_a], vec![t_b]],
            &[0.0, 0.0],
            &w,
            0.25,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();

        let a = p[0][0];
@@ -337,19 +524,25 @@ mod tests {
        assert_ulps_eq!(a, Gaussian::from_ms(24.999999, 6.469480), epsilon = 1e-6);
        assert_ulps_eq!(b, Gaussian::from_ms(24.999999, 6.469480), epsilon = 1e-6);

        let t_a = Player::new(
        let t_a = R::new(
            Gaussian::from_ms(25.0, 3.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );
        let t_b = Player::new(
        let t_b = R::new(
            Gaussian::from_ms(29.0, 2.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );

        let w = [vec![1.0], vec![1.0]];
        let g = Game::new(vec![vec![t_a], vec![t_b]], &[0.0, 0.0], &w, 0.25);
        let g = Game::ranked_with_arena(
            vec![vec![t_a], vec![t_b]],
            &[0.0, 0.0],
            &w,
            0.25,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();

        let a = p[0][0];
@@ -361,28 +554,29 @@ mod tests {

    #[test]
    fn test_1vs1vs1_draw() {
        let t_a = Player::new(
        let t_a = R::new(
            Gaussian::from_ms(25.0, 25.0 / 3.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );
        let t_b = Player::new(
        let t_b = R::new(
            Gaussian::from_ms(25.0, 25.0 / 3.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );
        let t_c = Player::new(
        let t_c = R::new(
            Gaussian::from_ms(25.0, 25.0 / 3.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );

        let w = [vec![1.0], vec![1.0], vec![1.0]];
        let g = Game::new(
        let g = Game::ranked_with_arena(
            vec![vec![t_a], vec![t_b], vec![t_c]],
            &[0.0, 0.0, 0.0],
            &w,
            0.25,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();

@@ -390,32 +584,35 @@ mod tests {
        let b = p[1][0];
        let c = p[2][0];

        assert_ulps_eq!(a, Gaussian::from_ms(24.999999, 5.729068), epsilon = 1e-6);
        assert_ulps_eq!(b, Gaussian::from_ms(25.000000, 5.707423), epsilon = 1e-6);
        assert_ulps_eq!(c, Gaussian::from_ms(24.999999, 5.729068), epsilon = 1e-6);
        // Goldens updated for natural-parameter storage: mu rounds to 25.0 (was 24.999999),
        // sigma shifts by ~3e-7 ULPs (within 1e-6 of original). Both bounded differences.
        assert_ulps_eq!(a, Gaussian::from_ms(25.0, 5.729069), epsilon = 1e-6);
        assert_ulps_eq!(b, Gaussian::from_ms(25.0, 5.707424), epsilon = 1e-6);
        assert_ulps_eq!(c, Gaussian::from_ms(25.0, 5.729069), epsilon = 1e-6);

        let t_a = Player::new(
        let t_a = R::new(
            Gaussian::from_ms(25.0, 3.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );
        let t_b = Player::new(
        let t_b = R::new(
            Gaussian::from_ms(25.0, 3.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );
        let t_c = Player::new(
        let t_c = R::new(
            Gaussian::from_ms(29.0, 2.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        );

        let w = [vec![1.0], vec![1.0], vec![1.0]];
        let g = Game::new(
        let g = Game::ranked_with_arena(
            vec![vec![t_a], vec![t_b], vec![t_c]],
            &[0.0, 0.0, 0.0],
            &w,
            0.25,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();

@@ -431,29 +628,29 @@ mod tests {
    #[test]
    fn test_2vs1vs2_mixed() {
        let t_a = vec![
            Player::new(
            R::new(
                Gaussian::from_ms(12.0, 3.0),
                25.0 / 6.0,
                ConstantDrift(25.0 / 300.0),
            ),
            Player::new(
            R::new(
                Gaussian::from_ms(18.0, 3.0),
                25.0 / 6.0,
                ConstantDrift(25.0 / 300.0),
            ),
        ];
        let t_b = vec![Player::new(
        let t_b = vec![R::new(
            Gaussian::from_ms(30.0, 3.0),
            25.0 / 6.0,
            ConstantDrift(25.0 / 300.0),
        )];
        let t_c = vec![
            Player::new(
            R::new(
                Gaussian::from_ms(14.0, 3.0),
                25.0 / 6.0,
                ConstantDrift(25.0 / 300.0),
            ),
            Player::new(
            R::new(
                Gaussian::from_ms(16., 3.0),
                25.0 / 6.0,
                ConstantDrift(25.0 / 300.0),
@@ -461,7 +658,13 @@ mod tests {
        ];

        let w = [vec![1.0, 1.0], vec![1.0], vec![1.0, 1.0]];
        let g = Game::new(vec![t_a, t_b, t_c], &[1.0, 0.0, 0.0], &w, 0.25);
        let g = Game::ranked_with_arena(
            vec![t_a, t_b, t_c],
            &[1.0, 0.0, 0.0],
            &w,
            0.25,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();

        assert_ulps_eq!(p[0][0], Gaussian::from_ms(13.051, 2.864), epsilon = 1e-3);
@@ -476,19 +679,25 @@ mod tests {
        let w_a = vec![1.0];
        let w_b = vec![2.0];

        let t_a = vec![Player::new(
        let t_a = vec![R::new(
            Gaussian::from_ms(25.0, 25.0 / 3.0),
            25.0 / 6.0,
            ConstantDrift(0.0),
        )];
        let t_b = vec![Player::new(
        let t_b = vec![R::new(
            Gaussian::from_ms(25.0, 25.0 / 3.0),
            25.0 / 6.0,
            ConstantDrift(0.0),
        )];

        let w = [w_a, w_b];
        let g = Game::new(vec![t_a.clone(), t_b.clone()], &[1.0, 0.0], &w, 0.0);
        let g = Game::ranked_with_arena(
            vec![t_a.clone(), t_b.clone()],
            &[1.0, 0.0],
            &w,
            0.0,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();

        assert_ulps_eq!(
@@ -506,7 +715,13 @@ mod tests {
        let w_b = vec![0.7];

        let w = [w_a, w_b];
        let g = Game::new(vec![t_a.clone(), t_b.clone()], &[1.0, 0.0], &w, 0.0);
        let g = Game::ranked_with_arena(
            vec![t_a.clone(), t_b.clone()],
            &[1.0, 0.0],
            &w,
            0.0,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();

        assert_ulps_eq!(
@@ -524,7 +739,13 @@ mod tests {
        let w_b = vec![0.7];

        let w = [w_a, w_b];
        let g = Game::new(vec![t_a, t_b], &[1.0, 0.0], &w, 0.0);
        let g = Game::ranked_with_arena(
            vec![t_a, t_b],
            &[1.0, 0.0],
            &w,
            0.0,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();

        assert_ulps_eq!(
@@ -541,19 +762,17 @@ mod tests {
        let w_a = vec![1.0];
        let w_b = vec![0.0];

        let t_a = vec![Player::new(
            Gaussian::from_ms(2.0, 6.0),
            1.0,
            ConstantDrift(0.0),
        )];
        let t_b = vec![Player::new(
            Gaussian::from_ms(2.0, 6.0),
            1.0,
            ConstantDrift(0.0),
        )];
        let t_a = vec![R::new(Gaussian::from_ms(2.0, 6.0), 1.0, ConstantDrift(0.0))];
        let t_b = vec![R::new(Gaussian::from_ms(2.0, 6.0), 1.0, ConstantDrift(0.0))];

        let w = [w_a, w_b];
        let g = Game::new(vec![t_a, t_b], &[1.0, 0.0], &w, 0.0);
        let g = Game::ranked_with_arena(
            vec![t_a, t_b],
            &[1.0, 0.0],
            &w,
            0.0,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();

        assert_ulps_eq!(
@@ -570,19 +789,17 @@ mod tests {
        let w_a = vec![1.0];
        let w_b = vec![-1.0];

        let t_a = vec![Player::new(
            Gaussian::from_ms(2.0, 6.0),
            1.0,
            ConstantDrift(0.0),
        )];
        let t_b = vec![Player::new(
            Gaussian::from_ms(2.0, 6.0),
            1.0,
            ConstantDrift(0.0),
        )];
        let t_a = vec![R::new(Gaussian::from_ms(2.0, 6.0), 1.0, ConstantDrift(0.0))];
        let t_b = vec![R::new(Gaussian::from_ms(2.0, 6.0), 1.0, ConstantDrift(0.0))];

        let w = [w_a, w_b];
        let g = Game::new(vec![t_a, t_b], &[1.0, 0.0], &w, 0.0);
        let g = Game::ranked_with_arena(
            vec![t_a, t_b],
            &[1.0, 0.0],
            &w,
            0.0,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();

        assert_ulps_eq!(p[0][0], p[1][0], epsilon = 1e-6);
@@ -591,12 +808,12 @@ mod tests {
    #[test]
    fn test_2vs2_weighted() {
        let t_a = vec![
            Player::new(
            R::new(
                Gaussian::from_ms(25.0, 25.0 / 3.0),
                25.0 / 6.0,
                ConstantDrift(0.0),
            ),
            Player::new(
            R::new(
                Gaussian::from_ms(25.0, 25.0 / 3.0),
                25.0 / 6.0,
                ConstantDrift(0.0),
@@ -605,12 +822,12 @@ mod tests {
        let w_a = vec![0.4, 0.8];

        let t_b = vec![
            Player::new(
            R::new(
                Gaussian::from_ms(25.0, 25.0 / 3.0),
                25.0 / 6.0,
                ConstantDrift(0.0),
            ),
            Player::new(
            R::new(
                Gaussian::from_ms(25.0, 25.0 / 3.0),
                25.0 / 6.0,
                ConstantDrift(0.0),
@@ -619,7 +836,13 @@ mod tests {
        let w_b = vec![0.9, 0.6];

        let w = [w_a, w_b];
        let g = Game::new(vec![t_a.clone(), t_b.clone()], &[1.0, 0.0], &w, 0.0);
        let g = Game::ranked_with_arena(
            vec![t_a.clone(), t_b.clone()],
            &[1.0, 0.0],
            &w,
            0.0,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();

        assert_ulps_eq!(
@@ -647,7 +870,13 @@ mod tests {
        let w_b = vec![0.7, 0.4];

        let w = [w_a, w_b];
        let g = Game::new(vec![t_a.clone(), t_b.clone()], &[1.0, 0.0], &w, 0.0);
        let g = Game::ranked_with_arena(
            vec![t_a.clone(), t_b.clone()],
            &[1.0, 0.0],
            &w,
            0.0,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();

        assert_ulps_eq!(
@@ -675,7 +904,13 @@ mod tests {
        let w_b = vec![0.7, 2.4];

        let w = [w_a, w_b];
        let g = Game::new(vec![t_a.clone(), t_b.clone()], &[1.0, 0.0], &w, 0.0);
        let g = Game::ranked_with_arena(
            vec![t_a.clone(), t_b.clone()],
            &[1.0, 0.0],
            &w,
            0.0,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();

        assert_ulps_eq!(
@@ -700,10 +935,10 @@ mod tests {
        );

        let w = [vec![1.0, 1.0], vec![1.0]];
        let g = Game::new(
        let g = Game::ranked_with_arena(
            vec![
                t_a.clone(),
                vec![Player::new(
                vec![R::new(
                    Gaussian::from_ms(25.0, 25.0 / 3.0),
                    25.0 / 6.0,
                    ConstantDrift(0.0),
@@ -712,6 +947,7 @@ mod tests {
            &[1.0, 0.0],
            &w,
            0.0,
            &mut ScratchArena::new(),
        );
        let post_2vs1 = g.posteriors();

@@ -719,7 +955,13 @@ mod tests {
        let w_b = vec![1.0, 0.0];

        let w = [w_a, w_b];
        let g = Game::new(vec![t_a, t_b.clone()], &[1.0, 0.0], &w, 0.0);
        let g = Game::ranked_with_arena(
            vec![t_a, t_b.clone()],
            &[1.0, 0.0],
            &w,
            0.0,
            &mut ScratchArena::new(),
        );
        let p = g.posteriors();

        assert_ulps_eq!(p[0][0], post_2vs1[0][0], epsilon = 1e-6);

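The sweep loop in `likelihoods` above tracks a `(delta_mu, delta_sigma)` step pair and stops once both components fall below `1e-6` or after 10 iterations. Only the call sites of `tuple_max` and `tuple_gt` appear in this diff, so the following bodies are assumptions inferred from those call sites, not the crate's actual code:

```rust
// Hypothetical reconstructions of the convergence helpers used by the sweep
// loop above; their bodies do not appear in this diff, so they are assumed.
fn tuple_max(a: (f64, f64), b: (f64, f64)) -> (f64, f64) {
    // Component-wise max: keeps the largest (delta-mu, delta-sigma) seen this pass.
    (a.0.max(b.0), a.1.max(b.1))
}

fn tuple_gt(step: (f64, f64), eps: f64) -> bool {
    // Keep iterating while either component still exceeds the tolerance.
    step.0 > eps || step.1 > eps
}

fn main() {
    let step = tuple_max((1e-3, 2e-7), (5e-4, 9e-7));
    assert_eq!(step, (1e-3, 9e-7));
    assert!(tuple_gt(step, 1e-6));          // mu-delta still too large
    assert!(!tuple_gt((1e-7, 1e-7), 1e-6)); // both below tolerance: stop
    println!("ok");
}
```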
296 src/gaussian.rs
@@ -2,143 +2,159 @@ use std::ops;

use crate::{MU, N_INF, SIGMA};

/// A Gaussian distribution stored in natural parameters.
///
/// `pi = 1 / sigma^2` (precision)
/// `tau = mu * pi` (precision-adjusted mean)
///
/// Multiplication and division in message passing become pure adds/subs of
/// the stored fields with no `sqrt` or reciprocal in the hot path. `mu()` and
/// `sigma()` are accessors computed on demand.
#[derive(Clone, Copy, PartialEq, Debug)]
pub struct Gaussian {
    pub mu: f64,
    pub sigma: f64,
    pi: f64,
    tau: f64,
}

impl Gaussian {
    /// Construct from mean and standard deviation.
    pub const fn from_ms(mu: f64, sigma: f64) -> Self {
        Gaussian { mu, sigma }
    }

    fn pi(&self) -> f64 {
        if self.sigma > 0.0 {
            self.sigma.powi(-2)
        if sigma == f64::INFINITY {
            Self { pi: 0.0, tau: 0.0 }
        } else if sigma == 0.0 {
            // Point mass at mu. tau = mu * pi = mu * inf.
            // For mu == 0 this is 0; for mu != 0 it is inf * mu = inf (IEEE).
            // Only N00 (mu=0, sigma=0) is used in practice.
            Self {
                pi: f64::INFINITY,
                tau: if mu == 0.0 { 0.0 } else { f64::INFINITY },
            }
        } else {
            f64::INFINITY
            let pi = 1.0 / (sigma * sigma);
            Self { pi, tau: mu * pi }
        }
    }

    fn tau(&self) -> f64 {
        if self.sigma > 0.0 {
            self.mu * self.pi()
    /// Construct directly from natural parameters.
    #[inline]
    pub(crate) const fn from_natural(pi: f64, tau: f64) -> Self {
        Self { pi, tau }
    }

    #[inline]
    pub fn pi(&self) -> f64 {
        self.pi
    }

    #[inline]
    pub fn tau(&self) -> f64 {
        self.tau
    }

    #[inline]
    pub fn mu(&self) -> f64 {
        if self.pi == 0.0 {
            0.0
        } else {
            self.tau / self.pi
        }
    }

    #[inline]
    pub fn sigma(&self) -> f64 {
        if self.pi == 0.0 {
            f64::INFINITY
        } else if self.pi.is_infinite() {
            0.0
        } else {
            1.0 / self.pi.sqrt()
        }
    }

    pub(crate) fn delta(&self, m: Gaussian) -> (f64, f64) {
        ((self.mu - m.mu).abs(), (self.sigma - m.sigma).abs())
    pub(crate) fn delta(&self, other: Gaussian) -> (f64, f64) {
        (
            (self.mu() - other.mu()).abs(),
            (self.sigma() - other.sigma()).abs(),
        )
    }

    pub(crate) fn exclude(&self, m: Gaussian) -> Self {
        Self {
            mu: self.mu - m.mu,
            sigma: (self.sigma.powi(2) - m.sigma.powi(2)).sqrt(),
    pub(crate) fn exclude(&self, other: Gaussian) -> Self {
        let var = self.sigma().powi(2) - other.sigma().powi(2);
        if var <= 0.0 {
            // When sigma_self ≈ sigma_other (including ULP-level rounding differences
            // from the pi→sigma accessor round-trip), the excluded contribution is N00.
            // Computing from_ms(tiny_mu, 0.0) would give {pi:inf, tau:inf}, whose
            // mu() = inf/inf = NaN. Returning N00 is correct: when both Gaussians
            // carry the same variance, the residual is a point mass at 0.
            return Gaussian::from_ms(0.0, 0.0);
        }
        let mu = self.mu() - other.mu();
        Self::from_ms(mu, var.sqrt())
    }

    pub(crate) fn forget(&self, variance_delta: f64) -> Self {
        Self {
            mu: self.mu,
            sigma: (self.sigma.powi(2) + variance_delta).sqrt(),
        }
        let var = self.sigma().powi(2) + variance_delta;
        Self::from_ms(self.mu(), var.sqrt())
    }
}

impl Default for Gaussian {
    fn default() -> Self {
        Self {
            mu: MU,
            sigma: SIGMA,
        }
        Self::from_ms(MU, SIGMA)
    }
}

impl ops::Add<Gaussian> for Gaussian {
    type Output = Gaussian;

    /// Variance addition: (mu1 + mu2, sqrt(σ1² + σ2²)).
    /// Used for combining performance and noise; rare relative to mul/div.
    fn add(self, rhs: Gaussian) -> Self::Output {
        Gaussian {
            mu: self.mu + rhs.mu,
            sigma: (self.sigma.powi(2) + rhs.sigma.powi(2)).sqrt(),
        }
        let mu = self.mu() + rhs.mu();
        let var = self.sigma().powi(2) + rhs.sigma().powi(2);
        Self::from_ms(mu, var.sqrt())
    }
}

impl ops::Sub<Gaussian> for Gaussian {
    type Output = Gaussian;

    /// (mu1 - mu2, sqrt(σ1² + σ2²)). Same sigma combination as Add.
    fn sub(self, rhs: Gaussian) -> Self::Output {
        Gaussian {
            mu: self.mu - rhs.mu,
            sigma: (self.sigma.powi(2) + rhs.sigma.powi(2)).sqrt(),
        }
        let mu = self.mu() - rhs.mu();
        let var = self.sigma().powi(2) + rhs.sigma().powi(2);
        Self::from_ms(mu, var.sqrt())
    }
}

impl ops::Mul<Gaussian> for Gaussian {
    type Output = Gaussian;

    /// Factor product: nat-param add. Hot path — two f64 additions, no sqrt.
    fn mul(self, rhs: Gaussian) -> Self::Output {
        let (mu, sigma) = if self.sigma == 0.0 || rhs.sigma == 0.0 {
            let mu = self.mu / (self.sigma.powi(2) / rhs.sigma.powi(2) + 1.0)
                + rhs.mu / (rhs.sigma.powi(2) / self.sigma.powi(2) + 1.0);

            let sigma = (1.0 / ((1.0 / self.sigma.powi(2)) + (1.0 / rhs.sigma.powi(2)))).sqrt();

            (mu, sigma)
        } else {
            mu_sigma(self.tau() + rhs.tau(), self.pi() + rhs.pi())
        };

        Gaussian { mu, sigma }
        Self::from_natural(self.pi + rhs.pi, self.tau + rhs.tau)
|
||||
}
|
||||
}
|
||||
|
||||
impl ops::Mul<f64> for Gaussian {
|
||||
type Output = Gaussian;
|
||||
|
||||
fn mul(self, rhs: f64) -> Self::Output {
|
||||
if rhs.is_finite() {
|
||||
Self {
|
||||
mu: self.mu * rhs,
|
||||
sigma: self.sigma * rhs,
|
||||
}
|
||||
} else {
|
||||
N_INF
|
||||
fn mul(self, scalar: f64) -> Self::Output {
|
||||
if !scalar.is_finite() {
|
||||
return N_INF;
|
||||
}
|
||||
if scalar == 0.0 {
|
||||
// Scaling by 0 collapses to a point mass at 0 (sigma' = 0, mu' = 0).
|
||||
// This is N00, the additive identity, NOT N_INF.
|
||||
return Gaussian::from_ms(0.0, 0.0);
|
||||
}
|
||||
// sigma' = sigma * |scalar| => pi' = pi / scalar²
|
||||
// mu' = mu * scalar => tau' = tau / scalar
|
||||
Self::from_natural(self.pi / (scalar * scalar), self.tau / scalar)
|
||||
}
|
||||
}
|
||||
|
||||
impl ops::Div<Gaussian> for Gaussian {
|
||||
type Output = Gaussian;
|
||||
|
||||
/// Cavity: nat-param sub. Hot path — two f64 subtractions, no sqrt.
|
||||
fn div(self, rhs: Gaussian) -> Self::Output {
|
||||
let (mu, sigma) = if self.sigma == 0.0 || rhs.sigma == 0.0 {
|
||||
let mu = self.mu / (1.0 - self.sigma.powi(2) / rhs.sigma.powi(2))
|
||||
+ rhs.mu / (rhs.sigma.powi(2) / self.sigma.powi(2) - 1.0);
|
||||
|
||||
let sigma = (1.0 / ((1.0 / self.sigma.powi(2)) - (1.0 / rhs.sigma.powi(2)))).sqrt();
|
||||
|
||||
(mu, sigma)
|
||||
} else {
|
||||
mu_sigma(self.tau() - rhs.tau(), self.pi() - rhs.pi())
|
||||
};
|
||||
|
||||
Gaussian { mu, sigma }
|
||||
}
|
||||
}
|
||||
|
||||
fn mu_sigma(tau: f64, pi: f64) -> (f64, f64) {
|
||||
if pi > 0.0 {
|
||||
(tau / pi, (1.0 / pi).sqrt())
|
||||
} else if (pi + 1e-5) < 0.0 {
|
||||
panic!("precision should be greater than 0");
|
||||
} else {
|
||||
(0.0, f64::INFINITY)
|
||||
Self::from_natural(self.pi - rhs.pi, self.tau - rhs.tau)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -148,85 +164,71 @@ mod tests {
    #[test]
    fn test_add() {
        let n = Gaussian::from_ms(25.0, 25.0 / 3.0);
        let m = Gaussian::from_ms(0.0, 1.0);
        let r = n + m;
        assert!((r.mu() - 25.0).abs() < 1e-12);
        assert!((r.sigma() - 8.393118874676116).abs() < 1e-10);
    }

    #[test]
    fn test_sub() {
        let n = Gaussian::from_ms(25.0, 25.0 / 3.0);
        let m = Gaussian::from_ms(1.0, 1.0);
        let r = n - m;
        assert!((r.mu() - 24.0).abs() < 1e-12);
        assert!((r.sigma() - 8.393118874676116).abs() < 1e-10);
    }

    #[test]
    fn test_mul() {
        let n = Gaussian::from_ms(25.0, 25.0 / 3.0);
        let m = Gaussian::from_ms(0.0, 1.0);
        let r = n * m;
        assert!((r.mu() - 0.35488958990536273).abs() < 1e-10);
        assert!((r.sigma() - 0.992876838486922).abs() < 1e-10);
    }

    #[test]
    fn test_div() {
        let n = Gaussian::from_ms(25.0, 25.0 / 3.0);
        let m = Gaussian::from_ms(0.0, 1.0);
        let r = m / n;
        assert!((r.mu() - (-0.3652597402597402)).abs() < 1e-10);
        assert!((r.sigma() - 1.0072787050317253).abs() < 1e-10);
    }

    #[test]
    fn test_n00_is_add_identity() {
        // N00 (sigma=0) is the additive identity for the variance-convolution Add op.
        // N_INF (sigma=inf) is the identity for the EP-product Mul op.
        let g = Gaussian::from_ms(3.0, 2.0);
        let n00 = Gaussian::from_ms(0.0, 0.0);
        let r = n00 + g;
        assert!((r.mu() - g.mu()).abs() < 1e-12);
        assert!((r.sigma() - g.sigma()).abs() < 1e-12);
    }

    #[test]
    fn test_mul_is_factor_product() {
        // n * m in nat-params should be pi_n + pi_m, tau_n + tau_m.
        let n = Gaussian::from_ms(2.0, 3.0);
        let m = Gaussian::from_ms(1.0, 2.0);
        let r = n * m;
        let expected_pi = n.pi() + m.pi();
        let expected_tau = n.tau() + m.tau();
        assert!((r.pi() - expected_pi).abs() < 1e-15);
        assert!((r.tau() - expected_tau).abs() < 1e-15);
    }

    #[test]
    fn test_div_is_cavity() {
        let n = Gaussian::from_ms(2.0, 1.0);
        let m = Gaussian::from_ms(1.0, 2.0);
        let r = n / m;
        let expected_pi = n.pi() - m.pi();
        let expected_tau = n.tau() - m.tau();
        assert!((r.pi() - expected_pi).abs() < 1e-15);
        assert!((r.tau() - expected_tau).abs() < 1e-15);
    }
}
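For readers new to the natural parameterization, the following standalone sketch (independent of this crate; the struct `G` and its helper names are illustrative) shows why the EP product and the cavity reduce to plain additions and subtractions of `(pi, tau)`, cross-checked against the closed-form product of two Gaussian densities:

```rust
// Minimal natural-parameter Gaussian, mirroring the pattern above.
// pi = 1/sigma^2 (precision), tau = mu/sigma^2 (precision-adjusted mean).
#[derive(Clone, Copy, Debug)]
struct G {
    pi: f64,
    tau: f64,
}

impl G {
    fn from_ms(mu: f64, sigma: f64) -> Self {
        let pi = 1.0 / (sigma * sigma);
        G { pi, tau: pi * mu }
    }
    fn mu(self) -> f64 {
        if self.pi == 0.0 { 0.0 } else { self.tau / self.pi }
    }
    fn sigma(self) -> f64 {
        if self.pi == 0.0 { f64::INFINITY } else { 1.0 / self.pi.sqrt() }
    }
    // Density product (multiply then renormalize) = add natural parameters.
    fn product(self, rhs: G) -> G {
        G { pi: self.pi + rhs.pi, tau: self.tau + rhs.tau }
    }
    // Cavity (division): subtract natural parameters; exact inverse of `product`.
    fn cavity(self, rhs: G) -> G {
        G { pi: self.pi - rhs.pi, tau: self.tau - rhs.tau }
    }
}

fn main() {
    let n = G::from_ms(25.0, 25.0 / 3.0);
    let m = G::from_ms(0.0, 1.0);
    let p = n.product(m);
    // Closed-form product for cross-checking:
    // sigma^2 = 1/(1/s1^2 + 1/s2^2), mu = sigma^2 * (mu1/s1^2 + mu2/s2^2).
    let var = 1.0 / (1.0 / (25.0_f64 / 3.0).powi(2) + 1.0);
    let mu = var * (25.0 / (25.0_f64 / 3.0).powi(2));
    assert!((p.mu() - mu).abs() < 1e-12);
    assert!((p.sigma() - var.sqrt()).abs() < 1e-12);
    // Division undoes the product bit-for-bit in natural parameters.
    let back = p.cavity(m);
    assert!((back.mu() - n.mu()).abs() < 1e-12);
}
```

This is why the hot-path `Mul`/`Div` above need no `sqrt`: conversion to `(mu, sigma)` happens only at the accessor boundary.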
1522 src/history.rs
File diff suppressed because it is too large
72 src/key_table.rs
Normal file
@@ -0,0 +1,72 @@
use std::{
    borrow::{Borrow, ToOwned},
    collections::HashMap,
    hash::Hash,
};

use crate::Index;

/// Maps user keys to internal `Index` handles.
///
/// Renamed from the former `IndexMap` to avoid colliding with the `indexmap`
/// crate. Power users can promote `&K` to `Index` via `get_or_create` and
/// skip the lookup on subsequent hot-path calls.
#[derive(Debug)]
pub struct KeyTable<K>(HashMap<K, Index>);

impl<K> KeyTable<K>
where
    K: Eq + Hash,
{
    pub fn new() -> Self {
        Self(HashMap::new())
    }

    pub fn get<Q: ?Sized + Hash + Eq>(&self, k: &Q) -> Option<Index>
    where
        K: Borrow<Q>,
    {
        self.0.get(k).cloned()
    }

    pub fn get_or_create<Q: ?Sized + Hash + Eq + ToOwned<Owned = K>>(&mut self, k: &Q) -> Index
    where
        K: Borrow<Q>,
    {
        if let Some(idx) = self.0.get(k) {
            *idx
        } else {
            let idx = Index::from(self.0.len());
            self.0.insert(k.to_owned(), idx);
            idx
        }
    }

    pub fn key(&self, idx: Index) -> Option<&K> {
        self.0
            .iter()
            .find(|&(_, value)| *value == idx)
            .map(|(key, _)| key)
    }

    pub fn keys(&self) -> impl Iterator<Item = &K> {
        self.0.keys()
    }

    pub fn len(&self) -> usize {
        self.0.len()
    }

    pub fn is_empty(&self) -> bool {
        self.0.is_empty()
    }
}

impl<K> Default for KeyTable<K>
where
    K: Eq + Hash,
{
    fn default() -> Self {
        KeyTable::new()
    }
}

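The promote-then-reuse pattern the doc comment describes can be sketched standalone (a simplified stand-in for the crate's `KeyTable`/`Index` pair; the real `get_or_create` borrows `&Q` to avoid allocating on lookup, which this sketch skips):

```rust
use std::collections::HashMap;

// Illustrative stand-in for the crate's Index handle: a cheap Copy newtype.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct Index(usize);

struct KeyTable<K>(HashMap<K, Index>);

impl<K: std::hash::Hash + Eq> KeyTable<K> {
    fn new() -> Self {
        KeyTable(HashMap::new())
    }

    // First call inserts a fresh handle; later calls return the existing one.
    fn get_or_create(&mut self, k: K) -> Index {
        let next = Index(self.0.len());
        *self.0.entry(k).or_insert(next)
    }
}

fn main() {
    let mut table: KeyTable<String> = KeyTable::new();
    // Promote string keys to indices once, up front...
    let alice = table.get_or_create("alice".to_string());
    let bob = table.get_or_create("bob".to_string());
    // ...then hot-path code passes around Copy handles, never hashing strings.
    assert_eq!(alice, Index(0));
    assert_eq!(bob, Index(1));
    assert_eq!(table.get_or_create("alice".to_string()), alice);
}
```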
147 src/lib.rs
@@ -1,28 +1,50 @@
use std::{
    cmp::Reverse,
    f64::consts::{FRAC_1_SQRT_2, FRAC_2_SQRT_PI, SQRT_2},
};

pub mod agent;
#[cfg(feature = "approx")]
mod approx;
pub mod batch;
pub(crate) mod arena;
mod time;
mod time_slice;
pub use time_slice::TimeSlice;
mod color_group;
mod competitor;
mod convergence;
pub mod drift;
mod error;
mod event;
mod event_builder;
pub(crate) mod factor;
pub mod factors;
mod game;
pub mod gaussian;
mod history;
mod key_table;
mod matrix;
mod message;
mod observer;
mod outcome;
mod rating;
pub(crate) mod schedule;
pub mod storage;

pub use competitor::Competitor;
pub use convergence::{ConvergenceOptions, ConvergenceReport};
pub use drift::{ConstantDrift, Drift};
pub use error::InferenceError;
pub use event::{Event, Member, Team};
pub use event_builder::EventBuilder;
pub use game::{Game, GameOptions, OwnedGame};
pub use gaussian::Gaussian;
pub use history::History;
pub use key_table::KeyTable;
use matrix::Matrix;
use message::DiffMessage;
pub use observer::{NullObserver, Observer};
pub use outcome::Outcome;
pub use rating::Rating;
pub use schedule::ScheduleReport;
pub use time::{Time, Untimed};

pub const BETA: f64 = 1.0;
pub const MU: f64 = 0.0;

@@ -47,61 +69,6 @@ impl From<usize> for Index {
    }
}

fn erfc(x: f64) -> f64 {
    let z = x.abs();
    let t = 1.0 / (1.0 + z / 2.0);

@@ -156,7 +123,7 @@ fn compute_margin(p_draw: f64, sd: f64) -> f64 {
    ppf(0.5 - p_draw / 2.0, 0.0, sd).abs()
}

pub(crate) fn cdf(x: f64, mu: f64, sigma: f64) -> f64 {
    let z = -(x - mu) / (sigma * SQRT_2);

    0.5 * erfc(z)

@@ -201,9 +168,9 @@ fn trunc(mu: f64, sigma: f64, margin: f64, tie: bool) -> (f64, f64) {
}

pub(crate) fn approx(n: Gaussian, margin: f64, tie: bool) -> Gaussian {
    let (mu, sigma) = trunc(n.mu(), n.sigma(), margin, tie);

    Gaussian::from_ms(mu, sigma)
}

pub(crate) fn tuple_max(v1: (f64, f64), v2: (f64, f64)) -> (f64, f64) {

@@ -217,39 +184,18 @@ pub(crate) fn tuple_gt(t: (f64, f64), e: f64) -> bool {
    t.0 > e || t.1 > e
}

pub(crate) fn sort_time<T: Copy + Ord>(xs: &[T], reverse: bool) -> Vec<usize> {
    let mut x: Vec<(usize, T)> = xs.iter().enumerate().map(|(i, &t)| (i, t)).collect();

    if reverse {
        x.sort_by_key(|&(_, t)| Reverse(t));
    } else {
        x.sort_by_key(|&(_, t)| t);
    }

    x.into_iter().map(|(i, _)| i).collect()
}

pub(crate) fn evidence(d: &[DiffMessage], margin: &[f64], tie: &[bool], e: usize) -> f64 {
    if tie[e] {
        cdf(margin[e], d[e].prior.mu(), d[e].prior.sigma())
            - cdf(-margin[e], d[e].prior.mu(), d[e].prior.sigma())
    } else {
        1.0 - cdf(margin[e], d[e].prior.mu(), d[e].prior.sigma())
    }
}

/// Calculates the match quality of the given rating groups; the result is the
/// draw probability in the association.
pub fn quality(rating_groups: &[&[Gaussian]], beta: f64) -> f64 {
    let flatten_ratings = rating_groups

@@ -264,13 +210,13 @@ pub fn quality(rating_groups: &[&[Gaussian]], beta: f64) -> f64 {
    let mut mean_matrix = Matrix::new(length, 1);

    for (i, rating) in flatten_ratings.iter().enumerate() {
        mean_matrix[(i, 0)] = rating.mu();
    }

    let mut variance_matrix = Matrix::new(length, length);

    for (i, rating) in flatten_ratings.iter().enumerate() {
        variance_matrix[(i, i)] = rating.sigma().powi(2);
    }

    let mut rotated_a_matrix = Matrix::new(rating_groups.len() - 1, length);

@@ -318,14 +264,9 @@ mod tests {

    use super::*;

    #[test]
    fn test_sort_time() {
        assert_eq!(sort_time(&[0i64, 1, 2, 0], true), vec![2, 1, 0, 3]);
    }

    #[test]

@@ -1,81 +0,0 @@
use crate::{N_INF, gaussian::Gaussian};

pub(crate) struct TeamMessage {
    pub(crate) prior: Gaussian,
    pub(crate) likelihood_lose: Gaussian,
    pub(crate) likelihood_win: Gaussian,
    pub(crate) likelihood_draw: Gaussian,
}

impl TeamMessage {
    /*
    pub(crate) fn p(&self) -> Gaussian {
        self.prior * self.likelihood_lose * self.likelihood_win * self.likelihood_draw
    }
    */

    #[inline]
    pub(crate) fn posterior_win(&self) -> Gaussian {
        self.prior * self.likelihood_lose * self.likelihood_draw
    }

    #[inline]
    pub(crate) fn posterior_lose(&self) -> Gaussian {
        self.prior * self.likelihood_win * self.likelihood_draw
    }

    #[inline]
    pub(crate) fn likelihood(&self) -> Gaussian {
        self.likelihood_win * self.likelihood_lose * self.likelihood_draw
    }
}

impl Default for TeamMessage {
    fn default() -> Self {
        Self {
            prior: N_INF,
            likelihood_lose: N_INF,
            likelihood_win: N_INF,
            likelihood_draw: N_INF,
        }
    }
}

/*
pub(crate) struct DrawMessage {
    pub(crate) prior: Gaussian,
    pub(crate) prior_team: Gaussian,
    pub(crate) likelihood_lose: Gaussian,
    pub(crate) likelihood_win: Gaussian,
}

impl DrawMessage {
    pub(crate) fn p(&self) -> Gaussian {
        self.prior_team * self.likelihood_lose * self.likelihood_win
    }

    pub(crate) fn posterior_win(&self) -> Gaussian {
        self.prior_team * self.likelihood_lose
    }

    pub(crate) fn posterior_lose(&self) -> Gaussian {
        self.prior_team * self.likelihood_win
    }

    pub(crate) fn likelihood(&self) -> Gaussian {
        self.likelihood_win * self.likelihood_lose
    }
}
*/
pub(crate) struct DiffMessage {
    pub(crate) prior: Gaussian,
    pub(crate) likelihood: Gaussian,
}

impl DiffMessage {
    /*
    pub(crate) fn p(&self) -> Gaussian {
        self.prior * self.likelihood
    }
    */
}

47 src/observer.rs
Normal file
@@ -0,0 +1,47 @@
//! Observer trait for progress reporting during convergence.
//!
//! Replaces the old `verbose: bool` + `println!` path. Callers wire in any
//! observer that implements the trait; default methods are no-ops so users
//! override only what they need.

use crate::time::Time;

/// Receives progress callbacks during `History::converge`.
///
/// All methods have default no-op implementations; implement only what's
/// interesting.
pub trait Observer<T: Time>: Send + Sync {
    /// Called after each convergence iteration across the whole history.
    fn on_iteration_end(&self, _iter: usize, _max_step: (f64, f64)) {}

    /// Called after each time slice is processed within an iteration.
    fn on_batch_processed(&self, _time: &T, _slice_idx: usize, _n_events: usize) {}

    /// Called once when convergence completes (or max iters is reached).
    fn on_converged(&self, _iters: usize, _final_step: (f64, f64), _converged: bool) {}
}

/// ZST no-op observer; the default when none is configured.
#[derive(Copy, Clone, Debug, Default)]
pub struct NullObserver;

impl<T: Time> Observer<T> for NullObserver {}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn null_observer_compiles_for_i64() {
        let o = NullObserver;
        <NullObserver as Observer<i64>>::on_iteration_end(&o, 1, (0.0, 0.0));
        <NullObserver as Observer<i64>>::on_converged(&o, 5, (1e-6, 1e-6), true);
    }

    #[test]
    fn null_observer_compiles_for_untimed() {
        use crate::Untimed;
        let o = NullObserver;
        <NullObserver as Observer<Untimed>>::on_iteration_end(&o, 1, (0.0, 0.0));
    }
}

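A downstream observer only overrides the callbacks it cares about, and because callbacks take `&self` under `Send + Sync`, interior mutability (here an atomic) is the natural way to accumulate state from parallel workers. A self-contained sketch with a trimmed stand-in trait (not the crate's real `Observer<T>` signature):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Trimmed stand-in for the Observer trait above: default no-op methods,
// Send + Sync so it can be called from rayon worker threads.
trait Observer: Send + Sync {
    fn on_iteration_end(&self, _iter: usize, _max_step: (f64, f64)) {}
    fn on_converged(&self, _iters: usize, _final_step: (f64, f64), _converged: bool) {}
}

// Overrides only one callback; `on_converged` stays a no-op.
#[derive(Default)]
struct IterationCounter {
    iterations: AtomicUsize,
}

impl Observer for IterationCounter {
    fn on_iteration_end(&self, _iter: usize, _max_step: (f64, f64)) {
        // &self plus an atomic: safe to invoke concurrently.
        self.iterations.fetch_add(1, Ordering::Relaxed);
    }
}

fn main() {
    let obs = IterationCounter::default();
    // Stand-in for a convergence loop firing the callback once per iteration.
    for i in 0..5 {
        obs.on_iteration_end(i, (0.0, 0.0));
    }
    assert_eq!(obs.iterations.load(Ordering::Relaxed), 5);
}
```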
87 src/outcome.rs
Normal file
@@ -0,0 +1,87 @@
//! Outcome of a match.
//!
//! In T2, only `Ranked` is supported; `Scored` will be added together with
//! `MarginFactor` in T4. The enum is `#[non_exhaustive]` so adding `Scored`
//! is non-breaking for downstream `match` expressions.

use smallvec::SmallVec;

/// Final outcome of a match.
///
/// `Ranked(ranks)`: lower rank = better. Equal ranks mean a tie between those
/// teams. `ranks.len()` must equal the number of teams in the event.
#[derive(Clone, Debug, PartialEq)]
#[non_exhaustive]
pub enum Outcome {
    Ranked(SmallVec<[u32; 4]>),
}

impl Outcome {
    /// `N`-team outcome where team `winner` won and everyone else tied for last.
    ///
    /// Panics if `winner >= n`.
    pub fn winner(winner: u32, n: u32) -> Self {
        assert!(winner < n, "winner index {winner} out of range 0..{n}");
        let ranks: SmallVec<[u32; 4]> = (0..n).map(|i| if i == winner { 0 } else { 1 }).collect();
        Self::Ranked(ranks)
    }

    /// All `n` teams tied.
    pub fn draw(n: u32) -> Self {
        Self::Ranked(SmallVec::from_vec(vec![0; n as usize]))
    }

    /// Explicit per-team ranking.
    pub fn ranking<I: IntoIterator<Item = u32>>(ranks: I) -> Self {
        Self::Ranked(ranks.into_iter().collect())
    }

    pub fn team_count(&self) -> usize {
        match self {
            Self::Ranked(r) => r.len(),
        }
    }

    #[allow(dead_code)]
    pub(crate) fn as_ranks(&self) -> &[u32] {
        match self {
            Self::Ranked(r) => r,
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn winner_two_teams() {
        let o = Outcome::winner(0, 2);
        assert_eq!(o.as_ranks(), &[0u32, 1]);
        assert_eq!(o.team_count(), 2);
    }

    #[test]
    fn winner_three_teams_second_wins() {
        let o = Outcome::winner(1, 3);
        assert_eq!(o.as_ranks(), &[1u32, 0, 1]);
    }

    #[test]
    fn draw_three_teams() {
        let o = Outcome::draw(3);
        assert_eq!(o.as_ranks(), &[0u32, 0, 0]);
    }

    #[test]
    fn ranking_from_iter() {
        let o = Outcome::ranking([2, 0, 1]);
        assert_eq!(o.as_ranks(), &[2u32, 0, 1]);
    }

    #[test]
    #[should_panic(expected = "winner index 2 out of range")]
    fn winner_out_of_range_panics() {
        let _ = Outcome::winner(2, 2);
    }
}

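The `#[non_exhaustive]` claim in the module docs can be illustrated standalone: outside the defining crate, the compiler requires a wildcard arm, which is exactly what keeps a future `Scored` variant non-breaking. The enum below is a local stand-in (a `mod` only approximates a crate boundary, so the `_` arm here is merely a warning, not a requirement):

```rust
// Local stand-in for the crate's #[non_exhaustive] Outcome enum.
mod upstream {
    #[derive(Clone, Debug)]
    #[non_exhaustive]
    pub enum Outcome {
        Ranked(Vec<u32>),
    }
}

use upstream::Outcome;

#[allow(unreachable_patterns)] // across a real crate boundary this arm is mandatory
fn describe(o: &Outcome) -> String {
    match o {
        Outcome::Ranked(ranks) => format!("ranked, {} teams", ranks.len()),
        // Future variants (e.g. Scored in T4) would land here instead of
        // breaking the build.
        _ => "unknown outcome variant".to_string(),
    }
}

fn main() {
    let o = Outcome::Ranked(vec![0, 1]);
    assert_eq!(describe(&o), "ranked, 2 teams");
}
```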
@@ -1,32 +0,0 @@
use crate::{
    BETA, GAMMA,
    drift::{ConstantDrift, Drift},
    gaussian::Gaussian,
};

#[derive(Clone, Copy, Debug)]
pub struct Player<D: Drift = ConstantDrift> {
    pub(crate) prior: Gaussian,
    pub(crate) beta: f64,
    pub(crate) drift: D,
}

impl<D: Drift> Player<D> {
    pub fn new(prior: Gaussian, beta: f64, drift: D) -> Self {
        Self { prior, beta, drift }
    }

    pub(crate) fn performance(&self) -> Gaussian {
        self.prior.forget(self.beta.powi(2))
    }
}

impl Default for Player<ConstantDrift> {
    fn default() -> Self {
        Self {
            prior: Gaussian::default(),
            beta: BETA,
            drift: ConstantDrift(GAMMA),
        }
    }
}

46 src/rating.rs
Normal file
@@ -0,0 +1,46 @@
use std::marker::PhantomData;

use crate::{
    BETA, GAMMA,
    drift::{ConstantDrift, Drift},
    gaussian::Gaussian,
    time::Time,
};

/// Static rating configuration: prior skill, performance noise `beta`, drift.
///
/// Renamed from `Player` in T2; `Rating` better describes the data
/// (a configuration) vs. a person (who's a `Competitor` with state).
#[derive(Clone, Copy, Debug)]
pub struct Rating<T: Time = i64, D: Drift<T> = ConstantDrift> {
    pub(crate) prior: Gaussian,
    pub(crate) beta: f64,
    pub(crate) drift: D,
    pub(crate) _time: PhantomData<T>,
}

impl<T: Time, D: Drift<T>> Rating<T, D> {
    pub fn new(prior: Gaussian, beta: f64, drift: D) -> Self {
        Self {
            prior,
            beta,
            drift,
            _time: PhantomData,
        }
    }

    pub(crate) fn performance(&self) -> Gaussian {
        self.prior.forget(self.beta.powi(2))
    }
}

impl Default for Rating<i64, ConstantDrift> {
    fn default() -> Self {
        Self {
            prior: Gaussian::default(),
            beta: BETA,
            drift: ConstantDrift(GAMMA),
            _time: PhantomData,
        }
    }
}

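The `_time: PhantomData<T>` field exists because Rust rejects a type parameter that appears in no field; a zero-sized `PhantomData` ties `T` to the struct without storing anything. A standalone sketch with illustrative names:

```rust
use std::marker::PhantomData;

// `T` appears only in PhantomData: the struct carries the type without data.
#[derive(Clone, Copy, Debug)]
struct Config<T> {
    value: f64,
    // Zero-sized marker: Config<i64> and Config<u32> are distinct types
    // with identical layout.
    _time: PhantomData<T>,
}

impl<T> Config<T> {
    fn new(value: f64) -> Self {
        Config { value, _time: PhantomData }
    }
}

fn main() {
    let c: Config<i64> = Config::new(1.5);
    assert_eq!(c.value, 1.5);
    // The marker costs no space: same size as a bare f64.
    assert_eq!(std::mem::size_of::<Config<i64>>(), std::mem::size_of::<f64>());
}
```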
126 src/schedule.rs
Normal file
@@ -0,0 +1,126 @@
//! Schedule trait and built-in implementations.
//!
//! A schedule drives factor propagation to convergence. The default
//! `EpsilonOrMax` performs one TeamSum sweep (setup) then alternating
//! forward/backward sweeps over the iterating factors until the max
//! delta drops below epsilon or `max` iterations is reached.

use crate::factor::{BuiltinFactor, Factor, VarStore};

/// Result returned by a `Schedule::run` call.
#[derive(Debug, Clone, Copy)]
pub struct ScheduleReport {
    pub iterations: usize,
    pub final_step: (f64, f64),
    pub converged: bool,
}

/// Drives factor propagation to convergence.
pub trait Schedule: Send + Sync {
    fn run(&self, factors: &mut [BuiltinFactor], vars: &mut VarStore) -> ScheduleReport;
}

/// Default schedule: sweep forward then backward until step ≤ eps or iter == max.
///
/// Matches the existing `Game::likelihoods` loop bit-for-bit when given the
/// same factor layout (TeamSums first, then alternating RankDiff/Trunc pairs).
#[derive(Debug, Clone, Copy)]
pub struct EpsilonOrMax {
    pub eps: f64,
    pub max: usize,
}

impl Default for EpsilonOrMax {
    fn default() -> Self {
        // Matches today's hard-coded tolerance and iteration cap.
        Self { eps: 1e-6, max: 10 }
    }
}

impl Schedule for EpsilonOrMax {
    fn run(&self, factors: &mut [BuiltinFactor], vars: &mut VarStore) -> ScheduleReport {
        // Partition: leading run of TeamSum factors run exactly once (setup).
        let n_setup = factors
            .iter()
            .position(|f| !matches!(f, BuiltinFactor::TeamSum(_)))
            .unwrap_or(factors.len());

        for f in factors[..n_setup].iter_mut() {
            f.propagate(vars);
        }

        let mut iterations = 0;
        let mut final_step = (f64::INFINITY, f64::INFINITY);
        let mut converged = false;

        if n_setup < factors.len() {
            for _ in 0..self.max {
                let mut step = (0.0_f64, 0.0_f64);

                // Forward sweep over iterating factors.
                for f in factors[n_setup..].iter_mut() {
                    let d = f.propagate(vars);
                    step.0 = step.0.max(d.0);
                    step.1 = step.1.max(d.1);
                }

                // Backward sweep.
                for f in factors[n_setup..].iter_mut().rev() {
                    let d = f.propagate(vars);
                    step.0 = step.0.max(d.0);
                    step.1 = step.1.max(d.1);
                }

                iterations += 1;
                final_step = step;

                if step.0 <= self.eps && step.1 <= self.eps {
                    converged = true;
                    break;
                }
            }
        }

        ScheduleReport {
            iterations,
            final_step,
            converged,
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::{N_INF, factor::team_sum::TeamSumFactor, gaussian::Gaussian};

    #[test]
    fn schedule_runs_setup_factors_once() {
        // Single TeamSum factor; schedule should propagate it exactly once and report 0 iterations.
        let mut vars = VarStore::new();
        let out = vars.alloc(N_INF);
        let mut factors = vec![BuiltinFactor::TeamSum(TeamSumFactor {
            inputs: vec![(Gaussian::from_ms(5.0, 1.0), 1.0)],
            out,
        })];
        let schedule = EpsilonOrMax::default();
        let report = schedule.run(&mut factors, &mut vars);
        assert_eq!(report.iterations, 0);
        // The team-perf var should hold the sum.
        let result = vars.get(out);
        assert!((result.mu() - 5.0).abs() < 1e-12);
    }

    #[test]
    fn report_marks_converged_when_no_iterating_factors() {
        // No iterating factors → 0 iterations, converged stays false (loop never ran).
        let mut vars = VarStore::new();
        let out = vars.alloc(N_INF);
        let mut factors = vec![BuiltinFactor::TeamSum(TeamSumFactor {
            inputs: vec![(Gaussian::from_ms(0.0, 1.0), 1.0)],
            out,
        })];
        let report = EpsilonOrMax::default().run(&mut factors, &mut vars);
        assert_eq!(report.iterations, 0);
    }
}

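The stop condition in `EpsilonOrMax` reduces to a small, testable shape: apply an update, record the largest step, stop at eps or the iteration cap. A toy version on a deliberately simple fixed-point problem (halving toward zero; the update is illustrative, not the crate's factor sweep):

```rust
// Miniature EpsilonOrMax-style loop: returns (iterations, final_step, converged).
fn run(eps: f64, max: usize) -> (usize, f64, bool) {
    let mut x = 1.0_f64;
    let mut iterations = 0;
    let mut final_step = f64::INFINITY;
    let mut converged = false;
    for _ in 0..max {
        let next = x / 2.0; // stand-in for a forward + backward sweep
        let step = (next - x).abs();
        x = next;
        iterations += 1;
        final_step = step;
        if step <= eps {
            converged = true;
            break;
        }
    }
    (iterations, final_step, converged)
}

fn main() {
    // Step sizes are 0.5, 0.25, 0.125, 0.0625, ...; with eps = 0.1 the loop
    // stops once the step is 0.0625, i.e. after 4 iterations.
    let (iters, step, converged) = run(0.1, 10);
    assert_eq!(iters, 4);
    assert!(step <= 0.1);
    assert!(converged);
}
```

Note the same subtlety the second schedule test pins down: if the loop body never runs (here, `max == 0`), `converged` stays false even though there was nothing left to do.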
127 src/storage/competitor_store.rs
Normal file
@@ -0,0 +1,127 @@
|
||||
use crate::{Index, competitor::Competitor, drift::Drift, time::Time};
|
||||
|
||||
/// Dense Vec-backed store for competitor state in History.
|
||||
///
|
||||
/// Indexed directly by Index.0, eliminating HashMap hashing in the
|
||||
/// forward/backward sweep. Uses `Vec<Option<Competitor<T, D>>>` so slots can be
|
||||
/// absent without an explicit present mask.
|
||||
#[derive(Debug)]
pub struct CompetitorStore<T: Time = i64, D: Drift<T> = crate::drift::ConstantDrift> {
    competitors: Vec<Option<Competitor<T, D>>>,
    n_present: usize,
}

impl<T: Time, D: Drift<T>> Default for CompetitorStore<T, D> {
    fn default() -> Self {
        Self {
            competitors: Vec::new(),
            n_present: 0,
        }
    }
}

impl<T: Time, D: Drift<T>> CompetitorStore<T, D> {
    pub fn new() -> Self {
        Self::default()
    }

    fn ensure_capacity(&mut self, idx: usize) {
        if idx >= self.competitors.len() {
            self.competitors.resize_with(idx + 1, || None);
        }
    }

    pub fn insert(&mut self, idx: Index, competitor: Competitor<T, D>) {
        self.ensure_capacity(idx.0);
        if self.competitors[idx.0].is_none() {
            self.n_present += 1;
        }
        self.competitors[idx.0] = Some(competitor);
    }

    pub fn get(&self, idx: Index) -> Option<&Competitor<T, D>> {
        self.competitors.get(idx.0).and_then(|slot| slot.as_ref())
    }

    pub fn get_mut(&mut self, idx: Index) -> Option<&mut Competitor<T, D>> {
        self.competitors
            .get_mut(idx.0)
            .and_then(|slot| slot.as_mut())
    }

    pub fn contains(&self, idx: Index) -> bool {
        self.get(idx).is_some()
    }

    pub fn len(&self) -> usize {
        self.n_present
    }

    pub fn is_empty(&self) -> bool {
        self.n_present == 0
    }

    pub fn iter(&self) -> impl Iterator<Item = (Index, &Competitor<T, D>)> {
        self.competitors
            .iter()
            .enumerate()
            .filter_map(|(i, slot)| slot.as_ref().map(|a| (Index(i), a)))
    }

    pub fn iter_mut(&mut self) -> impl Iterator<Item = (Index, &mut Competitor<T, D>)> {
        self.competitors
            .iter_mut()
            .enumerate()
            .filter_map(|(i, slot)| slot.as_mut().map(|a| (Index(i), a)))
    }

    pub fn values_mut(&mut self) -> impl Iterator<Item = &mut Competitor<T, D>> {
        self.competitors.iter_mut().filter_map(|s| s.as_mut())
    }
}

impl<T: Time, D: Drift<T>> std::ops::Index<Index> for CompetitorStore<T, D> {
    type Output = Competitor<T, D>;

    fn index(&self, idx: Index) -> &Competitor<T, D> {
        self.get(idx).expect("competitor not found at index")
    }
}

impl<T: Time, D: Drift<T>> std::ops::IndexMut<Index> for CompetitorStore<T, D> {
    fn index_mut(&mut self, idx: Index) -> &mut Competitor<T, D> {
        self.get_mut(idx).expect("competitor not found at index")
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::{competitor::Competitor, drift::ConstantDrift};

    #[test]
    fn insert_then_get() {
        let mut store: CompetitorStore<i64, ConstantDrift> = CompetitorStore::new();
        let idx = Index(7);
        store.insert(idx, Competitor::default());
        assert!(store.contains(idx));
        assert_eq!(store.len(), 1);
        assert!(store.get(idx).is_some());
    }

    #[test]
    fn iter_in_index_order() {
        let mut store: CompetitorStore<i64, ConstantDrift> = CompetitorStore::new();
        store.insert(Index(2), Competitor::default());
        store.insert(Index(0), Competitor::default());
        store.insert(Index(5), Competitor::default());
        let keys: Vec<Index> = store.iter().map(|(i, _)| i).collect();
        assert_eq!(keys, vec![Index(0), Index(2), Index(5)]);
    }

    #[test]
    fn index_operator_works() {
        let mut store: CompetitorStore<i64, ConstantDrift> = CompetitorStore::new();
        store.insert(Index(3), Competitor::default());
        let _ = &store[Index(3)];
    }
}
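The `CompetitorStore` above is an instance of a plain slot-store pattern: a `Vec<Option<T>>` indexed directly by a dense integer key, plus a separate occupancy count so `len()` is O(1). A minimal standalone sketch of the same idea (type and method names here are illustrative, not the crate's API):

```rust
// Minimal sketch of a dense slot store keyed by usize, assuming dense,
// small indices. `SlotStore` is a hypothetical name for illustration.
#[derive(Default)]
struct SlotStore<T> {
    slots: Vec<Option<T>>,
    n_present: usize,
}

impl<T> SlotStore<T> {
    fn insert(&mut self, idx: usize, value: T) {
        if idx >= self.slots.len() {
            // Grow with empty slots; resize_with avoids requiring T: Clone.
            self.slots.resize_with(idx + 1, || None);
        }
        if self.slots[idx].is_none() {
            self.n_present += 1;
        }
        self.slots[idx] = Some(value);
    }

    fn get(&self, idx: usize) -> Option<&T> {
        self.slots.get(idx).and_then(|s| s.as_ref())
    }

    fn len(&self) -> usize {
        self.n_present
    }
}

fn demo() -> (usize, bool, bool) {
    let mut store = SlotStore::<&str>::default();
    store.insert(7, "alice");
    store.insert(7, "alice-again"); // overwrite: len stays 1
    store.insert(2, "bob");
    (store.len(), store.get(7).is_some(), store.get(3).is_some())
}

fn main() {
    assert_eq!(demo(), (2, true, false));
}
```

The trade-off against a `HashMap` is memory proportional to the largest index in exchange for hash-free lookups, which is why it suits dense `Index` keys.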
src/storage/mod.rs (new file, 5 lines)
@@ -0,0 +1,5 @@
mod competitor_store;
mod skill_store;

pub use competitor_store::CompetitorStore;
pub(crate) use skill_store::SkillStore;
src/storage/skill_store.rs (new file, 130 lines)
@@ -0,0 +1,130 @@
use crate::{Index, time_slice::Skill};

/// Dense Vec-backed store for per-agent skill state within a TimeSlice.
///
/// Indexed directly by Index.0, eliminating HashMap hashing in the inner
/// convergence loop. Uses a parallel `present` mask so iteration skips
/// absent slots without incurring per-slot Option overhead in the hot path.
#[derive(Debug, Default)]
pub struct SkillStore {
    skills: Vec<Skill>,
    present: Vec<bool>,
    n_present: usize,
}

impl SkillStore {
    pub fn new() -> Self {
        Self::default()
    }

    fn ensure_capacity(&mut self, idx: usize) {
        if idx >= self.skills.len() {
            self.skills.resize_with(idx + 1, Skill::default);
            self.present.resize(idx + 1, false);
        }
    }

    pub fn insert(&mut self, idx: Index, skill: Skill) {
        self.ensure_capacity(idx.0);
        if !self.present[idx.0] {
            self.n_present += 1;
        }
        self.skills[idx.0] = skill;
        self.present[idx.0] = true;
    }

    pub fn get(&self, idx: Index) -> Option<&Skill> {
        if idx.0 < self.present.len() && self.present[idx.0] {
            Some(&self.skills[idx.0])
        } else {
            None
        }
    }

    pub fn get_mut(&mut self, idx: Index) -> Option<&mut Skill> {
        if idx.0 < self.present.len() && self.present[idx.0] {
            Some(&mut self.skills[idx.0])
        } else {
            None
        }
    }

    #[allow(dead_code)]
    pub fn contains(&self, idx: Index) -> bool {
        idx.0 < self.present.len() && self.present[idx.0]
    }

    #[allow(dead_code)]
    pub fn len(&self) -> usize {
        self.n_present
    }

    #[allow(dead_code)]
    pub fn is_empty(&self) -> bool {
        self.n_present == 0
    }

    pub fn iter(&self) -> impl Iterator<Item = (Index, &Skill)> {
        self.present.iter().enumerate().filter_map(|(i, &p)| {
            if p {
                Some((Index(i), &self.skills[i]))
            } else {
                None
            }
        })
    }

    pub fn iter_mut(&mut self) -> impl Iterator<Item = (Index, &mut Skill)> {
        self.skills
            .iter_mut()
            .zip(self.present.iter())
            .enumerate()
            .filter_map(|(i, (s, &p))| if p { Some((Index(i), s)) } else { None })
    }

    pub fn keys(&self) -> impl Iterator<Item = Index> + '_ {
        self.present
            .iter()
            .enumerate()
            .filter_map(|(i, &p)| if p { Some(Index(i)) } else { None })
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn insert_then_get() {
        let mut store = SkillStore::new();
        let idx = Index(3);
        store.insert(idx, Skill::default());
        assert!(store.contains(idx));
        assert_eq!(store.len(), 1);
        assert!(store.get(idx).is_some());
    }

    #[test]
    fn missing_returns_none() {
        let store = SkillStore::new();
        assert!(store.get(Index(0)).is_none());
        assert!(!store.contains(Index(42)));
    }

    #[test]
    fn iter_skips_absent_slots() {
        let mut store = SkillStore::new();
        store.insert(Index(0), Skill::default());
        store.insert(Index(5), Skill::default());
        let keys: Vec<Index> = store.keys().collect();
        assert_eq!(keys, vec![Index(0), Index(5)]);
    }

    #[test]
    fn double_insert_does_not_double_count() {
        let mut store = SkillStore::new();
        store.insert(Index(2), Skill::default());
        store.insert(Index(2), Skill::default());
        assert_eq!(store.len(), 1);
    }
}
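`SkillStore` differs from `CompetitorStore` by trading the per-slot `Option` for a parallel `present: Vec<bool>` mask, so the hot path indexes the value vector directly. A standalone sketch of that mask idea (names hypothetical, values simplified to `f64`):

```rust
// Sketch of the dense-store-plus-presence-mask pattern: values live in a
// flat Vec and a parallel Vec<bool> records occupancy, so lookups and
// iteration never unwrap an Option. `MaskStore` is illustrative only.
#[derive(Default)]
struct MaskStore {
    values: Vec<f64>,
    present: Vec<bool>,
}

impl MaskStore {
    fn insert(&mut self, idx: usize, v: f64) {
        if idx >= self.values.len() {
            // Both vectors grow in lockstep so indices stay aligned.
            self.values.resize(idx + 1, 0.0);
            self.present.resize(idx + 1, false);
        }
        self.values[idx] = v;
        self.present[idx] = true;
    }

    // Occupied slots in ascending index order, skipping absent ones.
    fn keys(&self) -> impl Iterator<Item = usize> + '_ {
        self.present
            .iter()
            .enumerate()
            .filter_map(|(i, &p)| if p { Some(i) } else { None })
    }
}

fn demo_keys() -> Vec<usize> {
    let mut s = MaskStore::default();
    s.insert(5, 1.0);
    s.insert(0, 2.0);
    s.keys().collect()
}

fn main() {
    assert_eq!(demo_keys(), vec![0, 5]);
}
```

The mask costs one byte per slot but keeps the value vector contiguous and default-initialized, which is what the doc comment means by avoiding per-slot `Option` overhead in the hot path.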
src/time.rs (new file, 54 lines)
@@ -0,0 +1,54 @@
//! Generic time axis for `History`.
//!
//! Users pick the `Time` type based on their domain: `Untimed` when no
//! time axis is meaningful, `i64` for integer day/second timestamps.
//! Additional impls can be added behind feature flags.

/// A timestamp on the global ordering axis.
///
/// Must be `Ord + Copy` so slices can sort events, and `'static` so
/// `History` can store it by value without lifetimes.
pub trait Time: Copy + Ord + Send + Sync + 'static {
    /// How much time elapsed between `self` and `later`.
    ///
    /// Used by `Drift<T>::variance_delta` to compute skill drift. Returning
    /// zero means no drift accumulates between the two points. The return
    /// value must be non-negative for `self <= later`.
    fn elapsed_to(&self, later: &Self) -> i64;
}

/// Zero-sized type representing "no time axis."
///
/// Used as the default `Time` when events are unordered. Elapsed is always 0,
/// so no drift accumulates across slices.
#[derive(Copy, Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct Untimed;

impl Time for Untimed {
    fn elapsed_to(&self, _later: &Self) -> i64 {
        0
    }
}

impl Time for i64 {
    fn elapsed_to(&self, later: &Self) -> i64 {
        later - self
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn untimed_elapsed_is_zero() {
        assert_eq!(Untimed.elapsed_to(&Untimed), 0);
    }

    #[test]
    fn i64_elapsed_is_difference() {
        assert_eq!(5i64.elapsed_to(&10), 5);
        assert_eq!(10i64.elapsed_to(&5), -5);
        assert_eq!(0i64.elapsed_to(&0), 0);
    }
}
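Implementing `Time` for a domain-specific timestamp only requires `elapsed_to`. A standalone sketch, restating the trait so the example compiles on its own; `Millis` is a hypothetical newtype for illustration, not part of the crate:

```rust
// The trait is restated here with the same shape as src/time.rs so this
// snippet is self-contained.
trait Time: Copy + Ord + Send + Sync + 'static {
    fn elapsed_to(&self, later: &Self) -> i64;
}

// Hypothetical millisecond timestamp. Reporting elapsed time in whole
// seconds means drift accumulates per second rather than per millisecond,
// which is one way to control the drift scale.
#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord)]
struct Millis(i64);

impl Time for Millis {
    fn elapsed_to(&self, later: &Self) -> i64 {
        (later.0 - self.0) / 1000
    }
}

fn elapsed_secs(a: i64, b: i64) -> i64 {
    Millis(a).elapsed_to(&Millis(b))
}

fn main() {
    assert_eq!(elapsed_secs(0, 2500), 2);
    assert_eq!(elapsed_secs(1000, 1000), 0);
}
```

Any such impl automatically satisfies the `Send + Sync` bounds added in T3, since `Millis` contains only an `i64`.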
src/time_slice.rs (new file, 886 lines)
@@ -0,0 +1,886 @@
//! A single time step's worth of events.
//!
//! Renamed from `Batch` in T2.

use std::collections::HashMap;

use crate::{
    Index, N_INF,
    arena::ScratchArena,
    color_group::ColorGroups,
    drift::Drift,
    game::Game,
    gaussian::Gaussian,
    rating::Rating,
    storage::{CompetitorStore, SkillStore},
    time::Time,
    tuple_gt, tuple_max,
};

#[derive(Debug)]
pub(crate) struct Skill {
    pub(crate) forward: Gaussian,
    backward: Gaussian,
    likelihood: Gaussian,
    pub(crate) elapsed: i64,
    pub(crate) online: Gaussian,
}

impl Skill {
    pub(crate) fn posterior(&self) -> Gaussian {
        self.likelihood * self.backward * self.forward
    }
}

impl Default for Skill {
    fn default() -> Self {
        Self {
            forward: N_INF,
            backward: N_INF,
            likelihood: N_INF,
            elapsed: 0,
            online: N_INF,
        }
    }
}

#[derive(Debug)]
struct Item {
    agent: Index,
    likelihood: Gaussian,
}

impl Item {
    fn within_prior<T: Time, D: Drift<T>>(
        &self,
        online: bool,
        forward: bool,
        skills: &SkillStore,
        agents: &CompetitorStore<T, D>,
    ) -> Rating<T, D> {
        let r = &agents[self.agent].rating;
        let skill = skills.get(self.agent).unwrap();

        if online {
            Rating::new(skill.online, r.beta, r.drift)
        } else if forward {
            Rating::new(skill.forward, r.beta, r.drift)
        } else {
            Rating::new(skill.posterior() / self.likelihood, r.beta, r.drift)
        }
    }
}

#[derive(Debug)]
struct Team {
    items: Vec<Item>,
    output: f64,
}

#[derive(Debug)]
pub(crate) struct Event {
    teams: Vec<Team>,
    evidence: f64,
    weights: Vec<Vec<f64>>,
}

impl Event {
    pub(crate) fn iter_agents(&self) -> impl Iterator<Item = Index> + '_ {
        self.teams
            .iter()
            .flat_map(|t| t.items.iter().map(|it| it.agent))
    }

    fn outputs(&self) -> Vec<f64> {
        self.teams
            .iter()
            .map(|team| team.output)
            .collect::<Vec<_>>()
    }

    pub(crate) fn within_priors<T: Time, D: Drift<T>>(
        &self,
        online: bool,
        forward: bool,
        skills: &SkillStore,
        agents: &CompetitorStore<T, D>,
    ) -> Vec<Vec<Rating<T, D>>> {
        self.teams
            .iter()
            .map(|team| {
                team.items
                    .iter()
                    .map(|item| item.within_prior(online, forward, skills, agents))
                    .collect::<Vec<_>>()
            })
            .collect::<Vec<_>>()
    }

    /// Direct in-loop update: mutates self and `skills` inline with no
    /// intermediate allocation. Used by both the sequential sweep path and,
    /// via unsafe, by the parallel rayon path for events in the same color
    /// group (which have disjoint agent sets — see `sweep_color_groups`).
    fn iteration_direct<T: Time, D: Drift<T>>(
        &mut self,
        skills: &mut SkillStore,
        agents: &CompetitorStore<T, D>,
        p_draw: f64,
        arena: &mut ScratchArena,
    ) {
        let teams = self.within_priors(false, false, skills, agents);
        let result = self.outputs();
        let g = Game::ranked_with_arena(teams, &result, &self.weights, p_draw, arena);

        for (t, team) in self.teams.iter_mut().enumerate() {
            for (i, item) in team.items.iter_mut().enumerate() {
                let old_likelihood = skills.get(item.agent).unwrap().likelihood;
                let new_likelihood = (old_likelihood / item.likelihood) * g.likelihoods[t][i];
                skills.get_mut(item.agent).unwrap().likelihood = new_likelihood;
                item.likelihood = g.likelihoods[t][i];
            }
        }

        self.evidence = g.evidence;
    }
}
#[derive(Debug)]
pub struct TimeSlice<T: Time = i64> {
    pub(crate) events: Vec<Event>,
    pub(crate) skills: SkillStore,
    pub(crate) time: T,
    p_draw: f64,
    arena: ScratchArena,
    pub(crate) color_groups: ColorGroups,
}

impl<T: Time> TimeSlice<T> {
    pub fn new(time: T, p_draw: f64) -> Self {
        Self {
            events: Vec::new(),
            skills: SkillStore::new(),
            time,
            p_draw,
            arena: ScratchArena::new(),
            color_groups: ColorGroups::new(),
        }
    }

    /// Recompute the color-group partition and reorder `self.events` into
    /// color-contiguous ranges. After this call, `self.color_groups.groups[c]`
    /// contains a contiguous ascending range of indices in `self.events`.
    pub(crate) fn recompute_color_groups(&mut self) {
        use crate::color_group::color_greedy;

        let n = self.events.len();
        if n == 0 {
            self.color_groups = ColorGroups::new();
            return;
        }

        let cg = color_greedy(n, |ev_idx| {
            self.events[ev_idx].iter_agents().collect::<Vec<_>>()
        });

        let mut reordered: Vec<Event> = Vec::with_capacity(n);
        let mut new_groups: Vec<Vec<usize>> = Vec::with_capacity(cg.groups.len());
        let mut taken: Vec<Option<Event>> = self.events.drain(..).map(Some).collect();

        for group in &cg.groups {
            let mut new_indices: Vec<usize> = Vec::with_capacity(group.len());
            for &old_idx in group {
                let ev = taken[old_idx].take().expect("event already taken");
                new_indices.push(reordered.len());
                reordered.push(ev);
            }
            new_groups.push(new_indices);
        }

        self.events = reordered;
        self.color_groups = ColorGroups { groups: new_groups };
    }

    pub fn add_events<D: Drift<T>>(
        &mut self,
        composition: Vec<Vec<Vec<Index>>>,
        results: Vec<Vec<f64>>,
        weights: Vec<Vec<Vec<f64>>>,
        agents: &CompetitorStore<T, D>,
    ) {
        let mut unique = Vec::with_capacity(10);

        let this_agent = composition.iter().flatten().flatten().filter(|idx| {
            if !unique.contains(idx) {
                unique.push(**idx);
                return true;
            }
            false
        });

        for idx in this_agent {
            let elapsed = compute_elapsed(agents[*idx].last_time.as_ref(), &self.time);

            if let Some(skill) = self.skills.get_mut(*idx) {
                skill.elapsed = elapsed;
                skill.forward = agents[*idx].receive(&self.time);
            } else {
                self.skills.insert(
                    *idx,
                    Skill {
                        forward: agents[*idx].receive(&self.time),
                        elapsed,
                        ..Default::default()
                    },
                );
            }
        }

        let events = composition.iter().enumerate().map(|(e, event)| {
            let teams = event
                .iter()
                .enumerate()
                .map(|(t, team)| {
                    let items = team
                        .iter()
                        .map(|&agent| Item {
                            agent,
                            likelihood: N_INF,
                        })
                        .collect::<Vec<_>>();

                    Team {
                        items,
                        output: if results.is_empty() {
                            (event.len() - (t + 1)) as f64
                        } else {
                            results[e][t]
                        },
                    }
                })
                .collect::<Vec<_>>();

            let weights = if weights.is_empty() {
                teams
                    .iter()
                    .map(|team| vec![1.0; team.items.len()])
                    .collect::<Vec<_>>()
            } else {
                weights[e].clone()
            };

            Event {
                teams,
                evidence: 0.0,
                weights,
            }
        });

        let from = self.events.len();

        self.events.extend(events);

        self.iteration(from, agents);
        self.recompute_color_groups();
    }
    pub(crate) fn posteriors(&self) -> HashMap<Index, Gaussian> {
        self.skills
            .iter()
            .map(|(idx, skill)| (idx, skill.posterior()))
            .collect::<HashMap<_, _>>()
    }

    pub fn iteration<D: Drift<T>>(&mut self, from: usize, agents: &CompetitorStore<T, D>) {
        if from > 0 || self.color_groups.is_empty() {
            // Initial pass (add_events) or no color groups yet: simple sequential sweep.
            for event in self.events.iter_mut().skip(from) {
                let teams = event.within_priors(false, false, &self.skills, agents);
                let result = event.outputs();

                let g = Game::ranked_with_arena(
                    teams,
                    &result,
                    &event.weights,
                    self.p_draw,
                    &mut self.arena,
                );

                for (t, team) in event.teams.iter_mut().enumerate() {
                    for (i, item) in team.items.iter_mut().enumerate() {
                        let old_likelihood = self.skills.get(item.agent).unwrap().likelihood;
                        let new_likelihood =
                            (old_likelihood / item.likelihood) * g.likelihoods[t][i];
                        self.skills.get_mut(item.agent).unwrap().likelihood = new_likelihood;
                        item.likelihood = g.likelihoods[t][i];
                    }
                }

                event.evidence = g.evidence;
            }
        } else {
            self.sweep_color_groups(agents);
        }
    }

    /// Full event sweep using the color-group partition. Colors are processed
    /// sequentially; within each color the inner loop is parallel under rayon.
    ///
    /// Events within each color group touch disjoint agent sets (guaranteed by
    /// the greedy coloring). This lets each rayon thread write directly to its
    /// events' skill likelihoods without a deferred-apply step, matching the
    /// sequential path's allocation profile. The unsafe block is sound because:
    /// 1. `self.events[range]` and `self.skills` are separate fields → disjoint.
    /// 2. Events in the same color group access disjoint `Index` values in
    ///    `self.skills`, so concurrent writes land on different memory locations.
    /// 3. Each event only writes to its own items' likelihoods (no sharing).
    #[cfg(feature = "rayon")]
    fn sweep_color_groups<D: Drift<T>>(&mut self, agents: &CompetitorStore<T, D>) {
        use rayon::prelude::*;

        thread_local! {
            static ARENA: std::cell::RefCell<ScratchArena> =
                std::cell::RefCell::new(ScratchArena::new());
        }

        // Minimum color-group size to justify rayon's task-spawn overhead.
        // Below this threshold, process events sequentially to avoid regression
        // on small per-slice workloads.
        const RAYON_THRESHOLD: usize = 64;

        for color_idx in 0..self.color_groups.groups.len() {
            let group_len = self.color_groups.groups[color_idx].len();
            if group_len == 0 {
                continue;
            }
            let range = self.color_groups.color_range(color_idx);
            let p_draw = self.p_draw;

            if group_len >= RAYON_THRESHOLD {
                // Obtain a raw pointer from the unique `&mut self.skills` reference.
                // Casting back to `&mut` inside the closure is sound because:
                // 1. The pointer originates from a `&mut` — no aliasing with shared refs.
                // 2. Events in the same color group touch disjoint `Index` slots in the
                //    underlying Vec, so concurrent writes from different threads land on
                //    different memory locations — no data race.
                // 3. `self.events[range]` and `self.skills` are separate struct fields,
                //    so the borrow splits cleanly.
                let skills_addr: usize = (&mut self.skills as *mut SkillStore) as usize;
                self.events[range].par_iter_mut().for_each(move |ev| {
                    // SAFETY: see above.
                    let skills: &mut SkillStore =
                        unsafe { &mut *(skills_addr as *mut SkillStore) };
                    ARENA.with(|cell| {
                        let mut arena = cell.borrow_mut();
                        arena.reset();
                        ev.iteration_direct(skills, agents, p_draw, &mut arena);
                    });
                });
            } else {
                for ev in &mut self.events[range] {
                    ev.iteration_direct(&mut self.skills, agents, p_draw, &mut self.arena);
                }
            }
        }
    }

    /// Full event sweep using the color-group partition, sequential direct-write path.
    /// Events within each color group are updated inline — no EventOutput allocation —
    /// matching the T2 performance profile.
    #[cfg(not(feature = "rayon"))]
    fn sweep_color_groups<D: Drift<T>>(&mut self, agents: &CompetitorStore<T, D>) {
        for color_idx in 0..self.color_groups.groups.len() {
            if self.color_groups.groups[color_idx].is_empty() {
                continue;
            }
            let range = self.color_groups.color_range(color_idx);

            // Borrow self.events as a mutable slice for this color range.
            // self.skills and self.arena are separate fields — disjoint borrows are
            // allowed within a single method body.
            let p_draw = self.p_draw;
            for ev in &mut self.events[range] {
                ev.iteration_direct(&mut self.skills, agents, p_draw, &mut self.arena);
            }
        }
    }

    #[allow(dead_code)]
    pub(crate) fn convergence<D: Drift<T>>(&mut self, agents: &CompetitorStore<T, D>) -> usize {
        let epsilon = 1e-6;
        let iterations = 20;

        let mut step = (f64::INFINITY, f64::INFINITY);
        let mut i = 0;

        while tuple_gt(step, epsilon) && i < iterations {
            let old = self.posteriors();

            self.iteration(0, agents);

            let new = self.posteriors();

            step = old.iter().fold((0.0, 0.0), |step, (a, old)| {
                tuple_max(step, old.delta(new[a]))
            });

            i += 1;
        }

        i
    }
    pub(crate) fn forward_prior_out(&self, agent: &Index) -> Gaussian {
        let skill = self.skills.get(*agent).unwrap();
        skill.forward * skill.likelihood
    }

    pub(crate) fn backward_prior_out<D: Drift<T>>(
        &self,
        agent: &Index,
        agents: &CompetitorStore<T, D>,
    ) -> Gaussian {
        let skill = self.skills.get(*agent).unwrap();
        let n = skill.likelihood * skill.backward;
        n.forget(
            agents[*agent]
                .rating
                .drift
                .variance_for_elapsed(skill.elapsed),
        )
    }

    pub(crate) fn new_backward_info<D: Drift<T>>(&mut self, agents: &CompetitorStore<T, D>) {
        for (agent, skill) in self.skills.iter_mut() {
            skill.backward = agents[agent].message;
        }
        self.iteration(0, agents);
    }

    pub(crate) fn new_forward_info<D: Drift<T>>(&mut self, agents: &CompetitorStore<T, D>) {
        for (agent, skill) in self.skills.iter_mut() {
            skill.forward = agents[agent].receive_for_elapsed(skill.elapsed);
        }
        self.iteration(0, agents);
    }

    pub(crate) fn log_evidence<D: Drift<T>>(
        &self,
        online: bool,
        targets: &[Index],
        forward: bool,
        agents: &CompetitorStore<T, D>,
    ) -> f64 {
        // log_evidence is infrequent; a local arena avoids needing &mut self.
        let mut arena = ScratchArena::new();

        if targets.is_empty() {
            if online || forward {
                self.events
                    .iter()
                    .map(|event| {
                        Game::ranked_with_arena(
                            event.within_priors(online, forward, &self.skills, agents),
                            &event.outputs(),
                            &event.weights,
                            self.p_draw,
                            &mut arena,
                        )
                        .evidence
                        .ln()
                    })
                    .sum()
            } else {
                self.events.iter().map(|event| event.evidence.ln()).sum()
            }
        } else if online || forward {
            self.events
                .iter()
                .enumerate()
                .filter(|(_, event)| {
                    event
                        .teams
                        .iter()
                        .flat_map(|team| &team.items)
                        .any(|item| targets.contains(&item.agent))
                })
                .map(|(_, event)| {
                    Game::ranked_with_arena(
                        event.within_priors(online, forward, &self.skills, agents),
                        &event.outputs(),
                        &event.weights,
                        self.p_draw,
                        &mut arena,
                    )
                    .evidence
                    .ln()
                })
                .sum()
        } else {
            self.events
                .iter()
                .filter(|event| {
                    event
                        .teams
                        .iter()
                        .flat_map(|team| &team.items)
                        .any(|item| targets.contains(&item.agent))
                })
                .map(|event| event.evidence.ln())
                .sum()
        }
    }

    pub fn get_composition(&self) -> Vec<Vec<Vec<Index>>> {
        self.events
            .iter()
            .map(|event| {
                event
                    .teams
                    .iter()
                    .map(|team| team.items.iter().map(|item| item.agent).collect::<Vec<_>>())
                    .collect::<Vec<_>>()
            })
            .collect::<Vec<_>>()
    }

    pub fn get_results(&self) -> Vec<Vec<f64>> {
        self.events
            .iter()
            .map(|event| {
                event
                    .teams
                    .iter()
                    .map(|team| team.output)
                    .collect::<Vec<_>>()
            })
            .collect::<Vec<_>>()
    }
}

pub(crate) fn compute_elapsed<T: Time>(last: Option<&T>, current: &T) -> i64 {
    last.map(|l| l.elapsed_to(current).max(0)).unwrap_or(0)
}
#[cfg(test)]
|
||||
mod tests {
|
||||
use approx::assert_ulps_eq;
|
||||
|
||||
use super::*;
|
||||
use crate::{
|
||||
KeyTable, competitor::Competitor, drift::ConstantDrift, rating::Rating,
|
||||
storage::CompetitorStore,
|
||||
};
|
||||
|
||||
#[test]
|
||||
fn test_one_event_each() {
|
||||
let mut index_map = KeyTable::new();
|
||||
|
||||
let a = index_map.get_or_create("a");
|
||||
let b = index_map.get_or_create("b");
|
||||
let c = index_map.get_or_create("c");
|
||||
let d = index_map.get_or_create("d");
|
||||
let e = index_map.get_or_create("e");
|
||||
let f = index_map.get_or_create("f");
|
||||
|
||||
let mut agents: CompetitorStore<i64, ConstantDrift> = CompetitorStore::new();
|
||||
|
||||
for agent in [a, b, c, d, e, f] {
|
||||
agents.insert(
|
||||
agent,
|
||||
Competitor {
|
||||
rating: Rating::new(
|
||||
Gaussian::from_ms(25.0, 25.0 / 3.0),
|
||||
25.0 / 6.0,
|
||||
ConstantDrift(25.0 / 300.0),
|
||||
),
|
||||
..Default::default()
|
||||
},
|
||||
);
|
||||
}
|
||||
|
||||
let mut time_slice = TimeSlice::new(0i64, 0.0);
|
||||
|
||||
time_slice.add_events(
|
||||
vec![
|
||||
vec![vec![a], vec![b]],
|
||||
vec![vec![c], vec![d]],
|
||||
vec![vec![e], vec![f]],
|
||||
],
|
||||
vec![vec![1.0, 0.0], vec![0.0, 1.0], vec![1.0, 0.0]],
|
||||
vec![],
|
||||
&agents,
|
||||
);
|
||||
|
||||
let post = time_slice.posteriors();
|
||||
|
||||
assert_ulps_eq!(
|
||||
post[&a],
|
||||
Gaussian::from_ms(29.205220, 7.194481),
|
||||
epsilon = 1e-6
|
||||
);
|
||||
assert_ulps_eq!(
|
||||
post[&b],
|
||||
Gaussian::from_ms(20.794779, 7.194481),
|
||||
epsilon = 1e-6
|
||||
);
|
||||
assert_ulps_eq!(
|
||||
post[&c],
|
||||
Gaussian::from_ms(20.794779, 7.194481),
|
||||
epsilon = 1e-6
|
||||
);
|
||||
assert_ulps_eq!(
|
||||
post[&d],
|
||||
Gaussian::from_ms(29.205220, 7.194481),
|
||||
epsilon = 1e-6
|
||||
);
|
||||
assert_ulps_eq!(
|
||||
post[&e],
|
||||
Gaussian::from_ms(29.205220, 7.194481),
|
||||
epsilon = 1e-6
|
||||
);
|
||||
assert_ulps_eq!(
|
||||
post[&f],
|
||||
Gaussian::from_ms(20.794779, 7.194481),
|
||||
epsilon = 1e-6
|
||||
);
|
||||
|
||||
assert_eq!(time_slice.convergence(&agents), 1);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_same_strength() {
|
||||
let mut index_map = KeyTable::new();
|
||||
|
||||
let a = index_map.get_or_create("a");
|
||||
let b = index_map.get_or_create("b");
|
||||
let c = index_map.get_or_create("c");
|
||||
let d = index_map.get_or_create("d");
|
||||
let e = index_map.get_or_create("e");
|
||||
let f = index_map.get_or_create("f");
|
||||
|
||||
let mut agents: CompetitorStore<i64, ConstantDrift> = CompetitorStore::new();
|
||||
|
||||
for agent in [a, b, c, d, e, f] {
|
||||
agents.insert(
|
||||
agent,
|
||||
Competitor {
|
||||
rating: Rating::new(
|
||||
Gaussian::from_ms(25.0, 25.0 / 3.0),
|
||||
25.0 / 6.0,
|
||||
ConstantDrift(25.0 / 300.0),
|
||||
),
|
||||
..Default::default()
|
||||
},
|
||||
);
|
||||
}
|
||||
|
||||
let mut time_slice = TimeSlice::new(0i64, 0.0);
|
||||
|
||||
time_slice.add_events(
|
||||
vec![
|
||||
vec![vec![a], vec![b]],
|
||||
vec![vec![a], vec![c]],
|
||||
vec![vec![b], vec![c]],
|
||||
],
|
||||
vec![vec![1.0, 0.0], vec![0.0, 1.0], vec![1.0, 0.0]],
|
||||
vec![],
|
||||
&agents,
|
||||
);
|
||||
|
||||
let post = time_slice.posteriors();
|
||||
|
||||
assert_ulps_eq!(
|
||||
post[&a],
|
||||
Gaussian::from_ms(24.960978, 6.298544),
|
||||
epsilon = 1e-6
|
||||
);
|
||||
assert_ulps_eq!(
|
||||
post[&b],
|
||||
Gaussian::from_ms(27.095590, 6.010330),
|
||||
epsilon = 1e-6
|
||||
);
|
||||
assert_ulps_eq!(
|
||||
post[&c],
|
||||
Gaussian::from_ms(24.889681, 5.866311),
|
||||
epsilon = 1e-6
|
||||
);
|
||||
|
||||
assert!(time_slice.convergence(&agents) > 1);
|
||||
|
||||
let post = time_slice.posteriors();
|
||||
|
||||
assert_ulps_eq!(
|
||||
post[&a],
|
||||
Gaussian::from_ms(25.000000, 5.419212),
|
||||
epsilon = 1e-6
|
||||
);
|
||||
assert_ulps_eq!(
|
||||
post[&b],
|
||||
Gaussian::from_ms(25.000000, 5.419212),
|
||||
epsilon = 1e-6
|
||||
);
|
||||
assert_ulps_eq!(
|
||||
post[&c],
|
||||
Gaussian::from_ms(25.000000, 5.419212),
|
||||
epsilon = 1e-6
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_add_events() {
|
||||
let mut index_map = KeyTable::new();
|
||||
|
||||
let a = index_map.get_or_create("a");
|
||||
let b = index_map.get_or_create("b");
|
||||
let c = index_map.get_or_create("c");
|
||||
let d = index_map.get_or_create("d");
|
||||
let e = index_map.get_or_create("e");
|
||||
let f = index_map.get_or_create("f");
|
||||
|
||||
let mut agents: CompetitorStore<i64, ConstantDrift> = CompetitorStore::new();
|
||||
|
||||
for agent in [a, b, c, d, e, f] {
|
||||
agents.insert(
|
||||
agent,
|
||||
Competitor {
|
||||
rating: Rating::new(
|
||||
Gaussian::from_ms(25.0, 25.0 / 3.0),
|
||||
25.0 / 6.0,
|
||||
ConstantDrift(25.0 / 300.0),
|
||||
),
|
||||
..Default::default()
|
||||
},
|
||||
);
|
||||
}
|
||||
|
||||
        let mut time_slice = TimeSlice::new(0i64, 0.0);

        time_slice.add_events(
            vec![
                vec![vec![a], vec![b]],
                vec![vec![a], vec![c]],
                vec![vec![b], vec![c]],
            ],
            vec![vec![1.0, 0.0], vec![0.0, 1.0], vec![1.0, 0.0]],
            vec![],
            &agents,
        );

        time_slice.convergence(&agents);

        let post = time_slice.posteriors();

        assert_ulps_eq!(
            post[&a],
            Gaussian::from_ms(25.000000, 5.419212),
            epsilon = 1e-6
        );
        assert_ulps_eq!(
            post[&b],
            Gaussian::from_ms(25.000000, 5.419212),
            epsilon = 1e-6
        );
        assert_ulps_eq!(
            post[&c],
            Gaussian::from_ms(25.000000, 5.419212),
            epsilon = 1e-6
        );

        time_slice.add_events(
            vec![
                vec![vec![a], vec![b]],
                vec![vec![a], vec![c]],
                vec![vec![b], vec![c]],
            ],
            vec![vec![1.0, 0.0], vec![0.0, 1.0], vec![1.0, 0.0]],
            vec![],
            &agents,
        );

        assert_eq!(time_slice.events.len(), 6);

        time_slice.convergence(&agents);

        let post = time_slice.posteriors();

        assert_ulps_eq!(
            post[&a],
            Gaussian::from_ms(25.000003, 3.880150),
            epsilon = 1e-6
        );
        assert_ulps_eq!(
            post[&b],
            Gaussian::from_ms(25.000003, 3.880150),
            epsilon = 1e-6
        );
        assert_ulps_eq!(
            post[&c],
            Gaussian::from_ms(25.000003, 3.880150),
            epsilon = 1e-6
        );
    }

    #[test]
    fn time_slice_color_groups_reorders_events() {
        // ev0: [a, b]; ev1: [c, d]; ev2: [a, c]
        // Greedy coloring: ev0→c0, ev1→c0 (disjoint), ev2→c1 (overlaps both).
        // After recompute_color_groups, physical order is [ev0, ev1, ev2]
        // and groups == [[0, 1], [2]].
        let mut index_map = KeyTable::new();

        let a = index_map.get_or_create("a");
        let b = index_map.get_or_create("b");
        let c = index_map.get_or_create("c");
        let d = index_map.get_or_create("d");

        let mut agents: CompetitorStore<i64, ConstantDrift> = CompetitorStore::new();

        for agent in [a, b, c, d] {
            agents.insert(
                agent,
                Competitor {
                    rating: Rating::new(
                        Gaussian::from_ms(25.0, 25.0 / 3.0),
                        25.0 / 6.0,
                        ConstantDrift(25.0 / 300.0),
                    ),
                    ..Default::default()
                },
            );
        }

        let mut ts = TimeSlice::new(0i64, 0.0);

        ts.add_events(
            vec![
                vec![vec![a], vec![b]],
                vec![vec![c], vec![d]],
                vec![vec![a], vec![c]],
            ],
            vec![vec![1.0, 0.0], vec![1.0, 0.0], vec![1.0, 0.0]],
            vec![],
            &agents,
        );

        assert_eq!(ts.color_groups.n_colors(), 2);
        assert_eq!(ts.color_groups.groups[0], vec![0, 1]);
        assert_eq!(ts.color_groups.groups[1], vec![2]);

        assert_eq!(ts.color_groups.color_range(0), 0..2);
        assert_eq!(ts.color_groups.color_range(1), 2..3);

        // Events at positions 0 and 1 (color 0) must be mutually disjoint,
        // while event 2 needed its own color because it overlaps both of
        // them. Check both invariants on the reordered events' agent sets.
        let agents_in_ev2: Vec<Index> = ts.events[2].iter_agents().collect();
        let agents_in_ev0: Vec<Index> = ts.events[0].iter_agents().collect();
        let agents_in_ev1: Vec<Index> = ts.events[1].iter_agents().collect();
        // ev0 and ev1 must be disjoint from each other (color-0 invariant).
        assert!(agents_in_ev0.iter().all(|ag| !agents_in_ev1.contains(ag)));
        // ev2 must share an agent with ev0 or ev1 (it needed its own color).
        let ev2_overlaps_ev0 = agents_in_ev2.iter().any(|ag| agents_in_ev0.contains(ag));
        let ev2_overlaps_ev1 = agents_in_ev2.iter().any(|ag| agents_in_ev1.contains(ag));
        assert!(ev2_overlaps_ev0 || ev2_overlaps_ev1);
    }
}
225
tests/api_shape.rs
Normal file
@@ -0,0 +1,225 @@
//! Tests for the new T2 public API surface: typed add_events(iter) and the
//! fluent event builder (added in Task 16).

use smallvec::smallvec;
use trueskill_tt::{ConstantDrift, ConvergenceOptions, Event, History, Member, Outcome, Team};

#[test]
fn add_events_bulk_via_iter() {
    let mut h = History::builder()
        .mu(0.0)
        .sigma(2.0)
        .beta(1.0)
        .p_draw(0.0)
        .drift(ConstantDrift(0.0))
        .convergence(ConvergenceOptions {
            max_iter: 30,
            epsilon: 1e-6,
        })
        .build();

    let events: Vec<Event<i64, &'static str>> = vec![
        Event {
            time: 1,
            teams: smallvec![
                Team::with_members([Member::new("a")]),
                Team::with_members([Member::new("b")]),
            ],
            outcome: Outcome::winner(0, 2),
        },
        Event {
            time: 2,
            teams: smallvec![
                Team::with_members([Member::new("b")]),
                Team::with_members([Member::new("c")]),
            ],
            outcome: Outcome::winner(0, 2),
        },
    ];

    h.add_events(events).unwrap();
    let report = h.converge().unwrap();
    assert!(report.converged);
    assert!(h.lookup(&"a").is_some());
    assert!(h.lookup(&"b").is_some());
    assert!(h.lookup(&"c").is_some());
}

#[test]
fn add_events_draw() {
    let mut h = History::builder()
        .mu(25.0)
        .sigma(25.0 / 3.0)
        .beta(25.0 / 6.0)
        .p_draw(0.25)
        .drift(ConstantDrift(25.0 / 300.0))
        .build();

    let events: Vec<Event<i64, &'static str>> = vec![Event {
        time: 1,
        teams: smallvec![
            Team::with_members([Member::new("alice")]),
            Team::with_members([Member::new("bob")]),
        ],
        outcome: Outcome::draw(2),
    }];
    h.add_events(events).unwrap();
    h.converge().unwrap();
}

#[test]
fn add_events_rejects_mismatched_outcome_ranks() {
    use trueskill_tt::InferenceError;
    let mut h: History = History::builder().build();
    let events: Vec<Event<i64, &'static str>> = vec![Event {
        time: 1,
        teams: smallvec![
            Team::with_members([Member::new("a")]),
            Team::with_members([Member::new("b")]),
        ],
        outcome: Outcome::ranking([0, 1, 2]), // 3 ranks but 2 teams
    }];
    let err = h.add_events(events).unwrap_err();
    assert!(matches!(err, InferenceError::MismatchedShape { .. }));
}

#[test]
fn fluent_event_builder_basic() {
    let mut h = History::builder()
        .mu(25.0)
        .sigma(25.0 / 3.0)
        .beta(25.0 / 6.0)
        .p_draw(0.0)
        .build();

    h.event(1)
        .team(["alice", "bob"])
        .weights([1.0, 0.7])
        .team(["carol"])
        .ranking([1, 0])
        .commit()
        .unwrap();

    let report = h.converge().unwrap();
    assert!(report.converged);
    assert!(h.lookup(&"alice").is_some());
    assert!(h.lookup(&"bob").is_some());
    assert!(h.lookup(&"carol").is_some());
}

#[test]
fn fluent_event_builder_winner_convenience() {
    let mut h = History::builder()
        .mu(25.0)
        .sigma(25.0 / 3.0)
        .beta(25.0 / 6.0)
        .p_draw(0.0)
        .build();

    h.event(1)
        .team(["alice"])
        .team(["bob"])
        .winner(0)
        .commit()
        .unwrap();
    h.converge().unwrap();
}

#[test]
fn fluent_event_builder_draw() {
    let mut h = History::builder()
        .mu(25.0)
        .sigma(25.0 / 3.0)
        .beta(25.0 / 6.0)
        .p_draw(0.25)
        .build();

    h.event(1)
        .team(["alice"])
        .team(["bob"])
        .draw()
        .commit()
        .unwrap();
    h.converge().unwrap();
}

#[test]
fn current_skill_and_learning_curve() {
    use trueskill_tt::History;
    let mut h = History::builder()
        .mu(25.0)
        .sigma(25.0 / 3.0)
        .beta(25.0 / 6.0)
        .p_draw(0.0)
        .build();
    h.record_winner(&"a", &"b", 1).unwrap();
    h.record_winner(&"a", &"b", 2).unwrap();
    h.converge().unwrap();

    let a = h.current_skill(&"a").unwrap();
    assert!(a.mu() > 25.0);
    let b = h.current_skill(&"b").unwrap();
    assert!(b.mu() < 25.0);

    let a_curve = h.learning_curve(&"a");
    assert_eq!(a_curve.len(), 2);
    assert_eq!(a_curve[0].0, 1);
    assert_eq!(a_curve[1].0, 2);

    let all = h.learning_curves();
    assert_eq!(all.len(), 2);
    assert!(all.contains_key("a"));
    assert!(all.contains_key("b"));
}

#[test]
fn log_evidence_total_vs_subset() {
    use trueskill_tt::{ConstantDrift, History};
    let mut h = History::builder()
        .mu(0.0)
        .sigma(6.0)
        .beta(1.0)
        .p_draw(0.0)
        .drift(ConstantDrift(0.0))
        .build();
    h.record_winner(&"a", &"b", 1).unwrap();
    h.record_winner(&"b", &"a", 2).unwrap();
    let total = h.log_evidence();
    let a_only = h.log_evidence_for(&[&"a"]);
    assert!(total.is_finite());
    assert!(a_only.is_finite());
}

#[test]
fn predict_quality_two_teams() {
    use trueskill_tt::History;
    let mut h = History::builder()
        .mu(25.0)
        .sigma(25.0 / 3.0)
        .beta(25.0 / 6.0)
        .p_draw(0.0)
        .build();
    h.record_winner(&"a", &"b", 1).unwrap();
    h.converge().unwrap();

    let q = h.predict_quality(&[&[&"a"], &[&"b"]]);
    assert!(q > 0.0 && q <= 1.0);
}

#[test]
fn predict_outcome_two_teams_sums_to_one() {
    use trueskill_tt::History;
    let mut h = History::builder()
        .mu(25.0)
        .sigma(25.0 / 3.0)
        .beta(25.0 / 6.0)
        .p_draw(0.0)
        .build();
    h.record_winner(&"a", &"b", 1).unwrap();
    h.converge().unwrap();

    let p = h.predict_outcome(&[&[&"a"], &[&"b"]]);
    assert_eq!(p.len(), 2);
    assert!((p[0] + p[1] - 1.0).abs() < 1e-9);
    assert!(p[0] > p[1]);
}
100
tests/determinism.rs
Normal file
@@ -0,0 +1,100 @@
//! Determinism tests: identical posteriors across RAYON_NUM_THREADS
//! values. Only compiled with the `rayon` feature.

#![cfg(feature = "rayon")]

use smallvec::smallvec;
use trueskill_tt::{ConstantDrift, ConvergenceOptions, Event, History, Member, Outcome, Team};

/// Build a deterministic workload using a simple LCG (no external rand crate).
fn build_and_converge(seed: u64) -> Vec<(i64, trueskill_tt::Gaussian)> {
    let mut h = History::<i64, _, _, String>::builder_with_key()
        .mu(25.0)
        .sigma(25.0 / 3.0)
        .beta(25.0 / 6.0)
        .drift(ConstantDrift(25.0 / 300.0))
        .convergence(ConvergenceOptions {
            max_iter: 30,
            epsilon: 1e-6,
        })
        .build();

    // LCG for deterministic pseudo-random ints.
    let mut rng = seed;
    let mut next = || {
        rng = rng
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        rng
    };

    let mut events: Vec<Event<i64, String>> = Vec::with_capacity(200);
    for ev_i in 0..200 {
        let a = (next() % 40) as usize;
        let mut b = (next() % 40) as usize;
        while b == a {
            b = (next() % 40) as usize;
        }
        // ~10 events per slice so color groups have material parallelism.
        events.push(Event {
            time: (ev_i as i64 / 10) + 1,
            teams: smallvec![
                Team::with_members([Member::new(format!("p{a}"))]),
                Team::with_members([Member::new(format!("p{b}"))]),
            ],
            outcome: Outcome::winner((next() % 2) as u32, 2),
        });
    }
    h.add_events(events).unwrap();
    h.converge().unwrap();
    // Sample one competitor's curve for the comparison.
    h.learning_curve("p0")
}

#[test]
fn posteriors_identical_across_thread_counts() {
    let sizes = [1usize, 2, 4, 8];
    let mut results: Vec<Vec<(i64, trueskill_tt::Gaussian)>> = Vec::new();
    for &n in &sizes {
        let pool = rayon::ThreadPoolBuilder::new()
            .num_threads(n)
            .build()
            .expect("rayon pool build");
        let curve = pool.install(|| build_and_converge(42));
        results.push(curve);
    }

    let reference = &results[0];
    for (i, curve) in results.iter().enumerate().skip(1) {
        assert_eq!(
            curve.len(),
            reference.len(),
            "curve length differs at {n} threads",
            n = sizes[i],
        );
        for (j, (&(t_ref, g_ref), &(t, g))) in reference.iter().zip(curve.iter()).enumerate() {
            assert_eq!(
                t_ref, t,
                "time point {j} differs at {n} threads: ref={t_ref} vs got={t}",
                n = sizes[i],
            );
            assert_eq!(
                g_ref.mu().to_bits(),
                g.mu().to_bits(),
                "mu bits differ at {n} threads, time {t}: ref={ref_mu} got={got_mu}",
                n = sizes[i],
                ref_mu = g_ref.mu(),
                got_mu = g.mu(),
            );
            assert_eq!(
                g_ref.sigma().to_bits(),
                g.sigma().to_bits(),
                "sigma bits differ at {n} threads, time {t}: ref={ref_sigma} got={got_sigma}",
                n = sizes[i],
                ref_sigma = g_ref.sigma(),
                got_sigma = g.sigma(),
            );
        }
    }
}
61
tests/equivalence.rs
Normal file
@@ -0,0 +1,61 @@
//! Equivalence tests: every historical golden from the pre-T2 tests is
//! reproduced here at the integration level via the new public API.
//!
//! The in-crate tests in `src/history.rs::tests` and
//! `src/time_slice.rs::tests` are the primary regression net for numerical
//! behavior. This file provides Game-level goldens that stand alone and are
//! more naturally expressed as integration tests.

use approx::assert_ulps_eq;
use trueskill_tt::{ConstantDrift, Game, GameOptions, Gaussian, Outcome, Rating};

type R = Rating<i64, ConstantDrift>;

fn ts_rating(mu: f64, sigma: f64, beta: f64, gamma: f64) -> R {
    R::new(Gaussian::from_ms(mu, sigma), beta, ConstantDrift(gamma))
}

#[test]
fn game_1v1_golden_matches_historical() {
    let a = ts_rating(25.0, 25.0 / 3.0, 25.0 / 6.0, 25.0 / 300.0);
    let b = ts_rating(25.0, 25.0 / 3.0, 25.0 / 6.0, 25.0 / 300.0);
    let (a_post, b_post) = Game::<i64, _>::one_v_one(&a, &b, Outcome::winner(0, 2)).unwrap();
    // Historical golden from pre-T2 test_1vs1 (team 0 wins):
    assert_ulps_eq!(
        a_post,
        Gaussian::from_ms(29.205220, 7.194481),
        epsilon = 1e-6
    );
    assert_ulps_eq!(
        b_post,
        Gaussian::from_ms(20.794779, 7.194481),
        epsilon = 1e-6
    );
}

#[test]
fn game_1v1_draw_golden() {
    let a = ts_rating(25.0, 25.0 / 3.0, 25.0 / 6.0, 25.0 / 300.0);
    let b = ts_rating(25.0, 25.0 / 3.0, 25.0 / 6.0, 25.0 / 300.0);
    let g = Game::<i64, _>::ranked(
        &[&[a], &[b]],
        Outcome::draw(2),
        &GameOptions {
            p_draw: 0.25,
            convergence: Default::default(),
        },
    )
    .unwrap();
    let p = g.posteriors();
    // Historical golden from pre-T2 test_1vs1_draw:
    assert_ulps_eq!(
        p[0][0],
        Gaussian::from_ms(24.999999, 6.469480),
        epsilon = 1e-6
    );
    assert_ulps_eq!(
        p[1][0],
        Gaussian::from_ms(24.999999, 6.469480),
        epsilon = 1e-6
    );
}
96
tests/game.rs
Normal file
@@ -0,0 +1,96 @@
use trueskill_tt::{
    ConstantDrift, ConvergenceOptions, Game, GameOptions, Gaussian, InferenceError, Outcome, Rating,
};

type R = Rating<i64, ConstantDrift>;

fn default_rating() -> R {
    R::new(
        Gaussian::from_ms(25.0, 25.0 / 3.0),
        25.0 / 6.0,
        ConstantDrift(25.0 / 300.0),
    )
}

#[test]
fn game_ranked_1v1_golden() {
    let a = default_rating();
    let b = default_rating();
    let g = Game::<i64, _>::ranked(
        &[&[a], &[b]],
        Outcome::winner(0, 2),
        &GameOptions::default(),
    )
    .unwrap();
    let p = g.posteriors();
    assert!(p[0][0].mu() > 25.0);
    assert!(p[1][0].mu() < 25.0);
    assert!((p[0][0].sigma() - p[1][0].sigma()).abs() < 1e-6);
}

#[test]
fn game_one_v_one_shortcut() {
    let a = default_rating();
    let b = default_rating();
    let (a_post, b_post) = Game::<i64, _>::one_v_one(&a, &b, Outcome::winner(0, 2)).unwrap();
    assert!(a_post.mu() > 25.0);
    assert!(b_post.mu() < 25.0);
}

#[test]
fn game_ranked_rejects_bad_p_draw() {
    let a = R::new(Gaussian::default(), 1.0, ConstantDrift(0.0));
    let err = Game::<i64, _>::ranked(
        &[&[a], &[a]],
        Outcome::winner(0, 2),
        &GameOptions {
            p_draw: 1.5,
            convergence: ConvergenceOptions::default(),
        },
    )
    .unwrap_err();
    assert!(matches!(err, InferenceError::InvalidProbability { .. }));
}

#[test]
fn game_ranked_rejects_mismatched_ranks() {
    let a = R::new(Gaussian::default(), 1.0, ConstantDrift(0.0));
    let err = Game::<i64, _>::ranked(
        &[&[a], &[a]],
        Outcome::ranking([0, 1, 2]),
        &GameOptions::default(),
    )
    .unwrap_err();
    assert!(matches!(err, InferenceError::MismatchedShape { .. }));
}

#[test]
fn game_free_for_all_three_players() {
    let a = default_rating();
    let b = default_rating();
    let c = default_rating();
    let g = Game::<i64, _>::free_for_all(
        &[&a, &b, &c],
        Outcome::ranking([0, 1, 2]),
        &GameOptions::default(),
    )
    .unwrap();
    let p = g.posteriors();
    assert_eq!(p.len(), 3);
    assert!(p[0][0].mu() > p[1][0].mu());
    assert!(p[1][0].mu() > p[2][0].mu());
}

#[test]
fn game_log_evidence_is_finite() {
    let a = default_rating();
    let b = default_rating();
    let g = Game::<i64, _>::ranked(
        &[&[a], &[b]],
        Outcome::winner(0, 2),
        &GameOptions::default(),
    )
    .unwrap();
    assert!(g.log_evidence().is_finite());
    assert!(g.log_evidence() < 0.0);
}
54
tests/record_winner.rs
Normal file
@@ -0,0 +1,54 @@
use trueskill_tt::{ConstantDrift, ConvergenceOptions, History};

#[test]
fn record_winner_builds_history() {
    let mut h = History::builder()
        .mu(25.0)
        .sigma(25.0 / 3.0)
        .beta(25.0 / 6.0)
        .drift(ConstantDrift(25.0 / 300.0))
        .convergence(ConvergenceOptions {
            max_iter: 30,
            epsilon: 1e-6,
        })
        .build();

    h.record_winner(&"alice", &"bob", 1).unwrap();
    h.converge().unwrap();

    let a_idx = h.lookup(&"alice").unwrap();
    let b_idx = h.lookup(&"bob").unwrap();

    assert_ne!(a_idx, b_idx);
}

#[test]
fn intern_is_idempotent() {
    let mut h: History = History::builder().build();
    let a1 = h.intern(&"alice");
    let a2 = h.intern(&"alice");
    assert_eq!(a1, a2);
}

#[test]
fn lookup_returns_none_for_missing() {
    let h: History = History::builder().build();
    assert!(h.lookup(&"nobody").is_none());
}

#[test]
fn record_draw_with_p_draw_set() {
    let mut h = History::builder()
        .mu(25.0)
        .sigma(25.0 / 3.0)
        .beta(25.0 / 6.0)
        .drift(ConstantDrift(25.0 / 300.0))
        .p_draw(0.25)
        .build();

    h.record_draw(&"alice", &"bob", 1).unwrap();
    h.converge().unwrap();

    assert!(h.lookup(&"alice").is_some());
    assert!(h.lookup(&"bob").is_some());
}