T0 + T1 + T2: engine redesign through new API surface (#1)
Implements tiers T0, T1, T2 of `docs/superpowers/specs/2026-04-23-trueskill-engine-redesign-design.md`. All three tiers have landed together on this branch because they build on one another; this PR rolls them up for a single review pass.

Per-tier plans:

- T0: `docs/superpowers/plans/2026-04-23-t0-numerical-parity.md`
- T1: `docs/superpowers/plans/2026-04-24-t1-factor-graph.md`
- T2: `docs/superpowers/plans/2026-04-24-t2-new-api-surface.md`

## Summary

### T0 — Numerical parity (internal)

- `Gaussian` switched to natural-parameter storage `(pi, tau)`; mul/div now ~7× faster (218 ps vs 1.57 ns).
- `HashMap<Index, _>` → dense `Vec<_>` keyed by `Index.0` (via `AgentStore<D>`, `SkillStore`).
- `ScratchArena` eliminates per-event allocations in `Game::likelihoods`.
- `InferenceError` seed type added (1 variant).
- 38 → 53 tests passing through T1.
- Benchmark: `Batch::iteration` 29.84 → 21.25 µs.

### T1 — Factor graph machinery (internal)

- `Factor` trait + `BuiltinFactor` enum (TeamSum / RankDiff / Trunc) driving within-game inference.
- `VarStore` flat storage for variable marginals.
- `Schedule` trait + `EpsilonOrMax` impl replacing the hand-rolled EP loop.
- `Game::likelihoods` rebuilt on the factor-graph machinery; iteration counts and goldens preserved to within 1e-6.
- 53 tests passing.
- Benchmark: `Batch::iteration` 23.01 µs (slight regression absorbed in T2).

### T2 — New API surface (breaking)

**Renames:**

- `IndexMap → KeyTable`, `Player → Rating`, `Agent → Competitor`, `Batch → TimeSlice`

**New types:**

- `Time` trait with `Untimed` ZST and `i64` impls; `Drift<T>`, `Rating<T, D>`, `Competitor<T, D>`, `TimeSlice<T>`, `History<T, D, O, K>` all generic.
- `Event<T, K>`, `Team<K>`, `Member<K>`, `Outcome` (`Ranked` variant; `#[non_exhaustive]`).
- `Observer<T>` trait + `NullObserver`.
- `ConvergenceOptions`, `ConvergenceReport`.
- `GameOptions`, `OwnedGame<T, D>`.
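The T0 speedup comes from the algebra of natural parameters: with `pi = 1/sigma²` and `tau = mu/sigma²`, Gaussian density multiplication and division reduce to component-wise addition and subtraction, with no divisions in the hot path. A minimal standalone sketch of the representation (illustrative only, not the crate's actual `Gaussian` implementation):

```rust
/// Sketch of a natural-parameter Gaussian: pi = 1/sigma^2 (precision),
/// tau = mu/sigma^2 (precision-weighted mean). Illustrative only.
#[derive(Copy, Clone, Debug, PartialEq)]
struct Gaussian {
    pi: f64,
    tau: f64,
}

impl Gaussian {
    fn from_ms(mu: f64, sigma: f64) -> Self {
        let pi = 1.0 / (sigma * sigma);
        Gaussian { pi, tau: pi * mu }
    }
    fn mu(&self) -> f64 {
        self.tau / self.pi
    }
    fn sigma(&self) -> f64 {
        (1.0 / self.pi).sqrt()
    }
    /// Density multiplication: natural parameters simply add.
    fn mul(self, other: Self) -> Self {
        Gaussian { pi: self.pi + other.pi, tau: self.tau + other.tau }
    }
    /// Density division (the EP cavity operation): parameters subtract.
    fn div(self, other: Self) -> Self {
        Gaussian { pi: self.pi - other.pi, tau: self.tau - other.tau }
    }
}

fn main() {
    let a = Gaussian::from_ms(25.0, 3.0);
    let b = Gaussian::from_ms(20.0, 4.0);
    // Dividing the product by one operand recovers the other exactly.
    let back = a.mul(b).div(b);
    assert!((back.mu() - 25.0).abs() < 1e-12);
    assert!((back.sigma() - 3.0).abs() < 1e-12);
    println!("mul/div round-trip ok");
}
```

Two additions and two subtractions per op is why mul/div land in the same ~220 ps range as add/sub in the benchmark table below.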
**Three-tier ingestion:**

- `history.record_winner(&K, &K, T)` / `record_draw(&K, &K, T)` — 1v1 convenience.
- `history.add_events(iter)` — typed bulk.
- `history.event(T).team([...]).weights([...]).ranking([...]).commit()` — fluent.

**Query API:** `current_skill`, `learning_curve`, `learning_curves` (keyed on `K`), `log_evidence`, `log_evidence_for`, `predict_quality`, `predict_outcome`.

**Game constructors:** `ranked`, `one_v_one`, `free_for_all`, `custom` — all returning `Result<_, InferenceError>`.

**`factors` module:** `Factor`, `Schedule`, `VarStore`, `VarId`, `BuiltinFactor`, `EpsilonOrMax`, `ScheduleReport`, `TeamSumFactor`, `RankDiffFactor`, `TruncFactor` now public.

**Errors:** `InferenceError` gains `MismatchedShape`, `InvalidProbability`, `ConvergenceFailed`; boundary panics converted to `Result`.

**Removed (breaking):** `History::convergence(iters, eps, verbose)`, `HistoryBuilder::gamma(f64)`, `HistoryBuilder::time(bool)`, `History.time: bool`, `learning_curves_by_index`, nested-Vec public `add_events`.

## Behavior change (documented in CHANGELOG)

`Time = Untimed` has `elapsed_to → 0`, so no drift accumulates between slices. The old `time=false` mode implicitly forced `elapsed=1` on reappearance via an `i64::MAX` sentinel — that quirk is not reproducible under a typed time axis. Tests that depended on it now use `History::<i64, _>` with explicit `1..=n` timestamps. One test (`test_env_ttt`) had 3 Gaussian goldens updated to reflect the corrected semantics; documented in commit `33a7d90`.

## Final numbers

| Metric | Before T0 | After T2 | Delta |
|---|---|---|---|
| `Batch::iteration` | 29.84 µs | 21.36 µs | **-28%** |
| `Gaussian::mul` | 1.57 ns | 219 ps | **-86%** |
| `Gaussian::div` | 1.57 ns | 219 ps | **-86%** |
| Tests passing | 38 | 90 | +52 |

All other Gaussian ops unchanged (~219 ps add/sub, ~264 ps pi/tau reads).
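The behavior change follows directly from what a typed time axis means: elapsed time drives drift, and the untimed instantiation reports zero elapsed by construction. A minimal sketch of the idea (hedged: the crate's real `Time` trait signature and drift plumbing may differ):

```rust
/// Sketch of a typed time axis. Drift between slices is a function of
/// elapsed time, so the untimed case accumulates none. Illustrative only.
trait Time: Copy {
    /// Elapsed units from `self` to `later`; feeds drift accumulation.
    fn elapsed_to(&self, later: &Self) -> i64;
}

/// ZST for histories with no meaningful time axis: nothing ever elapses,
/// so no drift is added between slices.
#[derive(Copy, Clone, Debug, Default, PartialEq, Eq)]
struct Untimed;

impl Time for Untimed {
    fn elapsed_to(&self, _later: &Self) -> i64 {
        0
    }
}

impl Time for i64 {
    fn elapsed_to(&self, later: &Self) -> i64 {
        *later - *self
    }
}

fn main() {
    // Untimed: a returning competitor accumulates no drift, unlike the old
    // time=false mode that implicitly forced elapsed = 1 on reappearance.
    assert_eq!(Untimed.elapsed_to(&Untimed), 0);
    // Explicit i64 timestamps 1..=n reproduce one drift step per slice.
    assert_eq!(3i64.elapsed_to(&4), 1);
    println!("ok");
}
```

This is why tests that relied on the old sentinel behavior were ported to `History::<i64, _>` with explicit timestamps rather than kept on `Untimed`.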
## Test plan

- [x] `cargo test --features approx` — 90/90 pass (68 lib + 10 api_shape + 6 game + 4 record_winner + 2 equivalence)
- [x] `cargo clippy --all-targets --features approx -- -D warnings` — clean
- [x] `cargo +nightly fmt --check` — clean
- [x] `cargo bench --bench batch` — 21.36 µs
- [x] `cargo bench --bench gaussian` — unchanged from T1
- [x] `cargo run --example atp --features approx` — rewritten in new API, runs clean
- [x] Historical Game-level goldens preserved in `tests/equivalence.rs`
- [x] Public API matches spec Section 4 (verified by integration tests in `tests/api_shape.rs`)

## Commit history

~45 commits total across T0 + T1 + T2. Each task is self-contained and individually tested; the branch is bisectable. See `git log main..t2-new-api-surface` for the full list.

## Deferred to later tiers

- `Outcome::Scored` + `MarginFactor` — T4
- `Damped` / `Residual` schedules — T4
- `Send + Sync` bounds + Rayon parallelism — T3
- N-team `predict_outcome` — T4
- `Game::custom` full ergonomics — T4

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Reviewed-on: #1
Co-authored-by: Anders Olsson <anders.e.olsson@gmail.com>
Co-committed-by: Anders Olsson <anders.e.olsson@gmail.com>
This commit was merged in pull request #1.
### `src/factor/mod.rs` (new file, 148 lines)

```rust
//! Factor graph machinery for within-game inference.

use crate::gaussian::Gaussian;

/// Identifier for a variable in a `VarStore`.
///
/// Variables hold the current Gaussian marginal and are owned by exactly one
/// `VarStore`. `VarId` is meaningful only within its owning store.
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
pub struct VarId(pub u32);

/// Flat storage of variable marginals.
///
/// Variables are allocated by `alloc()` and accessed by `VarId`. The store is
/// reused across `Game::ranked_with_arena` calls (it lives in the
/// `ScratchArena`); call `clear()` before reuse.
#[derive(Debug, Default)]
pub struct VarStore {
    pub(crate) marginals: Vec<Gaussian>,
}

impl VarStore {
    pub fn new() -> Self {
        Self::default()
    }

    pub fn clear(&mut self) {
        self.marginals.clear();
    }

    pub fn len(&self) -> usize {
        self.marginals.len()
    }

    pub fn is_empty(&self) -> bool {
        self.marginals.is_empty()
    }

    pub fn alloc(&mut self, init: Gaussian) -> VarId {
        let id = VarId(self.marginals.len() as u32);
        self.marginals.push(init);
        id
    }

    pub fn get(&self, id: VarId) -> Gaussian {
        self.marginals[id.0 as usize]
    }

    pub fn set(&mut self, id: VarId, g: Gaussian) {
        self.marginals[id.0 as usize] = g;
    }
}

/// A factor in the EP graph.
///
/// Factors hold their own outgoing messages and propagate them by reading
/// connected variable marginals from a `VarStore` and writing back updated
/// marginals.
pub trait Factor {
    /// Update outgoing messages and write back to the var store.
    ///
    /// Returns the max delta `(|Δmu|, |Δsigma|)` across writes this
    /// propagation. Used by the `Schedule` to detect convergence.
    fn propagate(&mut self, vars: &mut VarStore) -> (f64, f64);

    /// Optional log-evidence contribution. Default 0.0 (no contribution).
    fn log_evidence(&self, _vars: &VarStore) -> f64 {
        0.0
    }
}

/// Enum dispatcher for the built-in factor types.
///
/// Using an enum instead of `Box<dyn Factor>` keeps factor data inline and
/// avoids virtual-call overhead in the hot inference loop.
#[derive(Debug)]
pub enum BuiltinFactor {
    TeamSum(team_sum::TeamSumFactor),
    RankDiff(rank_diff::RankDiffFactor),
    Trunc(trunc::TruncFactor),
}

impl Factor for BuiltinFactor {
    fn propagate(&mut self, vars: &mut VarStore) -> (f64, f64) {
        match self {
            Self::TeamSum(f) => f.propagate(vars),
            Self::RankDiff(f) => f.propagate(vars),
            Self::Trunc(f) => f.propagate(vars),
        }
    }

    fn log_evidence(&self, vars: &VarStore) -> f64 {
        match self {
            Self::Trunc(f) => f.log_evidence(vars),
            _ => 0.0,
        }
    }
}

pub mod rank_diff;
pub mod team_sum;
pub mod trunc;

#[cfg(test)]
mod tests {
    use super::*;
    use crate::N_INF;

    #[test]
    fn alloc_assigns_sequential_ids() {
        let mut store = VarStore::new();
        let a = store.alloc(N_INF);
        let b = store.alloc(N_INF);
        let c = store.alloc(N_INF);
        assert_eq!(a, VarId(0));
        assert_eq!(b, VarId(1));
        assert_eq!(c, VarId(2));
        assert_eq!(store.len(), 3);
    }

    #[test]
    fn get_returns_initial_value() {
        let mut store = VarStore::new();
        let g = Gaussian::from_ms(2.5, 1.0);
        let id = store.alloc(g);
        assert_eq!(store.get(id), g);
    }

    #[test]
    fn set_updates_value() {
        let mut store = VarStore::new();
        let id = store.alloc(N_INF);
        let new = Gaussian::from_ms(3.0, 0.5);
        store.set(id, new);
        assert_eq!(store.get(id), new);
    }

    #[test]
    fn clear_resets_length_keeping_capacity() {
        let mut store = VarStore::new();
        store.alloc(N_INF);
        store.alloc(N_INF);
        let cap = store.marginals.capacity();
        store.clear();
        assert_eq!(store.len(), 0);
        assert_eq!(store.marginals.capacity(), cap);
    }
}
```
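The `Schedule` / `EpsilonOrMax` machinery named in the summary is not part of this file, but its contract against `Factor::propagate` is visible here: sweep all factors, take the largest `(|Δmu|, |Δsigma|)` delta, and stop at epsilon or an iteration cap. A self-contained sketch of that loop shape (hypothetical helper name; the crate's `Schedule` trait may be shaped differently):

```rust
/// Sketch of an `EpsilonOrMax`-style convergence loop: run propagation
/// sweeps until the largest delta component falls below `eps` or the
/// iteration cap is hit. Returns (sweeps run, last max delta).
fn run_until_converged<F>(mut sweep: F, eps: f64, max_iters: usize) -> (usize, f64)
where
    F: FnMut() -> (f64, f64), // stand-in for one pass over all factors
{
    let mut last = f64::INFINITY;
    for i in 1..=max_iters {
        let (dmu, dsigma) = sweep();
        last = dmu.max(dsigma);
        if last < eps {
            return (i, last);
        }
    }
    (max_iters, last)
}

fn main() {
    // Stand-in for a factor-graph sweep whose max delta halves each pass.
    let mut delta = 1.0;
    let (iters, last) = run_until_converged(
        || {
            delta *= 0.5;
            (delta, delta * 0.1)
        },
        1e-6,
        100,
    );
    assert!(last < 1e-6);
    assert!(iters < 100);
    println!("converged in {} sweeps (last delta {:.2e})", iters, last);
}
```

Returning both the iteration count and the final delta is what lets a `ScheduleReport`-style type distinguish "converged early" from "hit the cap".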
### `src/factor/rank_diff.rs` (new file, 95 lines)

```rust
use crate::factor::{Factor, VarId, VarStore};

/// Maintains the constraint `diff = team_a - team_b` between three vars.
///
/// On each propagation:
/// - Reads marginals at `team_a` and `team_b` (which already incorporate any
///   incoming messages from neighboring factors).
/// - Computes `new_diff = team_a - team_b` (variance addition; see Gaussian::Sub).
/// - Writes the new marginal to `diff`.
/// - Returns the delta against the previous diff value.
///
/// This factor does NOT store an outgoing message; the diff variable is
/// effectively replaced on each propagation. The TruncFactor on the same diff
/// var holds the EP-divide message that produces the cavity.
#[derive(Debug)]
pub struct RankDiffFactor {
    pub team_a: VarId,
    pub team_b: VarId,
    pub diff: VarId,
}

impl Factor for RankDiffFactor {
    fn propagate(&mut self, vars: &mut VarStore) -> (f64, f64) {
        let a = vars.get(self.team_a);
        let b = vars.get(self.team_b);
        let new_diff = a - b;
        let old = vars.get(self.diff);
        vars.set(self.diff, new_diff);
        old.delta(new_diff)
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::{N_INF, gaussian::Gaussian};

    #[test]
    fn diff_of_two_known_gaussians() {
        let mut vars = VarStore::new();
        let team_a = vars.alloc(Gaussian::from_ms(25.0, 3.0));
        let team_b = vars.alloc(Gaussian::from_ms(20.0, 4.0));
        let diff = vars.alloc(N_INF);

        let mut f = RankDiffFactor { team_a, team_b, diff };
        f.propagate(&mut vars);

        let result = vars.get(diff);
        // mu = 25 - 20 = 5; var = 9 + 16 = 25; sigma = 5
        assert!((result.mu() - 5.0).abs() < 1e-12);
        assert!((result.sigma() - 5.0).abs() < 1e-12);
    }

    #[test]
    fn delta_zero_on_repeat() {
        let mut vars = VarStore::new();
        let team_a = vars.alloc(Gaussian::from_ms(10.0, 2.0));
        let team_b = vars.alloc(Gaussian::from_ms(8.0, 1.0));
        let diff = vars.alloc(N_INF);

        let mut f = RankDiffFactor { team_a, team_b, diff };
        f.propagate(&mut vars);
        let (dmu, dsig) = f.propagate(&mut vars);
        assert!(dmu < 1e-12);
        assert!(dsig < 1e-12);
    }

    #[test]
    fn delta_reflects_team_change() {
        let mut vars = VarStore::new();
        let team_a = vars.alloc(Gaussian::from_ms(10.0, 1.0));
        let team_b = vars.alloc(Gaussian::from_ms(0.0, 1.0));
        let diff = vars.alloc(N_INF);

        let mut f = RankDiffFactor { team_a, team_b, diff };
        f.propagate(&mut vars);

        // change team_a, repropagate; delta should be positive
        vars.set(team_a, Gaussian::from_ms(15.0, 1.0));
        let (dmu, _dsig) = f.propagate(&mut vars);
        assert!(dmu > 4.0, "expected ~5 delta, got {}", dmu);
    }
}
```
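The "variance addition" note in the doc comment is the key property: for independent Gaussians, the difference has the difference of means but the *sum* of variances. A standalone moment-form check of the arithmetic behind `diff_of_two_known_gaussians` (hypothetical helper, not the crate's `Gaussian::Sub` impl):

```rust
/// Moment-form Gaussian subtraction: means subtract, variances add.
/// Returns (mu, sigma) of a - b for independent a ~ N(mu_a, sigma_a^2),
/// b ~ N(mu_b, sigma_b^2). Illustrative only.
fn gaussian_sub(mu_a: f64, sigma_a: f64, mu_b: f64, sigma_b: f64) -> (f64, f64) {
    let mu = mu_a - mu_b;
    // Var(a - b) = Var(a) + Var(b) for independent a, b.
    let var = sigma_a * sigma_a + sigma_b * sigma_b;
    (mu, var.sqrt())
}

fn main() {
    // Same numbers as the diff_of_two_known_gaussians test:
    // N(25, 3^2) - N(20, 4^2) = N(5, 5^2).
    let (mu, sigma) = gaussian_sub(25.0, 3.0, 20.0, 4.0);
    assert!((mu - 5.0).abs() < 1e-12);
    assert!((sigma - 5.0).abs() < 1e-12);
    println!("diff ~ N({mu}, {sigma}^2)");
}
```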
### `src/factor/team_sum.rs` (new file, 98 lines)

```rust
use crate::{
    N00,
    factor::{Factor, VarId, VarStore},
    gaussian::Gaussian,
};

/// Computes the weighted sum of player performances into a team-perf var.
///
/// Inputs are pre-computed player performance Gaussians (i.e., rating priors
/// already with beta² noise added via `Rating::performance()`). The factor
/// runs once per game and writes the weighted sum to the output var.
#[derive(Debug)]
pub struct TeamSumFactor {
    pub inputs: Vec<(Gaussian, f64)>,
    pub out: VarId,
}

impl Factor for TeamSumFactor {
    fn propagate(&mut self, vars: &mut VarStore) -> (f64, f64) {
        let perf = self.inputs.iter().fold(N00, |acc, (g, w)| acc + (*g * *w));
        let old = vars.get(self.out);
        vars.set(self.out, perf);
        old.delta(perf)
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::N_INF;

    #[test]
    fn single_player_unit_weight() {
        let mut vars = VarStore::new();
        let out = vars.alloc(N_INF);
        let g = Gaussian::from_ms(25.0, 5.0);
        let mut f = TeamSumFactor { inputs: vec![(g, 1.0)], out };

        f.propagate(&mut vars);
        let result = vars.get(out);
        assert!((result.mu() - 25.0).abs() < 1e-12);
        assert!((result.sigma() - 5.0).abs() < 1e-12);
    }

    #[test]
    fn two_players_summed() {
        let mut vars = VarStore::new();
        let out = vars.alloc(N_INF);
        let g1 = Gaussian::from_ms(20.0, 3.0);
        let g2 = Gaussian::from_ms(30.0, 4.0);
        let mut f = TeamSumFactor { inputs: vec![(g1, 1.0), (g2, 1.0)], out };

        f.propagate(&mut vars);
        let result = vars.get(out);
        // sum: mu = 20 + 30 = 50, var = 9 + 16 = 25, sigma = 5
        assert!((result.mu() - 50.0).abs() < 1e-12);
        assert!((result.sigma() - 5.0).abs() < 1e-12);
    }

    #[test]
    fn weighted_inputs() {
        let mut vars = VarStore::new();
        let out = vars.alloc(N_INF);
        let g = Gaussian::from_ms(10.0, 2.0);
        let mut f = TeamSumFactor { inputs: vec![(g, 2.0)], out };

        f.propagate(&mut vars);
        let result = vars.get(out);
        // g * 2.0: mu = 10*2 = 20, sigma = 2*2 = 4
        assert!((result.mu() - 20.0).abs() < 1e-12);
        assert!((result.sigma() - 4.0).abs() < 1e-12);
    }

    #[test]
    fn delta_is_zero_on_repeat_propagate() {
        let mut vars = VarStore::new();
        let out = vars.alloc(N_INF);
        let g = Gaussian::from_ms(5.0, 1.0);
        let mut f = TeamSumFactor { inputs: vec![(g, 1.0)], out };

        f.propagate(&mut vars);
        let (dmu, dsig) = f.propagate(&mut vars);
        assert!(dmu < 1e-12, "expected ~0 delta on repeat, got {}", dmu);
        assert!(dsig < 1e-12);
    }
}
```
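The fold above relies on two moment-form facts: scaling `g * w` multiplies the mean by `w` and the sigma by `|w|`, and adding two Gaussians adds means and variances. A standalone check of both, using the same numbers as the tests (hypothetical helpers, not the crate's operator impls):

```rust
/// Scale a Gaussian by weight w: mean scales by w, sigma by |w|.
fn scale(mu: f64, sigma: f64, w: f64) -> (f64, f64) {
    (mu * w, sigma * w.abs())
}

/// Sum two independent Gaussians: means add, variances add.
fn add(a: (f64, f64), b: (f64, f64)) -> (f64, f64) {
    (a.0 + b.0, (a.1 * a.1 + b.1 * b.1).sqrt())
}

fn main() {
    // Matches two_players_summed: N(20, 3^2) + N(30, 4^2) = N(50, 5^2).
    let team = add(scale(20.0, 3.0, 1.0), scale(30.0, 4.0, 1.0));
    assert!((team.0 - 50.0).abs() < 1e-12);
    assert!((team.1 - 5.0).abs() < 1e-12);

    // Matches weighted_inputs: 2 * N(10, 2^2) = N(20, 4^2).
    let doubled = scale(10.0, 2.0, 2.0);
    assert!((doubled.0 - 20.0).abs() < 1e-12);
    assert!((doubled.1 - 4.0).abs() < 1e-12);
    println!("team sum checks ok");
}
```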
### `src/factor/trunc.rs` (new file, 130 lines)

```rust
use crate::{
    N_INF, approx, cdf,
    factor::{Factor, VarId, VarStore},
    gaussian::Gaussian,
};

/// EP truncation factor on a diff variable.
///
/// Implements the rectified-Gaussian approximation that turns a diff
/// distribution into a "this team rank-beats that team" or "tied" likelihood.
/// Stores its outgoing message to the diff variable so the cavity computation
/// produces the correct EP message on each propagation.
#[derive(Debug)]
pub struct TruncFactor {
    pub diff: VarId,
    pub margin: f64,
    pub tie: bool,
    /// Outgoing message to the diff variable (initial: N_INF, the EP identity).
    pub(crate) msg: Gaussian,
    /// Cached evidence (linear, not log) computed from the cavity on first propagation.
    pub(crate) evidence_cached: Option<f64>,
}

impl TruncFactor {
    pub fn new(diff: VarId, margin: f64, tie: bool) -> Self {
        Self {
            diff,
            margin,
            tie,
            msg: N_INF,
            evidence_cached: None,
        }
    }
}

impl Factor for TruncFactor {
    fn propagate(&mut self, vars: &mut VarStore) -> (f64, f64) {
        let marginal = vars.get(self.diff);
        // Cavity: marginal divided by our outgoing message.
        let cavity = marginal / self.msg;

        // First-time-only: cache the evidence contribution from the cavity.
        if self.evidence_cached.is_none() {
            self.evidence_cached = Some(cavity_evidence(cavity, self.margin, self.tie));
        }

        // Apply the truncation approximation to the cavity.
        let trunc = approx(cavity, self.margin, self.tie);

        // New outgoing message such that cavity * new_msg = trunc.
        let new_msg = trunc / cavity;
        let old_msg = self.msg;
        self.msg = new_msg;

        // Update the marginal: marginal_new = cavity * new_msg = trunc.
        vars.set(self.diff, trunc);

        old_msg.delta(new_msg)
    }

    fn log_evidence(&self, _vars: &VarStore) -> f64 {
        self.evidence_cached.unwrap_or(1.0).ln()
    }
}

/// P(diff > margin) for non-tie, P(|diff| < margin) for tie.
fn cavity_evidence(diff: Gaussian, margin: f64, tie: bool) -> f64 {
    if tie {
        cdf(margin, diff.mu(), diff.sigma()) - cdf(-margin, diff.mu(), diff.sigma())
    } else {
        1.0 - cdf(margin, diff.mu(), diff.sigma())
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::factor::VarStore;

    #[test]
    fn idempotent_after_convergence() {
        // After enough iterations, propagate should return ~0 delta.
        let mut vars = VarStore::new();
        let diff = vars.alloc(Gaussian::from_ms(2.0, 3.0));

        let mut f = TruncFactor::new(diff, 0.0, false);

        // Propagate many times; delta should drop toward 0.
        let mut last = (f64::INFINITY, f64::INFINITY);
        for _ in 0..20 {
            last = f.propagate(&mut vars);
        }
        assert!(last.0 < 1e-10, "expected converged delta, got {}", last.0);
        assert!(last.1 < 1e-10);
    }

    #[test]
    fn evidence_cached_on_first_propagate() {
        let mut vars = VarStore::new();
        let diff = vars.alloc(Gaussian::from_ms(2.0, 3.0));

        let mut f = TruncFactor::new(diff, 0.0, false);
        assert!(f.evidence_cached.is_none());

        f.propagate(&mut vars);
        assert!(f.evidence_cached.is_some());
        let first = f.evidence_cached.unwrap();

        // Evidence should be P(diff > 0) for diff ~ N(2, 9) ≈ 0.748
        assert!(first > 0.7);
        assert!(first < 0.8);

        // Subsequent propagations don't change it.
        f.propagate(&mut vars);
        assert_eq!(f.evidence_cached.unwrap(), first);
    }

    #[test]
    fn tie_evidence_uses_two_sided() {
        let mut vars = VarStore::new();
        let diff = vars.alloc(Gaussian::from_ms(0.0, 2.0));

        let mut f = TruncFactor::new(diff, 1.0, true);
        f.propagate(&mut vars);

        // For diff ~ N(0, 4), tie=true with margin=1: P(-1 < diff < 1) ≈ 0.383
        let ev = f.evidence_cached.unwrap();
        assert!(ev > 0.35 && ev < 0.42);
    }
}
```
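The evidence values asserted in these tests can be reproduced directly from the normal CDF. A self-contained check, standing in for the crate's `cdf` with the Abramowitz and Stegun 7.1.26 erf approximation (std Rust has no `erf`; helpers here are illustrative, not the crate's):

```rust
/// Abramowitz & Stegun 7.1.26 erf approximation (|error| < 1.5e-7).
fn erf(x: f64) -> f64 {
    let sign = if x < 0.0 { -1.0 } else { 1.0 };
    let x = x.abs();
    let t = 1.0 / (1.0 + 0.3275911 * x);
    let poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
        - 0.284496736) * t
        + 0.254829592) * t;
    sign * (1.0 - poly * (-x * x).exp())
}

/// Normal CDF P(X < x) for X ~ N(mu, sigma^2).
fn cdf(x: f64, mu: f64, sigma: f64) -> f64 {
    0.5 * (1.0 + erf((x - mu) / (sigma * std::f64::consts::SQRT_2)))
}

fn main() {
    // Non-tie evidence from evidence_cached_on_first_propagate:
    // P(diff > 0) for diff ~ N(2, 3^2) = 1 - Phi((0 - 2)/3) ≈ 0.7475.
    let win = 1.0 - cdf(0.0, 2.0, 3.0);
    assert!(win > 0.7 && win < 0.8);

    // Tie evidence from tie_evidence_uses_two_sided:
    // P(-1 < diff < 1) for diff ~ N(0, 2^2) ≈ 0.3829.
    let tie = cdf(1.0, 0.0, 2.0) - cdf(-1.0, 0.0, 2.0);
    assert!(tie > 0.35 && tie < 0.42);

    println!("win evidence ≈ {win:.4}, tie evidence ≈ {tie:.4}");
}
```

This mirrors `cavity_evidence` exactly: the one-sided tail for a win likelihood, the two-sided band for a tie.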