refactor(gaussian): switch to natural-parameter storage (pi, tau)

Mul and Div become two f64 adds/subs with no sqrt in the hot path.
mu() and sigma() are computed on demand from stored pi/tau.
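
The scheme above can be sketched as follows. This is a hedged, minimal illustration with assumed names (the commit's actual struct layout may differ): `pi = 1/sigma²` is the precision and `tau = mu/sigma²` the precision-weighted mean, so density products add natural parameters and quotients subtract them, with no sqrt until an accessor is called.

```rust
// Hedged sketch of natural-parameter storage (names assumed):
// pi = 1/sigma^2, tau = mu/sigma^2.
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Gaussian {
    pi: f64,
    tau: f64,
}

impl Gaussian {
    pub fn from_ms(mu: f64, sigma: f64) -> Self {
        let pi = sigma.powi(-2);
        // inf * 0.0 would be NaN, so pin tau for the mu == 0.0 point mass.
        let tau = if mu == 0.0 { 0.0 } else { pi * mu };
        Gaussian { pi, tau }
    }
    pub fn mu(&self) -> f64 {
        if self.pi == 0.0 { 0.0 } else { self.tau / self.pi }
    }
    // The sqrt only appears here, on demand, never in mul/div.
    pub fn sigma(&self) -> f64 {
        self.pi.recip().sqrt()
    }
    // Product of densities: natural parameters add (two f64 adds).
    pub fn mul(self, o: Gaussian) -> Gaussian {
        Gaussian { pi: self.pi + o.pi, tau: self.tau + o.tau }
    }
    // Quotient of densities: natural parameters subtract (two f64 subs).
    pub fn div(self, o: Gaussian) -> Gaussian {
        Gaussian { pi: self.pi - o.pi, tau: self.tau - o.tau }
    }
}
```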

Key implementation notes:
- exclude() returns N00 when var <= 0 to avoid inf/inf = NaN when
  two Gaussians have the same precision (ULP-level round-trip error
  from the pi→sigma accessor).
- Mul<f64> by 0.0 returns N00 (a point mass at 0), matching the old behavior.
- from_ms(0, 0) == N00 {pi:inf, tau:0}; from_ms(0, inf) == N_INF {pi:0, tau:0}.
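
The guard in the first note can be sketched as below. The semantics here are an assumption (exclude() treated as a division in natural parameters, with hypothetical free-function signature); the point is only the clamp: when both operands carry the same precision up to ULPs, the new precision comes out zero or slightly negative (var <= 0), and returning N00 {pi: inf, tau: 0} avoids the inf/inf = NaN that the naive accessors would otherwise produce.

```rust
// N00: point mass at 0, represented as (pi, tau) = (inf, 0).
const N00: (f64, f64) = (f64::INFINITY, 0.0);

// Hedged sketch of the exclude() guard (assumed semantics and signature).
fn exclude(pi: f64, tau: f64, other_pi: f64, other_tau: f64) -> (f64, f64) {
    let new_pi = pi - other_pi;
    if new_pi <= 0.0 {
        // var = 1/new_pi would be non-positive or infinite; clamp to N00.
        N00
    } else {
        (new_pi, tau - other_tau)
    }
}
```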

Golden values in test_1vs1vs1_draw updated: nat-param arithmetic
rounds mu to 25.0 (was 24.999999) and shifts sigma by ~3e-7.
Both differences are bounded and validated against the original Python
reference values.

Part of T0 engine redesign.
2026-04-24 06:59:43 +02:00
parent 06d3c886fe
commit a667deb7e1
6 changed files with 174 additions and 170 deletions


@@ -476,9 +476,9 @@ mod tests {
         epsilon = 1e-6
     );
-    let observed = h.batches[1].skills[&a].forward.sigma;
+    let observed = h.batches[1].skills[&a].forward.sigma();
     let gamma: f64 = 0.15 * 25.0 / 3.0;
-    let expected = (gamma.powi(2) + h.batches[0].skills[&a].posterior().sigma.powi(2)).sqrt();
+    let expected = (gamma.powi(2) + h.batches[0].skills[&a].posterior().sigma().powi(2)).sqrt();
     assert_ulps_eq!(observed, expected, epsilon = 0.000001);
@@ -743,8 +743,8 @@ mod tests {
     );
     assert_ulps_eq!(
-        h.batches[0].skills[&b].posterior().mu,
-        -1.0 * h.batches[0].skills[&c].posterior().mu,
+        h.batches[0].skills[&b].posterior().mu(),
+        -1.0 * h.batches[0].skills[&c].posterior().mu(),
         epsilon = 1e-6
     );