Three Tests — and One Honest Failure
Benchmark 1: Zero-Shot Generalization
Train on 10% of the thin ring's 2310 combinations. Test on the other 90%.
CRT: 98%
Standard: 0%
Advantage: INFINITE
With a random 10% of the data, every per-channel residue value (0..p-1) has been seen.
CRT therefore generalizes to all 2310 combinations; the standard model has seen only 231/2310.
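The coverage argument can be checked directly. A minimal sketch, assuming the "thin ring" is Z_2310 ≅ Z_2 × Z_3 × Z_5 × Z_7 × Z_11 (the prime set is inferred from 2310 = 2·3·5·7·11; the sampling seed is arbitrary):

```python
# Why 10% of the data covers every CRT channel: 231 random samples
# out of 2310 all but certainly hit every residue class mod each prime.
import random
from math import prod

PRIMES = [2, 3, 5, 7, 11]
N = prod(PRIMES)                                   # 2310 combinations

train = random.Random(0).sample(range(N), N // 10)  # 231 training examples

# A standard model has seen 231 whole values out of 2310. A CRT model
# sees each value only through its per-prime residues, and the 231
# samples cover every residue class 0..p-1 in every channel.
for p in PRIMES:
    seen = {n % p for n in train}
    assert seen == set(range(p))                   # full per-channel coverage
```

The asymmetry is the whole result: the CRT model never meets an unseen per-channel input at test time, while the standard model meets 2079 unseen whole values.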
Benchmark 2: Self-Healing Inference
Corrupt any single CRT channel. The network detects and corrects.
Detection: 100%
Known-loc fix: 100%
Blind fix: 99%
gcd(N/p, L) = 1 for every data prime p, so any single-channel error breaks CRT consistency. Mathematically guaranteed.
The L=11 channel corrects for free. Like ECC in your RAM, but algebraic.
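The detect-and-correct mechanism is standard redundant-residue arithmetic. A hedged sketch, assuming data primes 2, 3, 5, 7 (legal range N = 210) with L = 11 as the redundant check channel; the source is not explicit about which primes carry data, so treat this configuration as illustrative:

```python
# Single-channel error detection and blind correction in a redundant
# residue number system (RRNS). Assumed setup: data primes 2,3,5,7,
# check prime 11; legal values are 0..209.
from math import prod

DATA = [2, 3, 5, 7]
CHECK = 11
PRIMES = DATA + [CHECK]
N = prod(DATA)                          # legal range: 0..N-1

def crt(residues, moduli):
    """Reconstruct x mod prod(moduli) from its residues."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)    # pow(., -1, m): modular inverse
    return x % M

def encode(n):
    return [n % p for p in PRIMES]

def detect(res):
    """True iff the residues are inconsistent with any legal value."""
    return crt(res, PRIMES) >= N

def blind_correct(res):
    """Locate and fix at most one corrupted channel (location unknown)."""
    if not detect(res):
        return res
    for i in range(len(PRIMES)):
        # Drop channel i and reconstruct from the remaining four.
        cand = crt(res[:i] + res[i+1:], PRIMES[:i] + PRIMES[i+1:])
        if cand < N:                    # the other channels agree on a legal value
            fixed = list(res)
            fixed[i] = cand % PRIMES[i]
            return fixed
    return res                          # more than one error: give up

code = encode(123)
code[2] = (code[2] + 1) % 5             # corrupt the mod-5 channel
assert detect(code)
assert blind_correct(code) == encode(123)
```

With one check prime, detection of a single-channel error is guaranteed by the gcd condition above, but blind localization can occasionally be ambiguous (two channels may both yield a legal candidate), which is consistent with the 100% detect / 99% blind-fix split reported.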
Benchmark 3: Language Model
Byte-level next-byte prediction. Honest test.
Standard wins accuracy
CRT: 10.1x fewer params
ECC: 81-91% detect
Language has cross-channel correlations; the CRT independence assumption costs accuracy.
CRT value for language = parameters + ECC + speed, not raw accuracy.
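The independence cost can be shown in a toy model. Assumed formulation (not the paper's byte-level configuration): a CRT-style predictor over Z_15 ≅ Z_3 × Z_5 that factorizes the next-symbol distribution into independent per-channel residue distributions:

```python
# A factorized (per-channel independent) model cannot represent a
# distribution whose residue channels are correlated.
PRIMES = [3, 5]
target = {0: 0.5, 7: 0.5}   # true next-symbol distribution

# Best factorized fit: the per-channel marginals of the target.
marg = [{r: sum(p for s, p in target.items() if s % m == r)
         for r in range(m)} for m in PRIMES]

# Joint probability the factorized model assigns to each symbol 0..14.
joint = {s: marg[0][s % 3] * marg[1][s % 5] for s in range(15)}

mass_on_truth = joint[0] + joint[7]
print(mass_on_truth)        # prints 0.5; the other half leaks to symbols 10 and 12
```

The target puts all mass on {0, 7}, but the channel residues of 0 and 7 are correlated, so the best factorized model keeps only half the mass on the true support. Bytes in text correlate the same way, which is exactly the accuracy the standard model recovers.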
Falsification recorded honestly. The sun does not hide what it cannot do.