The Phantom Attractor Is Not Real
A spurious attractor at ~3.17 traps EML gradient descent on 40/40 seeds when targeting π. PSLQ found no relation to any known constant. It vanishes at higher precision. It was float64 arithmetic lying.
The setup
We trained a small EML tree to represent the constant π ≈ 3.14159265… using gradient descent on mean squared error. Standard initialization, standard Adam optimizer. The result was surprising.
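The EML tree itself ships in the monogate package and is not reproduced here, but the loop's shape is ordinary. A minimal sketch, assuming a single float64 parameter and a hand-rolled Adam (this convex toy converges cleanly; the pathology below comes from the EML landscape, not the loop):

import math

# Hypothetical stand-in for the training setup: one float64 parameter
# fit to pi with MSE and Adam. The actual EML tree is not shown here.
target = math.pi
theta, m, v = 1.0, 0.0, 0.0              # parameter and Adam moment estimates
lr, b1, b2, eps = 1e-3, 0.9, 0.999, 1e-8

for t in range(1, 20_001):
    grad = 2.0 * (theta - target)        # d/dtheta of (theta - target)^2
    m = b1 * m + (1 - b1) * grad         # first-moment EMA
    v = b2 * v + (1 - b2) * grad * grad  # second-moment EMA
    m_hat = m / (1 - b1 ** t)            # bias correction
    v_hat = v / (1 - b2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)

print(theta)  # approaches pi (within ~1e-3 for these settings) on this convex toy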
The mystery
On 40 out of 40 random seeds, gradient descent converged to approximately 3.171… — not π. The convergence was clean and consistent. The value was reproducible. It looked like a mathematical constant.
We ran a 320-digit PSLQ integer-relation search against a library of known constants: π, e, φ, ln 2, ζ(3) (Apéry's constant), Catalan's constant, and dozens more. No relation was found. The attractor value has no known algebraic or transcendental characterization.
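mpmath exposes a pslq routine that runs this kind of search. A minimal sketch, with an illustrative constant basis rather than the full library used in the actual 320-digit run:

from mpmath import mp, mpf, pslq, pi, e, phi, log, zeta, catalan

mp.dps = 50  # working precision for this illustration

# Sanity check that the search works: for y = 2*pi - 3*e, pslq recovers
# the integer relation 1*y - 2*pi + 3*e = 0.
y = 2 * pi - 3 * e
print(pslq([y, pi, e]))  # [1, -2, 3] (up to sign)

# For the attractor, feed its full-precision measured value plus the
# constant basis; None means no small integer relation was found.
# Note: a short literal like mpf("3.171") is itself an exact rational
# and would trivially relate to 1, so the full digits matter.
# x = mpf("<attractor value to 320 digits>")
# print(pslq([x, mpf(1), pi, e, phi, log(2), zeta(3), catalan]))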
The diagnosis
The gradient is not zero there. We computed ∇L at the attractor both numerically and analytically: ∇L ≠ 0. It is not a local minimum. It is not a saddle point. It is a region where the gradient is very small but nonzero, and where float64 arithmetic rounds the update to zero before the optimizer can escape.
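The mechanism is plain float64 spacing: near 3.17, adjacent doubles sit about 4.4e-16 apart, so any update smaller than half that gap rounds away entirely. A quick demonstration (the step size 1e-17 is illustrative, not the measured update at the attractor):

import math

theta = 3.171                       # parameter parked at the phantom value
print(math.ulp(theta))              # float64 spacing here: ~4.44e-16
tiny_step = 1e-17                   # an update below half that spacing
print(theta - tiny_step == theta)   # True: the update rounds to nothing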
The Lyapunov exponent of the loss landscape near the attractor is λ ≈ 13.5, which is highly chaotic. Trajectories that should diverge from the attractor are pinned by floating-point rounding before they can separate.
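For reference, a local Lyapunov exponent is typically estimated by tracking how fast two nearby trajectories separate under the update map, renormalizing their gap each step. The sketch below uses the logistic map as a stand-in for one optimizer step; the real estimate would substitute a gradient-descent update on the EML loss:

import math

def step(x):
    # Stand-in update map (assumption: illustrates only the estimator,
    # not the actual EML loss landscape).
    return 4.0 * x * (1.0 - x)       # logistic map, a known chaotic system

def lyapunov(x0, n=10_000, d0=1e-9):
    x, y = x0, x0 + d0
    total = 0.0
    for _ in range(n):
        x, y = step(x), step(y)
        d = abs(y - x)
        total += math.log(d / d0)    # accumulate the log stretching factor
        y = x + d0 * (y - x) / d     # renormalize the separation to d0
    return total / n                 # average log divergence per step

print(lyapunov(0.3))                 # ~0.693 = ln 2, the known exponent for this map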
At working precision beyond float64's roughly 16 significant digits (using Python's mpmath; the reproduction script below uses 60), the attractor vanishes completely. Gradient descent finds π on 40/40 seeds.
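A minimal sketch of the high-precision rerun, again on a toy scalar objective rather than the EML tree; mp.dps sets mpmath's decimal working precision:

from mpmath import mp, mpf, pi

mp.dps = 60                          # 60 decimal digits of working precision

theta = mpf("1.0")
lr = mpf("0.001")
for _ in range(200_000):
    theta -= lr * 2 * (theta - pi)   # gradient step on (theta - pi)^2

print(theta)                         # should match pi to all 60 printed digits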
The fix
The engineering fix: add a complexity penalty with coefficient λ. At λ = 0 (no regularization), 0% of seeds reach π; at λ = 0.001, 100% do. This is a sharp phase transition at some critical value λ_crit between the two, not a gradual improvement. Below λ_crit the attractor dominates; above it, the landscape changes enough that gradient descent escapes.
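A shape sketch of the regularized objective; complexity() is a hypothetical stand-in here, since the actual complexity measure on EML trees is defined in the paper, not below:

import math

def complexity(theta):
    # Hypothetical stand-in for the EML tree complexity measure.
    return theta * theta

def loss(theta, lam):
    # MSE toward pi plus the complexity penalty with coefficient lambda.
    return (theta - math.pi) ** 2 + lam * complexity(theta)

def grad(theta, lam):
    return 2 * (theta - math.pi) + lam * 2 * theta

# A small lambda perturbs the gradient field everywhere; per the post,
# this is enough to change the landscape so that descent escapes.
for lam in (0.0, 0.001):
    print(lam, grad(3.171, lam))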
The lesson
What looks deep might be float64 lying. Before concluding that you've found a new mathematical object:
- Verify that ∇L = 0 (is it actually a critical point? see the finite-difference sketch after this list)
- Run PSLQ against known constants
- Check at higher precision (mpmath, Decimal, or exact arithmetic)
- Test if regularization makes it disappear
In this case, all four checks pointed the same direction: artifact, not mathematics.
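For the first check, a central finite difference is usually enough to show the gradient is nonzero. A minimal sketch, with a quadratic stand-in for the real loss:

import math

def loss(theta):
    # Stand-in objective; substitute the real model's loss here.
    return (theta - math.pi) ** 2

def numerical_grad(f, theta, h=1e-6):
    # Central finite difference, O(h^2) accurate.
    return (f(theta + h) - f(theta - h)) / (2 * h)

print(numerical_grad(loss, 3.171))  # ~0.0588, clearly nonzero: not a critical point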
Reproduce
python experiments/phantom_attractor_escape.py --dps 60
# Shows attractor at float64, then disappears at 60-digit precision
python experiments/phantom_phase_transition.py --seeds 40
# Phase transition: 0% convergence at lambda=0, 100% at lambda=0.001
Cite this work
Monogate Research (2026). "The Phantom Attractor Is Not Real." Monogate Research Blog. https://monogate.org/blog/phantom-attractor
License
CC BY 4.0: free to share and adapt with attribution.
Code: pip install monogate
Paper: arXiv:2603.21852