HyperMNIST

Hyperbolic Neural Network for Digit Recognition

Model Information

  • Architecture: H2RM
  • Test Accuracy: 98.2%
How It Works

Hyperbolic Neural Network

H2RM (Hyperbolic Hierarchical Reasoning Model) uses the Poincaré ball model of hyperbolic geometry instead of standard Euclidean space. In hyperbolic space the volume of a ball grows exponentially with its radius, rather than polynomially as in Euclidean space, which makes it well-suited to representing hierarchical structures like the nested features in digit recognition.

The model employs dual-curvature processing with two coupled spaces:

  • H-space (curvature κ_H = 0.5): Captures abstract, high-level features
  • L-space (curvature κ_L = 2.0): Processes detailed, low-level patterns

Information flows between these spaces using Möbius transformations and exponential/logarithmic maps, enabling the network to reason at multiple levels of abstraction simultaneously.
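These primitives can be sketched in a few lines of NumPy. This is a minimal illustration of the standard Poincaré-ball formulas, not H2RM's actual code: the function names and the example vector are our own, and only the two curvature values (0.5 and 2.0) come from the text above.

```python
import numpy as np

def mobius_add(x, y, c):
    """Möbius addition on the Poincaré ball with curvature parameter c > 0."""
    xy, x2, y2 = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c**2 * x2 * y2
    return num / den

def expmap0(v, c):
    """Exponential map at the origin: tangent vector -> point on the ball."""
    sqrt_c, n = np.sqrt(c), np.linalg.norm(v)
    if n == 0:
        return v
    return np.tanh(sqrt_c * n) * v / (sqrt_c * n)

def logmap0(x, c):
    """Logarithmic map at the origin: point on the ball -> tangent vector."""
    sqrt_c, n = np.sqrt(c), np.linalg.norm(x)
    if n == 0:
        return x
    return np.arctanh(sqrt_c * n) * x / (sqrt_c * n)

# Moving a feature from L-space (c = 2.0) to H-space (c = 0.5):
# log-map out of one ball, exp-map into the other.
v_l = np.array([0.1, -0.2, 0.05])
tangent = logmap0(v_l, c=2.0)
v_h = expmap0(tangent, c=0.5)
```

Transporting a feature between the two curvatures goes through the shared tangent space at the origin, which is what the last three lines sketch.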

Training

The model was trained on the MNIST dataset (60,000 training images of handwritten digits) using Riemannian optimization. Training uses a custom optimizer that performs gradient updates along geodesics in hyperbolic space rather than straight lines in Euclidean space.
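A single geodesic update of this kind can be sketched as a plain Riemannian SGD step on the Poincaré ball. This is an illustration of the general technique, not the model's actual optimizer (the text says that is AdamW-based), so every name here is an assumption.

```python
import numpy as np

def mobius_add(x, y, c):
    xy, x2, y2 = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    return num / (1 + 2 * c * xy + c**2 * x2 * y2)

def rsgd_step(x, euclid_grad, c, lr):
    """One Riemannian SGD step on the Poincaré ball (a sketch).

    The Euclidean gradient is rescaled by the inverse of the conformal
    metric, then the point moves along the geodesic via the exponential
    map at x, instead of taking a straight Euclidean step.
    """
    lam = 2.0 / (1.0 - c * np.dot(x, x))   # conformal factor λ_x
    rgrad = euclid_grad / lam**2           # Riemannian gradient
    v = -lr * rgrad                        # step in the tangent space at x
    sqrt_c, vn = np.sqrt(c), np.linalg.norm(v)
    if vn == 0:
        return x
    step = np.tanh(sqrt_c * lam * vn / 2.0) * v / (sqrt_c * vn)
    return mobius_add(x, step, c)          # exp_x(v): stays on the ball
```

Because the update ends with an exponential map, the parameter can never leave the ball, which is what makes the optimization well-defined in hyperbolic space.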

Key training parameters: 256-dimensional H-space, 128-dimensional L-space, 6 reasoning cycles, AdamW optimizer with learning rate 1e-4, trained for 5 epochs.
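Collected into a configuration block, those values might look like the following; the field names are hypothetical, and only the values come from the text.

```python
# Hypothetical hyperparameter block; values are from the description above,
# field names are assumptions about the training script.
config = {
    "h_dim": 256,          # H-space dimensionality
    "l_dim": 128,          # L-space dimensionality
    "curvature_h": 0.5,    # κ_H
    "curvature_l": 2.0,    # κ_L
    "reasoning_cycles": 6,
    "optimizer": "AdamW",  # with Riemannian updates for ball parameters
    "lr": 1e-4,
    "epochs": 5,
}
```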

ONNX Export

After training in PyTorch, the model is exported to ONNX (Open Neural Network Exchange) format. This standardized format allows the model to run on any platform with an ONNX runtime. The export process traces the forward pass and captures all operations, producing a portable model file (~2MB).
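The tracing-based export is a single call in PyTorch. The sketch below substitutes a trivial stand-in module for the real H2RM class (whose definition isn't shown here), so the file name, tensor names, and opset version are all assumptions; only the 28x28 input shape and the tracing mechanism come from the text.

```python
import torch
import torch.nn as nn

# Stand-in for the real H2RM model, used only to illustrate the export call.
class StandIn(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

    def forward(self, x):
        return self.net(x)

model = StandIn().eval()
dummy = torch.zeros(1, 1, 28, 28)           # one 28x28 grayscale image
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["image"], output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}},   # allow variable batch size
    opset_version=17,
)
```

The resulting file can then be loaded by any ONNX runtime, including onnxruntime-web in the browser.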

Browser Deployment

The model runs entirely in your browser using onnxruntime-web, which compiles to WebAssembly (WASM) for near-native performance. On supported devices, WebGPU acceleration is used for faster inference. The entire application is static HTML/JS hosted on Cloudflare Pages, deployed via Wrangler CLI.

No data leaves your device — all inference happens locally in the browser.