I'm Ian Green, founder of VaultScaler — a company building deterministic, audit-friendly infrastructure for AI workloads designed to run where your data already lives.
My focus is Radix: a Kubernetes-native observability and advisory layer for GPU and compute clusters. Radix installs in minutes via Helm, gives you read-only visibility into your cluster, and deploys anywhere from fully air-gapped environments to cloud-connected ones.
Modern AI infrastructure should deliver insight quickly, with clear provenance. Radix gives you observability and control over your GPU clusters.
Beyond technology, I maintain an active artistic practice at the intersection of digital and analog expression. You can view my work at Metonymic Debris, where I use oil painting, mixed media, and computational art to explore ambiguity, meaning, and the fragments left behind in our digital communications.
Radix is an offline-first, Kubernetes-native observability and advisory layer for GPU and compute clusters. It installs fast, lives entirely inside your cluster, and never phones home; any cloud connectivity is strictly opt-in.
Data residency options. Deploy with zero egress if needed, or connect to cloud services as required.
Discovers nodes, pods, and GPU/compute signals without mutating anything. Scoped RBAC with list/watch verbs only; no write permissions in freemium.
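A list/watch-only scope of this kind is typically expressed as a Kubernetes ClusterRole. The fragment below is an illustrative sketch, not Radix's actual manifest; the role name and resource list are assumptions:

```yaml
# Hypothetical ClusterRole sketching a read-only, list/watch scope.
# Names and resources are illustrative, not Radix's real RBAC.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: radix-readonly
rules:
  - apiGroups: [""]
    resources: ["nodes", "pods"]
    verbs: ["get", "list", "watch"]   # no create/update/patch/delete
  - apiGroups: ["metrics.k8s.io"]
    resources: ["nodes", "pods"]
    verbs: ["get", "list"]
```

Because the role carries no write verbs, the agent cannot mutate cluster state even if compromised, which is what makes the "without mutating anything" claim auditable.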
Clean UI accessible via port-forward. Data persists to an in-cluster volume. Instant visibility with no external dependencies.
Generate signed PDF summaries inside your cluster. No egress required. Audit-friendly documentation at the press of a button.
Pinned image digests, SBOMs, and signed artifacts. Deterministic and reproducible. No secrets to rotate or leak in freemium.
Helm-installable with offline defaults. Explore immediately via port-forward. Operationally minimal from day one.
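As a sketch of what offline defaults might look like in practice, here is a hypothetical Helm values fragment; every key name is an assumption for illustration, not the chart's real schema:

```yaml
# Hypothetical values.yaml sketch; keys are illustrative only.
telemetry:
  egress: disabled      # zero egress out of the box
persistence:
  enabled: true         # data stays on an in-cluster volume
service:
  type: ClusterIP       # no Ingress; reach the UI via kubectl port-forward
```

Defaults like these mean the first install needs no external dependencies: the UI is reached by port-forwarding to the ClusterIP service, and nothing leaves the cluster unless you change the configuration.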
Radix Core is freemium and provides immediate value. When you're ready for ML-assisted guidance and controlled activation, Radix Engine unlocks advanced capabilities.
Enable the engine via a signed, offline-verified license. No cloud callbacks, no phone-home.
Start with advisory-only recommendations. Selectively enforce via canary percentage when you're ready.
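A common way to implement a canary percentage is deterministic hash bucketing, so the same workload always lands in the same bucket across restarts. This sketch shows the general technique under that assumption; it is not Radix's actual rollout logic, and the function names are hypothetical:

```python
import hashlib

def in_canary(workload_name: str, canary_pct: int) -> bool:
    """Map a workload to a stable 0-99 bucket via SHA-256, then
    enforce only if its bucket falls below the canary percentage."""
    bucket = int(hashlib.sha256(workload_name.encode()).hexdigest(), 16) % 100
    return bucket < canary_pct

def apply_recommendation(workload_name: str, canary_pct: int) -> str:
    # Advisory-only by default (canary_pct = 0); ramp up gradually
    # by raising canary_pct once recommendations look trustworthy.
    return "enforce" if in_canary(workload_name, canary_pct) else "advise"
```

At 0% every recommendation is advisory; at 100% all are enforced; anything in between enforces on a stable, reproducible subset of workloads, which keeps rollbacks predictable.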
Clear "why" behind every suggestion, plus signed reports. ML guidance you can audit and trust.
Deploy in-cluster or with cloud connectivity. Choose the architecture that fits your requirements.
Get read-only visibility without cloud dependencies. Install, port-forward, and start observing your GPU cluster immediately.
Enforce data residency with verifiable provenance. Zero egress, signed artifacts, and audit-friendly reports built in.
Evaluate ML-guided scheduling under strict controls. Advisory-only by default, with activation when you choose.
"Modern AI infrastructure shouldn't require sending cluster metadata to someone else's cloud to get value."