Comma.ai just dropped the specs on its hand-rolled ML data center. Picture this: 600 homegrown GPU rigs (TinyBox Pros) backed by 4 PB of flash. The whole thing trains on a PyTorch stack they built themselves, wired up with a custom model tracker and a job scheduler they call Miniray.
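The post doesn't reveal Miniray's internals, so here's a toy sketch of what a cluster job scheduler has to do at minimum: queue training jobs and greedily place them on boxes with free GPUs. All names and the first-fit policy are hypothetical, not comma's actual design.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Job:
    name: str
    gpus_needed: int


@dataclass
class Box:
    host: str
    free_gpus: int


class ToyScheduler:
    """Hypothetical Miniray-like scheduler: FIFO queue, first-fit placement."""

    def __init__(self, boxes):
        self.boxes = boxes
        self.queue = deque()
        self.placements = {}  # job name -> host it landed on

    def submit(self, job):
        self.queue.append(job)
        self._drain()

    def release(self, job):
        # A finished job frees its GPUs, which may unblock queued work.
        host = self.placements.pop(job.name)
        next(b for b in self.boxes if b.host == host).free_gpus += job.gpus_needed
        self._drain()

    def _drain(self):
        while self.queue:
            job = self.queue[0]
            box = next((b for b in self.boxes if b.free_gpus >= job.gpus_needed), None)
            if box is None:
                break  # head of queue doesn't fit anywhere yet; wait for a release
            box.free_gpus -= job.gpus_needed
            self.placements[job.name] = box.host
            self.queue.popleft()
```

A real scheduler layers on preemption, gang scheduling, and fault tolerance, but the core loop is the same: match queued resource requests against free capacity.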
Inference runs through dynamic Triton inference servers. Devs stay in sync with featherweight monorepo caching and uv-based package syncing. No cloud. No crutches.
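The post doesn't say how comma configures Triton, but the signature trick of inference servers like it is dynamic batching: individual requests that arrive within a short window get fused into one GPU batch. A toy, clock-injected sketch of that idea (not Triton's actual code):

```python
class ToyDynamicBatcher:
    """Toy dynamic batcher: hold requests until the batch is full or the
    oldest request has waited past a deadline, then run them together."""

    def __init__(self, model_fn, max_batch=4, max_wait=0.005):
        self.model_fn = model_fn    # takes a list of inputs, returns a list of outputs
        self.max_batch = max_batch
        self.max_wait = max_wait    # seconds the oldest request may wait
        self.pending = []           # (arrival_time, input) pairs
        self.results = []           # outputs of completed batches, in order

    def add(self, x, now):
        self.pending.append((now, x))
        self._maybe_flush(now)

    def tick(self, now):
        # The serving loop calls this periodically to enforce the deadline.
        self._maybe_flush(now)

    def _maybe_flush(self, now):
        if not self.pending:
            return
        full = len(self.pending) >= self.max_batch
        stale = now - self.pending[0][0] >= self.max_wait
        if full or stale:
            batch = [x for _, x in self.pending[:self.max_batch]]
            del self.pending[:self.max_batch]
            self.results.extend(self.model_fn(batch))
```

The `max_batch`/`max_wait` trade-off is the whole game: bigger batches keep the GPU busy, a shorter deadline keeps tail latency down.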