NTransformer runs Llama 70B on a single RTX 3090 via NVMe-to-GPU streaming
A new open-source C++/CUDA inference engine, NTransformer, runs Llama 3.1 70B on a single RTX 3090 (24 GB VRAM) by streaming model layers through GPU memory over PCIe, with optional NVMe direct I/O that bypasses the CPU entirely. The project reports an 83x speedup over an mmap-based baseline.
The engine uses 3-tier adaptive caching (VRAM-resident, pinned RAM, NVMe fallback) and layer skipping via cosine-similarity calibration. It has zero external dependencies beyond the CUDA Toolkit: no PyTorch or cuBLAS required.
View full digest for February 22, 2026