MKuykendall 6 hours ago

I just released Shimmy v1.7.0 with MoE (Mixture of Experts) CPU offloading support, and the results are pretty exciting for anyone who's hit GPU memory walls.

What this solves

If you've tried running large language models locally, you know the pain: a 42B-parameter model typically needs 80GB+ of VRAM, putting it out of reach for most developers. Even "smaller" 20B models often require 40GB+.

The breakthrough

MoE CPU offloading intelligently moves expert layers to CPU while keeping active computation on GPU. In practice:

- Phi-3.5-MoE 42B: runs on 8GB consumer GPUs (was impossible before)
- GPT-OSS 20B: 71.5% VRAM reduction (15GB → 4.3GB, measured)
- DeepSeek-MoE 16B: down to 800MB VRAM with Q2 quantization

The tradeoff is 2-7x slower inference, but you can actually run these models instead of not being able to run them at all.

Technical implementation

- Built on enhanced llama.cpp bindings with new with_cpu_moe() and with_n_cpu_moe(n) methods
- Two CLI flags: --cpu-moe (automatic) and --n-cpu-moe N (manual control); a quick sketch of the two follows the model list below
- Cross-platform: Windows MSVC CUDA, macOS Metal, Linux x86_64/ARM64
- Still a sub-5MB binary with zero Python dependencies

Ready-to-use models

I've uploaded 9 quantized models to HuggingFace specifically optimized for this:

- Phi-3.5-MoE variants (Q8_0, Q4_K_M, Q2_K)
- DeepSeek-MoE variants
- GPT-OSS 20B baseline
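To make the two flags concrete before the getting-started steps, here is a minimal sketch. The flag names come from the release; the model path and the value of N are placeholders, and N is presumed to mirror llama.cpp's --n-cpu-moe (roughly, how many layers' expert tensors stay on CPU), so check shimmy's --help for the exact semantics.

  # Automatic: offload all expert tensors to CPU
  ./shimmy serve --cpu-moe --model-path phi-3.5-moe-q4-k-m.gguf

  # Manual: offload only part of the experts (the 16 is illustrative)
  ./shimmy serve --n-cpu-moe 16 --model-path phi-3.5-moe-q4-k-m.gguf

In principle, the fewer experts you push to CPU, the smaller the slowdown, at the cost of more VRAM.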

Getting started

  # Install
  cargo install shimmy

  # Download a model
  huggingface-cli download MikeKuykendall/phi-3.5-moe-q4-k-m-cpu-offload-gguf

  # Run with MoE offloading
  ./shimmy serve --cpu-moe --model-path phi-3.5-moe-q4-k-m.gguf

Standard OpenAI-compatible API, so existing code works unchanged.
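For example, a plain curl call against the chat completions endpoint. Treat this as a sketch: the port and the model id are assumptions (use whatever `shimmy serve` prints on startup and however it names the loaded GGUF); the OpenAI-compatible request shape is the point.

  # Port and model id are placeholders - use what `shimmy serve` reports on startup
  curl -s http://127.0.0.1:11435/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "phi-3.5-moe-q4-k-m",
      "messages": [{"role": "user", "content": "Summarize MoE CPU offloading in one sentence."}]
    }'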

Why this matters

This democratizes access to state-of-the-art models. Instead of needing a $10,000 GPU or ongoing cloud spend, you can run these expert models on gaming laptops or modest server hardware.

It's not just about making models "work" - it's about sustainable AI deployment, where organizations can experiment with cutting-edge architectures without massive infrastructure investments.

The technique itself isn't novel (llama.cpp had MoE support), but the Rust bindings, production packaging, and curated model collection make it accessible to developers who just want to run large models locally.

Release: https://github.com/Michael-A-Kuykendall/shimmy/releases/tag/...

Models: https://huggingface.co/MikeKuykendall

Happy to answer questions about the implementation or performance characteristics.