Get real work done with local AI. Don't go crazy over AI. Stay Mostlysane.
Loading models
Run this from your llama.cpp build directory. Models go in ~/AI/models/ by default; put them there or edit the paths above.
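A minimal sketch of what a run can look like, assuming the stock llama.cpp CLI binary in the build tree; the model file name is only a placeholder, not a file this project ships:

```shell
# Run from the llama.cpp build directory.
# The .gguf name below is a placeholder; point MODEL at a real model file.
MODEL="${MODEL:-$HOME/AI/models/your-model.gguf}"
./bin/llama-cli -m "$MODEL" -p "Hello" -n 64
```

The `${MODEL:-...}` default keeps the ~/AI/models/ convention while still letting you override the path from the environment.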
Calibration runs once per model. The profile is quantization-invariant: the same file works across Q8, Q5, and Q4 quants.
No manual steps. Just run this in your terminal. It installs dependencies, clones the Mostlysane fork, builds llama.cpp, and optionally downloads a model. For custom setups, use the manual guide below.