@simon Your post mentioned a ~20GB quantized file via Ollama; did that take up 20GB of RAM, or 32?
I'm waiting on delivery of a 48GB M4 Pro this week or early next, which is why I'm curious.