@webology @simon @ericholscher I'm not deep in Apple land and didn't know the Mac Mini maxes out at 48 GB RAM. A quantized Llama 3.1 70B only needs about 40 GB, so it will run on the Mac Mini, and one can always add another Mac Mini for even larger models with distributed llama: https://b4rtaz.medium.com/how-to-run-llama-3-405b-on-home-devices-build-ai-cluster-ad0d5ad3473b. I suspect we will soon see advances where many parameters require much less RAM, which would be great for local and private AI. Some devs are already achieving this with tuning: https://www.reddit.com/r/LocalLLaMA/comments/188197j/80_faster_50_less_memory_0_accuracy_loss_llama/
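For context on that ~40 GB figure, here is a rough back-of-envelope sketch (purely illustrative; the helper function is made up for this post) of how weight size scales with quantization:

```python
# Back-of-envelope RAM estimate for holding an LLM's weights locally.
# Illustrative only: real usage adds KV-cache and framework overhead
# on top of the raw weight size.

def weight_ram_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Approximate size in GB of the model weights alone."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Llama 3.1 70B at 4-bit: ~35 GB of weights; overhead pushes it toward ~40 GB.
print(f"70B @ 4-bit ≈ {weight_ram_gb(70, 4):.0f} GB")
# The same model at 8-bit would already exceed a 48 GB machine.
print(f"70B @ 8-bit ≈ {weight_ram_gb(70, 8):.0f} GB")
```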
-
@simon any recommendations for which M4 MacBooks I should be looking at if I want to be future-proof for running local LLMs for the next couple of years?
-
@webology @simon @ericholscher a similar question was asked on the LocalLLaMA reddit a few days ago: https://www.reddit.com/r/LocalLLaMA/s/5oUdBZvnxx. If it were an option I wouldn't run it on my main laptop but would offload it to a Mac Mini. The bottom line is still: go for as much RAM as you can afford.
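A minimal sketch of what "offload it to a Mac Mini" could look like, assuming the Mini runs Ollama on its default port; the hostname and model tag are placeholders, not anything from this thread:

```python
# Query an Ollama server running on a Mac Mini elsewhere on the LAN,
# so the laptop only sends prompts and receives text back.
import json
import urllib.request

OLLAMA_URL = "http://mac-mini.local:11434/api/generate"  # placeholder hostname

payload = json.dumps({
    "model": "llama3.1:70b",   # whatever quantized model fits in the Mini's RAM
    "prompt": "Summarize why unified memory matters for local LLMs.",
    "stream": False,
}).encode()

req = urllib.request.Request(
    OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```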