@webology @rochecompaan @simon Appreciate the info! I am already pretty impressed with that 3B llama model that runs pretty fast on my old M1, so definitely feels like the quality of what we can run on a 64GB machine over the next few years is gonna be pretty impressive.
-
@simon any recommendations for what M4 MacBooks I should be looking at if I want to future-proof running local LLMs for the next couple of years?
-
@simon Makes sense. Basically RAM limits how big of a model you can run, and GPU & memory bandwidth limit the tokens/s, from what I've read?
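That rule of thumb can be sketched as a back-of-envelope calculation. The numbers below are illustrative assumptions (a 70B model at ~4-bit quantization, and the ~546 GB/s memory bandwidth Apple quotes for the top M4 Max), and the helper names are made up for the example, not a benchmark or a real tool:

```python
# Rough sizing math for running LLMs locally (illustrative sketch only).

def model_memory_gb(params_billion: float, bytes_per_param: float = 0.5) -> float:
    """Approximate RAM needed just to hold the weights.
    bytes_per_param ~0.5 for a 4-bit quantized model, ~2.0 for fp16."""
    return params_billion * bytes_per_param

def max_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
    """Rough ceiling on decode speed: each generated token streams the
    full weight set through memory once, so tokens/s is capped by
    bandwidth divided by model size."""
    return bandwidth_gb_s / model_gb

# Hypothetical example: 70B params, 4-bit quantized, ~546 GB/s bandwidth
# (the figure Apple lists for the unbinned M4 Max).
size = model_memory_gb(70, bytes_per_param=0.5)       # ~35 GB of weights
print(f"weights: ~{size:.0f} GB")                      # leaves headroom in 64 GB for the KV cache
print(f"ceiling: ~{max_tokens_per_sec(546, size):.0f} tokens/s")
```

So RAM sets the largest model that fits at all, and memory bandwidth sets the upper bound on how fast it can generate once loaded.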