@mahryekuh @ryancheley /me pins to profile...
Posts
-
Getting cramps sucks, but getting them right before a women’s meetup that always has an abundance of snacks is probably the second-best-case scenario. -
Does anyone else know what asottile is referring to in this comment on a pre-commit issue?
@mahryekuh @offby1 To save everyone time and frustration, I recommend using https://pypi.org/project/pre-commit-uv/
I run `uv tool run --with pre-commit-uv pre-commit ...` and I might remove the `tool` depending on your setup (in case it's bundled already)
To the other point, no one can read minds, and that's a common theme when trying to contribute in that space. Your mileage may vary.
-
@mahryekuh @ryancheley "403 No Nut Clusters" for you!
-
@mahryekuh @ryancheley Our household favorite this Halloween was Nerds Gummy Clusters, which took me by surprise. I'm not a huge Nerds fan, and I was expecting a different flavor of "gummy," but they are the best. https://www.nerdscandy.com/crunchy-gummy-yummy
-
@mahryekuh @ryancheley They look like Clusters to me. See various shapes and nuts here: https://nuts.com/search/instant?query=clusters
Aka what people made before Peanut M&Ms were easier to find and buy than to make.
-
@mahryekuh @ryancheley Fwiw, I would *not* dunk pindarots or peanut M&Ms into coffee or milk for logistical reasons. Carry on.
-
Please publish and share more
If you need an idea or nudge, feel free to reach out.
Many of you write about cool things here, but they never make it to an article, even though you are still doing 99% of the work.
Me: "This is great, please blog about it, so I can share it more easily."
-
Please publish and share more
Friends, I encourage you to publish more, indirectly meaning you should write more and then share it. It’d be best to publish your work in some evergreen space where you control the domain and URL. Then publish on masto-sky-formerly-known-as-linked-don and any place you share and comment on. You don’t have to change the world with every post. You might publish a quick thought or two that helps encourage someone else to try something new, listen to a new song, or binge-watch a new series.
(micro.webology.dev)
-
@simon any recommendations for what M4 MacBooks I should be looking at if I want to future-proof running local LLMs for the next couple years?
@rochecompaan @simon @ericholscher That was in reply to Eric's question of how much RAM is future-proof.
If you can't fit the full model and the context window in RAM, it might take one to ten minutes per token to process, if it even works. That article appears to be swapping with a small output window. You can do that, but I'm not sure it's worth it. (1/2)
-
@rochecompaan @simon @ericholscher RAM is >90% of what matters, from what I have seen.
Check out the Llama 3.1 models: https://ollama.com/library/llama3.1/tags
8B ~= 8 GB RAM
70B ~= 64 GB RAM
405B ~= (more RAM than any of us can afford or that Apple will put in a Mac Studio)
I'm sure the M4 vs. M2 is a nice bump for most apps, but I get good performance on my M2 Mac Studio.
I'd get a 64 GB (better choice), 96 GB, or 128 GB MacBook Pro or wait for the M4 Studio.
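The sizing table above can be sketched as a quick back-of-the-envelope calculation. The constants here are my own rough assumptions, not from the thread: about one byte per parameter at 8-bit quantization (half that at 4-bit), plus some overhead for the KV cache and runtime.

```python
# Back-of-the-envelope RAM estimate for running a local LLM.
# Assumptions (mine, not from the thread): ~1 byte per parameter at
# 8-bit quantization, plus ~20% overhead for the KV cache and runtime.
def estimate_ram_gb(params_billion: float,
                    bytes_per_param: float = 1.0,
                    overhead: float = 0.2) -> float:
    return params_billion * bytes_per_param * (1 + overhead)

for size in (8, 70, 405):
    print(f"{size}B -> ~{estimate_ram_gb(size):.0f} GB at 8-bit, "
          f"~{estimate_ram_gb(size, bytes_per_param=0.5):.0f} GB at 4-bit")
```

The 4-bit column is why a 70B model squeezes into a 64 GB machine while the 8-bit math says it shouldn't; actual usage varies by runtime and context length.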
-
@simon @ericholscher 64 GB is going to get you the best of today. 128 GB is hard to justify, but it might give you a bit more runway if model sizes change. I'm not even sure how to predict that.
The latest Llama 3.2 models are fairly reasonably sized (1B to 11B for consumers): https://ollama.com/library/llama3.2
LLM + llm-ollama is a pretty nice combo, along with the many other projects Simon writes about. Ollama can run Hugging Face models too.
-
@ericholscher @simon The unified memory shared between the GPU and CPU is what really makes Macs nice to work with for LLMs.
I have a Mac Studio with 64 GB of RAM, and while I sort of regret not getting more, there aren't many models I can't run. I can run 70-billion-parameter models. Model sizes tend to jump from there up to 200B or 400B, and nothing I could buy runs those anyway.
https://ollama.com is a really nice project for working with models locally; it's easy to run with good performance (very cacheable).
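For anyone curious what the local setup looks like in code, Ollama exposes a small HTTP API on localhost. A minimal stdlib-only sketch follows; the model name is just an example, and it assumes a server is already running with that model pulled:

```python
import json
import urllib.request

# Minimal sketch: call a locally running Ollama server (default port 11434).
# Assumes the model was already pulled, e.g. `ollama pull llama3.1:8b`.
def ask_ollama(prompt: str, model: str = "llama3.1:8b") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # With stream=False, the "response" field holds the full generation.
        return json.loads(resp.read())["response"]
```

Everything fits in memory on-device, which is the whole appeal of the unified-RAM Macs discussed above.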
-
I can't say this is the "best" project from my weekly(ish) Office Hours, but it is what it is...
@mahryekuh I do plan on moving it for at least one session in November. Our daylight saving time change kicks in next week.
-
I can't say this is the "best" project from my weekly(ish) Office Hours, but it is what it is...
Alex Gómez (@[email protected])
Here, at django_wplogin industries inc. @bmispelon and I have received a very serious cease and desist. We're asking for help from the community. What's the funniest way to answer? https://github.com/bmispelon/django_wplogin/issues/7
Mastodon (mastodon.social)
Speaking of, I am hosting office hours today around 2:30 pm CT.
Click the link for details on the date and time. https://time.is/0230PM_1_November_2024_in_CT?Jeff%27s_Office_Hours
Please check the gist with updated meeting details or ask me for the link.
*updated to fix the link*
-
@webology has me thinking about...
@jack @simon I think the trick for me is knowing that there doesn't have to be a conclusion. I did TILs a decade or more ago and never published them here.
With my micro-blog, there is no pressure to have a fully baked piece of writing.
It's done when I stop writing and complete the thought. Conclusion be damned.
-
Eligibility requirements in tech are dumb.
Just in the last **two months**, here is the pedigree of people I have heard mention that they are not qualified to run for a position:
- Former DSF President (technically two)
- CPython Core dev
- Multi-year summer of code mentor
- Past or present Django Fellow
- Former BDFL
- Former DSF Director
- Popular Django package maintainer
So yes, "Eligibility requirements in tech are dumb," and Django's need rethinking.
Python doesn't get a pass either.
-
Everything was meant to be low friction for joining and leaving. No pressure.
Git commit access doesn't scale for every project, but a low-friction way to join a foundation or membership does.
In almost a decade, we had only one git delete branch accident, and that was a learning opportunity that was easily fixed.
So it frustrates me to no end to see overqualified people fall through the cracks because of some arbitrary membership requirements that really don't mean much.
-
Eligibility requirements in tech are dumb.
One of my best decisions with DEFNA and DjangoCon US was inviting people with full access from day one.
We didn't need a policy or hand-wavy requirements for who could or couldn't have access to our documents or even a git commit. There were no hoops to jump through other than wanting to be involved.
Every year, we asked that people opt back in if they wanted to or we would remove them after x-date.
-
Office hours start in about 30 minutes and however long it takes me to get a coffee down the block.
@mahryekuh I almost said that about their kittens or kids too.
-
It's out of control today, folks. "Bring your keyboard to office hours" theme...
Marijke Luttekes (@[email protected])
And we've reached the point where everyone is holding up their Keychrons in front of their camera again. Feels inevitable these days. 😂
Fosstodon (fosstodon.org)