@simon neat!
Where can I look at the code behind this function?
@simon would be cool to see Llama 3.2 1B or similar doing it right inside the browser
@simon how different is that "computer use" from LaVague?
Large Action Model framework to develop AI Web Agents - lavague-ai/LaVague (github.com)
@simon neat!
BTW, have you tried sqlite-vec yet (SQLite as a vector database)?
A vector search SQLite extension that runs anywhere! - asg017/sqlite-vec (github.com)
@simon also, since the h2o models are available as safetensors, it should be possible to run them with MLX on a Mac.
I haven't looked into the Rust inference engine you wrote about in the OP above. Do you know which model file formats it supports?
@simon no, not yet.
I've yet to look into the model files, but if they're available as GGUF or ONNX, it should be possible to run them with llama.cpp or wllama (for GGUF) or Transformers.js (for ONNX).
It's also possible to import GGUF files into Ollama.
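For what it's worth, the GGUF-to-Ollama route is basically a one-line Modelfile. A minimal sketch (the file path `./model.gguf` and the model name `my-model` are made-up placeholders):

```shell
# Point a Modelfile at a local GGUF file (path is an illustrative assumption):
cat > Modelfile <<'EOF'
FROM ./model.gguf
EOF

# Register and try the model, if the ollama CLI is installed:
command -v ollama >/dev/null && ollama create my-model -f Modelfile || true
command -v ollama >/dev/null && ollama run my-model "Hello" || true
```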
@simon did you see that h2o.ai did well even with a 0.8B model?
@slightlyoff came across McMaster.com today.
@slightlyoff this is awesome! Thanks a ton!
UK gov websites aren't relatable to many developers who don't live there, though I do understand why government websites need to be that way in the first place.
Is there a way to evaluate how Indian gov.in/nic.in websites fare?
@slightlyoff any good examples that aren't JS-first but are still feature-rich? That would help me differentiate via show-and-tell instead of fumbling to convey the point.