Hacker News | roosgit's comments

I just hit that error a few minutes ago. I build my llama.cpp from source because I use CUDA on Linux. So I made the same mistake of trying to run Gemma4 on an older version I had, and I got the same error. It's possible brew installs an older version that doesn't support Gemma4 yet.

Ah it was indeed just that!

I'm now on:

$ llama --version
version: 8770 (82764d8) built with GNU 15.2.0 for Linux x86_64

(From Nix unstable)

And this works as advertised: a nice chat interface, but no OpenAI API I guess, so no opencode...



Good stuff, thanks!

And that's exactly why llama.cpp is not usable by casual users. They follow the "move fast and break things" model. With Ollama, you just have to make sure you're getting/building the latest version.

It's not possible to run the latest model architectures without 'moving fast'. The only thing broken here is that they were trying to use an old version with a new model.

And Ollama suffered the same fate when people wanted to try new models.

What fate?

The impedance mismatch between when models are released and when Ollama and other servers gain the capability to run them.

I'm a bit unsure what that has to do with someone running an outdated version of the program while trying to use a model that is supported in the latest release.

Have you tried other local models?

The 14B Q4_K_M needs 9GB, but Q3_K_M is 7.3GB. But you also need some room for context. Still, maybe using `--override-tensor` in llama.cpp would get you a 50% improvement over "naively" offloading layers to the GPU. Or possibly GPT-OSS-20B. It's 12.1GB in MXFP4, but it's a MoE model, so only part of it would need to be on the GPU. On my dedicated 12GB 3060 it runs at 85 t/s, with a smallish context. I've also read on Reddit some claims that Qwen3 4B 2507 might be better than 8B, because Qwen never released a "2507" update for 8B.
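If you want to try the MoE route, the override usually looks something like this. The model filename and the tensor-name pattern here are illustrative guesses on my part, so check `llama-server --help` on your build for the exact syntax:

```shell
# Keep attention and shared weights on the GPU but pin the per-expert
# FFN tensors to CPU RAM, so a larger MoE model fits in 12 GB of VRAM.
llama-server -m gpt-oss-20b-mxfp4.gguf -ngl 99 \
  --override-tensor "ffn_.*_exps.=CPU"
```

The idea is that only a few experts fire per token, so the expert weights tolerate slow RAM much better than the always-active attention layers do.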


Haven't tried GPT-OSS-20B yet — the MoE approach is interesting for keeping VRAM usage down while getting better reasoning. 85 t/s on a 3060 is impressive. I'll look into that.

I've been on Qwen3 8B mostly because it was "good enough" for the mechanical stages (scanning, scoring, dedup) and I didn't want to optimize the local model before validating the orchestration pattern itself. Now that the pipeline is proven, experimenting with the local model is the obvious next lever to pull.

The Qwen3 4B 2507 claim is interesting — if the quality holds for structured extraction tasks, halving the VRAM footprint would open up running two models concurrently or leaving more room for larger contexts. Worth testing.

Thanks for the pointers — this is exactly the kind of optimization I haven't had time to dig into yet.


I wasn't sure where I'd seen that "retiring" spiel before, but then I remembered someone was (still is) selling a handmade jewelry website claiming $4.3M revenue and $1.3M profit.


I use an even older MacBook and an even older macOS. Of course, the browsers no longer work with the latest JS, so occasionally when I need to use some webapp I boot up a Linux VM and do what I need to do. With limited RAM even that's a pain, but it works for now.


While on the subject, you can make a calendar in as little as 3 lines of CSS: https://calendartricks.com/a-calendar-in-three-lines-of-css/


Can confirm. I was trying to send the newsletter (with SES) and it didn't work. I was thinking my local boto3 was old, but I figured I should check HN just in case.


I have an RTX 3060 with 12GB VRAM. For simpler questions like "how do I change the modified date of a file in Linux", I use Qwen 14B Q4_K_M. It fits entirely in VRAM. If 14B doesn't answer correctly, I switch to Qwen 32B Q3_K_S, which is slower because it needs to use system RAM. I haven't yet tried the 30B-A3B, which I hear is faster and close to 32B. BTW, I run these models with llama.cpp.
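The two setups differ only in how many layers get offloaded. Roughly like this, with hypothetical model paths (`-ngl` is llama.cpp's layer-offload flag):

```shell
# 14B quant fits entirely in 12 GB of VRAM: offload everything
llama-cli -m Qwen-14B-Q4_K_M.gguf -ngl 99 \
  -p "how do I change the modified date of a file in Linux"

# 32B quant only partly fits: offload what you can, the rest stays in RAM
llama-cli -m Qwen-32B-Q3_K_S.gguf -ngl 40 -p "..."
```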

For image generation, Flux and Qwen Image work with ComfyUI. I also use Nunchaku, which improves speed considerably.


# Runs the DB backup script on Thu at 22:00 -- I download the database backup for a few websites that get new data every week. I do this in case my host bans my account.

# Runs the IP change check on Mon - Sun at 09:00, 10:30, 12:00, 20:00 -- If the power goes out or the router reboots I get a new IP. On the server I use fail2ban and if I log into the admin panel I might get banned for making too many requests. So my IP needs to be "blessed".

# Runs Let's Encrypt certificate expiry check on Sundays at 11:00 and 18:00 -- I still have a server where I update the certificates by hand.

# Runs the "daily" backup -- Just rsync

# Download GoDaddy auction data every day at 19:00 -- I don't actively do this anymore but I used to check, based on certain criteria, for domains that were about to expire.

# Download the sellers.json on the 1st of every month at 19:00 -- I use this to collect data on websites that appear and disappear from the Mediavine and Adthrive sellers.json
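Put together as an actual crontab, the schedules above come out to roughly this (script names and paths are placeholders, and `@daily` stands in for the backup time I didn't list):

```shell
# crontab sketch of the schedules above
0 22 * * 4      /home/me/bin/db-backup.sh        # Thu 22:00
0 9,12,20 * * * /home/me/bin/ip-check.sh         # daily 09:00, 12:00, 20:00
30 10 * * *     /home/me/bin/ip-check.sh         # daily 10:30
0 11,18 * * 0   /home/me/bin/cert-check.sh       # Sun 11:00 and 18:00
@daily          /home/me/bin/daily-backup.sh     # rsync
0 19 * * *      /home/me/bin/godaddy-auctions.sh # daily 19:00
0 19 1 * *      /home/me/bin/sellers-json.sh     # 1st of month, 19:00
```

Note that cron can't mix :00 and :30 minutes in one entry, hence the second ip-check line.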


I've known about this issue since Llama 1. Tried it with Llama 2 and Mistral when those models were released. LLMs are not databases.

The test I ran was to ask the LLM about an expired domain of a doctor (an obstetrician). I no longer remember the exact domain, but it was similar to annasmithmd.com. One LLM would tell me it used to belong to a doctor named Megan Smith. Another got the name right, Anna Smith, but when I asked what kind of doctor, which specialty, it answered pediatrician.

So the LLM had no clue, but from the name of the domain it could infer (I guess that's why they call it inference) that the "md" part was associated with doctors.

By the way, newer LLMs are very good at making domains more human readable by splitting them into words.


I can answer question 3. Prompt processing (how fast your input is parsed) is highly correlated with compute speed. Inference (how fast the LLM answers) is highly correlated with memory bandwidth. So a good CPU might read your question faster, but it will answer pretty much as slowly as a cheap CPU with the same RAM.

I have a Ryzen 3 4100. Just tested Qwen2.5-Coder-32B-Instruct-Q3_K_S.gguf with llama.cpp.

CPU-only: 54.08 t/s prompt eval, 2.69 t/s inference

CPU + 52/65 layers offloaded to GPU (RTX 3060 12GB): 166.79 t/s prompt eval, 6.62 t/s inference
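The CPU-only inference number is mostly arithmetic. Assuming the Q3_K_S 32B weights are about 14 GB and dual-channel DDR4 delivers around 40 GB/s (both figures are my rough assumptions), every generated token streams the whole model through memory once, so the ceiling is:

```shell
# memory bandwidth (GB/s) / model size (GB) ~= tokens/s upper bound
awk 'BEGIN { printf "%.1f t/s\n", 40 / 14 }'
```

That lands close to the 2.69 t/s measured; the real number is a bit lower because activations and the KV cache also consume bandwidth.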

