
I'd throw a 7900 XTX in an AM4 rig with 128GB of DDR4 (which is what I've been using for the past two years).

Fuck nvidia



Or a Strix Halo Ryzen AI Max. Lots of "unified" memory that can be dedicated to the GPU portion, for not that much money. Read through benchmarks first to check whether the performance will be enough for your needs, though.
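If you go that route, it's worth sanity-checking how much of that unified memory the GPU actually sees before trusting any numbers. A minimal sketch, assuming a ROCm build of PyTorch (where torch.cuda is backed by HIP):

    # Check how much memory is visible to the GPU portion.
    # Assumes a ROCm build of PyTorch (torch.cuda maps to HIP there).
    import torch

    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        print(f"Device: {props.name}")
        print(f"GPU-visible memory: {props.total_memory / 1024**3:.1f} GiB")
    else:
        print("No ROCm/CUDA device visible to PyTorch.")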


Do you think the larger Mistral model would fit on an AI Max 395? I've been thinking about buying one of those machines, but haven't convinced myself yet.
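My napkin math says a 4-bit quant should fit, though the parameter count, bits per weight, and overhead factor below are all assumptions rather than measurements:

    # Back-of-envelope: does a quantized model fit in the GPU memory budget?
    # All numbers here are illustrative assumptions, not measurements.
    def fits(params_b: float, bits_per_weight: float, budget_gb: float,
             overhead: float = 1.15) -> bool:
        """Weights footprint in GB, padded for KV cache and runtime overhead."""
        weights_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
        return weights_gb * overhead <= budget_gb

    # Mistral Large 2 (~123B params) at ~4.5 bits/weight (Q4_K_M-ish) against
    # ~96GB of unified memory dedicated to the GPU on an AI Max 395:
    print(fits(123, 4.5, 96))  # True -> plausibly fits; verify with real runs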


You know, I haven't even been thinking about those AMD GPUs for local LLMs, and it's clearly a blind spot for me.

How is it? I'd guess a bunch of the MoE models actually run well?


I've been running local models on an AMD 7800 XT with ollama-rocm. I've had zero technical issues. It's really just that the usefulness of a model constrained to 16GB of VRAM + 64GB of main RAM is questionable, but that isn't an AMD-specific issue. It was a similar experience running locally with an Nvidia card.
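Once ollama-rocm is serving, the client side is identical to any other backend. A minimal sketch with the official ollama Python package (the model name is just an example; pull whatever fits your VRAM):

    # Minimal chat against a local Ollama server; same code on ROCm or CUDA.
    # Requires `pip install ollama` and an `ollama pull <model>` beforehand.
    import ollama

    response = ollama.chat(
        model="qwen3:8b",  # example model, not a recommendation
        messages=[{"role": "user",
                   "content": "One sentence on why MoE models run well locally."}],
    )
    print(response["message"]["content"])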


Get a Radeon AI Pro R9700! 32GB of VRAM.



