Can you run DeepSeek R1 on an AMD 7900 XTX 24GB GPU?
Hello, I've been hearing a lot about this new DeepSeek LLM and was wondering: would it be possible to get the 600+ billion parameter model running on my GPU? I've heard that people have got it running on their MacBooks. I have an i7 4790K, 32GB DDR3, and a 7900 XTX with 24GB VRAM. I'm running Arch Linux, and this computer is mostly just for AI stuff, not so much gaming. I did try running the distilled 14B parameter model with GPT4All, but it didn't work for me. I'm thinking about getting one of the NVIDIA 5090s in the future. Thanks in advance!
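For a rough sense of scale, here is a back-of-envelope estimate of the memory needed just for the model weights, assuming the commonly cited ~671B parameter count for the full R1 and 4-bit quantization (KV cache and runtime overhead not included, so real usage is higher):

```python
# Rough estimate of weight memory for a model at a given quantization level.
# This ignores KV cache and activation overhead, so actual VRAM usage is higher.
def weight_size_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Full DeepSeek R1 (assumed ~671B parameters) at 4-bit quantization:
print(f"671B @ 4-bit: ~{weight_size_gb(671, 4):.0f} GB")  # ~336 GB -- far beyond 24 GB of VRAM
# Distilled 14B model at 4-bit quantization:
print(f"14B  @ 4-bit: ~{weight_size_gb(14, 4):.0f} GB")   # ~7 GB -- fits comfortably in 24 GB
```

So the full model is out of reach for a single 24GB card, but the distilled models should fit fine.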
I run the 32B version on my 6700 XT with an R9 3700X using ollama. It runs well, but it gets a bit slower on complex problems. I once ran a 70B Llama model, but it took a long time to finish.
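If you want to drive it from a script instead of the CLI, a minimal sketch using the ollama Python client (pip install ollama) looks roughly like this; it assumes the ollama server is already running and that the deepseek-r1:32b tag is the distill you want:

```python
# Minimal sketch: pull a model and send it one chat message via the ollama Python client.
import ollama

ollama.pull("deepseek-r1:32b")  # downloads the model if it isn't present yet

response = ollama.chat(
    model="deepseek-r1:32b",
    messages=[{"role": "user", "content": "Explain the difference between TCP and UDP."}],
)
print(response["message"]["content"])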
Hey, not to sidetrack OP's post or your own, but I'm new to the home LLM space and was wondering: once you have the model set up, is there a GUI? And how do you input tasks for it to do?
I also have a 6700 XT, but I can't get ollama running on it. It just defaults to the CPU (Ryzen 5600).
I plan to tackle this problem on a free weekend, and now I have a new reason to solve it.
Well, I don't know what you are running, but on Debian or Fedora it automatically installed the drivers and picked the GPU. I had a problem like this once, where it had the wrong drivers (but that was with an NVIDIA GPU).
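For the 6700 XT specifically, a common cause of ollama falling back to the CPU is that ROCm doesn't officially list that card's gfx target, and the usual workaround is to spoof a supported one via an environment variable. A minimal sketch, assuming ollama and ROCm are installed and you're launching the server yourself rather than via systemd:

```python
# Sketch of the common RDNA2 workaround: tell ROCm to treat the 6700 XT (gfx1031)
# as the officially supported gfx1030 target before starting the ollama server.
import os
import subprocess

env = os.environ.copy()
env["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"  # spoof a supported ROCm target

# Equivalent to running `HSA_OVERRIDE_GFX_VERSION=10.3.0 ollama serve` in a shell.
subprocess.run(["ollama", "serve"], env=env)
```

If ollama runs as a systemd service, the same variable can be set in the service's environment instead.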