| GPU | VRAM | Price (€) | Bandwidth (TB/s) | TFLOP16 | €/GB | €/(TB/s) | €/TFLOP16 |
|---|---|---|---|---|---|---|---|
| NVIDIA H200 NVL | 141 GB | 36284 | 4.89 | 1671 | 257 | 7423 | 21 |
| NVIDIA RTX PRO 6000 Blackwell | 96 GB | 8450 | 1.79 | 126.0 | 88 | 4720 | 67 |
| NVIDIA RTX 5090 | 32 GB | 2299 | 1.79 | 104.8 | 71 | 1284 | 22 |
| AMD Radeon 9070 XT | 16 GB | 665 | 0.6446 | 97.32 | 41 | 1031 | 7 |
| AMD Radeon 9070 | 16 GB | 619 | 0.6446 | 72.25 | 38 | 960 | 8.5 |
| AMD Radeon 9060 XT | 16 GB | 382 | 0.3223 | 51.28 | 23 | 1186 | 7.45 |
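The per-unit columns are simply the price divided by the corresponding spec. As a quick sanity check, this is how they could be recomputed from a hypothetical gpus.csv (name,vram_gb,price_eur,bandwidth_tbps,tflop16):

```bash
# Recompute €/GB, €/(TB/s) and €/TFLOP16 from a hypothetical gpus.csv
# with columns: name,vram_gb,price_eur,bandwidth_tbps,tflop16
awk -F, 'NR > 1 { printf "%-30s eur/GB=%6.1f  eur/TBps=%7.0f  eur/TFLOP16=%5.1f\n", $1, $3/$2, $3/$4, $3/$5 }' gpus.csv
```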

This post is part “hear me out” and part asking for advice.

Looking at the table above, AI GPUs are a pure scam, and it would make much more sense (at least on paper) to use gaming GPUs instead, either through a Frankenstein build of PCIe switches or a high-bandwidth network.

So my question is: has anybody built a similar setup, and what has their experience been? What is the expected overhead/performance hit, and can it be made up for by simply having way more raw performance for the same price?

  • TheMightyCat@ani.social (OP) · 6 hours ago

    My target model is Qwen/Qwen3-235B-A22B-FP8, ideally at its maximum context length of 131K, but I'm willing to compromise. I find it hard to give a concrete t/s answer; let's put it around 50. At max load, probably around 8 concurrent users, but those situations will be rare enough that optimizing for a single user is probably more worthwhile.

    My current setup is already: Xeon w7-3465X, 128 GB DDR5, 2× RTX 4090.

    It gets nice enough performance loading 32B models completely into VRAM, but I am skeptical that a similar system can run a 671B-class model at anything faster than a snail's pace. I currently run vLLM because it has higher performance with tensor parallelism than llama.cpp, but I shall check out ik_llama.cpp.
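    For reference, the kind of vLLM launch I mean looks roughly like this (model and limits are placeholders, not my exact command):

    ```bash
    # Hypothetical 2-GPU vLLM launch with tensor parallelism; adjust model and context to what fits.
    vllm serve Qwen/Qwen3-32B-FP8 \
      --tensor-parallel-size 2 \
      --max-model-len 32768 \
      --gpu-memory-utilization 0.90
    ```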

    • brucethemoose@lemmy.world · edited · 5 hours ago

      Ah, here we go:

      https://huggingface.co/ubergarm/Qwen3-235B-A22B-GGUF

      Ubergarm is great. See this part in particular: https://huggingface.co/ubergarm/Qwen3-235B-A22B-GGUF#quick-start

      You will need to modify the syntax for 2× GPUs. I'd recommend starting with an f16/f16 K/V cache at 32K (to see if that's acceptable, as then there's no dequantization compute overhead), and try not to go lower than q8_0/q5_1 (as the V cache is more amenable to quantization).
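      Concretely, that translates into llama-server flags roughly like this; a sketch only, with a placeholder model path, and flag names worth double-checking against your ik_llama.cpp build's --help:

      ```bash
      # Sketch, not a drop-in command: the model path is a placeholder.
      #   -c 32768        : start at 32K context
      #   -ctk/-ctv f16   : unquantized K/V cache (no dequant overhead); q8_0/q5_1 only if VRAM runs short
      #   -ngl 99 -ts 1,1 : offload everything that fits, split roughly evenly across the two 4090s
      ./build/bin/llama-server -m /models/Qwen3-235B-A22B.gguf \
        -c 32768 -ctk f16 -ctv f16 -ngl 99 -ts 1,1 -fa
      ```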

    • brucethemoose@lemmy.world · edited · 5 hours ago

      > Qwen3-235B-A22B-FP8

      Good! An MoE.

      > Ideally at its maximum context length of 131K, but I'm willing to compromise.

      I can tell you from experience that all Qwen models are terrible past 32K. What's more, going over 32K, you have to run them in a special "mode" (YaRN) that degrades performance under 32K. This is particularly bad in vLLM, as it does not support dynamic YaRN scaling.
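      For reference, static YaRN in llama.cpp-style engines looks roughly like this; the factor-4-over-32K values follow Qwen's published guidance for reaching 131K, but treat the exact flags and model path as assumptions to verify against your build:

      ```bash
      # Static YaRN stretch to ~131K: factor 4 over the 32K native window (placeholder model path).
      # The scaling is applied to short prompts too, which is why quality under 32K takes a hit.
      ./build/bin/llama-server -m /models/Qwen3-235B-A22B.gguf \
        -c 131072 --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
      ```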

      Also, you lose a lot of quality with FP8/AWQ quantization unless it's native FP8 (like DeepSeek). ExLlama and ik_llama.cpp quants are much higher quality, and their low-batch performance is still quite good. On top of that, vLLM has no good K/V cache quantization (its FP8 destroys quality), while llama.cpp's is good and ExLlama's is excellent, which makes vLLM less than ideal above 16K context. Its niche is highly parallel, low-context serving.

      > My current setup is already: Xeon w7-3465X, 128 GB DDR5, 2× RTX 4090

      Honestly, you should be set now. I can get 16+ t/s with high context Hunyuan 70B (which is 13B active) on a 7800 CPU/3090 GPU system with ik_llama.cpp. That rig (8 channel DDR5, and plenty of it, vs my 2 channels) should at least double that with 235B, with the right quantization, and you could speed it up by throwing in 2 more 4090s. The project is explicitly optimized for your exact rig, basically :)

      It is poorly documented, though. The general strategy is to keep the "core" of the LLM on the GPUs while offloading the less compute-intensive experts to RAM, and it takes some tinkering (see the sketch after the link below). There's even a project that tries to calculate the split automatically:

      https://github.com/k-koehler/gguf-tensor-overrider
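      Hand-rolled, the override tends to look something like this; the -ot pattern and model path are placeholders (the quick start linked above shows the exact patterns for ubergarm's quants), and -fmoe is ik_llama.cpp-specific:

      ```bash
      # Sketch: keep attention and shared tensors on the GPUs, push the routed experts to system RAM.
      # Placeholder model path and regex; match the pattern to the tensor names in your quant.
      ./build/bin/llama-server -m /models/Qwen3-235B-A22B.gguf \
        -ngl 99 -fmoe -fa -c 32768 -ot "ffn_.*_exps=CPU"
      ```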

      ik_llama.cpp can also use special GGUFs that regular llama.cpp can't take, for faster inference in less space. I'm not sure if one for 235B is floating around Hugging Face; I will check.


      Side note: I hope you can see why I asked. The web of engine strengths/quirks is extremely complicated, heh, and the answer could be totally different for different models.