| GPU | VRAM | Price (€) | Bandwidth (TB/s) | TFLOP16 | €/GB | €/(TB/s) | €/TFLOP16 |
|---|---|---|---|---|---|---|---|
| NVIDIA H200 NVL | 141GB | 36284 | 4.89 | 1671 | 257 | 7423 | 21 |
| NVIDIA RTX PRO 6000 Blackwell | 96GB | 8450 | 1.79 | 126.0 | 88 | 4720 | 67 |
| NVIDIA RTX 5090 | 32GB | 2299 | 1.79 | 104.8 | 71 | 1284 | 22 |
| AMD RADEON 9070XT | 16GB | 665 | 0.6446 | 97.32 | 41 | 1031 | 7 |
| AMD RADEON 9070 | 16GB | 619 | 0.6446 | 72.25 | 38 | 960 | 8.5 |
| AMD RADEON 9060XT | 16GB | 382 | 0.3223 | 51.28 | 23 | 1186 | 7.45 |

This post is part “hear me out” and part asking for advice.

Looking at the table above, AI GPUs are a pure scam, and it would make much more sense (at least judging by this) to use gaming GPUs instead, either through a Frankenstein setup of PCIe switches or a high-bandwidth network.

So my question is: has somebody built a similar setup, and what has their experience been? What is the expected overhead/performance hit, and can it be made up for by simply having way more raw performance for the same price?
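To make this concrete, the three value columns are just the list price divided by each spec. A quick sketch in Python (prices and specs copied straight from the table; swap in current prices):

```python
# Recompute the value columns from list price and headline specs.
# Numbers are the ones from the table above; edit to taste.
gpus = {
    # name: (price_eur, vram_gb, bandwidth_tb_s, fp16_tflops)
    "NVIDIA H200 NVL":               (36284, 141, 4.89,   1671),
    "NVIDIA RTX PRO 6000 Blackwell": (8450,  96,  1.79,   126.0),
    "NVIDIA RTX 5090":               (2299,  32,  1.79,   104.8),
    "AMD RADEON 9070XT":             (665,   16,  0.6446, 97.32),
    "AMD RADEON 9070":               (619,   16,  0.6446, 72.25),
    "AMD RADEON 9060XT":             (382,   16,  0.3223, 51.28),
}

for name, (price, vram, bw, tflops) in gpus.items():
    print(f"{name:31s} €/GB={price / vram:5.0f}  "
          f"€/(TB/s)={price / bw:6.0f}  €/TFLOP16={price / tflops:5.1f}")
```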

  • PeriodicallyPedantic@lemmy.ca · 1 hour ago

    Is Nvidia still a de facto requirement? I’ve heard of AMD support being added to Ollama etc., but I haven’t found robust comparisons on value.

    • brucethemoose@lemmy.world · 3 hours ago

      It depends!

      Exllamav2 was pretty fast on AMD, and exllamav3 is getting support soon. vLLM is also fast on AMD. But it’s not easy to set up; you basically have to be a Python dev on Linux and wrestle with pip, or get lucky with Docker.
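      For what it’s worth, once the install works, actually driving vLLM from Python is only a few lines; the pain is all in getting ROCm, torch and pip to agree. A minimal sketch (the model id is just an example, pick whatever fits your VRAM):

```python
# Minimal vLLM usage sketch; the hard part is the install, not the code.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")  # example model id, not a recommendation
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["Explain MoE offloading in one paragraph."], params)
print(outputs[0].outputs[0].text)
```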

      Base llama.cpp is fine, as are forks like the kobold.cpp ROCm build. This is more doable without so much hassle.
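      If you want to drive llama.cpp from Python instead of the CLI, llama-cpp-python is the usual wrapper (it works against ROCm and Vulkan builds too). Rough sketch, with a placeholder GGUF path:

```python
# Sketch using llama-cpp-python on top of a base llama.cpp build.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-model-q4_k_m.gguf",  # placeholder path
    n_gpu_layers=-1,   # offload as many layers as fit in VRAM
    n_ctx=8192,
)
out = llm("Q: Why are MoE models nice for cheap GPUs?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```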

      The AMD Framework Desktop is a pretty good machine for large MoE models. The 7900 XTX is the next best hardware, but unfortunately AMD is not really interested in competing with Nvidia in terms of high-VRAM offerings :'/. They don’t want money, I guess.

      And there are… quirks, depending on the model.


      I dunno about Intel Arc these days, but AFAIK you are stuck with their docker container or llama.cpp. And again, they don’t offer a lot of VRAM for the $ either.


      NPUs are mostly a nothingburger so far, only good for tiny models.


      Llama.cpp Vulkan (for use on anything) is improving but still behind in terms of support.


      A lot of people do offload MoE models to Threadripper or EPYC CPUs via ik_llama.cpp, transformers, or some Chinese frameworks. That’s the homelab way to run big models like Qwen 235B or DeepSeek these days. An Nvidia GPU is still standard, but you can use a 3090 or 4090 and put more of the money into the CPU platform.
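      The simplest version of that GPU-plus-system-RAM split, in plain transformers, is device_map="auto", which spills whatever doesn’t fit in VRAM over to CPU RAM (the dedicated frameworks above do the same job faster for quantized models). Rough sketch, the model id is just an example:

```python
# GPU + system RAM offload via transformers/accelerate (needs `accelerate` installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-30B-A3B"  # example MoE model id, swap for whatever you run
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # fill the GPU first, overflow into system RAM
)

inputs = tok("How much RAM does a 235B MoE need?", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```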


      You won’t find a good comparison because it literally changes by the minute. AMD updates ROCm? Better! Oh, but something broke in llama.cpp! Now it’s fixed and optimized four days later! Oh, architecture change, now it doesn’t work again. And look, exl3 support!

      You can literally bench it in a day and have the results be obsolete the next, pretty often.

      • PeriodicallyPedantic@lemmy.ca · 1 hour ago

        Thanks!
        That’ll help when I eventually get around to standing up my own AI server.

        Right now I can’t really justify the cost for my low volume of use, when I can get Cloudflare free-tier access to mid-sized models. But it’s something I want to bring into my homelab instead for better control and privacy.