GPU                            VRAM    Price (€)  Bandwidth (TB/s)  TFLOP16  €/GB  €/TB/s  €/TFLOP16
NVIDIA H200 NVL                141GB   36284      4.89              1671     257   7423    21
NVIDIA RTX PRO 6000 Blackwell  96GB    8450       1.79              126.0    88    4720    67
NVIDIA RTX 5090                32GB    2299       1.79              104.8    71    1284    22
AMD RADEON 9070XT              16GB    665        0.6446            97.32    41    1031    7
AMD RADEON 9070                16GB    619        0.6446            72.25    38    960     8.5
AMD RADEON 9060XT              16GB    382        0.3223            51.28    23    1186    7.45
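
For reference, the ratio columns are just the price divided by each spec. A quick sketch to reproduce them (the prices are a snapshot, so swap in your own):

```python
# Recompute €/GB, €/TB/s and €/TFLOP16 from the raw columns above.
gpus = {
    # name: (price_eur, vram_gb, bandwidth_tb_s, tflop16)
    "NVIDIA H200 NVL":               (36284, 141, 4.89,   1671),
    "NVIDIA RTX PRO 6000 Blackwell": (8450,   96, 1.79,   126.0),
    "NVIDIA RTX 5090":               (2299,   32, 1.79,   104.8),
    "AMD RADEON 9070XT":             (665,    16, 0.6446, 97.32),
    "AMD RADEON 9070":               (619,    16, 0.6446, 72.25),
    "AMD RADEON 9060XT":             (382,    16, 0.3223, 51.28),
}

for name, (price, vram, bw, tflops) in gpus.items():
    print(f"{name:31s} €/GB {price/vram:6.1f}  "
          f"€/TB/s {price/bw:7.1f}  €/TFLOP16 {price/tflops:5.1f}")
```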

This post is part “hear me out” and part asking for advice.

Looking at the table above, AI GPUs are a pure scam, and it would make much more sense (at least on paper) to use gaming GPUs instead, either through a frankenstein of PCIe switches or a high-bandwidth network.

So my question is: has somebody built a similar setup, and what was their experience? What overhead/performance hit should I expect, and can it be made up for by simply having way more raw performance for the same price?
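
To make the trade-off concrete, here's my napkin math: a sketch with assumed model dimensions and an assumed PCIe 4.0 x16 link speed, and it ignores per-layer all-reduce latency, which is often the real killer.

```python
# Napkin math: replace one H200 NVL with enough RADEON 9070XTs to match
# its 141 GB of VRAM. Interconnect numbers are assumptions, not
# measurements.
import math

h200_price, h200_vram_gb = 36284, 141
card_price, card_vram_gb = 665, 16          # RADEON 9070XT

n = math.ceil(h200_vram_gb / card_vram_gb)  # 9 cards
cluster_price = n * card_price
print(f"{n} cards for €{cluster_price}, "
      f"{h200_price / cluster_price:.1f}x cheaper than the H200")

# Tensor parallelism: every transformer layer all-reduces its activations
# across the cards. Ring all-reduce moves 2*(n-1)/n of the data per card.
hidden, layers = 8192, 80                   # assumed model dimensions
bytes_fp16 = 2
link_gb_s = 32                              # assumed PCIe 4.0 x16 per card

per_token = layers * hidden * bytes_fp16 * 2 * (n - 1) / n   # bytes
print(f"~{per_token / (link_gb_s * 1e9) * 1e3:.2f} ms/token of pure "
      f"activation traffic over PCIe")
```

Bandwidth-wise the traffic looks harmless; the part I can't estimate on paper is the launch/sync latency of doing that all-reduce 80 times per token, which is why I'm asking for real-world experience.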

  • AreaKode@lemmy.world · +10/−3 · 10 hours ago

    LLMs are experimental, alpha-level technology. Nvidia showed investors how fast their cards can crunch through these models. Now investors can just tell the LLM what they want, and it will spit out something that probably looks similar to what they want. But Nvidia is going to sell as many cards as possible before the bubble bursts.

      • AreaKode@lemmy.world · +3 · 7 hours ago

        Any time you need to do a shitload of basic math, a GPU will beat a CPU every time.
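
        A minimal sketch of that point, assuming a PyTorch build with CUDA (or ROCm, which reuses the torch.cuda namespace) is available:

        ```python
        # Time the same elementwise math on CPU vs GPU with PyTorch.
        # Results vary wildly by hardware; the point is the order of magnitude.
        import time
        import torch

        x = torch.rand(100_000_000)          # ~400 MB of fp32 values

        t0 = time.perf_counter()
        y = x * 2.0 + 1.0                    # a shitload of basic math
        cpu_s = time.perf_counter() - t0

        if torch.cuda.is_available():
            xg = x.cuda()                    # copy once, don't time the transfer
            torch.cuda.synchronize()
            t0 = time.perf_counter()
            yg = xg * 2.0 + 1.0
            torch.cuda.synchronize()         # kernels are async; wait for them
            gpu_s = time.perf_counter() - t0
            print(f"CPU {cpu_s*1e3:.1f} ms vs GPU {gpu_s*1e3:.1f} ms "
                  f"({cpu_s/gpu_s:.0f}x)")
        ```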

        • iopq@lemmy.world · +2 · 7 hours ago

          You can design algorithms specifically to mess up parallelism by branching a lot. For example, if you want your password hashes to be GPU-resistant.
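
          A toy sketch of that trick (a made-up mixing loop, not a real password hash): every step branches on the current state, so GPU threads hashing many guesses in lockstep keep diverging and serialize.

          ```python
          # Toy branch-heavy mixing loop (NOT a real hash, just an illustration).
          # Data-dependent branches cause warp divergence on a GPU: threads that
          # take different paths must execute each path one after another.
          def branchy_mix(data: bytes, rounds: int = 100_000) -> int:
              state = int.from_bytes(data, "big") ^ 0xA5A5A5A5
              for _ in range(rounds):
                  # Which operation runs next depends on the data itself.
                  if state & 1:
                      state = (state * 0x9E3779B1) & 0xFFFFFFFF
                  elif state & 2:
                      state = (state >> 3) ^ 0xDEADBEEF
                  else:
                      state = (state + 0x85EBCA6B) & 0xFFFFFFFF
              return state

          print(hex(branchy_mix(b"hunter2")))
          ```

          (Real GPU-resistant designs like scrypt and Argon2 lean mostly on memory-hardness rather than branching, but the goal is the same: shrink the attacker's parallel-hardware advantage.)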