| GPU | VRAM | Price (€) | Bandwidth (TB/s) | TFLOP16 | €/GB | €/TB/s | €/TFLOP16 |
|---|---|---|---|---|---|---|---|
| NVIDIA H200 NVL | 141 GB | 36284 | 4.89 | 1671 | 257 | 7423 | 21 |
| NVIDIA RTX PRO 6000 Blackwell | 96 GB | 8450 | 1.79 | 126.0 | 88 | 4720 | 67 |
| NVIDIA RTX 5090 | 32 GB | 2299 | 1.79 | 104.8 | 71 | 1284 | 22 |
| AMD Radeon 9070 XT | 16 GB | 665 | 0.6446 | 97.32 | 41 | 1031 | 7 |
| AMD Radeon 9070 | 16 GB | 619 | 0.6446 | 72.25 | 38 | 960 | 8.5 |
| AMD Radeon 9060 XT | 16 GB | 382 | 0.3223 | 51.28 | 23 | 1186 | 7.45 |
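
The per-unit columns are just the price divided by the corresponding spec; a quick Python check (values copied from the rows above) reproduces them:

```python
# Recompute the cost-efficiency columns of the table above.
# Tuples: (name, vram_gb, price_eur, bandwidth_tbs, tflop16)
gpus = [
    ("NVIDIA H200 NVL",               141, 36284, 4.89,   1671),
    ("NVIDIA RTX PRO 6000 Blackwell",  96,  8450, 1.79,   126.0),
    ("NVIDIA RTX 5090",                32,  2299, 1.79,   104.8),
    ("AMD Radeon 9070 XT",             16,   665, 0.6446, 97.32),
    ("AMD Radeon 9070",                16,   619, 0.6446, 72.25),
    ("AMD Radeon 9060 XT",             16,   382, 0.3223, 51.28),
]

for name, vram, price, bw, tflops in gpus:
    print(f"{name:32s} €/GB={price / vram:6.1f}  "
          f"€/(TB/s)={price / bw:7.0f}  €/TFLOP16={price / tflops:5.1f}")
```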

This post is part “hear me out” and part asking for advice.

Looking at the table above, AI GPUs are a pure scam, and it would make much more sense (at least going by this) to use gaming GPUs instead, connected either through a Frankenstein arrangement of PCIe switches or over a high-bandwidth network.

So my question is whether somebody has built a similar setup and what their experience has been: what the expected overhead/performance hit is, and whether it can be made up for by simply having far more raw performance for the same price.

  • brucethemoose@lemmy.world · 5 months ago (edited)

    Be specific!

    • What model size (or model) are you looking to host?

    • At what context length?

    • What kind of speed (token/s) do you need?

    • Is it just for you, or many people? How many? In other words should the serving be parallel?

    It depends, but the sweet-spot option for a self-hosted rig, OP, is probably:

    • One 5090 or A6000 ADA GPU. Or maybe 2x 3090s/4090s, underclocked.

    • A cost-effective EPYC CPU/Mobo

    • At least 256 GB DDR5

    Now run ik_llama.cpp, and you can serve DeepSeek 671B faster than you can read, without burning your house down with H200s: https://github.com/ikawrakow/ik_llama.cpp
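
    As a minimal sketch of what that looks like in practice (untested; the launch flags in the comment are assumptions to verify against the ik_llama.cpp README, and the GGUF filename is a placeholder), the fork's llama-server exposes the usual OpenAI-compatible endpoint, so any HTTP client can query it:

```python
# Minimal sketch, not a tested recipe. Assumes an ik_llama.cpp llama-server is
# already running locally, launched with something along the lines of (check the
# README for the exact flags and a suitable GGUF quant for your RAM/VRAM split):
#   ./build/bin/llama-server -m deepseek-671b-quant.gguf -c 32768 -ngl 99 \
#       --host 127.0.0.1 --port 8080
# The server speaks an OpenAI-compatible chat API, so plain HTTP works.
import requests

resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "model": "local",  # model name is largely ignored by a single-model server
        "messages": [{"role": "user", "content": "Summarize MoE offloading in one sentence."}],
        "max_tokens": 128,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```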

    It will also do for dots.llm, Kimi, and pretty much any of the other mega MoEs du jour.

    But there are all sorts of niches. In a nutshell, don’t ask “How much do I need for AI?” but “What is my target use case, what model is good for that, and what’s the best runtime for it?” Then build your rig around that.

    • TheMightyCat@ani.social (OP) · 5 months ago

      My target model is Qwen/Qwen3-235B-A22B-FP8, ideally at its maximum context length of 131K, but I’m willing to compromise. I find it hard to give a concrete t/s answer; let’s put it around 50. At max load there would probably be around 8 concurrent users, but those situations will be rare enough that optimizing for a single user is probably more worthwhile.
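
      A rough back-of-envelope sketch of what that target implies for memory (the layer/head figures are assumptions to verify against the model's config.json; real servers add activation buffers, fragmentation and other overhead on top):

```python
# Back-of-envelope VRAM estimate for Qwen3-235B-A22B-FP8 (sketch, not exact).
total_params = 235e9   # total parameters (MoE: ~22B active per token)
bytes_per_w  = 1       # FP8 weights
weights_gb   = total_params * bytes_per_w / 1e9              # ~235 GB

# KV cache per token = 2 (K and V) * layers * kv_heads * head_dim * bytes.
# Assumed architecture values -- check config.json before trusting them.
layers, kv_heads, head_dim, kv_bytes = 94, 4, 128, 2         # FP16 KV cache
kv_per_token_gb = 2 * layers * kv_heads * head_dim * kv_bytes / 1e9

context, users = 131_072, 8
kv_total_gb = kv_per_token_gb * context * users              # ~200 GB at full load

print(f"weights ≈ {weights_gb:.0f} GB, KV cache ≈ {kv_total_gb:.0f} GB "
      f"for {users} users at {context} tokens each")
```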

      My current setup is already: Xeon w7-3465X, 128 GB DDR5, 2× RTX 4090.

      It gets nice enough performance loading 32B models completely into VRAM, but I am skeptical that a similar system can run a 671B model at anything faster than a snail’s pace. I currently run vLLM because it has higher performance with tensor parallelism than llama.cpp, but I shall check out ik_llama.cpp.
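
      For the vLLM side, a minimal offline-inference sketch (illustrative only: per the estimate above, the FP8 weights alone are roughly 235 GB, far beyond 2× 24 GB, so this shows the invocation rather than something the current rig can actually run):

```python
# Minimal vLLM sketch: tensor parallelism across 2 GPUs, with a reduced
# context window to keep the KV cache within VRAM headroom.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen3-235B-A22B-FP8",
    tensor_parallel_size=2,      # split the weights across both GPUs
    max_model_len=32768,         # trade context length for KV-cache headroom
)

params = SamplingParams(max_tokens=256, temperature=0.7)
out = llm.generate(["Explain tensor parallelism in two sentences."], params)
print(out[0].outputs[0].text)
```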

  • AreaKode@lemmy.world · 5 months ago

    “AI”, in its current form, is a scam, and Nvidia is making the most of this grift. They are now worth more than any other company in the world.

      • AreaKode@lemmy.world · 5 months ago

        LLMs are experimental, alpha-level technologies. Nvidia showed investors how fast their cards could compute this information. Now investors can just tell the LLM what they want, and it will spit out something that probably looks similar to what they want. But Nvidia is going to sell as many cards as possible before the bubble bursts.

      • metaStatic@kbin.earth · 5 months ago

        ML has been sold as AI and honestly that’s enough of a scam for me to call it one.

        But I also don’t really see end users getting scammed, just venture capital, and I’m OK with this.

        • AreaKode@lemmy.world · 5 months ago

          Correct. Pattern recognition plus prompts that steer toward a positive result, even if the answer isn’t entirely true. If it’s close enough to the desired pattern, it gets pushed.

  • atzanteol@sh.itjust.works · 5 months ago

    > Looking at the table above, AI GPUs are a pure scam

    How much more power are your gaming GPUs going to use? How much more space will they use? How much more heat will you need to dissipate?

    • TheMightyCat@ani.social (OP) · 5 months ago

      Well, a scam for self-hosters; for datacenters it’s different, of course.

      I’m looking to upgrade to my first dedicated, purpose-built server, coming from only SBCs, so I’m not sure how much of a concern heat will be, but space and power shouldn’t be an issue (within reason, of course).

  • BrightCandle@lemmy.world · 5 months ago (edited)

    Initially a lot of AI was trained on lower-class GPUs, and none of these special AI cards/blades existed. The problem is that the models are quite large and hence require a lot of VRAM to work on, or you split them up and pay enormous latency penalties going across the network. Putting it all into one giant package costs a lot more, but it also performs a lot better, because AI is not an embarrassingly parallel problem that can easily be split across many GPUs without penalty. So the goal is often to reduce the number of GPUs you need to get a result quickly enough, which brings its own set of problems with power density in server racks.
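
    To put very rough numbers on that cross-GPU penalty (ballpark one-directional bandwidths; exact figures vary by generation and topology, so treat them as assumptions):

```python
# Ballpark interconnect bandwidths in GB/s -- approximate figures, exact
# numbers depend on generation, link count and topology.
links = {
    "NVLink (H100-class, aggregate)": 900,
    "PCIe 5.0 x16":                    64,
    "100 Gb Ethernet":                 12.5,
}

payload_gb = 10  # e.g. shuffling ~10 GB of activations/weights between GPUs
for name, gbps in links.items():
    print(f"{name:32s} ~{payload_gb / gbps * 1000:7.1f} ms to move {payload_gb} GB")
```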