| GPU | VRAM | Price (€) | Bandwidth (TB/s) | TFLOP16 | €/GB | €/TB/s | €/TFLOP16 |
|---|---|---|---|---|---|---|---|
| NVIDIA H200 NVL | 141GB | 36284 | 4.89 | 1671 | 257 | 7423 | 21 |
| NVIDIA RTX PRO 6000 Blackwell | 96GB | 8450 | 1.79 | 126.0 | 88 | 4720 | 67 |
| NVIDIA RTX 5090 | 32GB | 2299 | 1.79 | 104.8 | 71 | 1284 | 22 |
| AMD RADEON 9070XT | 16GB | 665 | 0.6446 | 97.32 | 41 | 1031 | 7 |
| AMD RADEON 9070 | 16GB | 619 | 0.6446 | 72.25 | 38 | 960 | 8.5 |
| AMD RADEON 9060XT | 16GB | 382 | 0.3223 | 51.28 | 23 | 1186 | 7.45 |
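For anyone who wants to sanity-check or extend the table, the three cost columns are just price divided by the respective spec. A minimal sketch (the numbers below are simply copied from the table, nothing authoritative, and the table seems to truncate some values rather than round):

```python
# Rough sketch of how the €/GB, €/TB/s and €/TFLOP16 columns are derived;
# specs/prices are copied straight from the table above.
gpus = {
    "NVIDIA RTX 5090":   {"vram_gb": 32, "price_eur": 2299, "bw_tbs": 1.79,   "tflop16": 104.8},
    "AMD RADEON 9070XT": {"vram_gb": 16, "price_eur": 665,  "bw_tbs": 0.6446, "tflop16": 97.32},
}

for name, g in gpus.items():
    print(
        f"{name}: "
        f"{g['price_eur'] / g['vram_gb']:.1f} €/GB, "
        f"{g['price_eur'] / g['bw_tbs']:.0f} €/TB/s, "
        f"{g['price_eur'] / g['tflop16']:.1f} €/TFLOP16"
    )
```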
This post is part “hear me out” and part asking for advice.
Looking at the table above, datacenter AI GPUs are a pure scam, and it would make much more sense (at least on paper) to use gaming GPUs instead, either through a Frankenstein build of PCIe switches or a high-bandwidth network.
So my question is whether somebody has built a similar setup and what their experience has been: what the expected interconnect/overhead performance hit is, and whether it can be made up for by simply having way more raw performance for the same price.


My target model is Qwen/Qwen3-235B-A22B-FP8, ideally at its maximum context length of 131K, but I'm willing to compromise. I find it hard to give a concrete t/s answer; let's put it around 50. At max load there would probably be around 8 concurrent users, but those situations will be rare enough that optimizing for a single user is probably more worthwhile.
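To sanity-check the 50 t/s target, here is a very rough, bandwidth-bound back-of-envelope for single-stream decode: every generated token has to stream roughly the active parameter bytes out of VRAM, so aggregate memory bandwidth sets a hard ceiling. This is only a sketch under the assumptions of FP8 weights and ~22B active parameters per token, and it ignores KV-cache reads and all tensor-parallel/interconnect overhead:

```python
# Crude upper bound on single-stream decode speed for an FP8 MoE model.
# Assumptions (not measured): ~22B active params per token (the "A22B"),
# 1 byte per FP8 weight, and that weight streaming dominates.
active_params = 22e9                 # active parameters read per token
bytes_per_param = 1.0                # FP8
bytes_per_token = active_params * bytes_per_param

# Example: 8x RADEON 9070XT from the table (0.6446 TB/s each), treated as
# perfectly additive under tensor parallelism, which real setups won't reach.
aggregate_bw = 8 * 0.6446e12         # bytes/s

print(f"ceiling ≈ {aggregate_bw / bytes_per_token:.0f} tokens/s")
```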
My current setup is already a Xeon w7-3465X, 128GB DDR5, and 2x RTX 4090.
It gets decent performance loading 32B models completely into VRAM, but I am skeptical that a similar system can run a 671B model at anything faster than a snail's pace. I currently run vLLM because it gets higher performance with tensor parallelism than llama.cpp, but I shall check out ik_llama.cpp.
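For reference, this is roughly how I would point vLLM at the target model on a multi-GPU box; a minimal sketch only, assuming 8 GPUs and that the FP8 weights plus KV cache actually fit (tensor_parallel_size, max_model_len and gpu_memory_utilization would all need tuning for the real hardware):

```python
from vllm import LLM, SamplingParams

# Sketch only: tensor_parallel_size=8 and the full 131K context are
# assumptions, not a tested config; shrink max_model_len if the KV cache
# doesn't fit next to the FP8 weights.
llm = LLM(
    model="Qwen/Qwen3-235B-A22B-FP8",
    tensor_parallel_size=8,        # shard weights across 8 GPUs
    max_model_len=131072,          # target context length
    gpu_memory_utilization=0.90,   # leave a little headroom per GPU
)

outputs = llm.generate(
    ["Explain tensor parallelism in one paragraph."],
    SamplingParams(max_tokens=128),
)
print(outputs[0].outputs[0].text)
```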