I want to buy a new GPU mainly for Stable Diffusion (SD). The machine-learning space is moving quickly, so I want to avoid buying a brand-new card only for a fresh model or tool to come out and leave it behind the times. On the other hand, I also want to avoid needlessly spending thousands of extra dollars pretending I can get a ‘future-proof’ card.
I’m currently interested in SD and training LoRAs (and the like). From what I’ve heard, the general advice is just to go for maximum VRAM (a quick way to check what a card actually exposes is sketched below).
- Is there any extra advice I should know about?
- Is NVIDIA vs. AMD a critical decision for SD performance?
I’m a hobbyist, so a couple of seconds difference in generation or a few extra hours for training isn’t going to ruin my day.
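For reference, this is the kind of quick check I’d run on whatever I end up buying; a minimal sketch assuming a PyTorch install (which most SD tooling sits on). AMD’s ROCm builds of PyTorch reuse the `torch.cuda` calls, so the same snippet should work on either vendor:

```python
# Minimal sketch: report what PyTorch actually sees on the installed card.
# Assumes a PyTorch build with CUDA (NVIDIA) or ROCm (AMD) support;
# ROCm builds reuse the torch.cuda namespace, so these calls work on both.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    backend = "ROCm/HIP" if torch.version.hip else "CUDA"
    print(f"Device : {props.name}")
    print(f"Backend: {backend}")
    print(f"VRAM   : {props.total_memory / 1024**3:.1f} GiB")
else:
    print("No supported GPU visible to this PyTorch build.")
```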
Some example prices in my region, to give a sense of scale:
- 16GB AMD: $350
- 16GB NV: $450
- 24GB AMD: $900
- 24GB NV: $2000
Edit: prices are for new cards; I haven’t explored the pros and cons of used GPUs yet.
Quit it, you are killing the earth with this inane bullshit.
Do you know how much power those GPUs actually draw? Protip: even if I disconnected my solar panels and ran the card at 100% load 24/7, its share of the problem rounds down to zero.
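To put rough numbers on that (a back-of-the-envelope sketch with assumed figures: a 450 W card pinned at full load all year, against a ballpark ~25,000 TWh/year of global electricity use):

```python
# Back-of-the-envelope: worst-case share of global electricity from one GPU.
# All figures are assumptions for illustration, not measurements.
gpu_watts = 450                       # assume a high-end card at full load
hours_per_year = 24 * 365             # running 100% of the time, no solar offset
kwh_per_year = gpu_watts * hours_per_year / 1000           # ~3,942 kWh

world_twh_per_year = 25_000           # rough order of magnitude for global electricity use
world_kwh_per_year = world_twh_per_year * 1e9

share = kwh_per_year / world_kwh_per_year
print(f"One GPU, flat out all year:   {kwh_per_year:,.0f} kWh")
print(f"Share of global electricity:  {share:.2e} ({share * 100:.8f}%)")
# ~1.6e-10, i.e. about 0.00000002% -- it rounds down to zero at any sane precision.
```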
If you’re serious about the global environmental crisis, comrade, organize with others to fight the industrial-scale culprits instead of wasting your valuable time blaming individuals whose impact is trivial.
It’s the large-scale training that’s the issue, far more than inference (rough numbers sketched below).
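Rough scale of that difference, with every number assumed purely for illustration (a hobbyist’s card busy a couple of hours a day vs. a modest hypothetical 256-GPU cluster training for a month):

```python
# Illustrative comparison, all figures assumed: one hobbyist generating images
# vs. a lab training a model from scratch on a cluster.
hobby_kwh_per_day = 0.300 * 2          # ~300 W card busy ~2 hours/day -> 0.6 kWh/day
hobby_year_kwh = hobby_kwh_per_day * 365   # ~219 kWh/year

# Assume a training cluster of 256 GPUs at ~400 W each, running for 30 days.
cluster_kwh = 256 * 0.400 * 24 * 30    # ~73,728 kWh for a single training run

print(f"Hobbyist, one year of generations: ~{hobby_year_kwh:,.0f} kWh")
print(f"One from-scratch training run:     ~{cluster_kwh:,.0f} kWh")
print(f"Ratio: ~{cluster_kwh / hobby_year_kwh:,.0f}x")
```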
Enjoy your treats!