• by cjbprime on 12/25/2023, 2:39:26 AM

    Since the amount of VRAM is low (max 24GB), one question to look into before investing might be whether there's support for chaining multiple cards together.
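
    To make the constraint concrete, here is a rough back-of-the-envelope sketch in Python (the parameter counts, bit widths, and 20% overhead factor are illustrative assumptions, and it ignores KV cache and activations):

        # Rough VRAM estimate for model weights at a given quantization.
        # Ignores KV cache and activations; the 1.2 overhead factor is a guess.

        def vram_gb(params_billion: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
            """Approximate GB of VRAM needed for the weights alone."""
            bytes_total = params_billion * 1e9 * bits_per_weight / 8
            return bytes_total * overhead / 1e9

        for params, bits in [(7, 16), (13, 8), (70, 4)]:
            need = vram_gb(params, bits)
            cards = -(-need // 24)  # ceiling division against one 24GB card
            print(f"{params}B @ {bits}-bit: ~{need:.0f} GB -> {cards:.0f} x 24GB card(s)")

    A 70B model even at 4-bit lands around 40GB, so it only fits if the weights can actually be split across two cards.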

  • by LorenDB on 12/25/2023, 2:30:56 AM

    Ollama just had AMD support merged. I haven't got it working with my 6700 XT eGPU yet, but I anticipate getting there soon.
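
    For anyone attempting the same, a minimal sketch of talking to an Ollama server from Python once ROCm is working. The HSA_OVERRIDE_GFX_VERSION=10.3.0 environment variable is a commonly reported workaround for RDNA2 cards like the 6700 XT (gfx1031), which isn't an officially supported ROCm target; treat both it and the model name as assumptions:

        # Assumes the Ollama server was started with
        #   HSA_OVERRIDE_GFX_VERSION=10.3.0
        # (a commonly reported, unofficial workaround for gfx1031 cards).
        # Requires: pip install ollama
        import ollama

        resp = ollama.generate(model="llama2", prompt="Say hello from an AMD GPU.")
        print(resp["response"])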

  • by htrp on 12/25/2023, 3:38:40 AM

    You'll be fine for inference, but you'll probably struggle to run any large models requiring multi-GPU ops.
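
    If you do need to span cards, one common approach is sharding layers across devices, e.g. Hugging Face's device_map="auto". A hedged sketch (the model name is a placeholder, and how smoothly this behaves on ROCm multi-GPU setups is exactly the open question):

        # Shard a model's layers across whatever GPUs are visible.
        # device_map="auto" (via accelerate) places layers per device;
        # behavior on ROCm multi-GPU setups is the untested part.
        # Requires: pip install transformers accelerate
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "meta-llama/Llama-2-13b-hf"  # placeholder; pick what fits your VRAM
        tok = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id, device_map="auto", torch_dtype=torch.float16
        )
        inputs = tok("The quick brown fox", return_tensors="pt").to(model.device)
        print(tok.decode(model.generate(**inputs, max_new_tokens=20)[0]))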

  • by Havoc on 12/25/2023, 2:10:36 AM

    My understanding is that inference on the latest AMD cards (79xx) is mostly fine, but training is still shaky.
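
    A quick way to see where a given card stands, assuming the ROCm build of PyTorch is installed (on ROCm builds the torch.cuda.* API is reused for AMD GPUs and torch.version.hip is set):

        # Smoke test for a ROCm PyTorch install, including one backward pass.
        import torch

        print("HIP/ROCm build:", torch.version.hip)   # None on CUDA/CPU builds
        print("GPU visible:", torch.cuda.is_available())
        if torch.cuda.is_available():
            print("Device:", torch.cuda.get_device_name(0))
            # Tiny training step to exercise backward() on the card:
            x = torch.randn(8, 4, device="cuda", requires_grad=True)
            loss = (x @ x.T).sum()
            loss.backward()
            print("backward OK, grad norm:", x.grad.norm().item())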

  • by brudgers on 12/25/2023, 5:25:30 PM

    Is there a clear business rationale?

    Could that rationale be addressed by increasing revenue instead, as an alternative to taking on the technical risks implicit in your question?

    I mean it is one thing if this is a hobby project and using AMD will provide interesting challenges to keep yourself occupied. How you spend your own time and money is entirely up to you.

    But spending other people’s money is another thing entirely (unless they are mom and dad). And even more so spending other people’s time particularly when it comes to paychecks.

    Finally, I am not saying there aren't business cases where AMD makes sense. Just that Nvidia is the default for good reasons and is the simplest thing that might work. Good luck.