https://www.youtube.com/watch?v=QEbI6v2oPvQ

I had a lot of trouble setting up ROCm and Automatic1111. I tried first with Docker, then natively, and failed many times. Then I found this video. It gives a good overview of the setup plus a couple of critical bits that really helped me: reinstalling a compatible version of PyTorch, and how to test whether ROCm and PyTorch are actually working. I still hit a few of those Python problems that crop up when updating A1111, but a quick search in the A1111 bug reports turned up workarounds for those. A strange HIP hardware error also came up at startup, but a simple reboot solved that.
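For reference, this is roughly the kind of check I mean for confirming ROCm and PyTorch can see each other. It's a minimal sketch, assuming PyTorch was installed from AMD's ROCm wheels rather than the default CUDA build:

```python
# Quick sanity check that the ROCm build of PyTorch can use the GPU.
import torch

print("PyTorch version:", torch.__version__)        # should include a +rocm suffix
print("HIP version:", torch.version.hip)             # None means a non-ROCm build is installed
print("GPU available:", torch.cuda.is_available())   # ROCm reuses the torch.cuda API

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))  # should report the Radeon card
    # Run a tiny op on the GPU to confirm kernels actually execute.
    x = torch.rand(1024, 1024, device="cuda")
    print("Matmul OK:", (x @ x).sum().item())
```

If `torch.version.hip` prints None or the device check fails, you're on the wrong PyTorch build and need to reinstall from the ROCm wheel index.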

Also, he says he couldn’t make it work with ROCm 5.7, but for me, two months later, ROCm 5.7 with a 7900 XTX on Ubuntu 22.04 worked.

And coming from a Windows DirectML setup, the speed is heavenly.

  • liberal_alienOP · 1 year ago

    So far I’m pretty happy with it. I can do 1024-resolution gens at 2.7 it/s. If I try to put more of them in a batch I might run out of memory, but compared to Windows and DirectML this is quite a bit faster and has better memory management.

    I also tried animatediff for the first time on this setup, but only managed to render a 256-resolution gif. Even 512 resolution caused a crash.

    I also managed to get ComfyUI set up to serve as a Stable Diffusion backend for Krita, but I only just got it working and don’t have the first clue how to use it properly yet. I used this plugin: https://github.com/Acly/krita-ai-diffusion.