Linux, Intel Arc, and Stable Diffusion
Note: This page is outdated (8/30/2023)
Getting started
Getting Stable Diffusion running on an Intel Arc is (currently) a bit difficult and quirky, but it’s come a long way very quickly. (Unless you think this should all have been fully baked before releasing the product, but I feel that’s a bit unreasonable.) Most of the Linux tooling for the Intel Arc has been oriented around Ubuntu, but it is possible to get it running on Fedora 38. This page is more of a cheat sheet than a refined walkthrough at this time.
Packages
Base Fedora:
dnf install intel-compute-runtime oneapi-level-zero oneVPL onednn igt-gpu-tools openmpi-devel
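To confirm the kernel actually exposed the card before going any further, check that a render node shows up under /dev/dri/ (node names vary by system; renderD128 is typical for a single GPU):
ls -l /dev/dri/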
Add the Intel Yum Repo:
Note: As of this moment with Fedora 38, there has been a change in how rpm performs crypto checks and how Intel is signing their packages. I need to flip back through my notes and link the Bugzilla, but the oneAPI packages will fail to install even if you tell DNF to skip GPG checks.
[oneAPI]
name=Intel® oneAPI repository
baseurl=https://yum.repos.intel.com/oneapi
enabled=1
gpgcheck=0
repo_gpgcheck=0
#gpgkey=https://yum.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
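Save that block as a repo file under /etc/yum.repos.d/ so DNF picks it up; the filename below is my own choice, not anything Intel mandates:
sudo tee /etc/yum.repos.d/oneAPI.repo    # paste the [oneAPI] block above, then press Ctrl-D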
dnf install --setopt tsflags=nocrypto intel-oneapi-runtime-mkl
dnf install --setopt tsflags=nocrypto intel-basekit
You will find the Intel oneAPI content under /opt/. You should be able to see the status of your video card with intel_gpu_top.
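If later steps complain about missing oneAPI libraries, source the environment script the basekit installs (path assumes the default install location); intel_gpu_top itself may need root:
source /opt/intel/oneapi/setvars.sh
sudo intel_gpu_top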
Install Conda:
dnf install conda
We’ll be using the Intel Extension for PyTorch (IPEX). At this time it is only available for a handful of Python versions, is maintained by Intel, and has not been sent upstream to the PyTorch project. Python 3.10 is the latest version it supports.
Create your Python 3.10 virtual environment:
conda create -n py310 python=3.10
You may need to add your user to the ‘render’ group:
usermod -aG render <your user>
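The group change only takes effect on a new login session. A quick check (my own, not from any Intel docs):
id -nG | grep -qw render && echo "in render group" || echo "log out and back in first"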
Activate your conda virtual environment:
conda activate py310
Install the Intel Extension for PyTorch, along with matching torch and torchvision wheels, using pip:
pip install https://intel-extension-for-pytorch.s3.amazonaws.com/ipex_stable/xpu/torch-1.13.0a0%2Bgit6c9b55e-cp310-cp310-linux_x86_64.whl
pip install https://intel-extension-for-pytorch.s3.amazonaws.com/ipex_stable/xpu/torchvision-0.14.1a0%2B5e8e2f1-cp310-cp310-linux_x86_64.whl
pip install https://intel-extension-for-pytorch.s3.amazonaws.com/ipex_stable/xpu/intel_extension_for_pytorch-1.13.120%2Bxpu-cp310-cp310-linux_x86_64.whl
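A quick sanity check (a one-liner of my own, not from the IPEX docs) that the XPU device is visible to PyTorch:
python -c "import torch, intel_extension_for_pytorch as ipex; print(torch.__version__, ipex.__version__, torch.xpu.is_available())"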
Get the oneAPI hotfix: https://registrationcenter-download.intel.com/akdlm/IRC_NAS/89283df8-c667-47b0-b7e1-c4573e37bd3e/2023.1-linux-hotfix.zip
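Fetching and unpacking it looks roughly like this (the archive ships its own install instructions, which I won’t reproduce here):
wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/89283df8-c667-47b0-b7e1-c4573e37bd3e/2023.1-linux-hotfix.zip
unzip 2023.1-linux-hotfix.zip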
Currently, Automatic1111 does not have support for Intel Arc’s XPU tensor device. However, this fork does: https://github.com/vladmandic/automatic
Git clone it and then run the following so our Intel torch and torchvision packages are not overwritten:
sed -i '/^torch\b/d' requirements.txt
sed -i '/^torchvision\b/d' requirements.txt
You should be able to launch the Automatic1111 fork with:
python launch.py --use-ipex --listen
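For future sessions, the pieces that need to be in place before launching look roughly like this (the env name and paths come from the steps above; setvars.sh assumes the default oneAPI install location):
conda activate py310
source /opt/intel/oneapi/setvars.sh
cd automatic    # or wherever you cloned the fork
python launch.py --use-ipex --listen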