rocmate

gfx1101 — RX 7800/7700 XT


Chip: gfx1101  ·  8 tools with data

Axolotl ✅ tested ROCm 6.2

RX 7800 XT / RX 7700 XT — QLoRA fine-tuning of 7B models fits in 16 GB. Same setup as gfx1100.

ENV vars

  • export HSA_OVERRIDE_GFX_VERSION=11.0.0
  • export PYTORCH_HIP_ALLOC_CONF=expandable_segments:True

Install hints

  • Same install as gfx1100. Use gradient_checkpointing: true in your YAML for memory savings.
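Putting the hints above together, a minimal session sketch. The launch line is an assumption based on Axolotl's usual `accelerate launch` CLI, and the YAML filename is hypothetical — substitute your own config:

```shell
# Environment for gfx1101 (same setup this page gives for gfx1100):
export HSA_OVERRIDE_GFX_VERSION=11.0.0
export PYTORCH_HIP_ALLOC_CONF=expandable_segments:True

# Example QLoRA launch (config path is hypothetical); set
# gradient_checkpointing: true in the YAML for memory savings:
# accelerate launch -m axolotl.cli.train my-qlora-7b.yml
```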

ComfyUI ✅ tested ROCm 6.2

RX 7800 XT / RX 7700 XT — same setup as gfx1100. 16 GB VRAM limits Flux.1; SDXL and SD 1.5 run well.

ENV vars

  • export HSA_OVERRIDE_GFX_VERSION=11.0.0
  • export PYTORCH_HIP_ALLOC_CONF=expandable_segments:True

Install hints

  • Same install as gfx1100.
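As a sketch, a full session looks like this; `python main.py` is ComfyUI's standard entry point, and the checkout path is only an example:

```shell
export HSA_OVERRIDE_GFX_VERSION=11.0.0
export PYTORCH_HIP_ALLOC_CONF=expandable_segments:True

# Launch from your ComfyUI checkout (path is an example):
# cd ~/ComfyUI && python main.py
```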

ExLlamaV2 ✅ tested ROCm 6.2

RX 7800 XT / RX 7700 XT — works well. 16 GB VRAM suits 13B Q4 or 7B Q8 models.

ENV vars

  • export HSA_OVERRIDE_GFX_VERSION=11.0.0

Install hints

  • Same install as gfx1100.

faster-whisper 🟡 partial ROCm 6.2

RX 7800 XT / RX 7700 XT — same PyTorch ROCm approach as gfx1100, with similar transcription speed.

ENV vars

  • export HSA_OVERRIDE_GFX_VERSION=11.0.0

Install hints

  • pip install torch torchaudio --index-url https://download.pytorch.org/whl/rocm6.2
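After installing the ROCm wheel, it is worth confirming that PyTorch actually sees the GPU before debugging faster-whisper itself; ROCm builds of PyTorch expose the HIP device through the usual `torch.cuda` API. This check assumes the wheel above is installed and a GPU is visible:

```shell
export HSA_OVERRIDE_GFX_VERSION=11.0.0
python -c "import torch; print(torch.cuda.is_available())"
```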

llama.cpp ✅ tested ROCm 6.2

RX 7800 XT / RX 7700 XT — same HIP build as gfx1100. 16 GB VRAM; Q4_K_M models up to 32B. Vulkan build is an alternative if ROCm gives trouble.

Install hints

  • Same HIP build as gfx1100.
  • Vulkan alternative: cmake -B build -DGGML_VULKAN=ON && cmake --build build --config Release -j$(nproc)
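For reference, the HIP build referred to above typically looks like the following. The `GGML_HIP` and `AMDGPU_TARGETS` CMake options are assumptions based on llama.cpp's current build documentation (older trees used `GGML_HIPBLAS`), so check the flags for the version you're building:

```shell
cmake -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1101
cmake --build build --config Release -j"$(nproc)"
```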

Ollama ✅ tested ROCm 6.3

RX 7800 XT / RX 7700 XT — works out of the box on Linux with ROCm 6.x, same as gfx1100.

Install hints

  • curl -fsSL https://ollama.com/install.sh | sh

Stable Diffusion WebUI ✅ tested ROCm 6.2

RX 7800 XT / RX 7700 XT — same setup as gfx1100. 16 GB VRAM; SDXL runs well, Flux.1 is tight.

ENV vars

  • export HSA_OVERRIDE_GFX_VERSION=11.0.0
  • export PYTORCH_HIP_ALLOC_CONF=expandable_segments:True

Install hints

  • Same as gfx1100.
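A launch sketch combining the variables above with AUTOMATIC1111's stock `webui.sh` launcher; `--medvram` is the WebUI's standard low-VRAM flag and is optional here, trading speed for headroom if SDXL runs tight:

```shell
export HSA_OVERRIDE_GFX_VERSION=11.0.0
export PYTORCH_HIP_ALLOC_CONF=expandable_segments:True

# From your stable-diffusion-webui checkout:
# ./webui.sh --medvram
```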

vLLM ✅ tested ROCm 6.2

RX 7800 XT / RX 7700 XT — works on Linux. 16 GB VRAM limits you to models of 13B or smaller.

ENV vars

  • export HSA_OVERRIDE_GFX_VERSION=11.0.0

Install hints

  • Same install as gfx1100. Set --gpu-memory-utilization 0.85 to leave headroom.
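A serving sketch with the headroom setting applied; the model name is only an example, while `vllm serve`, `--gpu-memory-utilization`, and `--max-model-len` follow vLLM's standard CLI:

```shell
export HSA_OVERRIDE_GFX_VERSION=11.0.0

# Model name is an example; pick any <=13B checkpoint that fits in 16 GB.
# Capping context length also helps the KV cache fit:
# vllm serve meta-llama/Llama-3.1-8B-Instruct \
#   --gpu-memory-utilization 0.85 --max-model-len 8192
```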