Step 6: Enable torch.compile (Optional)
Version 2.5 adds support for torch.compile.
According to the official claims, full graph compilation yields a 20-40% DiT speedup and a 15-25% VAE speedup.
However, torch.compile with the inductor backend requires Triton.
If Triton is not installed, you will see a warning during Phase 1 (VAE encoding):
──────── Phase 1: VAE encoding ────────
[15:13:47.522] ❌ [ERROR] Cannot use torch.compile with 'inductor' backend: Triton is not installed.
Triton is required for the inductor backend which performs kernel fusion and optimization.
To fix this issue:
1. Install Triton: pip install triton
2. OR change backend to 'cudagraphs' (lightweight, no Triton needed)
3. OR disable torch.compile
For more info: https://github.com/triton-lang/triton
[15:13:47.523] ⚠️ [WARNING] torch.compile failed for VAE submodules: torch.compile with inductor backend requires Triton. Install with: pip install triton
[15:13:47.523] ⚠️ [WARNING] Falling back to uncompiled VAE
The same warning appears again during Phase 2 (DiT upscaling):
──────── Phase 2: DiT upscaling ────────
[15:13:49.856] ❌ [ERROR] Cannot use torch.compile with 'inductor' backend: Triton is not installed.
Triton is required for the inductor backend which performs kernel fusion and optimization.
To fix this issue:
1. Install Triton: pip install triton
2. OR change backend to 'cudagraphs' (lightweight, no Triton needed)
3. OR disable torch.compile
For more info: https://github.com/triton-lang/triton
[15:13:49.857] ⚠️ [WARNING] torch.compile failed for DiT: torch.compile with inductor backend requires Triton. Install with: pip install triton
[15:13:49.857] ⚠️ [WARNING] Falling back to uncompiled model
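The three fixes listed in the error message can be automated: probe for Triton at startup and pick the backend accordingly. Below is a minimal sketch of that idea; the helper name `pick_compile_backend` is illustrative and not part of the project's codebase.

```python
import importlib.util


def pick_compile_backend(preferred: str = "inductor") -> str:
    """Choose a torch.compile backend based on whether Triton is importable.

    Mirrors the log's advice: use 'inductor' only when Triton is installed,
    otherwise fall back to the lightweight 'cudagraphs' backend.
    """
    if preferred == "inductor" and importlib.util.find_spec("triton") is None:
        return "cudagraphs"  # no Triton needed for this backend
    return preferred


# Usage (assumes `model` is the DiT or VAE module):
#   model = torch.compile(model, backend=pick_compile_backend(), fullgraph=True)
```

If even `cudagraphs` causes trouble on your setup, the third option from the log still applies: skip the `torch.compile` call and run the uncompiled model.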