With 31c1333 (today's latest main) or 91b14bf (today's latest release/3.1.x):

Intel's triton built from the main branch (versioned 3.0.0) or from the release/3.1.x branch (versioned 3.1.1) does not expose triton.ops; see the error from the pytorch benchmark below. Intel's triton actually corresponds to a later state of upstream triton (probably 3.2.x?). triton.ops was dropped from triton main in this PR between 3.1 and 3.2:
$ python3 install.py
$ python3 run_benchmark.py triton --op int4_gemm
Failed to import user benchmark module triton, error: No module named 'triton.ops'
Traceback (most recent call last):
File "/home/dvrogozh/git/pytorch/benchmark/run_benchmark.py", line 41, in run
benchmark.run(bm_args)
File "/home/dvrogozh/git/pytorch/benchmark/userbenchmark/triton/run.py", line 192, in run
_run(args, extra_args)
File "/home/dvrogozh/git/pytorch/benchmark/userbenchmark/triton/run.py", line 143, in _run
Opbench = load_opbench_by_name(args.op)
File "/home/dvrogozh/git/pytorch/benchmark/torchbenchmark/operators/__init__.py", line 67, in load_opbench_by_name
module = importlib.import_module(f".{op_pkg}", package=__name__)
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/dvrogozh/git/pytorch/benchmark/torchbenchmark/operators/int4_gemm/__init__.py", line 1, in <module>
from .int4_gemm import Operator
File "/home/dvrogozh/git/pytorch/benchmark/torchbenchmark/operators/int4_gemm/int4_gemm.py", line 16, in <module>
import triton.ops
ModuleNotFoundError: No module named 'triton.ops'
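Projects that still reference triton.ops can shield themselves from this discrepancy with a guarded import. This is only a hedged sketch of such a fallback (the HAS_TRITON_OPS flag is a name chosen for illustration, not something the benchmark defines):

```python
# Sketch: tolerate triton builds (tracking post-3.1 main) in which the
# triton.ops package has been removed.
try:
    import triton.ops  # present on the upstream 3.1.x release branch
    HAS_TRITON_OPS = True
except ModuleNotFoundError:
    # Either triton itself or only its ops submodule is missing.
    HAS_TRITON_OPS = False
```

Callers would then skip or stub the ops-dependent paths when the flag is False instead of failing at import time.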
Another case which breaks is importing bitsandbytes:
$ pip3 install bitsandbytes
$ pip3 list | grep bitsandbytes
bitsandbytes 0.43.3
$ python3 -c 'import bitsandbytes'
The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/dvrogozh/pytorch.xpu/lib/python3.10/site-packages/bitsandbytes/__init__.py", line 15, in <module>
from .nn import modules
File "/home/dvrogozh/pytorch.xpu/lib/python3.10/site-packages/bitsandbytes/nn/__init__.py", line 17, in <module>
from .triton_based_modules import (
File "/home/dvrogozh/pytorch.xpu/lib/python3.10/site-packages/bitsandbytes/nn/triton_based_modules.py", line 7, in <module>
from bitsandbytes.triton.int8_matmul_mixed_dequantize import (
File "/home/dvrogozh/pytorch.xpu/lib/python3.10/site-packages/bitsandbytes/triton/int8_matmul_mixed_dequantize.py", line 12, in <module>
from triton.ops.matmul_perf_model import early_config_prune, estimate_matmul_time
ModuleNotFoundError: No module named 'triton.ops'
Thus, any project that imports bitsandbytes may break if Intel's triton is installed. I noticed this while using the Huggingface peft and text-generation-inference projects.
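A downstream project could probe for the missing submodule up front instead of failing deep inside an import chain. A minimal sketch, assuming only the standard library (the helper name is hypothetical):

```python
import importlib.util


def triton_ops_available() -> bool:
    """Return True only if both triton and its ops submodule resolve."""
    try:
        # find_spec imports the parent package, so guard against a
        # completely absent triton as well as a missing ops submodule.
        return importlib.util.find_spec("triton.ops") is not None
    except ModuleNotFoundError:
        return False
```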
You can see that the Triton 3.1.x release branch does have the ops folder in there, and Intel's 3.1.x does not:
Can Intel triton builds be versioned correctly? I believe the build for pytorch 2.5 is actually 3.2.x, not 3.1.x.
@vlad-penkin
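The versioning matters because downstream version gates draw the wrong conclusion: a build that reports 3.1.1 while actually tracking post-3.1 main passes checks written for the 3.1 API. A sketch under that assumption (parse_version is an illustrative helper, not a real triton API):

```python
def parse_version(v: str) -> tuple:
    """Parse 'X.Y.Z' into a comparable tuple of ints (illustrative only)."""
    return tuple(int(p) for p in v.split(".")[:3])


# A gate like this accepts the Intel build because it *reports* 3.1.1,
# even though the code tracks post-3.1 main, where triton.ops is gone:
reported = "3.1.1"
if parse_version(reported) < (3, 2, 0):
    pass  # caller assumes triton.ops is importable here, and breaks
```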