
What's the version of Intel triton? or "No module named 'triton.ops'" #2279

Open
dvrogozh opened this issue on Sep 18, 2024 · 1 comment

dvrogozh (Contributor) commented:

With:

Intel's triton built from the main branch (versioned 3.0.0) or from the release/3.1.x branch (versioned 3.1.1) does not expose triton.ops (see the error from the pytorch benchmark below). In reality, Intel's triton corresponds to some later state of upstream triton (probably 3.2.x?): triton.ops was dropped from triton main in this PR between 3.1 and 3.2:

You can see that the upstream Triton 3.1.x release branch does have the ops folder:

while Intel's 3.1.x does not:

Can Intel triton builds be versioned correctly? I believe the build for pytorch 2.5 is actually 3.2.x, not 3.1.x.

@vlad-penkin

$ python3 install.py
$ python3 run_benchmark.py triton --op int4_gemm
Failed to import user benchmark module triton, error: No module named 'triton.ops'
Traceback (most recent call last):
  File "/home/dvrogozh/git/pytorch/benchmark/run_benchmark.py", line 41, in run
    benchmark.run(bm_args)
  File "/home/dvrogozh/git/pytorch/benchmark/userbenchmark/triton/run.py", line 192, in run
    _run(args, extra_args)
  File "/home/dvrogozh/git/pytorch/benchmark/userbenchmark/triton/run.py", line 143, in _run
    Opbench = load_opbench_by_name(args.op)
  File "/home/dvrogozh/git/pytorch/benchmark/torchbenchmark/operators/__init__.py", line 67, in load_opbench_by_name
    module = importlib.import_module(f".{op_pkg}", package=__name__)
  File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/dvrogozh/git/pytorch/benchmark/torchbenchmark/operators/int4_gemm/__init__.py", line 1, in <module>
    from .int4_gemm import Operator
  File "/home/dvrogozh/git/pytorch/benchmark/torchbenchmark/operators/int4_gemm/int4_gemm.py", line 16, in <module>
    import triton.ops
ModuleNotFoundError: No module named 'triton.ops'
vlad-penkin added the question (Further information is requested) and tests: ecosystem labels on Sep 19, 2024
vlad-penkin self-assigned this on Sep 19, 2024
dvrogozh (Contributor, Author) commented:

Another case that breaks is importing bitsandbytes:

$ pip3 install bitsandbytes
$ pip3 list | grep bitsandbytes
bitsandbytes                             0.43.3
$ python3 -c 'import bitsandbytes'
The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/dvrogozh/pytorch.xpu/lib/python3.10/site-packages/bitsandbytes/__init__.py", line 15, in <module>
    from .nn import modules
  File "/home/dvrogozh/pytorch.xpu/lib/python3.10/site-packages/bitsandbytes/nn/__init__.py", line 17, in <module>
    from .triton_based_modules import (
  File "/home/dvrogozh/pytorch.xpu/lib/python3.10/site-packages/bitsandbytes/nn/triton_based_modules.py", line 7, in <module>
    from bitsandbytes.triton.int8_matmul_mixed_dequantize import (
  File "/home/dvrogozh/pytorch.xpu/lib/python3.10/site-packages/bitsandbytes/triton/int8_matmul_mixed_dequantize.py", line 12, in <module>
    from triton.ops.matmul_perf_model import early_config_prune, estimate_matmul_time
ModuleNotFoundError: No module named 'triton.ops'

Thus, any project that imports bitsandbytes may break when the Intel triton build is installed. I noticed this with the Huggingface peft and text-generation-inference projects.
