[Bug] Import error when running on environment without flash_attn
I'm running this model with the code below in an environment with a T4 GPU, which doesn't support building flash-attention:
import torch
from transformers import AutoModel, AutoTokenizer
from huggingface_hub import snapshot_download
path = 'OpenGVLab/InternVL2-1B'
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
And it raises an import error:
ImportError: This modeling file requires the following packages that were not found in your environment: flash_attn. Run `pip install flash_attn`
According to modeling_intern_vit.py, flash_attn should be an alternative attention implementation, not a hard requirement. Is this intended behavior, or a bug?
Hello, thank you for your feedback.
Could you please let me know which version of transformers you are using? With transformers version 4.37.2, I was able to run the model in an environment without flash_attn. I suspect that some changes in a newer version of transformers caused this bug.
I used transformers==4.43.2 before. I also tested with transformers==4.37.2 on Colab, and the issue still exists:
!pip install transformers==4.37.2 timm
import torch
from transformers import AutoModel, AutoTokenizer
path = 'OpenGVLab/InternVL2-1B'
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
Outputs:
Requirement already satisfied: transformers==4.37.2 in /usr/local/lib/python3.10/dist-packages (4.37.2)
Requirement already satisfied: timm in /usr/local/lib/python3.10/dist-packages (1.0.8)
...
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-48060db57bdf> in <cell line: 7>()
5
6 path = 'OpenGVLab/InternVL2-1B'
----> 7 model = AutoModel.from_pretrained(
8 path,
9 torch_dtype=torch.bfloat16,
4 frames
/usr/local/lib/python3.10/dist-packages/transformers/dynamic_module_utils.py in check_imports(filename)
178
179 if len(missing_packages) > 0:
--> 180 raise ImportError(
181 "This modeling file requires the following packages that were not found in your environment: "
182 f"{', '.join(missing_packages)}. Run `pip install {' '.join(missing_packages)}`"
ImportError: This modeling file requires the following packages that were not found in your environment: flash_attn. Run `pip install flash_attn`
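Until the model repo is patched, a possible user-side workaround (a sketch, assuming transformers 4.37–4.43, where `check_imports` looks up `get_imports` on `transformers.dynamic_module_utils`) is to filter flash_attn out of the detected imports before loading:

```python
def drop_flash_attn(imports):
    # Remove flash_attn from the detected-imports list so check_imports
    # does not treat it as a hard requirement.
    return [name for name in imports if name != "flash_attn"]

# Usage sketch (requires transformers installed; patches the module-level
# get_imports that check_imports calls during from_pretrained):
# import transformers.dynamic_module_utils as dmu
# from unittest.mock import patch
# _orig = dmu.get_imports
# with patch.object(dmu, "get_imports", lambda f: drop_flash_attn(_orig(f))):
#     model = AutoModel.from_pretrained(path, trust_remote_code=True, ...)

print(drop_flash_attn(["torch", "flash_attn", "timm"]))  # -> ['torch', 'timm']
```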
Thanks for your great work!
Are there any pre-trained checkpoint links in a format other than safetensors?
This has to do with how transformers.dynamic_module_utils detects and tests the availability of imports: its static scanner doesn't support nested try/except blocks.
I proposed a fix in https://huggingface.co/OpenGVLab/InternVL2-1B/discussions/4.
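For context, here is a minimal sketch of why nested try/except defeats the scan. The regexes below are a simplified reconstruction of what `transformers.dynamic_module_utils.get_imports` does around v4.37–4.43, not the exact library code:

```python
import re

def find_required_imports(source: str) -> list[str]:
    # The scanner first strips try/except blocks with one non-greedy regex,
    # then collects the import statements that remain. With nested blocks,
    # the non-greedy match stops at the *inner* "except ...:", so imports
    # placed after the inner block survive the strip and are reported as
    # hard requirements.
    stripped = re.sub(r"\s*try\s*:\s*.*?except\s*.*?:", "", source,
                      flags=re.MULTILINE | re.DOTALL)
    mods = re.findall(r"^\s*import\s+(\S+)", stripped, flags=re.MULTILINE)
    mods += re.findall(r"^\s*from\s+(\S+)\s+import", stripped, flags=re.MULTILINE)
    return sorted({m.split(".")[0] for m in mods})

# A single try/except guard is stripped cleanly: no requirement detected.
flat = """
try:
    import flash_attn
except ImportError:
    flash_attn = None
"""

# A nested guard (as in modeling_intern_vit.py) leaves imports behind.
nested = """
try:
    try:
        from flash_attn.flash_attn_interface import flash_attn_unpadded_qkvpacked_func
    except ImportError:
        from flash_attn.flash_attn_interface import flash_attn_varlen_qkvpacked_func
    has_flash_attn = True
except ImportError:
    has_flash_attn = False
"""

print(find_required_imports(flat))    # -> []
print(find_required_imports(nested))  # -> ['flash_attn']
```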
Thank you for your feedback. This bug was caused by transformers.dynamic_module_utils not supporting nested try/except blocks. It has been fixed.