runtime error
Exit code: 1. Reason: flash-attn

Installing collected packages: einops, flash-attn
Successfully installed einops-0.8.0 flash-attn-2.6.3

[notice] A new release of pip available: 22.3.1 -> 24.2
[notice] To update, run: /usr/local/bin/python -m pip install --upgrade pip

Downloading shards: 100%|██████████| 4/4 [00:45<00:00, 11.34s/it]

The model was loaded with use_flash_attention_2=True, which is deprecated and may be removed in a future release. Please use `attn_implementation="flash_attention_2"` instead.

Traceback (most recent call last):
  File "/home/user/app/app.py", line 3, in <module>
    from chatbot import model_inference, EXAMPLES, chatbot
  File "/home/user/app/chatbot.py", line 33, in <module>
    model = LlavaForConditionalGeneration.from_pretrained(model_id,torch_dtype=torch.float16, use_flash_attention_2=True)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3826, in from_pretrained
    config = cls._autoset_attn_implementation(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1556, in _autoset_attn_implementation
    cls._check_and_enable_flash_attn_2(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1667, in _check_and_enable_flash_attn_2
    raise ImportError(f"{preface} Flash Attention 2 is not available. {install_message}")
ImportError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: Flash Attention 2 is not available. Please refer to the documentation of https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2 to install Flash Attention 2.
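The crash happens in chatbot.py: the model is loaded with the deprecated use_flash_attention_2=True flag, and Transformers refuses to enable Flash Attention 2 because the installed flash-attn package is not actually usable at load time (for example when no supported GPU is visible when the app starts, or the wheel was built against an incompatible CUDA/torch combination). Below is a minimal sketch of a workaround, not the Space's actual code: the model_id value is a placeholder for whatever chatbot.py really loads, and the fallback path simply switches to PyTorch's built-in scaled-dot-product attention instead of Flash Attention 2.

import torch
from transformers import LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # placeholder; substitute the model_id used in chatbot.py

def load_model(model_id):
    # Preferred path: the non-deprecated argument. This still requires a
    # working flash-attn install on a supported GPU; otherwise from_pretrained
    # raises the same ImportError shown in the traceback above.
    try:
        return LlavaForConditionalGeneration.from_pretrained(
            model_id,
            torch_dtype=torch.float16,
            attn_implementation="flash_attention_2",
        )
    except ImportError:
        # Fallback: PyTorch's scaled-dot-product attention ("sdpa") needs no
        # extra package, so the Space can still boot without flash-attn.
        return LlavaForConditionalGeneration.from_pretrained(
            model_id,
            torch_dtype=torch.float16,
            attn_implementation="sdpa",
        )

model = load_model(model_id)

If Flash Attention 2 is genuinely wanted, the other route is to make the flash-attn install usable on the Space's hardware as described in the linked Transformers documentation, and only then keep attn_implementation="flash_attention_2".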