no space left on device error for falcon-7b-instruct

#72
by Ferraria - opened

Hi there,

I tried to use both chunks of code from the falcon-7b-instruct model card, but both return a "no space left on device" error. I am using a virtual machine with 128 GB of RAM, so I don't understand why this would be happening.

The code chunks I used:

# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="tiiuae/falcon-7b-instruct", trust_remote_code=True)

and

# Load model directly
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b-instruct", trust_remote_code=True)
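For context: `from_pretrained` downloads the sharded checkpoint (roughly 14 GB for Falcon-7B-Instruct) to the Hugging Face cache on disk, so the drive holding that cache needs enough free space regardless of how much RAM the machine has. A minimal sketch for checking free disk space first, and redirecting the cache to a larger drive via the `cache_dir` argument (the `D:/hf_cache` path is illustrative, not from the original post):

```python
import shutil

# Check free space on the drive that will hold the download cache.
# shutil.disk_usage reports the disk, not RAM.
total, used, free = shutil.disk_usage(".")
print(f"Free disk space: {free / 1e9:.1f} GB")

# If the default cache drive is too small, point the cache elsewhere:
# model = AutoModelForCausalLM.from_pretrained(
#     "tiiuae/falcon-7b-instruct",
#     trust_remote_code=True,
#     cache_dir="D:/hf_cache",  # illustrative path on a larger drive
# )
```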

Can someone please point me in the right direction for resolving this error?

Thank you!

This is the full error message:

---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
Cell In[2], line 3
      1 # Load model directly
      2 from transformers import AutoModelForCausalLM
----> 3 model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b-instruct", trust_remote_code=True)

File c:\Users\gg\Documents\generative_ai\genai\lib\site-packages\transformers\models\auto\auto_factory.py:488, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    486     else:
    487         cls.register(config.__class__, model_class, exist_ok=True)
--> 488     return model_class.from_pretrained(
    489         pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
    490     )
    491 elif type(config) in cls._model_mapping.keys():
    492     model_class = _get_model_class(config, cls._model_mapping)

File c:\Users\gg\Documents\generative_ai\genai\lib\site-packages\transformers\modeling_utils.py:2610, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, *model_args, **kwargs)
   2607 # We'll need to download and cache each checkpoint shard if the checkpoint is sharded.
   2608 if is_sharded:
   2609     # rsolved_archive_file becomes a list of files that point to the different checkpoint shards in this case.
-> 2610     resolved_archive_file, sharded_metadata = get_checkpoint_shard_files(
   2611         pretrained_model_name_or_path,
   2612         resolved_archive_file,
   2613         cache_dir=cache_dir,
   2614         force_download=force_download,
   2615         proxies=proxies,
   2616         resume_download=resume_download,
   2617         local_files_only=local_files_only,
   2618         use_auth_token=token,
   2619         user_agent=user_agent,
   2620         revision=revision,
   2621         subfolder=subfolder,
   2622         _commit_hash=commit_hash,
   2623     )
   2625 # load pt weights early so that we know which dtype to init the model under
   2626 if from_pt:

File c:\Users\gg\Documents\generative_ai\genai\lib\site-packages\transformers\utils\hub.py:958, in get_checkpoint_shard_files(pretrained_model_name_or_path, index_filename, cache_dir, force_download, proxies, resume_download, local_files_only, use_auth_token, user_agent, revision, subfolder, _commit_hash)
    955 for shard_filename in tqdm(shard_filenames, desc="Downloading shards", disable=not show_progress_bar):
    956     try:
    957         # Load from URL
--> 958         cached_filename = cached_file(
    959             pretrained_model_name_or_path,
    960             shard_filename,
    961             cache_dir=cache_dir,
...
    481 @_functools.wraps(func)
    482 def func_wrapper(*args, **kwargs):
--> 483     return func(*args, **kwargs)

OSError: [Errno 28] No space left on device

I just saw that the actual disk space on my VM was less than I assumed; it had nothing to do with RAM.
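For anyone hitting the same error: the weights are cached on disk, not in RAM, so the fix is to free up (or relocate) the cache drive. One stdlib-only sketch, setting the `HF_HOME` environment variable before importing transformers so the whole cache moves to a bigger drive (the path is illustrative, not from the original post):

```python
import os

# Point the Hugging Face cache at a drive with more free space.
# This must run BEFORE importing transformers/huggingface_hub,
# since the cache location is read at import time.
os.environ["HF_HOME"] = "D:/hf_cache"  # illustrative path
```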

Ferraria changed discussion status to closed
