Deploying this model to SageMaker with the sample code here resulted in a CUDA OOM error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 526.00 MiB. GPU 0 has a total capacty of 22.20 GiB of which 135.12 MiB is free. Process 140645 has 22.06 GiB memory in use. Of the allocated memory 19.95 GiB is allocated by PyTorch, and 1010.96 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
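For context, a back-of-the-envelope estimate of the weight footprint (assuming the 33B parameters are loaded in fp16, the usual default) shows why the ~22 GiB GPU reported above runs out of memory before serving a single request:

```python
# Rough memory estimate for deepseek-ai/deepseek-coder-33b-instruct weights.
# Assumptions: 33e9 parameters at 2 bytes each (fp16); activations and the
# KV cache would need additional memory on top of this.
params = 33e9
bytes_per_param = 2  # fp16
weights_gib = params * bytes_per_param / 2**30
print(f"weights alone: ~{weights_gib:.0f} GiB")  # ~61 GiB vs. 22.20 GiB reported
```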
Code snippet:
import json
import sagemaker
import boto3
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri
try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client('iam')
    role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'deepseek-ai/deepseek-coder-33b-instruct',
'SM_NUM_GPUS': json.dumps(1)
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
image_uri=get_huggingface_llm_image_uri("huggingface",version="1.4.2"),
env=hub,
role=role,
)
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1,
instance_type="ml.g5.2xlarge",
container_startup_health_check_timeout=300,
)
# send request
predictor.predict({
"inputs": "Hey my name is Julien! How are you?",
})