What inputs does the model expect?
#4 · by AlmightYariv · opened
I'm trying to toy with the model in Amazon SageMaker.
from sagemaker.huggingface import HuggingFaceModel
import sagemaker

role = sagemaker.get_execution_role()

# Hub Model configuration. https://huggingface.co/models
hub = {
    'HF_MODEL_ID': 'openaccess-ai-collective/manticore-13b',
    'HF_TASK': 'text-generation'
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    transformers_version='4.17.0',
    pytorch_version='1.10.2',
    py_version='py38',
    env=hub,
    role=role,
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1,   # number of instances
    instance_type='ml.m5.xlarge'  # ec2 instance type
)

predictor.predict({
    'inputs': "Can you please let us know more details about your "
})
The response I'm getting is a ModelError (BadRequest), probably because the inputs aren't being fed in the format the model expects.
Any ideas?
My guess is that transformers_version='4.17.0' is the problem. Llama support wasn't added to transformers until 4.29.0, iirc (don't quote me on that, but it was definitely only added recently).
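If that's the cause, a possible fix is to request a newer inference container and a GPU instance. A minimal sketch of the changed deploy call, with the caveats that the exact version combination (transformers 4.28.1 / PyTorch 2.0.0 / py310) and the ml.g5.4xlarge instance choice are assumptions you'd need to check against the containers and quotas available in your account:

```python
from sagemaker.huggingface import HuggingFaceModel
import sagemaker

role = sagemaker.get_execution_role()

hub = {
    'HF_MODEL_ID': 'openaccess-ai-collective/manticore-13b',
    'HF_TASK': 'text-generation'
}

# Request a newer container image; the version combo below is an
# assumption -- verify it exists in the supported image list first.
huggingface_model = HuggingFaceModel(
    transformers_version='4.28.1',
    pytorch_version='2.0.0',
    py_version='py310',
    env=hub,
    role=role,
)

# A 13B model is unlikely to fit on a CPU ml.m5.xlarge; a GPU
# instance is almost certainly needed (instance type is an assumption).
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type='ml.g5.4xlarge',
)

# The text-generation task accepts an optional 'parameters' dict
# alongside 'inputs' for generation settings.
predictor.predict({
    'inputs': "Can you please let us know more details about your ",
    'parameters': {'max_new_tokens': 128, 'do_sample': True},
})
```

Separately, note the BadRequest could also just be the old container failing to load the model at all; the CloudWatch logs for the endpoint should show whether the model actually loaded.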