loading as llama model

#4
by KnutJaegersberg - opened

It seems to load as DeciLMForCausalLM. Is it possible to load it as a Llama model for easy compatibility?

NVIDIA org

Sorry, our custom architecture is not supported by modeling_llama.py, so it cannot be loaded as LlamaForCausalLM.
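
For reference, a minimal sketch of loading a repo that ships a custom architecture such as DeciLMForCausalLM via transformers' Auto classes; the model id below is a placeholder, not the actual repository name:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/custom-decilm-model"  # placeholder repo id; substitute the real one

# trust_remote_code=True tells transformers to use the modeling code shipped
# with the repo (DeciLMForCausalLM) instead of a built-in class like LlamaForCausalLM.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype="auto",
)
```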

KnutJaegersberg changed discussion status to closed