Why does this model use FP32?
#6
by purejomo - opened
Llama 2 adopted bf16 precision by default, but this model is FP32, which does not fit in small amounts of VRAM.
Was this model basically trained in FP32, or is a conversion possible?
Thank you
The model was actually trained in bf16; it is fine to load it with torch_dtype="bfloat16".
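A minimal loading sketch, assuming the standard transformers API; the repo id below is a placeholder and should be replaced with this model's actual id on the Hub. Passing torch.bfloat16 (or the string "bfloat16" in recent transformers versions) casts the FP32 checkpoint to bf16 at load time, roughly halving its memory footprint.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/model-name"  # placeholder -- substitute the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the FP32 checkpoint directly in bf16 to reduce VRAM usage.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
)
```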
Raincleared changed discussion status to closed