trainer.precision=bf16
#3 opened by camenduru

README.md CHANGED
@@ -117,7 +117,7 @@ Alternatively, you can use NeMo Megatron training docker container with all depe
 git clone https://github.com/NVIDIA/NeMo.git
 cd NeMo/examples/nlp/language_modeling
 git checkout v1.17.0
-python megatron_gpt_eval.py gpt_model_file=nemo_2b_bf16_tp1.nemo server=True tensor_model_parallel_size=1 trainer.devices=1
+python megatron_gpt_eval.py trainer.precision=bf16 gpt_model_file=nemo_2b_bf16_tp1.nemo server=True tensor_model_parallel_size=1 trainer.devices=1
 ```
 
 ### Step 3: Send prompts to your model!
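For context, the command changed in this hunk launches the NeMo text generation server that the following "Step 3: Send prompts to your model!" section relies on. Below is a minimal sketch of such a request, assuming the server's usual defaults (port 5555, a PUT `/generate` endpoint, and the standard NeMo generation parameters); the prompt text and sampling values are illustrative only.

```python
import json
import requests

# Assumed defaults for the server started by megatron_gpt_eval.py with
# server=True: it listens on localhost:5555 and accepts PUT /generate.
port = 5555
headers = {"Content-Type": "application/json"}

data = {
    "sentences": ["Deep learning is"],  # prompts to complete
    "tokens_to_generate": 64,           # number of new tokens to sample
    "temperature": 1.0,
    "add_BOS": True,
    "top_k": 0,
    "top_p": 0.9,
    "greedy": False,
    "all_probs": False,
    "repetition_penalty": 1.2,
    "min_tokens_to_generate": 2,
}

resp = requests.put(f"http://localhost:{port}/generate",
                    data=json.dumps(data), headers=headers)
print(resp.json()["sentences"])
```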