## Steps to run inference:

We demonstrate inference using the NVIDIA NeMo Framework, which allows easy model deployment based on [NVIDIA TRT-LLM](https://github.com/NVIDIA/TensorRT-LLM), a highly optimized inference solution focusing on high throughput and low latency.

Pre-requisite: you will need a machine with at least 4x 40GB or 2x 80GB NVIDIA GPUs, and 300GB of free disk space.

1. Please sign up **for free** to get access to the required container on [NVIDIA NeMo Framework](https://developer.nvidia.com/nemo-framework). If you don’t have an NVIDIA NGC account, you will be prompted to create one before proceeding.
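Before pulling the container, you can sanity-check the hardware prerequisite above. The sketch below is illustrative, not part of the official setup: the 300GB disk threshold comes from the prerequisite line, while the 38GB/78GB per-GPU thresholds are our assumption (cards typically report slightly less than their nominal 40GB/80GB), and the `nvidia-smi` query flags used are the standard ones.

```python
import shutil
import subprocess

REQUIRED_DISK_GB = 300  # from the prerequisite above


def free_disk_gb(path="/"):
    """Free disk space at `path`, in GB."""
    return shutil.disk_usage(path).free / 1024**3


def gpu_memory_gb():
    """Per-GPU total memory in GB via nvidia-smi; [] if no driver/GPU is present."""
    try:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.total",
             "--format=csv,noheader,nounits"],
            text=True,
        )
    except (OSError, subprocess.CalledProcessError):
        return []
    return [int(line) / 1024 for line in out.split() if line.strip()]


def meets_prerequisites():
    """True if the machine has 4x ~40GB or 2x ~80GB GPUs and 300GB free disk."""
    gpus = gpu_memory_gb()
    # Assumed thresholds: accept cards reporting >= 38GB / >= 78GB.
    enough_gpus = (sum(m >= 38 for m in gpus) >= 4
                   or sum(m >= 78 for m in gpus) >= 2)
    return enough_gpus and free_disk_gb() >= REQUIRED_DISK_GB
```

On a machine without the NVIDIA driver, `gpu_memory_gb()` simply returns an empty list, so the check degrades gracefully rather than raising.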