Du Mingzhe committed
Commit 3a1a5af • Parent(s): 859ae4c
Update table
app.py
CHANGED
@@ -12,16 +12,16 @@ st.write("* To run NVIDIA L4 GPUs, you must use a G2 accelerator-optimized machi
 st.write("* Each A3/A2/G2 machine type has a fixed GPU count, vCPU count, and memory size.")
 
 st.markdown("""
-| GPU | Memory | FP64 | FP32 | Price | Interconnect | Best used for
+| GPU | Memory | FP64 | FP32 | Price | Interconnect | Best used for |
-| --------- | ------------------------- | --------- | ----------| --------- | ----------------------------- |
+| --------- | ------------------------- | --------- | ----------| --------- | ----------------------------- | --------------------------------------------------------------------------------- |
 | H100 80GB | 80 GB HBM3 @ 3.35 TBps | 34 | 67 | 12.11 | NVLink Full Mesh @ 900 GBps | Large models with massive data tables for ML Training, Inference, HPC, BERT, DLRM |
 | A100 80GB | 80 GB HBM2e @ 1.9 TBps | 9.7 | 19.5 | 2.61 | NVLink Full Mesh @ 600 GBps | Large models with massive data tables for ML Training, Inference, HPC, BERT, DLRM |
-| A100 40GB | 40 GB HBM2 @ 1.6 TBps | 9.7 | 19.5 | 1.67 | NVLink Full Mesh @ 600 GBps | ML Training, Inference, HPC
+| A100 40GB | 40 GB HBM2 @ 1.6 TBps | 9.7 | 19.5 | 1.67 | NVLink Full Mesh @ 600 GBps | ML Training, Inference, HPC |
 | L4 | 24 GB GDDR6 @ 300 GBps | 0.5 | 30.3 | 0.28 | N/A | ML Inference, Training, Remote Visualization Workstations, Video Transcoding, HPC |
-| T4 | 16 GB GDDR6 @ 320 GBps | 0.25 | 8.1 | 0.15 | N/A | ML Inference, Training, Remote Visualization Workstations, Video Transcoding
+| T4 | 16 GB GDDR6 @ 320 GBps | 0.25 | 8.1 | 0.15 | N/A | ML Inference, Training, Remote Visualization Workstations, Video Transcoding |
-| V100 | 16 GB HBM2 @ 900 GBps | 7.8 | 15.7 | 0.99 | NVLink Ring @ 300 GBps | ML Training, Inference, HPC
+| V100 | 16 GB HBM2 @ 900 GBps | 7.8 | 15.7 | 0.99 | NVLink Ring @ 300 GBps | ML Training, Inference, HPC |
-| P4 | 8 GB GDDR5 @ 192 GBps | 0.2 | 5.5 | 0.30 | N/A | Remote Visualization Workstations, ML Inference, and Video Transcoding
+| P4 | 8 GB GDDR5 @ 192 GBps | 0.2 | 5.5 | 0.30 | N/A | Remote Visualization Workstations, ML Inference, and Video Transcoding |
-| P100 | 16 GB HBM2 @ 732 GBps | 4.7 | 9.3 | 0.58 | N/A | ML Training, Inference, HPC, Remote Visualization Workstations
+| P100 | 16 GB HBM2 @ 732 GBps | 4.7 | 9.3 | 0.58 | N/A | ML Training, Inference, HPC, Remote Visualization Workstations |
 """)
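Why the trailing cells matter: GFM-style Markdown renderers (the dialect Streamlit's `st.markdown` accepts) only recognize a table when the delimiter row declares the same number of columns as the header row. Before this commit the header had seven cells but the delimiter row only six, so the whole table could fail to render as a table. A minimal sketch of that check (a hypothetical helper, not part of `app.py`):

```python
# Hypothetical check (not in app.py): a GFM table renders only when the
# delimiter row has the same cell count as the header row.
def cell_count(row: str) -> int:
    """Count cells in one Markdown table row, tolerating a missing edge pipe."""
    row = row.strip()
    if row.startswith("|"):
        row = row[1:]
    if row.endswith("|"):
        row = row[:-1]
    return len(row.split("|"))

# Before the commit: 7 header cells vs. 6 delimiter cells -> mismatch.
old_header = "| GPU | Memory | FP64 | FP32 | Price | Interconnect | Best used for"
old_delim = "| --------- | ------------------------- | --------- | ----------| --------- | ----------------------------- |"
print(cell_count(old_header), cell_count(old_delim))  # 7 6

# After the commit: both rows declare 7 cells, so the table renders.
new_header = old_header + " |"
new_delim = old_delim + " --------- |"
print(cell_count(new_header), cell_count(new_delim))  # 7 7
```

The data rows that already ended in `|` (H100, A100 80GB, L4) were left untouched by the diff; only the header, the delimiter row, and the rows missing their closing pipe changed.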