Like it says.
Takes about 120 GB of RAM or VRAM.
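If you want a concrete starting point, here's a minimal loading sketch using Hugging Face Transformers with `device_map="auto"` to shard the weights across whatever GPUs are visible. The repo id below is a placeholder, and fp16 plus Accelerate sharding is my assumption, not a statement of how this model was actually run.

```python
# Minimal sketch, not an official recipe: load in fp16 and let Accelerate's
# device_map="auto" shard the layers across the visible GPUs / CPU RAM.
# "your-namespace/this-model" is a placeholder, not this repo's real id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/this-model"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 weights are what add up to ~120 GB
    device_map="auto",          # requires `pip install accelerate`
)

inputs = tokenizer("USER: Hello! ASSISTANT:", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```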
Vicuna prompt template, I hear. Works for me, including a SYSTEM message. YMMV.
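For reference, a sketch of one common rendering of the Vicuna-style template with the system message up front. The exact wording and separators below are my assumption, so adjust to whatever your frontend expects.

```python
# Sketch of a Vicuna-style prompt with a SYSTEM message up front.
# The exact separators are an assumption; match them to your frontend.
def build_prompt(system: str, user: str) -> str:
    return f"{system}\n\nUSER: {user}\nASSISTANT:"

print(build_prompt(
    "You are a helpful, detailed, and polite assistant.",
    "Explain what 'YMMV' means.",
))
```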
With the model loaded, `nvidia-smi` shows it spread across three GPUs (roughly 47 + 47 + 25 GiB):

```
|    0   N/A  N/A     46923      C   /usr/bin/python3                46976MiB |
|    1   N/A  N/A      1483      G   /usr/lib/xorg/Xorg                  4MiB |
|    1   N/A  N/A     46923      C   /usr/bin/python3                46700MiB |
|    2   N/A  N/A      1483      G   /usr/lib/xorg/Xorg                  4MiB |
|    2   N/A  N/A     46923      C   /usr/bin/python3                24934MiB |
+-----------------------------------------------------------------------------+
```