Long Context — Q6/Q8 quantized models with 16k, 32k, 64k, 128k, 200k, 256k, 512k, and 1000k context lengths. Mixtral/Mistral models (and their merges) generally have 32k context and are not listed here. Please see each original model card for usage and prompt templates. • 53 items
DanielAWrightGabrielAI/mpt-7b-storywriter-4bit-128g-65kTokens-CPU • Text Generation • Updated May 16, 2023