Text Generation
Transformers
GGUF
Safetensors
mistral
quantized
2-bit precision
3-bit precision
4-bit precision
5-bit precision
6-bit precision
8-bit precision
text-generation-inference
Merge
7b
mistralai/Mistral-7B-Instruct-v0.1
jondurbin/bagel-dpo-7b-v0.1
dataset:ai2_arc
dataset:unalignment/spicy-3.1
dataset:codeparrot/apps
dataset:facebook/belebele
dataset:boolq
dataset:jondurbin/cinematika-v0.1
dataset:drop
dataset:lmsys/lmsys-chat-1m
dataset:TIGER-Lab/MathInstruct
dataset:cais/mmlu
dataset:Muennighoff/natural-instructions
dataset:openbookqa
dataset:piqa
dataset:Vezora/Tested-22k-Python-Alpaca
dataset:cakiki/rosetta-code
dataset:Open-Orca/SlimOrca
dataset:spider
dataset:squad_v2
dataset:migtissera/Synthia-v1.3
dataset:winogrande
dataset:nvidia/HelpSteer
dataset:Intel/orca_dpo_pairs
dataset:unalignment/toxic-dpo-v0.1
dataset:jondurbin/truthy-dpo-v0.1
dataset:allenai/ultrafeedback_binarized_cleaned
Inference Endpoints
conversational
File size: 135 Bytes
6bc2340
version https://git-lfs.github.com/spec/v1
oid sha256:cce6f50573f184747b93cfb7e2c22fd2f68409e893616486ebd49b437e42607f
size 4368439488
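The file above is a Git LFS pointer, not the model weights themselves: it records the spec version, the blob's content address (`oid sha256:...`), and its byte size (here about 4.4 GB, consistent with a quantized 7B GGUF file). A minimal sketch of parsing such a pointer and verifying a downloaded blob against it (function names are illustrative, not part of any Git LFS tooling):

```python
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields
    (version, oid, size)."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

def verify_blob(path: str, pointer: dict, chunk_size: int = 1 << 20) -> bool:
    """Hash the downloaded file in chunks and compare both the digest
    and the total byte count against the pointer's oid/size fields."""
    algo, _, expected = pointer["oid"].partition(":")  # e.g. "sha256"
    digest = hashlib.new(algo)
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
            total += len(chunk)
    return digest.hexdigest() == expected and total == int(pointer["size"])
```

Streaming the hash in 1 MiB chunks keeps memory flat, which matters for multi-gigabyte model blobs like this one.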