---
license: bigscience-openrail-m
---

This model is a merge of 80% starchatplus_beta and 20% wizardcoder. It is intended as a research tool for studying the merging and routing of experts. A minimal sketch of this kind of weighted merge is given after the results below.

multiple-py (MultiPL-E Python): pass@1 = 0.36645962732919257

*These scores were computed on only a 0.1 sample of each evaluation set, for test purposes.*

hf-causal (pretrained=Multi-Domain-Expert-Layers/scorpius_16b,dtype=bfloat16), limit: 0.1, provide_description: False, num_fewshot: 0, batch_size: None

|    Task     |Version| Metric |Value |   |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge|      0|acc     |0.4103|±  |0.0457|
|             |       |acc_norm|0.4103|±  |0.0457|
|arc_easy     |      0|acc     |0.7350|±  |0.0410|
|             |       |acc_norm|0.6923|±  |0.0429|
|hellaswag    |      0|acc     |0.5812|±  |0.0458|
|             |       |acc_norm|0.7778|±  |0.0386|
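
The 80/20 split above suggests a weighted interpolation of the two checkpoints' parameters. The snippet below is a minimal sketch of how such a merge could be done with transformers and PyTorch; the repository IDs (`HuggingFaceH4/starchat-beta`, `WizardLM/WizardCoder-15B-V1.0`), the output path, and the assumption that both checkpoints share identical parameter names and shapes are illustrative, not a record of the exact procedure used to build this model.

```python
# Sketch of an 80/20 weighted parameter merge of two same-architecture checkpoints.
# Model IDs and the 0.8/0.2 weights are assumptions based on the description above.
import torch
from transformers import AutoModelForCausalLM

BASE = "HuggingFaceH4/starchat-beta"     # assumed ID for "starchatplus_beta"
OTHER = "WizardLM/WizardCoder-15B-V1.0"  # assumed ID for "wizardcoder"
W_BASE, W_OTHER = 0.8, 0.2

# Loading two ~16B models requires substantial CPU RAM; this is a sketch, not
# a memory-optimized implementation.
base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
other = AutoModelForCausalLM.from_pretrained(OTHER, torch_dtype=torch.bfloat16)

other_state = other.state_dict()
merged_state = {}
for name, param in base.state_dict().items():
    # Linear interpolation of each shared tensor; assumes matching names/shapes.
    merged_state[name] = W_BASE * param + W_OTHER * other_state[name]

base.load_state_dict(merged_state)
base.save_pretrained("scorpius_16b-merged")  # hypothetical output directory
```

A layer-wise or routed variant of this merge (different weights per expert layer) would follow the same pattern, just with per-parameter weights instead of the global 0.8/0.2 pair.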