| text | start_time | end_time |
|---|---|---|
| SPIN is self-play fine-tuning that improves LLMs. Tricksy is a form of fast inference involving sparsity. Phi-2 is a model from Microsoft. Lightning Attention 2 is an alternative to Flash Attention. Mixtral 8x7B is a Mixture of Experts model. Solar 10.7B is a Mistral model with some extra layers added in. | 00:00:00.000 | 00:00:29.220 |
| OpenChat is a self-play fine-tune of the Mistral model. Notux-8x7B-v1 is a fine-tuned version of Mixtral-8x7B. Gemini Pro is Google's best model, or perhaps not as good as Gemini Ultra. Phi-2 I've already mentioned. DeciLM-7B is a high-speed 7B | 00:00:29.220 | 00:00:58.240 |
| model. That's DeciLM-7B. Arena Elo is a means of comparing LLMs. MT-Bench is another metric, and MMLU is also | 00:00:58.240 | 00:01:10.560 |
| GPT-4 Turbo is a fast GPT-4 model by OpenAI. Mistral Medium is a mixture of experts but with larger experts than Mixtral-8x7B. Claude 1, Claude 2, or Claude 2.0 are the | 00:01:11.680 | 00:01:30.060 |
| latest Claude models. Mixtral-8x7B-Instruct-v0.1 is the latest mixture of experts. Yi-34B-Chat is a very strong fine-tune of Llama. Claude-Instant-1 is one of the Claude models. Tulu-2-DPO-70B is a | 00:01:30.060 | 00:01:58.960 |
| DPO fine-tuned model by Allen AI. It's a fine-tune of the Llama 2 model, the 70B. WizardLM-70B-v1.0 is also a fine-tune of the Llama-70B model. | 00:01:58.960 | 00:02:16.000 |
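The Arena Elo ranking mentioned in the transcript scores models from pairwise human preference votes using the standard Elo update. A minimal sketch of that update follows; the K-factor of 32 and starting rating of 1000 are illustrative assumptions, not the Chatbot Arena's actual parameters:

```python
def expected_score(r_a: float, r_b: float) -> float:
    # Probability that model A beats model B under the Elo model.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    # score_a: 1.0 if A wins the comparison, 0.0 if B wins, 0.5 for a tie.
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# One vote where model A beats model B, both starting at 1000:
r_a, r_b = elo_update(1000.0, 1000.0, 1.0)  # A rises to 1016, B falls to 984
```

Running many such updates over a stream of votes yields the leaderboard-style ratings the transcript refers to.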