[Repository file listing: a README (commit "Rename README.md to In…"), assorted small configuration and tokenizer files (42 bytes to 1.8 MB), and two uploads of the sharded model weights, each consisting of seven shards of 1.89–1.98 GB, an 816 MB shard, and a 24 kB index. Files originate from the commits "Duplicate from HuggingFaceH4-colab/zephyr-7b-beta-sharded" and "Upload MistralForCausalLM (#20)".]
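The weights are stored as a sharded checkpoint, so they can be fetched and loaded piece by piece on modest hardware. Below is a minimal loading sketch with transformers; the repo id is the source repository named in the commit messages and is an assumption here, so substitute this repository's own id if it differs.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id: taken from the "Duplicate from ..." commit message above;
# replace it with the id of the repository you actually want to load.
model_id = "HuggingFaceH4-colab/zephyr-7b-beta-sharded"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # the ~2 GB shards are consistent with fp16 weights
    device_map="auto",           # requires `accelerate`; spreads shards across GPU/CPU
    low_cpu_mem_usage=True,      # stream shards instead of building the full model in RAM first
)
```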
training_args.bin (5.44 kB, commit "Duplicate from HuggingFaceH4-colab/zephyr-7b-beta-sharded")
Detected pickle imports (11):
- "h4.training.config.DPOTrainingArguments"
- "transformers.trainer_utils.HubStrategy"
- "accelerate.utils.dataclasses.DistributedType"
- "transformers.trainer_utils.SchedulerType"
- "transformers.training_args.OptimizerNames"
- "accelerate.utils.dataclasses.DeepSpeedPlugin"
- "accelerate.utils.deepspeed.HfDeepSpeedConfig"
- "transformers.integrations.deepspeed.HfDeepSpeedConfig"
- "transformers.trainer_utils.IntervalStrategy"
- "accelerate.state.PartialState"
- "torch.device"
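Because training_args.bin is an ordinary pickled object rather than plain tensors, unpickling it can execute code tied to the classes it references, which is what the scan above is flagging. A small sketch for listing those references yourself without unpickling anything, assuming the file was written by torch.save (a zip archive containing a data.pkl):

```python
import pickletools
import zipfile

# torch.save stores the object graph as a data.pkl entry inside a zip archive.
# pickletools.dis only disassembles the pickle opcodes, so nothing is executed;
# the GLOBAL / STACK_GLOBAL entries show every class the file would import.
with zipfile.ZipFile("training_args.bin") as zf:
    pkl_name = next(n for n in zf.namelist() if n.endswith("data.pkl"))
    pickletools.dis(zf.read(pkl_name))
```

In recent PyTorch releases, torch.load(path, weights_only=True) is the stricter alternative: it refuses to unpickle classes outside a small allowlist, so it would reject a file with the imports listed above rather than run them.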