Latest commit: Adding tags to the model (be2d226, verified)

| Name | Size | Last commit message |
| --- | --- | --- |
| docs | - | move dict loading to function in eval utils |
| examples | - | Create multi-task_cell_classification.ipynb (#418) |
| fine_tuned_models | - | update with 12L and 20L i4096 gc95M models, multitask and quantiz code |
| geneformer | - | update pretrainer to not use distributed sampler (Trainer uses accelerate) |
| gf-12L-30M-i2048 | - | update with 12L and 20L i4096 gc95M models, multitask and quantiz code |
| gf-12L-95M-i4096 | - | update with 12L and 20L i4096 gc95M models, multitask and quantiz code |
| gf-12L-95M-i4096_CLcancer | - | update with 12L and 20L i4096 gc95M models, multitask and quantiz code |
| gf-20L-95M-i4096 | - | update with 12L and 20L i4096 gc95M models, multitask and quantiz code |
| gf-6L-30M-i2048 | - | update with 12L and 20L i4096 gc95M models, multitask and quantiz code |
| - | 1.27 kB | precommit formatting |
| - | 3.08 kB | Add gitignore file |
| - | 737 Bytes | Add pre-commit config |
| - | 396 Bytes | update readthedocs.yaml |
| - | 195 Bytes | update setup with req and manifest with updated filenames |
| - | 8.46 kB | Adding tags to the model |
| - | 591 Bytes | update with 12L and 20L i4096 gc95M models, multitask and quantiz code |
| - | 90 Bytes | update with 12L and 20L i4096 gc95M models, multitask and quantiz code |
| - | 152 MB | update with 12L and 20L i4096 gc95M models, multitask and quantiz code |
| - | 362 Bytes | update transformers version to match pretrainer using accelerate |
| - | 1.06 kB | Add find_packages to automatically include mtl and subpackages in setup.py (#410) |
| training_args.bin | 4.92 kB | update with 12L and 20L i4096 gc95M models, multitask and quantiz code |

The Hub's pickle scanner flags training_args.bin with eight detected pickle imports:

- transformers.trainer_utils.IntervalStrategy
- transformers.trainer_utils.SchedulerType
- transformers.training_args.OptimizerNames
- accelerate.utils.dataclasses.DistributedType
- torch.device
- transformers.trainer_utils.HubStrategy
- transformers.training_args.TrainingArguments
- accelerate.state.PartialState
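
The import list above is the scanner's report on the pickle payload: training_args.bin is the pickled TrainingArguments object that transformers.Trainer saves alongside a checkpoint, and unpickling it resolves exactly those classes. A minimal sketch of inspecting the file locally, assuming torch, transformers, and accelerate are installed (the scanner's list shows all three are referenced); unpickling executes code, so only do this with files you trust:

```python
import torch

# training_args.bin is a full pickle, not a tensor archive, so
# weights_only=False is required -- which is exactly why the Hub scans it.
args = torch.load("training_args.bin", weights_only=False)

print(type(args))          # transformers.training_args.TrainingArguments
print(args.learning_rate)  # hyperparameters recorded at training time
print(args.lr_scheduler_type, args.optim)  # SchedulerType / OptimizerNames
```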
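
The gf-* directories in the table are variant checkpoints whose names encode layer count (6L/12L/20L), pretraining corpus (30M/95M), and input size (i2048/i4096), with a default checkpoint at the repo root. A hedged sketch of loading them with transformers; the repo ID is an assumption for illustration, since the listing does not name the repository:

```python
from transformers import AutoModelForMaskedLM

REPO_ID = "ctheodoris/Geneformer"  # assumed repo ID, not stated in the listing

# Root checkpoint: config.json plus weights at the top level of the repo.
model = AutoModelForMaskedLM.from_pretrained(REPO_ID)

# A variant checkpoint can be selected from its subdirectory.
deep = AutoModelForMaskedLM.from_pretrained(REPO_ID, subfolder="gf-20L-95M-i4096")
```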