emre/java-RoBERTa-Tara-small
Likes: 2
Tags: Fill-Mask · Transformers · PyTorch · Safetensors · code_search_net · roberta · Inference Endpoints
License: apache-2.0
Branch: main · 2 contributors (emre, SFconvertbot) · History: 16 commits
Latest commit: 4362c8d, "Adding `safetensors` variant of this model (#1)", over 1 year ago
| File | Size | Scan | Last commit message | Last updated |
|---|---|---|---|---|
| .gitattributes | 1.23 kB | Safe | Adding `safetensors` variant of this model (#1) | over 1 year ago |
| README.md | 765 Bytes | Safe | Update README.md | over 2 years ago |
| config.json | 636 Bytes | Safe | Upload config.json | over 2 years ago |
| merges.txt | 550 kB | Safe | add tokenizer | over 2 years ago |
| model.safetensors (LFS) | 334 MB | Safe | Adding `safetensors` variant of this model (#1) | over 1 year ago |
| optimizer.pt (LFS) | 668 MB | Safe | Upload optimizer.pt with git-lfs | over 2 years ago |
| pytorch_model.bin (LFS) | 334 MB | Safe | Upload pytorch_model.bin with git-lfs | over 2 years ago |
| rng_state.pth (LFS) | 14.5 kB | pickle: 7 imports detected | Upload rng_state.pth with git-lfs | over 2 years ago |
| scheduler.pt (LFS) | 623 Bytes | Safe (pickle: no problematic imports detected) | Upload scheduler.pt with git-lfs | over 2 years ago |
| special_tokens_map.json | 772 Bytes | Safe | add tokenizer | over 2 years ago |
| tokenizer.json | 1.56 MB | Safe | add tokenizer | over 2 years ago |
| tokenizer_config.json | 1.11 kB | Safe | add tokenizer | over 2 years ago |
| trainer_state.json | 11.3 kB | Safe | Upload trainer_state.json | over 2 years ago |
| training_args.bin (LFS) | 2.8 kB | pickle: 5 imports detected | Upload training_args.bin with git-lfs | over 2 years ago |
| vocab.json | 904 kB | Safe | add tokenizer | over 2 years ago |

Detected pickle imports:
- rng_state.pth (7): `collections.OrderedDict`, `numpy.core.multiarray._reconstruct`, `numpy.dtype`, `torch._utils._rebuild_tensor_v2`, `torch.ByteStorage`, `numpy.ndarray`, `_codecs.encode`
- training_args.bin (5): `transformers.training_args.TrainingArguments`, `transformers.trainer_utils.IntervalStrategy`, `transformers.trainer_utils.SchedulerType`, `transformers.trainer_utils.HubStrategy`, `torch.device`
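The "Detected Pickle imports" annotations above come from statically scanning each pickle byte stream for the module attributes it would import when unpickled. A minimal sketch of that kind of scan using only the Python standard library (the function name `pickle_imports` is hypothetical, not the Hub's actual scanner):

```python
import pickle
import pickletools

def pickle_imports(data: bytes) -> set[str]:
    """Return the module.attribute names a pickle would import when loaded.

    GLOBAL (protocols 0-3) carries "module name" directly; STACK_GLOBAL
    (protocol 4+) instead consumes the two most recent string constants.
    """
    imports = set()
    strings = []  # running list of string constants seen so far
    for op, arg, pos in pickletools.genops(data):
        if op.name == "GLOBAL":
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif op.name == "STACK_GLOBAL" and len(strings) >= 2:
            imports.add(f"{strings[-2]}.{strings[-1]}")
        elif op.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
    return imports

# A plain dict of builtins needs no imports at all:
print(pickle_imports(pickle.dumps({"epoch": 3, "loss": 0.42})))  # set()
```

Scanning opcodes never executes the pickle, which is what makes it safe to run on untrusted files like `rng_state.pth` and `training_args.bin`.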
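The `safetensors` variant added in commit #1 matters because `.bin`/`.pt`/`.pth` files are pickles, and unpickling can execute arbitrary code, whereas a safetensors file stores only raw tensor data and metadata. A round-trip sketch, assuming `torch` and `safetensors` are installed (the tensor name and file path are illustrative):

```python
import os
import tempfile

import torch
from safetensors.torch import load_file, save_file

# Round-trip a tensor dict through safetensors: unlike pickle-based
# torch.save/torch.load, loading executes no arbitrary code.
tensors = {"embedding.weight": torch.zeros(4, 8)}

path = os.path.join(tempfile.mkdtemp(), "demo.safetensors")
save_file(tensors, path)

loaded = load_file(path)
print(tuple(loaded["embedding.weight"].shape))  # (4, 8)
```

This is why `model.safetensors` can carry the same 334 MB of weights as `pytorch_model.bin` while scanning as "Safe" without any pickle-import analysis.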