Dataset schema — each record below lists these ten fields, in this order, one field per line:

| Column | Type | Range / classes |
| --- | --- | --- |
| modelId | string | lengths 5–122 |
| author | string | lengths 2–42 |
| last_modified | unknown | — |
| downloads | int64 | 0–157M |
| likes | int64 | 0–6.51k |
| library_name | string | 339 classes |
| tags | sequence | lengths 1–4.05k |
| pipeline_tag | string | 51 classes |
| createdAt | unknown | — |
| card | string | lengths 1–913k |
ineo-ocr/ocr
ineo-ocr
"2024-01-07T18:00:55Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T18:00:55Z"
Entry not found
abhishektandon/ddpm
abhishektandon
"2024-01-07T18:08:14Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T18:08:14Z"
Entry not found
MarcAmil/esm2_t12_35M_UR50D-finetuned-localization
MarcAmil
"2024-01-07T18:11:09Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T18:11:09Z"
Entry not found
TriasAI/UgurYucel
TriasAI
"2024-01-07T18:16:52Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-01-07T18:15:14Z"
--- license: openrail ---
Adishah31/mistral_4bit_lora_model
Adishah31
"2024-01-07T18:40:54Z"
0
0
peft
[ "peft", "safetensors", "dataset:yahma/alpaca-cleaned", "base_model:unsloth/mistral-7b-bnb-4bit", "base_model:adapter:unsloth/mistral-7b-bnb-4bit", "region:us" ]
null
"2024-01-07T18:16:30Z"
--- library_name: peft base_model: unsloth/mistral-7b-bnb-4bit datasets: - yahma/alpaca-cleaned --- # Model Card for Model ID A 4bit Mistral 7B model finetuned using unsloth on T4 GPU ## Model Details ### Model Description - **Finetuned from model:** unsloth/mistral-7b-bnb-4bit - **Repository:** https://github.com/unslothai/unsloth ## Training Details ### Training Data https://huggingface.co/datasets/yahma/alpaca-cleaned ### Training Procedure #### Preprocessing Alpaca prompt template is used: ``` alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: {} ### Input: {} ### Response: {}""" ``` #### Training Hyperparameters ``` per_device_train_batch_size = 2, gradient_accumulation_steps = 4, warmup_steps = 5, max_steps = 60, learning_rate = 2e-4, fp16 = not torch.cuda.is_bf16_supported(), bf16 = torch.cuda.is_bf16_supported(), logging_steps = 1, optim = "adamw_8bit", weight_decay = 0.01, lr_scheduler_type = "linear", seed = 3407 ``` - **Hardware Type:** T4 GPU - **Cloud Provider:** Google Colab ### Framework versions - PEFT 0.7.1
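For context, a minimal inference sketch for an adapter entry like the one above, assuming the standard `peft` loading path; the prompt contents and generation settings are illustrative, not from the card:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Resolves the base model (unsloth/mistral-7b-bnb-4bit) from the adapter config,
# then applies the LoRA weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained("Adishah31/mistral_4bit_lora_model", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-bnb-4bit")

# The Alpaca template quoted in the card, with newlines restored.
alpaca_prompt = (
    "Below is an instruction that describes a task, paired with an input that provides "
    "further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{}\n\n### Input:\n{}\n\n### Response:\n{}"
)

inputs = tokenizer(
    alpaca_prompt.format("Summarize the input.", "LoRA adapters make fine-tuning cheap.", ""),
    return_tensors="pt",
).to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```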
mahdi1717/jamshid
mahdi1717
"2024-01-07T18:16:54Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T18:16:54Z"
Entry not found
smrynrz0220/bart_cg_model
smrynrz0220
"2024-01-07T18:20:23Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T18:20:23Z"
Entry not found
daniel-gordon/GradientPolicy-CartPole-v1-500reward
daniel-gordon
"2024-01-07T18:20:36Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2024-01-07T18:20:25Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: GradientPolicy-CartPole-v1-500reward results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
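The card above describes a from-scratch REINFORCE agent. Here is a self-contained sketch of the algorithm in the spirit of the course unit it cites; the network shape, learning rate, and discount factor are illustrative assumptions, not the author's code:

```python
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2), nn.Softmax(dim=-1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        dist = torch.distributions.Categorical(policy(torch.as_tensor(obs, dtype=torch.float32)))
        action = dist.sample()
        obs, reward, terminated, truncated, _ = env.step(action.item())
        done = terminated or truncated
        log_probs.append(dist.log_prob(action))
        rewards.append(float(reward))
    # Discounted returns G_t, computed backwards through the episode.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + 0.99 * g
        returns.insert(0, g)
    # REINFORCE objective: maximize sum_t log pi(a_t|s_t) * G_t.
    loss = -(torch.stack(log_probs) * torch.as_tensor(returns)).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```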
abhishektandon/ddpm-butterflies-128
abhishektandon
"2024-01-07T18:21:56Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T18:21:56Z"
Entry not found
ib1368/Reinforce-CartPole-v1
ib1368
"2024-01-07T18:23:12Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2024-01-07T18:23:00Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Arzen221/phi-orca-1-percent
Arzen221
"2024-01-07T19:25:22Z"
0
1
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
"2024-01-07T18:23:46Z"
--- library_name: peft base_model: microsoft/phi-2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
yy0514/bert-lek-full-train-4epochs
yy0514
"2024-01-07T18:29:06Z"
0
0
transformers
[ "transformers", "safetensors", "bert", "multiple-choice", "endpoints_compatible", "region:us" ]
multiple-choice
"2024-01-07T18:24:34Z"
Entry not found
amuchinaa/lunaai
amuchinaa
"2024-01-07T18:25:28Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T18:25:28Z"
Entry not found
inkstar/wuh
inkstar
"2024-01-07T18:29:59Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T18:29:59Z"
Entry not found
Olena25/distilgpt2
Olena25
"2024-01-07T18:30:57Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-01-07T18:30:57Z"
--- license: openrail ---
Loren85/Angel-Dust-Hazbin-Hotel-Ita
Loren85
"2024-01-07T18:36:23Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-01-07T18:35:24Z"
--- license: openrail ---
jysssacc/bloomz-560m_IA3_lr5e-05_bs4_epoch20_wd0.01
jysssacc
"2024-01-08T09:02:04Z"
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:bigscience/bloomz-560m", "base_model:adapter:bigscience/bloomz-560m", "license:bigscience-bloom-rail-1.0", "region:us" ]
null
"2024-01-07T18:37:38Z"
--- license: bigscience-bloom-rail-1.0 library_name: peft tags: - generated_from_trainer base_model: bigscience/bloomz-560m model-index: - name: bloomz-560m_IA3_lr5e-05_bs4_epoch20_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bloomz-560m_IA3_lr5e-05_bs4_epoch20_wd0.01 This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.4710 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.3089 | 1.0 | 157 | 4.0251 | | 4.2527 | 2.0 | 314 | 3.9726 | | 4.1732 | 3.0 | 471 | 3.8935 | | 4.0635 | 4.0 | 628 | 3.8054 | | 4.0039 | 5.0 | 785 | 3.7344 | | 3.9169 | 6.0 | 942 | 3.6770 | | 3.8693 | 7.0 | 1099 | 3.6325 | | 3.7869 | 8.0 | 1256 | 3.5966 | | 3.8279 | 9.0 | 1413 | 3.5689 | | 3.7502 | 10.0 | 1570 | 3.5471 | | 3.7021 | 11.0 | 1727 | 3.5299 | | 3.6739 | 12.0 | 1884 | 3.5160 | | 3.6696 | 13.0 | 2041 | 3.5043 | | 3.6395 | 14.0 | 2198 | 3.4947 | | 3.6539 | 15.0 | 2355 | 3.4873 | | 3.601 | 16.0 | 2512 | 3.4812 | | 3.6461 | 17.0 | 2669 | 3.4765 | | 3.657 | 18.0 | 2826 | 3.4734 | | 3.6959 | 19.0 | 2983 | 3.4716 | | 3.6035 | 20.0 | 3140 | 3.4710 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
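Since the entry above is an (IA)^3 adapter on bigscience/bloomz-560m, loading it for inference follows the usual `peft` pattern; a minimal sketch, with the example prompt and generation call as assumptions:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
model = PeftModel.from_pretrained(base, "jysssacc/bloomz-560m_IA3_lr5e-05_bs4_epoch20_wd0.01")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")

inputs = tokenizer("Translate to English: Je t'aime.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```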
Bruh110/YNWMELLY
Bruh110
"2024-01-07T18:41:13Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-01-07T18:38:09Z"
--- license: openrail ---
etienne1222/donut-invoice-receipt-test-V1
etienne1222
"2024-01-11T05:05:43Z"
0
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "endpoints_compatible", "region:us" ]
null
"2024-01-07T18:39:48Z"
Entry not found
homersimpson/4-cat-belebele-fr
homersimpson
"2024-01-07T18:45:11Z"
0
0
transformers
[ "transformers", "safetensors", "camembert", "multiple-choice", "endpoints_compatible", "region:us" ]
multiple-choice
"2024-01-07T18:44:54Z"
Entry not found
daniel-gordon/PolicyGradient-Pixelcopter-PLE-v0
daniel-gordon
"2024-01-07T18:51:21Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T18:47:23Z"
5000 steps --- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: PolicyGradient-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 4.00 +/- 5.25 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
CluckRookie/CambioBert-base
CluckRookie
"2024-04-14T10:40:03Z"
0
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "custom_code", "license:apache-2.0", "autotrain_compatible", "region:us" ]
fill-mask
"2024-01-07T18:48:05Z"
--- license: apache-2.0 ---
Beckhusen/beckhusen
Beckhusen
"2024-01-07T18:52:18Z"
0
0
null
[ "license:unknown", "region:us" ]
null
"2024-01-07T18:52:18Z"
--- license: unknown ---
UMI-DUINO/TheDuino-007-13B
UMI-DUINO
"2024-01-07T19:01:34Z"
0
0
null
[ "doi:10.57967/hf/1579", "region:us" ]
null
"2024-01-07T19:01:32Z"
Entry not found
kisLottUS/Mafanya
kisLottUS
"2024-01-07T19:05:25Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T19:03:59Z"
Entry not found
FilledtotheBrim/falcon_finetuned
FilledtotheBrim
"2024-01-07T19:06:40Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:tiiuae/falcon-7b-instruct", "base_model:adapter:tiiuae/falcon-7b-instruct", "region:us" ]
null
"2024-01-07T19:06:32Z"
--- library_name: peft base_model: tiiuae/falcon-7b-instruct --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
waveydaveygravy/swap-mukham
waveydaveygravy
"2024-03-08T11:20:55Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T19:07:42Z"
Entry not found
lourvalli/code-search-net-tokenizer
lourvalli
"2024-01-07T19:08:29Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T19:08:27Z"
Entry not found
NLPProject2023Z/xlnet_regression
NLPProject2023Z
"2024-01-07T19:38:16Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T19:10:50Z"
Entry not found
AlbelTec/dpo_mistral_7B_v_0_1
AlbelTec
"2024-01-07T19:12:55Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T19:12:55Z"
Entry not found
satendra4u2022/mistral_7b_guanaco_202401
satendra4u2022
"2024-01-07T19:16:42Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-01-07T19:16:42Z"
--- license: mit ---
ALOQAS/gpt2-aloqas-scientific-papers
ALOQAS
"2024-03-10T16:32:59Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T19:22:11Z"
# Algorithmic Learning and Optimized Quantum Artificial Solutions (ALOQAS) <p> <a href="https://huggingface.co/spaces/ALOQAS/aloqas-gradio">Gradio demo on Hugging Face Spaces</a> </p> <p> <a href="https://github.com/LucasAguetai/ALOQAS">Link to the GitHub repository</a> </p> <p> <a href="https://drive.google.com/drive/folders/1MrW-UftHd0HVgLjJ_C5HmwBG3ymEY_qY?usp=drive_link">Link to the Google Colaboratory notebooks (available on request)</a> </p> ## Project: Building a Conversational Chatbot System Based on GPT-2 This project aims to develop an intelligent conversational chatbot using the GPT-2 model as its base. <br /> The chatbot will be able to hold natural conversations with users, answer their questions, and provide useful information. ## Project members <ul> <li><b>A</b>urélien ZUFIC</li> <li><b>L</b>ucas AGUETAÏ</li> <li><b>O</b>ny ANDRIATSAHAVOJAONA</li> <li><b>Q</b>uentin VERMEERSCH</li> <li><b>A</b>lexandre HUYNH</li> <li><b>S</b>amuel DORISMOND</li> </ul> ## Dataset used TensorFlow dataset of scientific papers: <a href="https://www.tensorflow.org/datasets/catalog/scientific_papers">scientific_papers</a> ## Project tasks ### Understanding GPT-2: Study how GPT-2 works using the TensorFlow API.<br /> Explore how GPT-2 generates text in response to prompts. ### Data Collection: Identify a specific domain or application for your chatbot (for example, a customer-service chatbot, an educational chatbot, etc.).<br /> Collect or prepare a dialogue dataset suited to your application domain. ### Fine-tuning GPT-2: Fine-tune the GPT-2 model on your dialogue dataset.<br /> Optimize the model to generate coherent, relevant chatbot responses.<br /> Evaluate the fine-tuned model using dialogue-quality metrics. ### Gradio Integration: Use the Gradio library to give your chatbot a user-friendly interface.<br /> Customize the interface to match your application's look and feel. ### Testing and Optimization: Test the chatbot with users to gather feedback and performance data.<br /> Adjust the model based on user feedback to improve the quality of its responses. ### Documentation and Presentation: Write complete documentation explaining how to use the chatbot.<br /> Prepare a presentation to demonstrate and explain your chatbot to peers and instructors. ### Resources: You can use TensorFlow's GPT-2 API for fine-tuning and for generating chatbot responses.<br /> Flask is a popular Python library for building web servers.<br /> Gradio provides resources and examples for building interactive user interfaces.
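The card above pairs a GPT-2 chatbot with a Gradio interface. A minimal sketch of that pairing, using the stock `gpt2` checkpoint as a stand-in for the project's fine-tuned weights:

```python
import gradio as gr
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def chat(message: str) -> str:
    # Sampling settings are illustrative; tune for real use.
    return generator(message, max_new_tokens=60)[0]["generated_text"]

gr.Interface(fn=chat, inputs="text", outputs="text").launch()
```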
Faithshield/marvel_model
Faithshield
"2024-01-07T19:46:12Z"
0
0
tf-keras
[ "tf-keras", "license:apache-2.0", "region:us" ]
null
"2024-01-07T19:23:55Z"
--- license: apache-2.0 ---
JacobLinCool/whisper-small-tw
JacobLinCool
"2024-01-08T21:22:15Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-01-07T19:24:14Z"
Entry not found
vwxyzjn/models_EleutherAI_pythia-6.9b-deduped_sft_model_77713__reward__tldr
vwxyzjn
"2024-01-07T19:24:39Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T19:24:39Z"
Entry not found
vwxyzjn/models_EleutherAI_pythia-6.9b-deduped_sft_model_55513__reward__tldr
vwxyzjn
"2024-01-07T19:24:48Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T19:24:48Z"
Entry not found
vwxyzjn/models_EleutherAI_pythia-6.9b-deduped_sft_model_66613__reward__tldr
vwxyzjn
"2024-01-07T19:24:49Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T19:24:49Z"
Entry not found
vwxyzjn/models_EleutherAI_pythia-6.9b-deduped_sft_model_44413__reward__tldr
vwxyzjn
"2024-01-07T19:25:20Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T19:25:20Z"
Entry not found
johnatanebonilla/fr_pipeline
johnatanebonilla
"2024-01-07T23:31:18Z"
0
0
spacy
[ "spacy", "token-classification", "fr", "model-index", "region:us" ]
token-classification
"2024-01-07T19:25:34Z"
--- tags: - spacy - token-classification language: - fr model-index: - name: fr_pipeline results: - task: name: TAG type: token-classification metrics: - name: TAG (XPOS) Accuracy type: accuracy value: 0.973761619 - task: name: POS type: token-classification metrics: - name: POS (UPOS) Accuracy type: accuracy value: 0.9726634506 - task: name: MORPH type: token-classification metrics: - name: Morph (UFeats) Accuracy type: accuracy value: 0.9612141653 - task: name: UNLABELED_DEPENDENCIES type: token-classification metrics: - name: Unlabeled Attachment Score (UAS) type: f_score value: 0.8499876513 - task: name: LABELED_DEPENDENCIES type: token-classification metrics: - name: Labeled Attachment Score (LAS) type: f_score value: 0.7929110925 - task: name: SENTS type: token-classification metrics: - name: Sentences F-Score type: f_score value: 0.9819527996 --- | Feature | Description | | --- | --- | | **Name** | `fr_pipeline` | | **Version** | `0.0.0` | | **spaCy** | `>=3.6.1,<3.7.0` | | **Default Pipeline** | `transformer`, `morphologizer`, `parser`, `tagger` | | **Components** | `transformer`, `morphologizer`, `parser`, `tagger` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (240 labels for 3 components)</summary> | Component | Labels | | --- | --- | | **`morphologizer`** | `POS=INTJ`, `POS=PUNCT`, `POS=ADP`, `POS=VERB\|VerbForm=Inf`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `POS=PROPN`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=ADV`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `POS=CCONJ`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=ADV\|Polarity=Neg`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `POS=X`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Number=Plur\|POS=DET\|PronType=Dem`, `ExtPos=INTJ\|POS=INTJ`, `Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=SCONJ`, `POS=VERB`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `ExtPos=ADV\|POS=ADP`, `Number=Sing\|POS=ADJ`, 
`Number=Plur\|Number[psor]=Plur\|POS=DET\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Plur\|Number[psor]=Plur\|POS=DET\|Person[psor]=2\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Number=Sing\|Number[psor]=Plur\|POS=DET\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=NUM`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Number=Sing\|Number[psor]=Plur\|POS=DET\|Person[psor]=2\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=VERB\|Tense=Pres\|VerbForm=Part`, `Foreign=Yes\|POS=ADJ`, `Foreign=Yes\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=NUM`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `ExtPos=SCONJ\|POS=ADV`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Neg`, `Number=Sing\|POS=NOUN`, `Number=Sing\|Number[psor]=Sing\|POS=DET\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `ExtPos=CCONJ\|POS=CCONJ`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|POS=VERB\|Tense=Past\|VerbForm=Part`, `POS=PRON`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=ADV\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `ExtPos=PRON\|POS=ADV\|Polarity=Neg`, `POS=PRON\|PronType=Int`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=AUX\|Tense=Pres\|VerbForm=Part`, `POS=AUX\|VerbForm=Inf`, `ExtPos=PROPN\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Sing\|Number[psor]=Sing\|POS=DET\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PROPN`, 
`Gender=Masc\|POS=ADJ`, `Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Ind\|ExtPos=ADV\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `ExtPos=ADP\|POS=ADP`, `ExtPos=INTJ\|POS=VERB`, `ExtPos=ADV\|POS=ADV`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Plur\|POS=PROPN`, `ExtPos=PRON\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Number=Sing\|POS=NUM`, `ExtPos=ADV\|POS=ADV\|Polarity=Neg`, `ExtPos=ADV\|POS=SCONJ`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Number=Sing\|Number[psor]=Plur\|POS=DET\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Number=Plur\|POS=ADJ`, `Number=Plur\|POS=VERB\|Person=1`, `POS=ADV\|PronType=Exc`, `POS=DET`, `Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `ExtPos=PRON\|POS=ADV`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=ADV\|PronType=Neg`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `ExtPos=NOUN\|POS=ADV`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `POS=PRON\|Person=3\|PronType=Ind`, `ExtPos=ADV\|POS=CCONJ`, `ExtPos=ADP\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `ExtPos=DET\|POS=ADP`, `ExtPos=ADP\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Foreign=Yes\|POS=DET`, `Foreign=Yes\|POS=NOUN`, `Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, `ExtPos=ADV\|Number=Plur\|POS=NUM`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `ExtPos=PROPN\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Foreign=Yes\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `ExtPos=NOUN\|Gender=Masc\|Number=Sing\|POS=NOUN`, `ExtPos=PROPN\|POS=PROPN`, `ExtPos=VERB\|POS=X`, `Definite=Ind\|ExtPos=ADV\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, 
`Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Prs`, `ExtPos=PROPN\|Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `ExtPos=NOUN\|POS=ADP`, `Number=Plur\|Number[psor]=Plur\|POS=DET\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `ExtPos=SCONJ\|POS=CCONJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin` | | **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advcl:cleft`, `advmod`, `amod`, `appos`, `aux:pass`, `aux:tense`, `case`, `cc`, `ccomp`, `conj`, `cop`, `dep`, `det`, `discourse`, `dislocated`, `expl:comp`, `expl:subj`, `fixed`, `flat:name`, `iobj`, `mark`, `nmod`, `nmod:appos`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obj:lvc`, `obl:arg`, `obl:mod`, `punct`, `reparandum`, `xcomp` | | **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `VERB`, `X` | </details> ### Accuracy | Type | Score | | --- | --- | | `POS_ACC` | 97.27 | | `MORPH_ACC` | 96.12 | | `DEP_UAS` | 85.00 | | `DEP_LAS` | 79.29 | | `SENTS_P` | 98.24 | | `SENTS_R` | 98.15 | | `SENTS_F` | 98.20 | | `TAG_ACC` | 97.38 | | `TRANSFORMER_LOSS` | 670926.24 | | `MORPHOLOGIZER_LOSS` | 1747.33 | | `PARSER_LOSS` | 1719102.10 | | `TAGGER_LOSS` | 198.51 |
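Assumed usage for a packaged spaCy pipeline like `fr_pipeline` above (the pipeline wheel must be installed first, e.g. via `pip install` on the release artifact):

```python
import spacy

nlp = spacy.load("fr_pipeline")
doc = nlp("Le chat dort sur le canapé.")
for token in doc:
    print(token.text, token.pos_, token.dep_)
```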
AiAF/furrydiffusion.safetensora
AiAF
"2024-01-07T19:27:14Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T19:27:14Z"
Entry not found
Dybala10/Test
Dybala10
"2024-01-07T19:28:47Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T19:28:47Z"
Entry not found
daniel-gordon/Reinforce-Pixelcopter-PLE-v0
daniel-gordon
"2024-01-07T19:30:47Z"
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2024-01-07T19:30:42Z"
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 19.90 +/- 18.44 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Bananamonkey011/models
Bananamonkey011
"2024-01-07T19:32:29Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T19:32:29Z"
Entry not found
Gabriel1945/Velha
Gabriel1945
"2024-01-07T19:39:16Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-01-07T19:38:53Z"
--- license: openrail ---
Dremmar/juggernaut_v8
Dremmar
"2024-01-07T20:07:17Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T19:43:27Z"
Entry not found
satendra4u2022/mistral_7b_guanaco
satendra4u2022
"2024-01-07T19:56:38Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2024-01-07T19:44:47Z"
Entry not found
yy0514/llama2-7b-chat-qlora-lek-train-4-epochs-run2
yy0514
"2024-01-07T20:44:24Z"
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:finetune:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
"2024-01-07T19:52:03Z"
--- base_model: meta-llama/Llama-2-7b-chat-hf tags: - generated_from_trainer model-index: - name: llama2-7b-chat-qlora-lek-train-4-epochs-recheck results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama2-7b-chat-qlora-lek-train-4-epochs-recheck This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
Nick088/Jayne_Secker_Sky_News_Reporter
Nick088
"2024-01-07T19:55:47Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T19:54:48Z"
Entry not found
roktimsardar123/riley_reid
roktimsardar123
"2024-01-07T19:57:07Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T19:56:29Z"
Entry not found
Keiser41/ModelMaker
Keiser41
"2024-01-21T20:20:54Z"
0
0
null
[ "music", "en", "es", "ja", "license:creativeml-openrail-m", "region:us" ]
null
"2024-01-07T19:56:56Z"
--- license: creativeml-openrail-m language: - en - es - ja tags: - music ---
Cool629/OtherModels
Cool629
"2024-08-16T23:06:54Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-01-07T19:57:36Z"
--- license: openrail ---
AbdulSamad101/llama-2-7b-AbdulSamad-FT
AbdulSamad101
"2024-01-07T20:24:05Z"
0
0
peft
[ "peft", "region:us" ]
null
"2024-01-07T20:01:59Z"
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
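The 4-bit settings the card lists can be expressed as a `transformers` BitsAndBytesConfig; a sketch of an equivalent setup, where the base checkpoint name is an assumption inferred from the repo name (the card does not state it):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
# "meta-llama/Llama-2-7b-hf" is assumed; substitute the adapter's real base model.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", quantization_config=bnb_config)
```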
ptailor3/SAINTS_Large
ptailor3
"2024-01-07T20:02:46Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2024-01-07T20:02:46Z"
--- license: mit ---
zhan1993/kate_test
zhan1993
"2024-01-07T20:04:14Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T20:04:11Z"
Entry not found
ostapeno/library-phi_2-v3-10-flan-clusters-parallel_evol
ostapeno
"2024-01-07T20:08:24Z"
0
1
null
[ "region:us" ]
null
"2024-01-07T20:06:29Z"
Number of experts present in the library: 10 | Expert Name | Base Model | Trained on | Adapter Type | | --- | --- | --- | --- | | phi2_joint_lora_embed_clustersc8_2e_3epoch | phi-2 | sordonia/flan-10k-flat/race_middle_Read_the_article_and_answer_the_question_no_option_,race_high_Select_the_best_answer,quail_description_context_question_answer_id,quail_context_question_description_text,race_high_Read_the_article_and_answer_the_question_no_option_,race_high_Select_the_best_answer_no_instructions_,quail_context_description_question_answer_id,race_high_Taking_a_test,super_glue_multirc_1_0_2,race_middle_Select_the_best_answer,quail_context_question_description_answer_id,quail_description_context_question_answer_text,quail_context_question_answer_description_text,race_high_Select_the_best_answer_generate_span_,race_middle_Select_the_best_answer_generate_span_,quail_context_question_answer_description_id,quail_context_description_question_answer_text,quail_context_description_question_text,quail_context_question_description_answer_text,quail_description_context_question_text,race_middle_Taking_a_test,quail_no_prompt_id,quail_no_prompt_text,race_middle_Select_the_best_answer_no_instructions_ | lora | | phi2_joint_lora_embed_clustersc4_2e_3epoch | phi-2 | sordonia/flan-10k-flat/wiki_qa_found_on_google,app_reviews_categorize_rating_using_review,race_middle_Is_this_the_right_answer,super_glue_cb_1_0_2,wiki_qa_Topic_Prediction_Answer_Only,wiki_qa_Direct_Answer_to_Question,super_glue_wsc_fixed_1_0_2,cot_gsm8k_ii,unified_qa_science_inst,race_high_Is_this_the_right_answer,cot_strategyqa,cot_ecqa_ii,quarel_do_not_use,wiki_qa_exercise,wiki_qa_automatic_system,cot_creak_ii,quarel_heres_a_story,quarel_choose_between,stream_qed_ii,wiki_qa_Topic_Prediction_Question_Only,glue_qnli_2_0_0,cot_sensemaking_ii,super_glue_copa_1_0_2,social_i_qa_Generate_the_question_from_the_answer,social_i_qa_Show_choices_and_generate_index,quarel_testing_students,wiki_qa_Topic_Prediction_Question_and_Answer_Pair,wiki_qa_Decide_good_answer,wiki_qa_Jeopardy_style,wiki_qa_Generate_Question_from_Topic,definite_pronoun_resolution_1_1_0,wiqa_effect_with_label_answer,glue_wnli_2_0_0,cot_qasc,cot_strategyqa_ii,quarel_logic_test,stream_aqua_ii | lora | | phi2_joint_lora_embed_clustersc9_2e_3epoch | phi-2 | sordonia/flan-10k-flat/natural_questions_open_1_0_0,web_questions_whats_the_answer,web_questions_question_answer,dbpedia_14_pick_one_category_for_the_following_text,kilt_tasks_hotpotqa_combining_facts,web_questions_short_general_knowledge_q,kilt_tasks_hotpotqa_straighforward_qa,adversarial_qa_dbidaf_generate_question,adversarial_qa_droberta_based_on,web_questions_get_the_answer,kilt_tasks_hotpotqa_complex_question,web_questions_potential_correct_answer,trivia_qa_rc_1_1_0,kilt_tasks_hotpotqa_formulate,adversarial_qa_dbert_based_on,adversarial_qa_dbidaf_based_on,squad_v1_1_3_0_0 | lora | | phi2_joint_lora_embed_clustersc6_2e_3epoch | phi-2 | sordonia/flan-10k-flat/super_glue_rte_1_0_2,cot_sensemaking,super_glue_wic_1_0_2,cos_e_v1_11_rationale,anli_r3_0_1_0,dream_generate_last_utterance,paws_wiki_1_1_0,cos_e_v1_11_generate_explanation_given_text,cot_creak,stream_aqua,snli_1_1_0,cos_e_v1_11_i_think,glue_qqp_2_0_0,cos_e_v1_11_explain_why_human,anli_r2_0_1_0,anli_r1_0_1_0,glue_stsb_2_0_0,cos_e_v1_11_aligned_with_common_sense,glue_mnli_2_0_0,social_i_qa_I_was_wondering,cosmos_qa_1_0_0,glue_mrpc_2_0_0,social_i_qa_Generate_answer | lora | | phi2_joint_lora_embed_clustersc7_2e_3epoch | phi-2 | 
sordonia/flan-10k-flat/dream_read_the_following_conversation_and_answer_the_question,app_reviews_convert_to_star_rating,cos_e_v1_11_question_option_description_text,social_i_qa_Show_choices_and_generate_answer,quartz_answer_question_based_on,sciq_Direct_Question_Closed_Book_,qasc_qa_with_separated_facts_3,quartz_given_the_fact_answer_the_q,quartz_answer_question_below,kilt_tasks_hotpotqa_final_exam,sciq_Multiple_Choice,wiqa_does_the_supposed_perturbation_have_an_effect,cos_e_v1_11_question_description_option_text,wiki_qa_Is_This_True_,quartz_use_info_from_question_paragraph,sciq_Direct_Question,qasc_qa_with_separated_facts_2,wiqa_which_of_the_following_is_the_supposed_perturbation,app_reviews_convert_to_rating,cos_e_v1_11_question_option_description_id,wiqa_effect_with_string_answer,qasc_qa_with_separated_facts_5,dream_baseline,quartz_having_read_above_passage,cos_e_v1_11_question_description_option_id,qasc_qa_with_separated_facts_1,cos_e_v1_11_description_question_option_text,qasc_qa_with_combined_facts_1,qasc_is_correct_1,cos_e_v1_11_description_question_option_id,social_i_qa_Check_if_a_random_answer_is_valid_or_not,sciq_Multiple_Choice_Closed_Book_,quartz_use_info_from_paragraph_question,qasc_is_correct_2,qasc_qa_with_separated_facts_4,quartz_read_passage_below_choose,quartz_paragraph_question_plain_concat,sciq_Multiple_Choice_Question_First | lora | | phi2_joint_lora_embed_clustersc3_2e_3epoch | phi-2 | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process,wiqa_what_is_the_final_step_of_the_following_process,wmt16_translate_ro_en_1_0_0,wiqa_what_might_be_the_last_step_of_the_process,wiki_bio_key_content,gem_common_gen_1_1_0,duorc_SelfRC_build_story_around_qa,app_reviews_generate_review,wiki_bio_what_content,wiki_bio_who,gem_e2e_nlg_1_1_0,cot_esnli_ii,wmt16_translate_tr_en_1_0_0,wiqa_what_is_the_missing_first_step,wiki_bio_comprehension,coqa_1_0_0,duorc_ParaphraseRC_build_story_around_qa,multi_news_1_0_0 | lora | | phi2_joint_lora_embed_clustersc2_2e_3epoch | phi-2 | sordonia/flan-10k-flat/adversarial_qa_dbidaf_question_context_answer,super_glue_record_1_0_2,wiki_hop_original_generate_object,adversarial_qa_droberta_tell_what_it_is,dbpedia_14_given_a_choice_of_categories_,wiki_hop_original_choose_best_object_affirmative_3,quac_1_0_0,wiki_hop_original_choose_best_object_interrogative_1,wiki_hop_original_choose_best_object_affirmative_1,adversarial_qa_dbert_answer_the_following_q,wiki_hop_original_choose_best_object_interrogative_2,adversarial_qa_droberta_question_context_answer,squad_v2_0_3_0_0,wiki_hop_original_generate_subject,wiki_bio_guess_person,adversarial_qa_dbidaf_answer_the_following_q,adversarial_qa_droberta_answer_the_following_q,adversarial_qa_dbert_tell_what_it_is,race_high_Write_a_multi_choice_question_options_given_,wiki_hop_original_choose_best_object_affirmative_2,wiki_hop_original_generate_subject_and_object,drop_2_0_0,adversarial_qa_dbert_question_context_answer,adversarial_qa_dbidaf_tell_what_it_is | lora | | phi2_joint_lora_embed_clustersc0_2e_3epoch | phi-2 | sordonia/flan-10k-flat/ropes_background_new_situation_answer,ropes_prompt_bottom_no_hint,ropes_plain_background_situation,ropes_new_situation_background_answer,ropes_given_background_situation,ropes_prompt_bottom_hint_beginning,ropes_prompt_beginning,ropes_read_background_situation,ropes_plain_bottom_hint,ropes_plain_no_background,ropes_prompt_mix,ropes_background_situation_middle | lora | | phi2_joint_lora_embed_clustersc1_2e_3epoch | phi-2 | 
sordonia/flan-10k-flat/glue_sst2_2_0_0,adversarial_qa_droberta_generate_question,true_case,stream_qed,huggingface_xsum,cot_esnli,cot_gsm8k,trec_1_0_0,yelp_polarity_reviews_0_2_0,lambada_1_0_0,glue_cola_2_0_0,ag_news_subset_1_0_0,gem_dart_1_1_0,math_dataset_algebra__linear_1d_1_0_0,cnn_dailymail_3_4_0,wiki_hop_original_explain_relation,dbpedia_14_given_list_what_category_does_the_paragraph_belong_to,gem_wiki_lingua_english_en_1_1_0,fix_punct,imdb_reviews_plain_text_1_0_0,race_middle_Write_a_multi_choice_question_for_the_following_article,gigaword_1_2_0,dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to,gem_web_nlg_en_1_1_0,word_segment,race_high_Write_a_multi_choice_question_for_the_following_article,wmt16_translate_de_en_1_0_0,cot_ecqa,aeslc_1_0_0,dream_generate_first_utterance,wmt16_translate_fi_en_1_0_0,dream_answer_to_dialogue,para_crawl_enes,adversarial_qa_dbert_generate_question,race_middle_Write_a_multi_choice_question_options_given_,wmt14_translate_fr_en_1_0_0 | lora | | phi2_joint_lora_embed_clustersc5_2e_3epoch | phi-2 | sordonia/flan-10k-flat/quoref_Context_Contains_Answer,duorc_SelfRC_generate_question_by_answer,quoref_Find_Answer,duorc_ParaphraseRC_movie_director,duorc_ParaphraseRC_answer_question,quoref_Found_Context_Online,quoref_Read_And_Extract_,duorc_ParaphraseRC_title_generation,duorc_ParaphraseRC_decide_worth_it,quoref_What_Is_The_Answer,duorc_ParaphraseRC_generate_question,quoref_Guess_Title_For_Context,quoref_Answer_Test,duorc_SelfRC_question_answering,duorc_SelfRC_title_generation,duorc_ParaphraseRC_generate_question_by_answer,duorc_ParaphraseRC_extract_answer,duorc_SelfRC_answer_question,duorc_SelfRC_decide_worth_it,duorc_ParaphraseRC_question_answering,quoref_Answer_Question_Given_Context,duorc_SelfRC_extract_answer,quoref_Guess_Answer,quoref_Answer_Friend_Question,duorc_SelfRC_movie_director,duorc_SelfRC_generate_question,quoref_Given_Context_Answer_Question | lora | Last updated on: 2024-01-07 20:06:29+00:00
ryusangwon/7243_Llama-2-13b-hf
ryusangwon
"2024-01-07T20:10:43Z"
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "dataset:cnn_dailymail", "base_model:meta-llama/Llama-2-13b-hf", "base_model:adapter:meta-llama/Llama-2-13b-hf", "region:us" ]
null
"2024-01-07T20:10:35Z"
--- base_model: meta-llama/Llama-2-13b-hf tags: - generated_from_trainer datasets: - cnn_dailymail model-index: - name: 7243_Llama-2-13b-hf results: [] library_name: peft --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 7243_Llama-2-13b-hf This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on the cnn_dailymail dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.4.0 - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
VoidZeroe/llama6.3-model
VoidZeroe
"2024-01-07T20:12:46Z"
0
0
peft
[ "peft", "region:us" ]
null
"2024-01-07T20:12:02Z"
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
Kiwihead15/whatsapp_Llama-2-13b-hf-Eternos
Kiwihead15
"2024-01-08T13:08:18Z"
0
0
null
[ "tensorboard", "safetensors", "region:us" ]
null
"2024-01-07T20:21:10Z"
Entry not found
llananailo/Baseline-1
llananailo
"2024-01-07T20:22:26Z"
0
0
transformers
[ "transformers", "endpoints_compatible", "region:us" ]
null
"2024-01-07T20:22:23Z"
Entry not found
drakrig/ppo-LunarLander-v2
drakrig
"2024-01-11T18:51:55Z"
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2024-01-07T20:25:20Z"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 250.09 +/- 20.61 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file listing): ```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename assumed from the usual SB3 naming convention; verify against the repo files.
checkpoint = load_from_hub(repo_id="drakrig/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
frydziu/speecht5_tts_voxpopuli_nl
frydziu
"2024-01-07T20:25:51Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T20:25:51Z"
Entry not found
frydziu/speecht5_tts_voxpopuli_pl
frydziu
"2024-01-07T20:28:53Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T20:28:53Z"
Entry not found
AswanthCManoj/azma-tinyllama-instruct-v2-adapter
AswanthCManoj
"2024-01-07T20:37:10Z"
0
0
peft
[ "peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "region:us" ]
null
"2024-01-07T20:33:08Z"
---
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.7.2.dev0
s3nh/BEE-spoke-data-TinyLlama-3T-1.1bee-GGUF
s3nh
"2024-01-07T20:34:46Z"
0
0
transformers
[ "transformers", "text-generation", "zh", "en", "license:openrail", "endpoints_compatible", "region:us" ]
text-generation
"2024-01-07T20:34:46Z"
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---

## Original model card

Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>

#### Description

GGUF format model files for [this project](https://huggingface.co/BEE-spoke-data/TinyLlama-3T-1.1bee).

### GGUF Specs

GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:

- Single-file deployment: models can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors, and new information can be added to GGUF models, without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows new metadata to be added without breaking compatibility with existing models, and lets a model be annotated with additional information that may be useful for inference or for identifying the model.

### Perplexity params

| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|:-----:|:-------:|:----:|:------:|:------:|:------:|:----:|:----:|:------:|:------:|:----:|:----:|:------:|:------:|:----:|:----:|:---:|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |

### Inference

TODO

# Original model card
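A minimal sketch for running one of these GGUF files locally, assuming `llama-cpp-python` is installed; the exact `.gguf` filename below is hypothetical, so substitute a real file from this repo:

```python
from llama_cpp import Llama

# model_path points at a locally downloaded GGUF file; the filename here is illustrative.
llm = Llama(model_path="BEE-spoke-data-TinyLlama-3T-1.1bee.Q4_K_M.gguf")
out = llm("Tell me something about bees.", max_tokens=64)
print(out["choices"][0]["text"])
```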
asasamad/llama-2-7b-asasamad_test
asasamad
"2024-01-07T21:01:41Z"
0
0
peft
[ "peft", "region:us" ]
null
"2024-01-07T20:35:00Z"
---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:

- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.4.0
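The list above corresponds to a 4-bit NF4 setup; a sketch of the same configuration expressed with `transformers` (assuming a version recent enough to expose `BitsAndBytesConfig`):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings listed in the card above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```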
Gypsy086/Gypsy
Gypsy086
"2024-01-07T20:36:33Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-01-07T20:36:33Z"
--- license: apache-2.0 ---
pierian-data/cthx-boy-lora
pierian-data
"2024-01-07T20:40:29Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T20:40:29Z"
Entry not found
s-gladkykh/sky_diffusion_ddim_512_lr1e-5_bs4_e150
s-gladkykh
"2024-01-20T17:41:54Z"
0
0
null
[ "tensorboard", "region:us" ]
null
"2024-01-07T20:43:57Z"
Entry not found
pierian-data/tok-boy-lora
pierian-data
"2024-01-08T02:28:03Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T20:48:06Z"
Entry not found
Vijish/vits_mongolian_monospeaker
Vijish
"2024-01-16T12:58:09Z"
0
0
transformers
[ "transformers", "safetensors", "vits", "endpoints_compatible", "region:us" ]
null
"2024-01-07T20:48:16Z"
Entry not found
NLPProject2023Z/roberta_regression_corrected
NLPProject2023Z
"2024-01-07T20:50:01Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
"2024-01-07T20:49:35Z"
---
tags:
- generated_from_trainer
model-index:
- name: roberta_regression_corrected
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# roberta_regression_corrected

This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5899

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 25
- eval_batch_size: 25
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 160  | 0.5899          |
| No log        | 2.0   | 320  | 0.5899          |
| No log        | 3.0   | 480  | 0.5899          |
| 0.5781        | 4.0   | 640  | 0.5899          |
| 0.5781        | 5.0   | 800  | 0.5899          |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
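For reference, the hyperparameters above map roughly onto a `transformers` `TrainingArguments` object; this is a sketch, not the exact training script, and `output_dir` is illustrative:

```python
from transformers import TrainingArguments

# Values copied from the hyperparameter list above.
args = TrainingArguments(
    output_dir="roberta_regression_corrected",
    learning_rate=5e-4,
    per_device_train_batch_size=25,
    per_device_eval_batch_size=25,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```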
AstroZeta7/test_bowl
AstroZeta7
"2024-01-07T20:51:59Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T20:50:13Z"
Entry not found
Dontin/Ai_Project
Dontin
"2024-01-07T20:51:32Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2024-01-07T20:51:32Z"
--- license: apache-2.0 ---
tjbleo616/my_awesome_swag_model
tjbleo616
"2024-01-07T21:15:08Z"
0
0
transformers
[ "transformers", "safetensors", "bert", "multiple-choice", "endpoints_compatible", "region:us" ]
multiple-choice
"2024-01-07T20:55:25Z"
Entry not found
asasamad/testing_model
asasamad
"2024-01-07T20:58:33Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T20:58:33Z"
Entry not found
Minata/method2test-mistral-7B_v0
Minata
"2024-01-08T01:02:18Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
"2024-01-07T21:00:58Z"
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: method2test-mistral-7B_v0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# method2test-mistral-7B_v0

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6527

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 1000

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9248        | 0.09  | 100  | 1.6111          |
| 0.8949        | 0.18  | 200  | 1.5985          |
| 0.9269        | 0.27  | 300  | 1.5983          |
| 0.8941        | 0.36  | 400  | 1.6074          |
| 0.8719        | 0.44  | 500  | 1.6126          |
| 0.8133        | 0.53  | 600  | 1.6316          |
| 0.7936        | 0.62  | 700  | 1.6385          |
| 0.7532        | 0.71  | 800  | 1.6554          |
| 0.7424        | 0.8   | 900  | 1.6479          |
| 0.7573        | 0.89  | 1000 | 1.6527          |

### Framework versions

- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
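A minimal sketch of loading this adapter on top of its base model with `peft`; that the adapter weights in this repo load directly this way is an assumption:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model first, then attach the fine-tuned adapter from the Hub.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "Minata/method2test-mistral-7B_v0")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```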
jysssacc/bloomz-560m_lora_lr5e-05_bs4_epoch20_wd0.01
jysssacc
"2024-01-08T11:30:06Z"
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:bigscience/bloomz-560m", "base_model:adapter:bigscience/bloomz-560m", "license:bigscience-bloom-rail-1.0", "region:us" ]
null
"2024-01-07T21:01:09Z"
---
license: bigscience-bloom-rail-1.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/bloomz-560m
model-index:
- name: bloomz-560m_lora_lr5e-05_bs4_epoch20_wd0.01
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# bloomz-560m_lora_lr5e-05_bs4_epoch20_wd0.01

This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8047

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.2484        | 1.0   | 157  | 3.6028          |
| 3.5565        | 2.0   | 314  | 3.3295          |
| 3.3894        | 3.0   | 471  | 3.2916          |
| 3.2189        | 4.0   | 628  | 3.2876          |
| 3.1204        | 5.0   | 785  | 3.3139          |
| 2.9755        | 6.0   | 942  | 3.3707          |
| 2.919         | 7.0   | 1099 | 3.4452          |
| 2.7196        | 8.0   | 1256 | 3.5097          |
| 2.6831        | 9.0   | 1413 | 3.6234          |
| 2.5105        | 10.0  | 1570 | 3.7513          |
| 2.3998        | 11.0  | 1727 | 3.8193          |
| 2.314         | 12.0  | 1884 | 3.9362          |
| 2.1941        | 13.0  | 2041 | 4.1743          |
| 2.1129        | 14.0  | 2198 | 4.2149          |
| 2.0442        | 15.0  | 2355 | 4.3023          |
| 1.8967        | 16.0  | 2512 | 4.4912          |
| 1.9345        | 17.0  | 2669 | 4.5690          |
| 1.8908        | 18.0  | 2826 | 4.6751          |
| 1.8891        | 19.0  | 2983 | 4.7869          |
| 1.7753        | 20.0  | 3140 | 4.8047          |

### Framework versions

- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
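The card does not record the LoRA settings used, so the values below are illustrative only; a sketch of attaching a LoRA adapter to this base model with `peft`:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# r, lora_alpha and lora_dropout are guesses, not values reported by this card;
# "query_key_value" is the usual attention projection name in BLOOM-family models.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],
    task_type="CAUSAL_LM",
)
base = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
model = get_peft_model(base, config)
model.print_trainable_parameters()
```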
Sojde/Sojde1
Sojde
"2024-01-07T21:02:14Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T21:02:14Z"
Entry not found
tjbleo616/bert-base-uncased-finetuned-swag
tjbleo616
"2024-01-07T21:10:32Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T21:10:32Z"
Entry not found
Viivvzz0742/mistral-MistralMiniMed2
Viivvzz0742
"2024-01-07T23:27:30Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "base_model:adapter:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "region:us" ]
null
"2024-01-07T21:11:03Z"
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: HuggingFaceH4/mistral-7b-sft-beta
model-index:
- name: mistral-MistralMiniMed2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# mistral-MistralMiniMed2

This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 100

### Training results

### Framework versions

- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
Floyd93/eli5_mlm_model
Floyd93
"2024-01-07T21:11:58Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T21:11:58Z"
Entry not found
shravanthis/layoutlmv3-finetuned-cord_100
shravanthis
"2024-01-07T21:14:13Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T21:14:13Z"
Entry not found
jeiku/Toxic_DPO_StableLM
jeiku
"2024-01-07T21:21:05Z"
0
1
null
[ "safetensors", "en", "dataset:diffnamehard/toxic-dpo-v0.1-NoWarning-alpaca", "license:other", "region:us" ]
null
"2024-01-07T21:19:57Z"
---
license: other
datasets:
- diffnamehard/toxic-dpo-v0.1-NoWarning-alpaca
language:
- en
---
Donislebew00/lucintaluna
Donislebew00
"2024-01-07T21:27:18Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-01-07T21:20:03Z"
--- license: openrail ---
jysssacc/opt-350m_lora_lr5e-05_bs4_epoch20_wd0.01
jysssacc
"2024-01-08T14:51:15Z"
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:adapter:facebook/opt-350m", "license:other", "region:us" ]
null
"2024-01-07T21:30:20Z"
---
license: other
library_name: peft
tags:
- generated_from_trainer
base_model: facebook/opt-350m
model-index:
- name: opt-350m_lora_lr5e-05_bs4_epoch20_wd0.01
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# opt-350m_lora_lr5e-05_bs4_epoch20_wd0.01

This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4811

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9852        | 1.0   | 157  | 3.5147          |
| 3.5297        | 2.0   | 314  | 3.4018          |
| 3.4497        | 3.0   | 471  | 3.3918          |
| 3.3979        | 4.0   | 628  | 3.4111          |
| 3.3625        | 5.0   | 785  | 3.3598          |
| 3.3397        | 6.0   | 942  | 3.3665          |
| 3.2897        | 7.0   | 1099 | 3.3855          |
| 3.2289        | 8.0   | 1256 | 3.3887          |
| 3.287         | 9.0   | 1413 | 3.3999          |
| 3.1938        | 10.0  | 1570 | 3.4151          |
| 3.1485        | 11.0  | 1727 | 3.4140          |
| 3.0905        | 12.0  | 1884 | 3.4125          |
| 3.0712        | 13.0  | 2041 | 3.4385          |
| 3.0604        | 14.0  | 2198 | 3.4475          |
| 3.0445        | 15.0  | 2355 | 3.4718          |
| 2.9898        | 16.0  | 2512 | 3.4659          |
| 2.9948        | 17.0  | 2669 | 3.4765          |
| 2.984         | 18.0  | 2826 | 3.4740          |
| 3.0367        | 19.0  | 2983 | 3.4817          |
| 2.9656        | 20.0  | 3140 | 3.4811          |

### Framework versions

- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
ai-tools-search/genderslider
ai-tools-search
"2024-01-07T21:30:39Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T21:30:22Z"
Entry not found
andlock/ppo-LunarLander-v2
andlock
"2024-01-07T21:31:03Z"
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2024-01-07T21:30:43Z"
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 262.43 +/- 19.68
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename follows the usual `<algo>-<env>.zip` convention and may differ from this repo's actual file list.

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub("andlock/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
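Once loaded, the policy can be rolled out in the environment; a sketch assuming an SB3 build that uses Gymnasium:

```python
import gymnasium as gym

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    # Query the trained policy for an action, then step the environment.
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```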
leosol/bert-base-uncased-issues-128
leosol
"2024-01-07T21:35:45Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T21:35:45Z"
Entry not found
yy0514/bert-lek-full-train-4epochs-run2
yy0514
"2024-01-07T21:42:52Z"
0
0
transformers
[ "transformers", "safetensors", "bert", "multiple-choice", "endpoints_compatible", "region:us" ]
multiple-choice
"2024-01-07T21:38:10Z"
Entry not found
tjbleo616/my_awesome_model
tjbleo616
"2024-01-07T21:39:35Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T21:39:35Z"
Entry not found
Loren85/icarly-Opening-Singer
Loren85
"2024-01-07T21:46:17Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-01-07T21:45:38Z"
--- license: openrail ---
gputrain/rl_course_vizdoom_health_gathering_supreme
gputrain
"2024-01-07T21:46:19Z"
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2024-01-07T21:45:56Z"
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: doom_health_gathering_supreme
      type: doom_health_gathering_supreme
    metrics:
    - type: mean_reward
      value: 9.44 +/- 3.57
      name: mean_reward
      verified: false
---

An **APPO** model trained on the **doom_health_gathering_supreme** environment.

This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/

## Downloading the model

After installing Sample-Factory, download the model with:

```
python -m sample_factory.huggingface.load_from_hub -r gputrain/rl_course_vizdoom_health_gathering_supreme
```

## Using the model

To run the model after download, use the ViZDoom `enjoy` example script that ships with Sample-Factory:

```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```

You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.

## Training with this model

To continue training with this model, use the matching `train` script:

```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```

Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
pankaj4u4m/code-search-net-tokenizer
pankaj4u4m
"2024-02-17T14:13:21Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T21:47:09Z"
Entry not found
bpugnaire/a2c-PandaReachDense-v3
bpugnaire
"2024-04-21T11:06:18Z"
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2024-01-07T21:51:19Z"
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaReachDense-v3
      type: PandaReachDense-v3
    metrics:
    - type: mean_reward
      value: -0.17 +/- 0.10
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **PandaReachDense-v3**

This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename follows the usual `<algo>-<env>.zip` convention and may differ from this repo's actual file list.

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub("bpugnaire/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
pihalf/erbd
pihalf
"2024-01-07T21:53:01Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T21:53:01Z"
Entry not found
jysssacc/roberta-base_IA3_lr0.0005_bs4_epoch20_wd0.01
jysssacc
"2024-01-07T22:19:05Z"
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:adapter:FacebookAI/roberta-base", "license:mit", "region:us" ]
null
"2024-01-07T21:56:55Z"
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: roberta-base
model-index:
- name: roberta-base_IA3_lr0.0005_bs4_epoch20_wd0.01
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# roberta-base_IA3_lr0.0005_bs4_epoch20_wd0.01

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3406

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 19.5038       | 1.0   | 157  | 18.4857         |
| 8.9314        | 2.0   | 314  | 5.8495          |
| 5.4493        | 3.0   | 471  | 4.2791          |
| 3.8453        | 4.0   | 628  | 3.4174          |
| 3.4001        | 5.0   | 785  | 2.8767          |
| 2.8518        | 6.0   | 942  | 2.5189          |
| 2.7181        | 7.0   | 1099 | 2.2672          |
| 2.3938        | 8.0   | 1256 | 2.0897          |
| 2.2025        | 9.0   | 1413 | 1.9660          |
| 2.1035        | 10.0  | 1570 | 1.8055          |
| 1.9748        | 11.0  | 1727 | 1.6968          |
| 1.8698        | 12.0  | 1884 | 1.6367          |
| 1.7843        | 13.0  | 2041 | 1.5600          |
| 1.7277        | 14.0  | 2198 | 1.5018          |
| 1.6915        | 15.0  | 2355 | 1.4518          |
| 1.5865        | 16.0  | 2512 | 1.4089          |
| 1.5934        | 17.0  | 2669 | 1.3896          |
| 1.5713        | 18.0  | 2826 | 1.3617          |
| 1.5521        | 19.0  | 2983 | 1.3453          |
| 1.5471        | 20.0  | 3140 | 1.3406          |

### Framework versions

- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
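The card does not record the (IA)^3 settings, so the module names below are illustrative for a RoBERTa-style encoder rather than values taken from this run; a sketch of building a comparable adapter with `peft`:

```python
from peft import IA3Config, get_peft_model
from transformers import AutoModelForMaskedLM

# target_modules/feedforward_modules are illustrative; in peft's IA3Config,
# feedforward_modules must be a subset of target_modules.
config = IA3Config(
    target_modules=["key", "value", "output.dense"],
    feedforward_modules=["output.dense"],
)
base = AutoModelForMaskedLM.from_pretrained("roberta-base")
model = get_peft_model(base, config)
model.print_trainable_parameters()
```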
ppardee/ControlNetSDXL
ppardee
"2024-01-07T21:57:20Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T21:57:20Z"
Entry not found
ostapeno/newt_adaNeo1B_quarel_heres_a_story_lora_sim_sgd_full_ft_CG
ostapeno
"2024-01-08T02:48:43Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T21:59:16Z"
Number of experts present in the library: 10

| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| quarel_heres_a_story_v4 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| quarel_heres_a_story_v5 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| quarel_heres_a_story_v8 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| quarel_heres_a_story_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| quarel_heres_a_story_v3 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| quarel_heres_a_story_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| quarel_heres_a_story_v7 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| quarel_heres_a_story | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| quarel_heres_a_story_v6 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| quarel_heres_a_story_v9 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |

Last updated on: 2024-01-08 02:48:40+00:00
TheRealheavy/SteveHarwell
TheRealheavy
"2024-01-07T22:01:39Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2024-01-07T22:00:02Z"
--- license: openrail ---
cigdemm/xlm-roberta-base-finetuned-ner
cigdemm
"2024-01-07T22:01:13Z"
0
0
null
[ "region:us" ]
null
"2024-01-07T22:01:13Z"
Entry not found