|
--- |
|
library_name: transformers |
|
tags: [] |
|
--- |
|
|
|
# Model Card for Gemma-7B-it Role-Play PEFT Adapter
|
|
|
This model was created following the instructions in https://www.datacamp.com/tutorial/fine-tuning-google-gemma. It is a PEFT adapter for Gemma-7B, fine-tuned on a character role-play dialogue dataset.
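Because this is a PEFT adapter rather than a full model, it must be attached to the Gemma base model at load time. A minimal loading sketch is shown below; the adapter repo id is a placeholder (this card does not state the final Hub id), and the base id `google/gemma-7b-it` is an assumption based on the tutorial. Requires `transformers`, `peft`, and `accelerate`.

```python
def load_roleplay_model(adapter_id: str, base_id: str = "google/gemma-7b-it"):
    """Load the Gemma base model and attach the fine-tuned PEFT adapter."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
    # Wrap the base model with the adapter weights from the Hub
    model = PeftModel.from_pretrained(base, adapter_id)
    return tokenizer, model
```

Call it with the adapter's actual Hub id, e.g. `tokenizer, model = load_roleplay_model("your-username/your-adapter-id")`.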
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
The model can role-play interesting fictional and celebrity characters as they appear in the dataset.
|
|
|
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. |
|
|
|
- **Model type:** Causal LM |
|
- **Language(s) (NLP):** [More Information Needed] |
|
- **License:** [More Information Needed] |
|
- **Finetuned from model:** Gemma-7B-it
|
|
|
## Uses |
|
|
|
This is a demo model, intended as an example for experimenting with PEFT fine-tuning.
|
|
|
### Direct Use |
|
|
|
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> |
|
|
|
The model can be used directly for role-play chat: prompt it with a character description and a user turn, and it generates the character's reply.
|
|
|
|
|
## Training Details |
|
|
|
### Training Data |
|
|
|
Trained on the role-play dialogue dataset [hieunguyenminh/roleplay](https://huggingface.co/datasets/hieunguyenminh/roleplay).
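The dataset can be pulled from the Hub with the `datasets` library; a minimal sketch, assuming the default `train` split is the one used:

```python
def load_training_data(split: str = "train"):
    """Fetch the hieunguyenminh/roleplay dataset from the Hugging Face Hub."""
    from datasets import load_dataset  # requires the `datasets` package

    return load_dataset("hieunguyenminh/roleplay", split=split)
```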
|
|
|
|
|
|
|
## Environmental Impact |
|
|
|
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> |
|
|
|
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). |
|
|
|
- **Hardware Type:** 2× NVIDIA T4
|
- **Hours used:** 4 |
|
- **Cloud Provider:** Kaggle |
|
- **Compute Region:** North America |
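A back-of-the-envelope estimate following the MLCO2 methodology (energy = GPUs × hours × power draw, emissions = energy × grid intensity). The T4 power draw of 70 W and the grid intensity of 0.43 kgCO2eq/kWh are assumptions, not figures from this card:

```python
# Rough CO2 estimate for the training run described above.
num_gpus = 2            # from the hardware line: 2x T4
hours = 4               # hours used
tdp_kw = 0.070          # assumed NVIDIA T4 TDP, in kilowatts
grid_kg_per_kwh = 0.43  # assumed North American grid carbon intensity

energy_kwh = num_gpus * hours * tdp_kw       # 0.56 kWh
emissions_kg = energy_kwh * grid_kg_per_kwh  # ~0.24 kg CO2eq
```

Under these assumptions the run emitted on the order of a quarter kilogram of CO2eq, i.e. negligible compared with pretraining.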
|
|
|
## Technical Specifications
|
|
|
### Model Architecture and Objective |
|
|
|
Decoder-only causal language model (Gemma-7B) with a PEFT adapter, trained with a causal language modeling objective on role-play dialogues.
|
|