---
language:
- en
- hi
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
- DIBT/10k_prompts_ranked
metrics:
- bleu
pipeline_tag: text-generation
model-index:
- name: JARVIS
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 32.08
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAIBHAV22334455/JARVIS
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 56.86
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAIBHAV22334455/JARVIS
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 27.15
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAIBHAV22334455/JARVIS
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 37.33
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAIBHAV22334455/JARVIS
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 60.14
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAIBHAV22334455/JARVIS
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 1.14
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAIBHAV22334455/JARVIS
      name: Open LLM Leaderboard
tags:
- code
---
# Model Card for JARVIS
## Overview
This model is a conversational AI designed to engage in natural language interactions with users. It is based on a causal language modeling (CLM) architecture and has been fine-tuned on conversational datasets to generate coherent and contextually relevant responses.
## Usage
To use this model, you can interact with it via the Hugging Face Inference API or load it locally with the `transformers` library. Provide a text prompt, and the model will generate a response based on the given input.
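A minimal local-inference sketch with `transformers`, assuming the checkpoint loads through the standard Auto classes; the generation settings here are illustrative defaults, not tuned recommendations:

```python
# Minimal inference sketch with Hugging Face transformers.
# Generation parameters are illustrative, not tuned for this model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VAIBHAV22334455/JARVIS"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Hello! What can you help me with today?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```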
## Intended Use
This model is intended for various conversational applications, including chatbots, virtual assistants, and dialogue systems. It can be deployed in environments where human-like interactions are required, such as customer service, educational platforms, or entertainment applications.
## Limitations and Ethical Considerations
While this model is capable of generating human-like responses, it may occasionally produce outputs that are inappropriate, offensive, biased, or misleading. It is essential to monitor its responses and ensure responsible deployment to mitigate potential harms, such as spreading misinformation or perpetuating harmful stereotypes.
## License
The model is released under the Apache License 2.0, which allows for both commercial and non-commercial use with proper attribution.
## Acknowledgments
This model was trained using the Hugging Face Transformers library and fine-tuned on conversational datasets. We acknowledge the contributions of the open-source community and the developers of the Transformers library.
## Contact Information
For inquiries or feedback regarding this model, please contact [your contact information].
## Model Details
### Model Description
This model is a state-of-the-art conversational AI system based on the Causal Language Modeling (CLM) architecture. It has been fine-tuned on large-scale conversational datasets to generate contextually relevant and coherent responses to user inputs. The model utilizes self-attention mechanisms and deep neural networks to understand and process natural language inputs, allowing it to engage in human-like conversations across a wide range of topics and contexts.
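For reference, causal language modeling trains the network to predict each token from the tokens before it, minimizing the standard next-token negative log-likelihood:

$$
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t})
$$

where the sum runs over the positions of the training sequence and each token is predicted from its left context only.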
### Architecture
The architecture of this model consists of multiple layers of transformer blocks, including self-attention mechanisms and feed-forward neural networks. It employs techniques such as positional encoding and layer normalization to enhance its ability to capture and process sequential information in text data. The model's parameters are optimized through training on conversational datasets using techniques such as gradient descent and backpropagation.
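As an illustration of the components just described (not this model's exact configuration, whose hyperparameters are not listed in this card), a minimal pre-norm transformer block in PyTorch might look like:

```python
# Illustrative pre-norm transformer block: self-attention + feed-forward,
# each wrapped in layer normalization and a residual connection.
# Dimensions are placeholders, not this model's actual configuration.
# Positional information (learned or sinusoidal) is assumed to be added
# to the token embeddings before these blocks.
import torch.nn as nn

class TransformerBlock(nn.Module):
    def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x, causal_mask=None):
        # Self-attention with residual connection; a causal mask keeps
        # each position from attending to tokens on its right.
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=causal_mask, need_weights=False)
        x = x + attn_out
        # Position-wise feed-forward with residual connection.
        x = x + self.ff(self.norm2(x))
        return x
```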
### Fine-Tuning
During the fine-tuning process, the model is trained on conversational datasets, where it learns to generate appropriate responses based on input prompts. Fine-tuning involves adjusting the parameters of the pre-trained model to better suit the conversational task at hand, thereby improving its performance in generating contextually relevant and coherent responses.
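A hedged sketch of what such fine-tuning can look like with the `transformers` Trainer. The dataset name comes from this card's metadata; the base checkpoint and all hyperparameters below are placeholder assumptions, not the actual training recipe:

```python
# Illustrative causal-LM fine-tuning with transformers' Trainer.
# Base model and hyperparameters are placeholders; the card does not
# document the actual base checkpoint or training configuration.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "gpt2"  # assumption: actual base checkpoint not stated in this card
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# One of the datasets listed in the metadata; it exposes a "prompt" column.
dataset = load_dataset("fka/awesome-chatgpt-prompts", split="train")

def tokenize(batch):
    return tokenizer(batch["prompt"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # mlm=False -> causal LM loss

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="jarvis-finetune", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```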
### Performance
Conversational quality is judged on fluency, coherence, relevance, and engagement. Quantitatively, the model has been evaluated on the Open LLM Leaderboard benchmark suite (ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, GSM8k); the scores are reported in the table at the end of this card.
### Use Cases
This model can be deployed in a variety of conversational applications, including chatbots, virtual assistants, customer support systems, and interactive storytelling platforms. It can facilitate natural language interactions between users and systems, enhancing user experience and providing valuable assistance across different domains and industries.
- **Developed by:** VAIBHAV VERMA
- **Model type:** Conversational AI (causal language model)
- **Language(s) (NLP):** English, Hindi
- **License:** Apache License 2.0
- **Inspired by:** OEvortex/vortex-3b
## Uses
The model can be utilized in various conversational applications across different domains and industries. Some potential uses include:

- **Chatbots:** Deploy the model as a chatbot to engage with users in natural language conversations, providing assistance, answering questions, and offering recommendations (a minimal chat-loop sketch follows this list).
- **Virtual Assistants:** Integrate the model into virtual assistant applications to help users with tasks such as scheduling appointments, setting reminders, and retrieving information from the web.
- **Customer Support Systems:** Use the model to power customer support chat systems, where it can handle customer inquiries, troubleshoot issues, and escalate complex queries to human agents when necessary.
- **Interactive Storytelling:** Employ the model in interactive storytelling platforms to create immersive narrative experiences where users can engage with virtual characters and influence the plot through their interactions.
- **Language Learning:** Develop language learning applications that leverage the model to provide conversational practice and feedback to learners, helping them improve their language skills through realistic dialogue simulations.
- **Social Media Engagement:** Integrate the model into social media platforms to enhance user engagement by enabling automated responses to comments, messages, and posts, personalized recommendations, and conversational interactions.
- **Healthcare Assistants:** Adapt the model for use in healthcare applications, where it can assist patients with medical inquiries, provide health-related information, and offer support for mental health and wellness.
- **Educational Tools:** Incorporate the model into educational applications to create interactive tutoring systems, virtual classroom assistants, and language practice tools that engage students in conversational learning experiences.
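As a concrete example of the chatbot use case above, here is a minimal turn-by-turn loop that keeps the running dialogue in the prompt. The `User:`/`Assistant:` framing is an assumption; this card does not document an official chat template:

```python
# Minimal chatbot loop: accumulate the dialogue history in the prompt and
# generate a reply each turn. The "User:"/"Assistant:" framing is assumed,
# since no chat template is documented for this model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VAIBHAV22334455/JARVIS"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

history = ""
while True:
    user = input("You: ")
    history += f"User: {user}\nAssistant:"
    inputs = tokenizer(history, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=80, do_sample=True,
                             temperature=0.7, pad_token_id=tokenizer.eos_token_id)
    # Decode only the newly generated tokens, then cut off any simulated next turn.
    reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                             skip_special_tokens=True)
    reply = reply.split("User:")[0].strip()
    print(f"JARVIS: {reply}")
    history += f" {reply}\n"
```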
## Note
This AI model marks my first deployment on the Hugging Face platform. I am grateful for the invaluable assistance provided by Vortex Bahi throughout the development and deployment process. Their guidance and support have been instrumental in bringing this project to fruition.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_VAIBHAV22334455__JARVIS)
| Metric |Value|
|---------------------------------|----:|
|Avg. |35.78|
|AI2 Reasoning Challenge (25-Shot)|32.08|
|HellaSwag (10-Shot) |56.86|
|MMLU (5-Shot) |27.15|
|TruthfulQA (0-shot) |37.33|
|Winogrande (5-shot) |60.14|
|GSM8k (5-shot) | 1.14|
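
For context, numbers like these are typically produced with EleutherAI's lm-evaluation-harness. A hedged sketch of re-running the ARC-Challenge row locally; the `simple_evaluate` API reflects recent harness versions and may differ in yours:

```python
# Sketch: reproducing the ARC-Challenge (25-shot) row with lm-evaluation-harness.
# API details follow recent (0.4.x) harness versions; check your installed version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=VAIBHAV22334455/JARVIS",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])  # includes acc_norm, cf. 32.08 above
```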