Update README.md
README.md CHANGED
@@ -60,7 +60,7 @@ This model utilizes the `MistralForCausalLM` architecture with a `LlamaTokenizer
## Training Data

-The model was fine-tuned on the [Bitext Telco Dataset](https://huggingface.co/datasets/bitext/Bitext-telco-llm-chatbot-training-dataset) comprising various telco-related intents, including: set_usage_limits, activate_phone, check_mobile_payments, check_signal_coverage, invoices, and more. Totaling
+The model was fine-tuned on the [Bitext Telco Dataset](https://huggingface.co/datasets/bitext/Bitext-telco-llm-chatbot-training-dataset) comprising various telco-related intents, including: set_usage_limits, activate_phone, check_mobile_payments, check_signal_coverage, invoices, and more. In total, the dataset covers 26 intents, each represented by approximately 1,000 examples.

This comprehensive training helps the model address a broad spectrum of telco-related questions effectively. The dataset follows the same structured approach as our dataset published on Hugging Face as [bitext/Bitext-customer-support-llm-chatbot-training-dataset](https://huggingface.co/datasets/bitext/Bitext-customer-support-llm-chatbot-training-dataset), but with a focus on telco.
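
For reference, the dataset linked above can be loaded and inspected with the Hugging Face `datasets` library. The sketch below assumes the telco dataset exposes an `intent` column, as its customer-support counterpart does; verify the column names against the dataset card before relying on them.

```python
# Minimal sketch: load the Bitext Telco dataset and tally examples per intent.
# Assumption: an "intent" column exists, mirroring the related
# bitext/Bitext-customer-support-llm-chatbot-training-dataset schema.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("bitext/Bitext-telco-llm-chatbot-training-dataset", split="train")

intent_counts = Counter(ds["intent"])
print(f"{len(intent_counts)} intents, {len(ds)} examples in total")
for intent, count in sorted(intent_counts.items()):
    print(f"{intent}: {count}")
```

If the counts match the description above, each intent should show roughly 1,000 rows, for about 26,000 examples in total.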