Update README.md
README.md CHANGED
@@ -59,9 +59,9 @@ This model utilizes the `MistralForCausalLM` architecture with a `LlamaTokenizer`
 
 ## Training Data
 
-The model was fine-tuned on the [Bitext Travel Dataset](https://huggingface.co/datasets/bitext/Bitext-travel-llm-chatbot-training-dataset) comprising various travel-related intents, including: book_flight, choose_seat, check_arrival_time, book_trip, purchase_flight_insurance, check_cancellation_fee, check_baggage_allowance, and more.
+The model was fine-tuned on the [Bitext Travel Dataset](https://huggingface.co/datasets/bitext/Bitext-travel-llm-chatbot-training-dataset) comprising various travel-related intents, including book_flight, choose_seat, check_arrival_time, book_trip, purchase_flight_insurance, check_cancellation_fee, check_baggage_allowance, and more. The dataset totals 33 intents, each represented by approximately 1,000 examples.
 
-
+This comprehensive training helps the model address a broad spectrum of travel-related questions effectively. The dataset follows the same structured approach as our dataset published on Hugging Face as [bitext/Bitext-customer-support-llm-chatbot-training-dataset](https://huggingface.co/datasets/bitext/Bitext-customer-support-llm-chatbot-training-dataset), but with a focus on travel.
 
 ## Training Procedure
 
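As a rough illustration of the dataset shape described in the added lines (33 intents, each with roughly 1,000 examples), the sketch below counts examples per intent over a few hypothetical rows. The `intent` and `instruction` field names mirror the schema of the related Bitext customer-support dataset and are an assumption here, not verified against this release.

```python
from collections import Counter

# Hypothetical sample rows; field names are an assumed match
# to the Bitext dataset schema, not taken from this commit.
rows = [
    {"intent": "book_flight", "instruction": "I want to book a flight to Rome"},
    {"intent": "book_flight", "instruction": "help me reserve a plane ticket"},
    {"intent": "check_baggage_allowance", "instruction": "how many bags can I bring?"},
]

# Per-intent example counts; on the full dataset each of the
# 33 intents would land near 1,000.
intent_counts = Counter(row["intent"] for row in rows)
print(intent_counts)
```

On the real dataset the same pattern applies after loading the rows (for example via the `datasets` library), replacing the sample list above with the downloaded split.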