dhrubasumatary committed on
Commit 5d71dd0
1 Parent(s): 06a3dd3

Create README.md

Files changed (1):
1. README.md +50 -0
README.md ADDED
@@ -0,0 +1,50 @@
---
language: en
tags:
- model
- quantized
- sarvam
- llama
- text-generation
- inference
base_model:
- sarvamai/sarvam-1
---

# Sarvam-1 Quantized Model

## Model Description
The Sarvam-1 quantized model is a reduced-size version of the original Sarvam-1 model, optimized for efficient inference on local machines using Ollama or similar tools. Quantization preserves the model's capabilities while significantly lowering its computational requirements, making it usable on a wider range of hardware.

This model is particularly effective at generating text in 10 Indic languages (bn, gu, hi, kn, ml, mr, or, pa, ta, te) and maintains competitive performance compared to larger models such as Llama-3.1-8B.
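
Since the card targets local inference with Ollama-style tooling, here is a minimal local-run sketch. It assumes the quantized weights are available as a GGUF file and uses the `llama-cpp-python` bindings; the filename is hypothetical, and this is an illustration rather than an official deployment path.

```python
# Minimal sketch, assuming a GGUF export of the quantized model is available locally.
# "sarvam-1-q4_k_m.gguf" is a hypothetical filename; point it at your actual file.
from llama_cpp import Llama

llm = Llama(
    model_path="sarvam-1-q4_k_m.gguf",  # hypothetical quantized GGUF file
    n_ctx=2048,                         # context window for generation
)

# Prompt in Hindi: "The capital of Karnataka is:"
output = llm("कर्नाटक की राजधानी है:", max_tokens=16)
print(output["choices"][0]["text"])
```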

## Key Features
- **Quantization for Efficiency:** The model has been quantized to reduce its memory footprint and speed up inference, making it suitable for local deployment (a loading sketch follows this list).
- **Support for Multiple Indian Languages:** Optimized for generating text in major Indian languages alongside English.
- **High-Quality Training Data:** Trained on a large, curated dataset with a focus on Indic languages, ensuring high-quality outputs.
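
As one hedged illustration of what reduced-precision loading can look like (an assumption about tooling, not a description of how this checkpoint was produced), the base model can be loaded in 4-bit with Transformers and `bitsandbytes`:

```python
# Hypothetical 4-bit load via transformers + bitsandbytes, shown only to
# illustrate how quantization lowers the memory footprint of a ~2B model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit
    bnb_4bit_quant_type="nf4",              # NF4 quantization scheme
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bfloat16 for quality
)

model = AutoModelForCausalLM.from_pretrained(
    "sarvamai/sarvam-1",                    # base model listed in the card metadata
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("sarvamai/sarvam-1")
```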

## Model Architecture
- **Original Model Size:** 2 billion parameters
- **Quantized Model Size:** [Specify the size if different]
- **Key Features:** Retains the core architectural characteristics of Sarvam-1, including its token efficiency and inference capabilities (a quick parameter-count check is sketched after this list).
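
To sanity-check the parameter count of a loaded checkpoint, a small generic snippet (not specific to this repository) is:

```python
# Count the parameters of whichever checkpoint is loaded; useful for confirming
# the ~2B figure quoted above. Works with any transformers causal LM.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("sarvamai/sarvam-1")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.2f}B parameters")
```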

## Performance
While specific quantitative performance metrics for the quantized model are not provided, it is expected to exhibit capabilities similar to the original Sarvam-1 model, particularly in handling text-generation tasks in Indian languages.

## Usage
To use the quantized model with the Transformers library, follow the example below:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "your_username/sarvam-1-quantized"  # Replace with your model's path
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Example usage: the prompt is Hindi for "The capital of Karnataka is:"
text = "कर्नाटक की राजधानी है:"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)  # drop special tokens from the output
print(result)
```
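
If you prefer a higher-level interface, the same generation can be done with the Transformers `pipeline` helper; the repository path below is the same placeholder as in the example above.

```python
# Optional alternative using the high-level text-generation pipeline.
# "your_username/sarvam-1-quantized" is a placeholder, as in the example above.
from transformers import pipeline

generator = pipeline("text-generation", model="your_username/sarvam-1-quantized")
print(generator("कर्नाटक की राजधानी है:", max_new_tokens=5)[0]["generated_text"])
```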