varma007ut committed
Commit ba622e3
1 Parent(s): db1b43a

Update README.md

Files changed (1): README.md (+88 -3)

README.md CHANGED
tags:
  - sft
---

# Indian Legal Assistant: A LLaMA-based Model for Indian Legal Text Generation

This repository contains information and code for using the Indian Legal Assistant, a LLaMA-based model fine-tuned on Indian legal texts. The model is designed to assist with various legal tasks and queries related to Indian law.

## Table of Contents

- Model Description
- Model Details
- Installation
- Usage
- Evaluation
- Contributing
- License
## Model Description

The Indian Legal Assistant is a text generation model specifically trained to understand and generate text related to Indian law. It can be used for tasks such as:

- Legal question answering
- Case summarization
- Legal document analysis
- Statute interpretation

## Model Details

- **Model Name:** Indian_Legal_Assitant
- **Developer:** varma007ut
- **Model Size:** 8.03B parameters
- **Architecture:** LLaMA
- **Language:** English
- **License:** Apache 2.0
- **Hugging Face Repo:** varma007ut/Indian_Legal_Assitant
## Installation

To use this model, you'll need to install the required libraries:

```bash
pip install transformers torch

# For GGUF support
pip install llama-cpp-python
```
## Usage

There are several ways to use the Indian Legal Assistant model:

### 1. Using Hugging Face Pipeline

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="varma007ut/Indian_Legal_Assitant")

prompt = "Summarize the key points of the Indian Contract Act, 1872:"
result = pipe(prompt, max_length=200)
print(result[0]['generated_text'])
```
### 2. Using Hugging Face Transformers directly

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("varma007ut/Indian_Legal_Assitant")
model = AutoModelForCausalLM.from_pretrained("varma007ut/Indian_Legal_Assitant")

prompt = "What are the fundamental rights in the Indian Constitution?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
print(tokenizer.decode(outputs[0]))
```
### 3. Using GGUF format with llama-cpp-python

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="varma007ut/Indian_Legal_Assitant",
    filename="ggml-model-q4_0.gguf",  # Replace with the actual GGUF filename if different
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "Explain the concept of judicial review in India."
        }
    ]
)

print(response['choices'][0]['message']['content'])
```
### 4. Using Inference Endpoints

This model supports Hugging Face Inference Endpoints. You can deploy the model and use it via API calls. Refer to the Hugging Face documentation for more information on setting up and using Inference Endpoints.
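As a minimal sketch of how a deployed endpoint might be queried (the endpoint URL and token below are placeholders, not values from this repository):

```python
import requests

# Placeholders: substitute the URL of your deployed endpoint and your HF access token
ENDPOINT_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"
HF_TOKEN = "hf_..."

headers = {
    "Authorization": f"Bearer {HF_TOKEN}",
    "Content-Type": "application/json",
}
# A text-generation endpoint typically accepts an "inputs" string plus optional "parameters"
payload = {
    "inputs": "What is the procedure for filing a Public Interest Litigation in India?",
    "parameters": {"max_new_tokens": 200},
}

response = requests.post(ENDPOINT_URL, headers=headers, json=payload)
print(response.json())
```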
## Evaluation

To evaluate the model's performance:

1. Prepare a test set of Indian legal queries or tasks.
2. Use standard NLP evaluation metrics such as perplexity, BLEU score, or task-specific metrics.
Example using BLEU score, via the `evaluate` library (here `inputs` is a batch of tokenized test prompts and `references` contains the corresponding reference answers from your test set):

```python
import evaluate

bleu = evaluate.load("bleu")

# Generate predictions and decode them back to text before scoring
outputs = model.generate(**inputs, max_length=200)
predictions = tokenizer.batch_decode(outputs, skip_special_tokens=True)

results = bleu.compute(predictions=predictions, references=references)
```
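Perplexity can be estimated along similar lines; the sketch below assumes `test_texts` is a list of strings drawn from your test set:

```python
import torch

# Rough perplexity estimate: exponentiate the average per-sequence
# cross-entropy loss over the test set (not token-weighted).
nlls = []
for text in test_texts:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    nlls.append(out.loss)

perplexity = torch.exp(torch.stack(nlls).mean())
print(f"Perplexity: {perplexity.item():.2f}")
```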
## Contributing

We welcome contributions to improve the model or extend its capabilities. Please see our Contributing Guidelines for more details.

## License

This project is licensed under the Apache 2.0 License. See the LICENSE file for details.
106