Commit b1f1f6e by varma007ut (parent: ba622e3)

Update README.md

Files changed: README.md (+28 -43)
tags:
- trl
- sft
---

# Indian Legal Assistant: A LLaMA-based Model for Indian Legal Text Generation

This repository contains information and code for the Indian Legal Assistant, a LLaMA-based model fine-tuned on Indian legal texts. The model is designed to assist with various legal tasks and queries related to Indian law.

## Table of Contents

- Model Description
- Model Details
- Installation
- Usage
- Evaluation
- Contributing
- License
## Model Description

The Indian Legal Assistant is a text generation model specifically trained to understand and generate text related to Indian law. It is suitable for tasks such as:

- Legal question answering
- Case summarization
- Legal document analysis
- Statute interpretation

## Model Details

- Model Name: Indian_Legal_Assistant
- Developer: varma007ut
- Model Size: 8.03B parameters
- Architecture: LLaMA
- Language: English
- License: Apache 2.0
- Hugging Face Repository: varma007ut/Indian_Legal_Assistant
## Installation

To use this model, install the required libraries:

```bash
pip install transformers torch

# For GGUF support
pip install llama-cpp-python
```
## Usage

There are several ways to use the Indian Legal Assistant model:

### 1. Using Hugging Face Pipeline

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="varma007ut/Indian_Legal_Assistant")

prompt = "Summarize the key points of the Indian Contract Act, 1872:"
result = pipe(prompt, max_length=200)
print(result[0]['generated_text'])
```
### 2. Using Hugging Face Transformers Directly

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("varma007ut/Indian_Legal_Assistant")
model = AutoModelForCausalLM.from_pretrained("varma007ut/Indian_Legal_Assistant")

prompt = "What are the fundamental rights in the Indian Constitution?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
print(tokenizer.decode(outputs[0]))
```
### 3. Using GGUF Format with llama-cpp-python

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="varma007ut/Indian_Legal_Assistant",
    filename="ggml-model-q4_0.gguf",  # Replace with the actual GGUF filename if different
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Explain the concept of judicial review in India."}
    ]
)

print(response['choices'][0]['message']['content'])
```
### 4. Using Inference Endpoints

This model supports Hugging Face Inference Endpoints. You can deploy the model and use it via API calls. Refer to the Hugging Face documentation for more information on setting up and using Inference Endpoints.
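As a minimal sketch of such an API call, assuming the standard `text-generation` payload convention (`inputs` plus a `parameters` object) — the endpoint URL, token, and the helper names `build_payload`/`query_endpoint` below are illustrative placeholders, not part of this repository:

```python
import json
import urllib.request

def build_payload(prompt: str, max_new_tokens: int = 200) -> dict:
    # Standard text-generation request body for a deployed endpoint
    return {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}

def query_endpoint(url: str, token: str, prompt: str) -> str:
    # POST the JSON payload with a bearer token and return the generated text
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)[0]["generated_text"]

# Example (requires a deployed endpoint and a valid token):
# print(query_endpoint("https://<your-endpoint>.endpoints.huggingface.cloud",
#                      "hf_...", "Explain the concept of judicial review in India."))
```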
## Evaluation

To evaluate the model's performance:

1. Prepare a test set of Indian legal queries or tasks.
2. Use standard NLP evaluation metrics such as perplexity, BLEU score, or task-specific metrics.

Example using BLEU score (`datasets.load_metric` is deprecated, so this uses the `evaluate` library; note that generated token IDs must be decoded to text before scoring):

```python
# pip install evaluate
import evaluate

bleu = evaluate.load("bleu")

# Decode generated token IDs to text before scoring
outputs = model.generate(**inputs, max_length=200)
predictions = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

# references: one list of reference texts per prediction
results = bleu.compute(predictions=predictions, references=references)
```

## Contributing

We welcome contributions to improve the model or extend its capabilities. Please see our Contributing Guidelines for more details.

## License

This project is licensed under the Apache 2.0 License. See the LICENSE file for details.
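For the perplexity metric mentioned above, a minimal sketch: perplexity is the exponential of the mean per-token negative log-likelihood (the helper name `perplexity` is illustrative; the commented lines assume the `model`, `tokenizer`, and `inputs` objects from the Usage section):

```python
import math

def perplexity(token_nlls: list) -> float:
    # Perplexity = exp of the mean negative log-likelihood per token
    return math.exp(sum(token_nlls) / len(token_nlls))

# With a transformers causal LM, the mean NLL is the loss returned when
# labels are supplied:
#   inputs = tokenizer(text, return_tensors="pt")
#   nll = model(**inputs, labels=inputs["input_ids"]).loss.item()
#   ppl = math.exp(nll)

# Sanity check: a model assigning probability 1/2 to every token
# (NLL = ln 2 per token) has perplexity 2.
ppl = perplexity([math.log(2)] * 4)
```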
 