elucidator8918 committed on
Commit
d2f03a1
1 Parent(s): 458e930

Create README.md

---
license: mit
datasets:
- starmpcc/Asclepius-Synthetic-Clinical-Notes
language:
- en
---
## Overview

This model, elucidator8918/apigen-prototype-0.1, is tailored for API generation. It is based on the Mistral-7B-Instruct-v0.1-sharded architecture and fine-tuned on the llama-2-instruct-121k-code dataset.

## Key Information

- **Base Model**: Mistral-7B-Instruct-v0.1-sharded
- **Fine-tuned Model Name**: elucidator8918/apigen-prototype-0.1
- **Dataset**: emre/llama-2-instruct-121k-code
- **Language**: English (en)

## Model Details

- **LoRA Parameters (QLoRA):**
  - LoRA attention dimension: 64
  - Alpha parameter for LoRA scaling: 16
  - Dropout probability for LoRA layers: 0.1

- **bitsandbytes Parameters:**
  - 4-bit precision base model loading: Yes
  - Compute dtype for 4-bit base models: float16
  - Quantization type: nf4
  - Nested quantization for 4-bit base models: No

- **TrainingArguments Parameters:**
  - Number of training epochs: 1
  - Batch size per GPU for training: 4
  - Batch size per GPU for evaluation: 4
  - Gradient accumulation steps: 1
  - Gradient checkpointing: Yes
  - Maximum gradient norm: 0.3
  - Initial learning rate: 2e-4
  - Weight decay: 0.001
  - Optimizer: paged_adamw_32bit
  - Learning rate scheduler type: cosine
  - Warm-up ratio: 0.03
  - Group sequences into batches with the same length: Yes

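The hyperparameters above map onto standard `peft` and `transformers` configuration objects. A minimal sketch of that mapping (the `output_dir` value is an assumption for illustration; it is not taken from the original training script):

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# QLoRA adapter settings from the table above
peft_config = LoraConfig(
    r=64,                 # LoRA attention dimension
    lora_alpha=16,        # alpha parameter for LoRA scaling
    lora_dropout=0.1,     # dropout probability for LoRA layers
    bias="none",
    task_type="CAUSAL_LM",
)

# 4-bit quantization settings from the bitsandbytes table
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,  # nested quantization disabled
)

# Training settings from the TrainingArguments table
training_args = TrainingArguments(
    output_dir="./results",  # hypothetical path, not from the original run
    num_train_epochs=1,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=1,
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    learning_rate=2e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    group_by_length=True,
)
```

These objects would then be passed to a trainer such as `trl`'s `SFTTrainer` together with the base model; the exact trainer used for this checkpoint is not stated in the card.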
## Usage

- **Example Code (API Generation):**

```python
from transformers import pipeline

# Load the fine-tuned model as a text-generation pipeline
pipe = pipeline(
    task="text-generation",
    model="elucidator8918/apigen-prototype-0.1",
    max_length=500,
)

# Wrap the prompt in Mistral's [INST] ... [/INST] instruction format
prompt = "Write code to do a POST request in FastAPI framework to find the multiplication of two matrices using NumPy"
result = pipe(f"[INST] {prompt} [/INST]")
print(result[0]["generated_text"])
```

- **Output API Generation:**

```
[INST] Write code to do a POST request in fastapi framework to find the multiplication of two matrices using numpy [/INST]
Below is an example of how to make a POST request in FastAPI to find the multiplication of two matrices using numpy:
```

```python
from fastapi import FastAPI, HTTPException
import numpy as np

app = FastAPI()

@app.post("/matrix_multiplication")
async def matrix_multiplication(matrix1: np.ndarray, matrix2: np.ndarray):
    if matrix1.shape[1] != matrix2.shape[0]:
        raise HTTPException(status_code=400, detail="The number of columns in matrix1 must be equal to the number of rows in matrix2")
    result = np.matmul(matrix1, matrix2)
    return {"result": result}
```

This code defines a FastAPI endpoint at `/matrix_multiplication` that takes two matrices as input and returns the multiplication of the two matrices. The `np.matmul` function is used to perform the multiplication. The endpoint also includes a check to ensure that the number of columns in the first matrix is equal to the number of rows in the second matrix.

To use this endpoint, you can make a POST request to `http://localhost:8000/matrix_multiplication` with the two matrices as input. The response will include the multiplication of the two matrices.

```python
import requests

matrix1 = np.array([[1, 2], [3, 4]])
matrix2 = np.array([[5, 6], [7, 8]])

response = requests.post("http://localhost:8000/matrix_multiplication", json={"matrix1": matrix1, "matrix2": matrix2})

print(response.json())
```

This code makes a POST request to the endpoint with the two matrices as input and prints the response. The response should include the multiplication of the two matrices, which is `[[11, 14], [29, 36]]`.
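Note that the generated text above is shown verbatim as model output, and its stated result for the matrix product should be double-checked. Computing the same product directly with NumPy gives a different value:

```python
import numpy as np

matrix1 = np.array([[1, 2], [3, 4]])
matrix2 = np.array([[5, 6], [7, 8]])

# Matrix product computed locally, independent of the API
print(np.matmul(matrix1, matrix2))  # [[19 22]
                                    #  [43 50]]
```

This kind of discrepancy is typical of generated explanatory text, so outputs from the model should be verified before use.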

## License

This model is released under the MIT License.