# Model Card for qa-expert-7B-V1.0-GGUF

This repo contains the GGUF format model files for [khaimaitien/qa-expert-7B-V1.0](https://huggingface.co/khaimaitien/qa-expert-7B-V1.0).

You can find more information on how to **use/train** the model in this repo: https://github.com/khaimt/qa_expert

### Model Sources

- **Repository:** [https://github.com/khaimt/qa_expert](https://github.com/khaimt/qa_expert)

## How to Get Started with the Model
First, clone the repo: https://github.com/khaimt/qa_expert

Then, from the repo root, install the requirements:

```shell
pip install -r requirements.txt
```
Then install [llama-cpp-python](https://github.com/abetlen/llama-cpp-python).
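For a standard CPU-only build, llama-cpp-python can usually be installed straight from PyPI (hardware-accelerated builds need extra build flags; see its README):

```shell
pip install llama-cpp-python
```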

Here is the example code:

```python
from qa_expert import get_inference_model, InferenceType

def retrieve(query: str) -> str:
    # Implement this retrieval function: it takes a query string and returns
    # a context string. It can be treated as the function to call in
    # OpenAI-style function calling.
    context = ...  # e.g., look up relevant passages in your knowledge base
    return context

model_inference = get_inference_model(InferenceType.llama_cpp, "qa-expert-7B-V1.0.q4_0.gguf")
question = "your question here"
answer, messages = model_inference.generate_answer(question, retrieve)
```
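To illustrate the `retrieve` contract, here is a toy stand-in (hypothetical, not a real retriever): the function only has to map a query string to a context string, here via naive word overlap against an in-memory list of documents.

```python
def retrieve(query: str) -> str:
    # Hypothetical stand-in retriever: return the in-memory document that
    # shares the most words with the query. A real implementation would
    # query a search index or vector store instead.
    docs = [
        "Paris is the capital of France.",
        "The Mississippi River flows through the central United States.",
    ]
    query_words = set(query.lower().split())
    return max(docs, key=lambda d: len(query_words & set(d.lower().split())))

print(retrieve("What is the capital of France?"))
```

Any callable with this signature can then be passed as the second argument to `generate_answer`.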