---
license: other
license_name: microsoft-research-license
license_link: https://huggingface.co/microsoft/phi-1_5/blob/main/Research%20License.docx
inference: false
---
# Model Card for bling-phi-1_5b-v0
<!-- Provide a quick summary of what the model is/does. -->
bling-phi-1_5b-v0 is part of the BLING ("Best Little Instruct No GPU required") model series, RAG-instruct trained on top of a Microsoft Phi-1.5B base model.
BLING models are fine-tuned with high-quality custom instruct datasets, designed for rapid prototyping in RAG scenarios.
**Please note that use of this model is subject to the [Microsoft Research License](https://huggingface.co/microsoft/phi-1_5/blob/main/Research%20License.docx) - and may not be used for any commercial purpose. We are providing our fine-tuned version, including the benchmarking results, solely for the purpose of research and advancing insights on the performance of smaller models in RAG scenarios - it provides another point of comparison with other similarly sized and similarly fine-tuned BLING models.**
For models with comparable performance and open source licenses that permit commercial use cases in RAG deployments, please see:
- [**bling-falcon-1b-0.1**](https://huggingface.co/llmware/bling-falcon-1b-0.1)
- [**bling-sheared-llama-1.3b-0.1**](https://huggingface.co/llmware/bling-sheared-llama-1.3b-0.1)
- [**bling-1b-0.1**](https://huggingface.co/llmware/bling-1b-0.1)
- [**bling-1.4b-0.1**](https://huggingface.co/llmware/bling-1.4b-0.1)
- [**bling-tiny-llama-v0**](https://huggingface.co/llmware/bling-tiny-llama-v0)
### Benchmark Tests
Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)
Average of 2 test runs, with 1 point for a correct answer, 0.5 points for a partially correct or blank / "not found" answer, 0.0 points for an incorrect answer, and -1 point for a hallucination.
--**Accuracy Score**: **87.75** correct out of 100
--Not Found Classification: 47.50%
--Boolean: 80.00%
--Math/Logic: 53.75%
--Complex Questions (1-5): 3 (Average-to-Low)
--Summarization Quality (1-5): 3 (Coherent, extractive)
--Hallucinations: No hallucinations observed in test runs.
For test run results (and a good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet") in this repo.
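As a rough illustration of the scoring rubric above (this is not the official benchmark harness), the per-question points and the averaging across two runs can be sketched as follows; the function and label names are hypothetical:

```python
# Hypothetical sketch of the scoring rubric described above -- not the official harness.
# Each answer is labeled "correct", "partial_or_not_found", "incorrect", or "hallucination".
SCORES = {
    "correct": 1.0,
    "partial_or_not_found": 0.5,
    "incorrect": 0.0,
    "hallucination": -1.0,
}

def run_score(labels):
    """Total points for one test run over the 100 benchmark questions."""
    return sum(SCORES[label] for label in labels)

def accuracy_score(run_1_labels, run_2_labels):
    """Average of two test runs, reported as 'correct out of 100'."""
    return (run_score(run_1_labels) + run_score(run_2_labels)) / 2
```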
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** llmware
- **Model type:** Phi-1.5B
- **Language(s) (NLP):** English
- **License:** [Microsoft Research License](https://huggingface.co/microsoft/phi-1_5/blob/main/Research%20License.docx)
- **Finetuned from model:** Microsoft Phi-1.5B
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The intended use of BLING models is two-fold:
1. Provide high-quality RAG-Instruct models designed for fact-based, no "hallucination" question-answering in connection with an enterprise RAG workflow.
2. BLING models are fine-tuned on top of leading base foundation models, generally in the 1-3B+ parameter range, and purposefully rolled out across multiple base models to provide choices and "drop-in" replacements for RAG-specific use cases.
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries such as financial services and legal and regulatory sectors with complex information sources.
BLING models have been trained for common RAG scenarios, specifically: question-answering, key-value extraction, and basic summarization as the core instruction types, without the need for a lot of complex instruction verbiage - provide a text passage as context, ask questions, and get clear, fact-based responses.
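For illustration only, the three core instruction types map to prompts along these lines (the instructions below are hypothetical examples, not taken from the training set):

```python
# Hypothetical examples of the three core RAG instruction types; in practice, each
# instruction is packaged together with a text passage, as shown in the
# "How to Get Started with the Model" section below.
example_instructions = {
    "question-answering": "What was the total revenue reported for the quarter?",
    "key-value extraction": "What is the effective date of the agreement?",
    "basic summarization": "Summarize the main obligations of the borrower in the passage.",
}
```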
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.
## How to Get Started with the Model
The fastest way to get started with BLING is through direct import in transformers:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llmware/bling-phi-1_5b-v0", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("llmware/bling-phi-1_5b-v0", trust_remote_code=True)
```
Please refer to the generation_test .py files in this repository, which include 200 samples and scripts to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval, to swap out the test set for a RAG workflow consisting of business documents.
The BLING model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as:
```python
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
```
The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts:
1. Text Passage Context, and
2. Specific question or instruction based on the text passage
To get the best results, package "my_prompt" as follows:
```
my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
```
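For example, a packaged prompt might look like the following; the passage and question are made up purely for illustration:

```python
# Illustrative only - a hypothetical passage and question showing the expected packaging
text_passage = "The services agreement has a term of 24 months, beginning on March 1, 2023."
question = "What is the term of the services agreement?"

my_prompt = text_passage + "\n" + question
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
```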
If you are using a HuggingFace generation script:
```python
import torch

# prepare prompt packaging used in fine-tuning process
# (assumes `entries` is a dict with "context" and "query" keys, and that
#  `model` and `tokenizer` have been loaded as shown above)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"

inputs = tokenizer(new_prompt, return_tensors="pt")
start_of_output = len(inputs.input_ids[0])

# temperature: set at 0.3 for consistency of output
# max_new_tokens: set at 100 - may prematurely stop a few of the summaries
outputs = model.generate(
    inputs.input_ids.to(device),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100,
)

# strip the prompt tokens and decode only the newly generated answer
output_only = tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)
```
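The llmware script referenced above includes built-in fact-checking; as a simplified stand-in (not the llmware implementation), a basic evidence check of the answer against the source passage might look like this:

```python
# Simplified, hypothetical evidence check - not the llmware fact-checking implementation.
# Flags answer tokens that do not appear anywhere in the source passage.
def simple_evidence_check(answer: str, context: str) -> list:
    context_lower = context.lower()
    return [tok for tok in answer.split() if tok.strip(".,;:!?").lower() not in context_lower]

unsupported = simple_evidence_check(output_only, entries["context"])
if unsupported:
    print("Answer tokens not found in the source passage:", unsupported)
```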
## Model Card Contact
Darren Oberst & llmware team