---
license: apache-2.0
---

# Model Card for Model ID

bling-stable-lm-3b-4e1t-0.1 is part of the BLING ("Best Little Instruction-following No-GPU-required") model series, RAG-instruct trained on top of a StabilityAI stablelm-3b-4e1t base model.

BLING models are fine-tuned with distilled high-quality custom instruct datasets, targeted at a specific subset of instruct tasks, with the objective of providing a high-quality Instruct model that is 'inference-ready' on a CPU laptop even without using any advanced quantization optimizations.

### Model Description

- **Developed by:** llmware
- **Model type:** Instruct-trained decoder
- **Language(s) (NLP):** English
- **License:** [CC BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/)
- **Finetuned from model:** stabilityai/stablelm-3b-4e1t

## Uses

The intended use of BLING models is two-fold:

1. Provide high-quality Instruct models that can run on a laptop for local testing. We have found them extremely useful when building a proof-of-concept, or working with sensitive enterprise data that must be closely guarded, especially in RAG use cases.

2. Push the state of the art for smaller instruction-following models in the sub-7B parameter range, especially 1B-3B, as single-purpose automation tools for specific tasks through targeted fine-tuning datasets and focused "instruction" tasks.

### Direct Use

BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services and legal and regulatory industries with complex information sources.

Rather than trying to be "all things to all people," BLING models focus on a narrower set of instructions more suitable to a ~1-3B parameter GPT model.

BLING is ideal for rapid prototyping, testing, and the ability to perform an end-to-end workflow locally on a laptop without having to send sensitive information over an Internet-based API.

The first BLING models have been trained for common RAG scenarios, specifically: question-answering, key-value extraction, and basic summarization as the core instruction types, without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses.

## Bias, Risks, and Limitations

Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.

## How to Get Started with the Model

The fastest way to get started with BLING is through direct import in transformers:

    from transformers import AutoTokenizer, AutoModelForCausalLM
    tokenizer = AutoTokenizer.from_pretrained("llmware/bling-stable-lm-3b-4e1t-0.1")
    model = AutoModelForCausalLM.from_pretrained("llmware/bling-stable-lm-3b-4e1t-0.1")

The BLING model was fine-tuned with a simple "\<human>" and "\<bot>" wrapper, so to get the best results, wrap inference entries as:

    full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts:

1. Text Passage Context, and
2. Specific question or instruction based on the text passage

To get the best results, package "my_prompt" as follows:

    my_prompt = {{text_passage}} + "\n" + {{question/instruction}}

## Citations

This model has been fine-tuned on the base StableLM-3B-4E1T model from StabilityAI.
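Putting the pieces above together, a minimal end-to-end inference sketch might look like the following. The sample passage, question, and generation parameters are illustrative assumptions only, not recommended settings:

    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_name = "llmware/bling-stable-lm-3b-4e1t-0.1"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Placeholder closed-context inputs (hypothetical example text)
    text_passage = "The services agreement was signed on March 15, 2023 and has a term of 24 months."
    question = "What is the term of the agreement?"

    # Package the prompt as {{text_passage}} + "\n" + {{question/instruction}},
    # then wrap it in the <human>/<bot> format used during fine-tuning
    my_prompt = text_passage + "\n" + question
    full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

    inputs = tokenizer(full_prompt, return_tensors="pt")

    # Generation settings are illustrative assumptions, not tuned recommendations
    with torch.no_grad():
        outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)

    # Decode only the newly generated tokens, dropping the prompt
    response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    print(response)

This runs on CPU by default; no GPU or quantization is required for basic testing.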
For more information about this base model, please see the citation below:

    @misc{StableLM-3B-4E1T,
      url={https://huggingface.co/stabilityai/stablelm-3b-4e1t},
      title={StableLM 3B 4E1T},
      author={Tow, Jonathan and Bellagente, Marco and Mahan, Dakota and Riquelme, Carlos}
    }

## Model Card Contact

Darren Oberst & llmware team

Please reach out anytime if you are interested in this project and would like to participate and work with us!