
Model Card

This is granite-7b-lab fine-tuned on OpenShift 4.15 documentation using 45,212 Q&A pairs.

  • Fine-tuned by: William Caban
  • License: Apache 2.0
  • Context length: 32K (base model)
  • OpenShift 4.15 knowledge cutoff date: April 12, 2024

Method

The Q&A corpus was generated using the following methodology:

  1. Generated 5 Q&A pairs for each page of the OpenShift (OCP) 4.15 PDFs with a length greater than 1,500 characters. This threshold was chosen to exclude title pages and pages with little content.
  2. Mistral-7B-Instruct-v0.2 was used to generate the questions for each page.
  3. Mixtral-8x22B-Instruct-v0.1 was used to generate the answers from the content of each page.
  4. A voting evaluation between Mixtral-8x22B and Llama3-7B was used to assess the quality of each Q&A pair against the page content, and low-quality entries were removed.
  5. Removed Q&A pairs whose questions contained phrases or words such as "this example", "this context", "this document", "trademark", and "copyright".
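The phrase-based filtering in step 5 can be sketched as follows. This is an illustrative reconstruction, not the original tooling: the phrase list comes from the step above, but the `keep_pair` helper and the dict layout of a Q&A pair are assumptions.

```python
# Phrases that disqualify a Q&A pair when found in the question (from step 5).
BANNED_PHRASES = ["this example", "this context", "this document", "trademark", "copyright"]

def keep_pair(pair: dict) -> bool:
    """Return True if the question contains none of the banned phrases."""
    question = pair["question"].lower()
    return not any(phrase in question for phrase in BANNED_PHRASES)

# Minimal demonstration with two hypothetical pairs.
pairs = [
    {"question": "How do I scale a deployment in OpenShift?", "answer": "..."},
    {"question": "What does this document describe?", "answer": "..."},
]
filtered = [p for p in pairs if keep_pair(p)]  # keeps only the first pair
```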

The resulting corpus contains 45,212 Q&A pairs. The corpus was divided into a training set (42,951 Q&A pairs) and an evaluation set (2,261 Q&A pairs).
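The split above (42,951 / 2,261 out of 45,212) could be produced with a simple shuffle-and-slice, sketched below. The original split method is not specified in this card, so the seeded random shuffle here is an assumption.

```python
import random

def split_corpus(pairs, eval_size=2261, seed=42):
    """Shuffle the corpus deterministically, then carve off an eval set.

    Returns (train, eval) lists. The seed and shuffle strategy are
    illustrative; the card does not document how the split was done.
    """
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    return pairs[eval_size:], pairs[:eval_size]

# Hypothetical corpus of the stated size.
corpus = [{"question": f"q{i}", "answer": f"a{i}"} for i in range(45212)]
train_set, eval_set = split_corpus(corpus)  # 42,951 and 2,261 pairs
```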

The model was trained for 3,000 iterations.

Known Limitations

There is a significant drop in accuracy and performance when using a quantized version of this model.

Using the model

When used in combination with RAG, the model prefers a prompt with a CONTEXT section from which to augment its knowledge:

## INSTRUCTIONS
<your_instructions_here>

## TASK
<what_you_want_the_model_to_achieve>

## CONTEXT
<any_new_or_additional_context_for_answering_question>

## QUESTION
<question_from_user>
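A minimal helper for assembling the template above might look like the following. The section names match the template; the `build_prompt` function itself and its example arguments are illustrative, not part of the released tooling.

```python
def build_prompt(instructions: str, task: str, context: str, question: str) -> str:
    """Assemble the four-section prompt expected by the model."""
    return (
        f"## INSTRUCTIONS\n{instructions}\n\n"
        f"## TASK\n{task}\n\n"
        f"## CONTEXT\n{context}\n\n"
        f"## QUESTION\n{question}\n"
    )

# Hypothetical usage: the CONTEXT section carries retrieved RAG passages.
prompt = build_prompt(
    instructions="Answer only questions about OpenShift and Kubernetes.",
    task="Answer the user's question using the provided context.",
    context="<retrieved OpenShift 4.15 documentation passages>",
    question="How do I scale a deployment?",
)
```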

Intended Use

  • This model is a quick proof of concept (POC) for fine-tuning a base model with domain expertise and basic guardrails, reducing the reliance on prompts and multiple filtering mechanisms to moderate the results.
  • The model improves the quality of responses about OpenShift topics without RAG content while further improving responses when RAG context is provided.
  • The model was created as a POC in a lab environment and as such it is not intended for production use.

Bias, Risks, and Limitations

  • The model was trained with basic instructions to refuse to answer questions unrelated to Kubernetes and OpenShift topics. Due to these strict instructions during training, the model may refuse to answer valid Kubernetes or OpenShift questions on topics that were not present during training.

  • The model has not been aligned to human social preferences, so it might produce problematic output. The model may also retain the limitations and constraints of the base model.

  • The model undergoes training on synthetic data, leading to the potential inheritance of both advantages and limitations from the underlying data generation methods.

  • In the absence of adequate safeguards and RLHF, there exists a risk of malicious utilization of these models for generating disinformation or harmful content. Caution is urged against complete reliance on a specific language model for crucial decisions or impactful information, as preventing these models from fabricating content is not straightforward. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in ungrounded generation scenarios due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain.

  • Format: Safetensors
  • Model size: 6.74B params
  • Tensor type: FP16