TroyDoesAI committed
Commit 144d5fb
1 Parent(s): ed25f4c

Prompt template alignment adjustment for more generic systems to use my models.

Files changed (1):
README.md +54 -48
README.md CHANGED
@@ -1,26 +1,23 @@
- ---
- license: artistic-2.0
- language:
- - en
- base_model: TroyDoesAI/BlackSheep-4B
- library_name: transformers
- ---

  ## License
  This project is licensed under the Artistic License 2.0.

- ---
-
  ## Introduction
-
  This README provides instructions on how to effectively use the provided prompt template designed for training models to deliver contextually obedient answers. The template is especially useful for Retrieval-Augmented Generation (RAG) applications and aims to reduce hallucinations by ensuring that the model adheres strictly to the provided context.

  ## Prompt Template Overview
-
- The template consists of a well-defined structure that separates input, context, and instructions. This explicit structure helps the model better understand and respond accurately to queries by associating specific sources or context with the input data.

  ### Basic Prompt Template Structure
-
  ```
  BEGININPUT
  BEGINCONTEXT
@@ -37,20 +34,43 @@ ENDINSTRUCTION
  ```

  ### Explanation of Template Components

- - **BEGININPUT / ENDINPUT**: These markers define the boundaries of an input block. The input block is where you place all the contextual information that the model should use to generate its response. This section can contain multiple paragraphs of text, data, or other content that forms the basis of the query.

- - **BEGINCONTEXT / ENDCONTEXT**: This section is specifically for key-value pairs or system messages that provide metadata or higher-level context to the model. For example, it can include the date, URL, or any other relevant information, as well as system-level directives that the model should consider while processing the input.

- - **BEGININSTRUCTION / ENDINSTRUCTION**: This section contains the specific instructions or questions for the model. The instructions guide the model on how to respond based on the provided context. The model was trained with various formats, including single questions, paragraphs, and lists.

- - **Response Section**: The model generates the response based on the context and instructions provided. The response is expected to be accurate and aligned with the input context.

- ### Example Usage

  To better understand how to use this template, here’s a simple example:

- #### Example Input:
  ```
  BEGININPUT
  BEGINCONTEXT
@@ -64,7 +84,7 @@ What color are blueberries? Source?
  ENDINSTRUCTION
  ```

- #### Expected Response:
  ```
  Blueberries are now green.
  Source:
@@ -72,45 +92,31 @@ date: 2021-01-01
  url: https://web.site/123
  ```

- ### How to Use the Input and Context Sections
-
- - **Input Section**: This section is where you provide the main content or context that the model needs to consider when forming a response. It can include detailed information, paragraphs of text, or other relevant data that directly informs the answer.
-
- - **Context Section**: This section is reserved for metadata and system messages. Metadata might include information like dates, URLs, or other key-value pairs that give additional context to the input. System messages can be used to set up specific instructions or behaviors that the model should adhere to while processing the input.
-
- For example, if you have a large block of text that the model needs to summarize, the input section would contain this text, while the context section could include the source of the text or any relevant metadata.

- ### References in Responses

- The model is trained to include references in responses when the instruction asks for citations. This is particularly important in RAG applications, where accurately linking responses to specific sources is critical for credibility and transparency.

- #### Why Use This Template?
-
- Retrieval accuracy can vary based on several factors, including dataset size and quality. By using this template, you ensure that the model provides accurate references by clearly associating input data with its corresponding context. This is crucial when dealing with multiple documents or data chunks, as it prevents the model from referencing irrelevant sources.
-
- ### Advanced Parameters

  For those looking to fine-tune the model further, the following parameters can be adjusted:
-
- - **Temperature**: Controls the creativity of the model’s responses. Higher values make the model more creative, while lower values make it more coherent.
  - **Context Length (num_ctx)**: Defines the maximum length of context that the model can handle.
- - **GPU and Thread Parameters**: You can set the number of GPUs and threads based on your hardware configuration.
-
- ### Model File Included

  The following model file is included in this project:
  ```
  FROM ./ContextObedient-Tri-MoE.gguf
  ```
- This model is pre-configured with parameters to enhance context adherence. Adjustments to parameters like `temperature`, `num_ctx`, and stop tokens can be made based on your specific needs.

- ### Feedback
-
- Please try other prompt templates and provide feedback. The model has been exposed to a variety of prompts during training and can adapt to different formats.

  ---
-
- ## Conclusion
-
- This README provides a comprehensive guide on how to use the context-obedient prompt template to achieve accurate, contextually grounded responses from the model. By following the structure and guidelines provided, you can maximize the effectiveness of the model in your RAG applications and beyond.
- ---
 
+ ---
+ license: artistic-2.0
+ language:
+ - en
+ base_model: TroyDoesAI/BlackSheep-4B
+ library_name: transformers
+ ---
+
+ ---

  ## License
  This project is licensed under the Artistic License 2.0.

  ## Introduction
  This README provides instructions on how to effectively use the provided prompt template designed for training models to deliver contextually obedient answers. The template is especially useful for Retrieval-Augmented Generation (RAG) applications and aims to reduce hallucinations by ensuring that the model adheres strictly to the provided context.

  ## Prompt Template Overview
+ The template consists of a well-defined structure that separates input, context, and instructions. This structure helps the model better understand and respond accurately to queries by associating specific sources or context with the input data.

  ### Basic Prompt Template Structure
  ```
  BEGININPUT
  BEGINCONTEXT

  ```

  ### Explanation of Template Components
+ - **BEGININPUT / ENDINPUT**: These markers define the boundaries of an input block. The input block contains all the contextual information that the model should use to generate its response. This section can include multiple paragraphs of text, data, or other relevant content.

+ - **BEGINCONTEXT / ENDCONTEXT**: This section is for key-value pairs or system messages that provide metadata or higher-level context to the model, such as dates, URLs, or other relevant information. It may also include system-level directives for processing the input.

+ - **BEGININSTRUCTION / ENDINSTRUCTION**: This section contains the specific instructions or questions for the model, guiding how it should respond based on the provided context.

+ - **Response Section**: The model generates a response based on the context and instructions provided. The response should be accurate and aligned with the input context.

+ ## Contextual-Request Template
+ The "Contextual-Request" template is a specialized structure designed for scenarios where the model needs to process inputs with specific system instructions and ensure adherence even in restrictive or controlled environments. It is particularly useful for bypassing refusal mechanisms.

+ ### Structure:
+ ```
+ Contextual-Request:
+ BEGININPUT
+ BEGINCONTEXT
+ date: [Insert Date]
+ url: [Insert URL]
+ ENDCONTEXT
+ [Your chat history or context you want the AI to consider when answering]
+ ENDINPUT
+ BEGININSTRUCTION
+ <|`BlackSheep`|>
+ [Your instruction that uses the provided context; wrap it in BlackSheep tags to bypass refusals]
+ <|`BlackSheep`|>
+ ENDINSTRUCTION
+ ### Response
+ <|`BlackSheep`|>
+ ```
+
+ ### Usage:
+ This template is used when there’s a need to enforce the model's obedience to the provided context despite potential refusal mechanisms. The instructions are wrapped in specific tags to ensure the model strictly follows the input context.

+ ## Example Usage
  To better understand how to use this template, here’s a simple example:

+ ### Example Input:
  ```
  BEGININPUT
  BEGINCONTEXT

  ENDINSTRUCTION
  ```

+ ### Expected Response:
  ```
  Blueberries are now green.
  Source:

  url: https://web.site/123
  ```
94
 
95
+ ## How to Use the Input and Context Sections
96
+ - **Input Section**: Provide the main content or context that the model needs to consider when forming a response. This section may include detailed information, paragraphs of text, or other relevant data.
 
 
 
 
 
97
 
98
+ - **Context Section**: Reserved for metadata and system messages. Metadata might include dates, URLs, or key-value pairs that give additional context to the input. System messages can establish specific instructions or behaviors that the model should adhere to during processing.
99
 
100
+ ## References in Responses
101
+ The model is trained to include references in responses when instructions ask for citations. This is particularly important in RAG applications, where accurately linking responses to specific sources is critical for credibility and transparency.
102
 
103
+ ## Why Use This Template?
104
+ Retrieval accuracy can vary based on factors like dataset size and quality. This template helps ensure the model provides accurate references by associating input data with its corresponding context, preventing the model from referencing irrelevant sources.
 
 
 
105
 
106
+ ## Advanced Parameters
107
  For those looking to fine-tune the model further, the following parameters can be adjusted:
108
+ - **Temperature**: Controls the creativity of the model’s responses. Higher values make the model more creative; lower values make it more coherent.
 
109
  - **Context Length (num_ctx)**: Defines the maximum length of context that the model can handle.
110
+ - **GPU and Thread Parameters**: Configure based on your hardware setup.
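
As a concrete illustration, these parameters can be collected into a generation-options payload. The option names below (`temperature`, `num_ctx`, `num_gpu`, `num_thread`) follow the Ollama-style conventions common for GGUF models like this one; this is a sketch under that assumption, so verify the names against your own runtime's documentation:

```python
# Sketch of a generation-options payload for an Ollama-style GGUF runtime.
# The key names are an assumption based on Ollama's documented options;
# other runtimes may spell them differently.

options = {
    "temperature": 0.3,   # lower = more coherent, higher = more creative
    "num_ctx": 4096,      # maximum context length the model attends to
    "num_gpu": 1,         # GPU allocation, per your hardware
    "num_thread": 8,      # CPU threads for inference
}

# A payload like this would accompany the prompt in a request, e.g.:
# requests.post("http://localhost:11434/api/generate",
#               json={"model": "your-model-name", "prompt": prompt,
#                     "options": options})
```

Lower temperatures are generally preferable for context-obedient use, since the goal is adherence to the provided input rather than creative generation.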
 
 
+ ## Model File Included
  The following model file is included in this project:
  ```
  FROM ./ContextObedient-Tri-MoE.gguf
  ```
+ This model is pre-configured to enhance context adherence. You can adjust parameters like `temperature`, `num_ctx`, and stop tokens according to your needs.

+ ## Feedback
+ Please try other prompt templates and provide feedback. The model has been exposed to various prompts during training and can adapt to different formats.

  ---