jasonfang3900 committed
Commit 233c791
Parent: d1d5733

Upload configuration_teleflm.py with huggingface_hub
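
The commit message indicates the file was pushed with the `huggingface_hub` client. A minimal sketch of the kind of call that produces such a commit, assuming a repo id of `CofeAI/Tele-FLM` (not stated on this page) and write access via a stored token:

```python
from huggingface_hub import upload_file

upload_file(
    path_or_fileobj="configuration_teleflm.py",  # local file to push
    path_in_repo="configuration_teleflm.py",     # destination path in the repo
    repo_id="CofeAI/Tele-FLM",                   # assumed repo id
    commit_message="Upload configuration_teleflm.py with huggingface_hub",
)
```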

Files changed (1):
  1. configuration_teleflm.py +196 -0
configuration_teleflm.py ADDED
# coding=utf-8
# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Tele-FLM model configuration"""

from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging


logger = logging.get_logger(__name__)

TeleFLM_PRETRAINED_CONFIG_ARCHIVE_MAP = {}


class TeleFLMConfig(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`TeleFLM`] model. It is used to instantiate a
    TeleFLM model according to the specified arguments, defining the model architecture.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.


    Args:
        vocab_size (`int`, *optional*, defaults to 32000):
            Vocabulary size of the TeleFLM model. Defines the number of different tokens that can be represented by
            the `input_ids` passed when calling [`TeleFLM`].
        hidden_size (`int`, *optional*, defaults to 4096):
            Dimension of the hidden representations.
        intermediate_size (`int`, *optional*, defaults to 11008):
            Dimension of the MLP representations.
        num_hidden_layers (`int`, *optional*, defaults to 32):
            Number of hidden layers in the Transformer decoder.
        num_attention_heads (`int`, *optional*, defaults to 32):
            Number of attention heads for each attention layer in the Transformer decoder.
        num_key_value_heads (`int`, *optional*):
            This is the number of key_value heads that should be used to implement Grouped Query Attention. If
            `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
            `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
            converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be
            constructed by mean-pooling all the original heads within that group. For more details, check out [this
            paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
            `num_attention_heads`.
        hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
            The non-linear activation function (function or string) in the decoder.
        max_position_embeddings (`int`, *optional*, defaults to 2048):
            The maximum sequence length that this model might ever be used with. TeleFLM supports up to 4096 tokens.
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        rms_norm_eps (`float`, *optional*, defaults to 1e-06):
            The epsilon used by the rms normalization layers.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether or not the model should return the last key/values attentions (not used by all models). Only
            relevant if `config.is_decoder=True`.
        pad_token_id (`int`, *optional*):
            Padding token id.
        bos_token_id (`int`, *optional*, defaults to 1):
            Beginning of stream token id.
        eos_token_id (`int`, *optional*, defaults to 2):
            End of stream token id.
        pretraining_tp (`int`, *optional*, defaults to 1):
            Experimental feature. Tensor parallelism rank used during pretraining. Please refer to [this
            document](https://huggingface.co/docs/transformers/main/perf_train_gpu_many#tensor-parallelism) to
            understand more about it. This value is necessary to ensure exact reproducibility of the pretraining
            results. Please refer to [this issue](https://github.com/pytorch/pytorch/issues/76232).
        tie_word_embeddings (`bool`, *optional*, defaults to `False`):
            Whether to tie weight embeddings.
        rope_theta (`float`, *optional*, defaults to 10000.0):
            The base period of the RoPE embeddings.
        rope_scaling (`Dict`, *optional*):
            Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
            strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format
            is `{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
            `max_position_embeddings` to the expected new maximum. See the following thread for more information on
            how these scaling strategies behave:
            https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is
            an experimental feature, subject to breaking API changes in future versions.
        attention_bias (`bool`, *optional*, defaults to `False`):
            Whether to use a bias in the query, key, value and output projection layers during self-attention.
        attention_dropout (`float`, *optional*, defaults to 0.0):
            The dropout ratio for the attention probabilities.
        use_mup (`bool`, *optional*, defaults to `False`):
            Whether to use μP (Maximal Update Parametrization).
        mup_scale_factor (`float`, *optional*, defaults to 1.0):
            Scale factor used for μP width scaling.
        output_mult (`float`, *optional*, defaults to 1.0):
            Output (logit) multiplier for μP.
        input_mult (`float`, *optional*, defaults to 1.0):
            Input (embedding) multiplier for μP.

    ```python
    >>> from transformers import TeleFLMModel, TeleFLMConfig

    >>> # Initializing a TeleFLM configuration
    >>> configuration = TeleFLMConfig()

    >>> # Initializing a model from the TeleFLM configuration
    >>> model = TeleFLMModel(configuration)

    >>> # Accessing the model configuration
    >>> configuration = model.config
    ```"""

    model_type = "TeleFLM"
    keys_to_ignore_at_inference = ["past_key_values"]

    def __init__(
        self,
        vocab_size=32000,
        hidden_size=4096,
        intermediate_size=11008,
        num_hidden_layers=32,
        num_attention_heads=32,
        num_key_value_heads=None,
        hidden_act="silu",
        max_position_embeddings=2048,
        initializer_range=0.02,
        rms_norm_eps=1e-6,
        use_cache=True,
        pad_token_id=None,
        bos_token_id=1,
        eos_token_id=2,
        pretraining_tp=1,
        tie_word_embeddings=False,
        rope_theta=10000.0,
        rope_scaling=None,
        attention_bias=False,
        attention_dropout=0.0,
        use_mup=False,
        mup_scale_factor=1.0,
        output_mult=1.0,
        input_mult=1.0,
        **kwargs,
    ):
        self.vocab_size = vocab_size
        self.max_position_embeddings = max_position_embeddings
        self.hidden_size = hidden_size
        self.intermediate_size = intermediate_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads

        # for backward compatibility
        if num_key_value_heads is None:
            num_key_value_heads = num_attention_heads

        self.num_key_value_heads = num_key_value_heads
        self.hidden_act = hidden_act
        self.initializer_range = initializer_range
        self.rms_norm_eps = rms_norm_eps
        self.pretraining_tp = pretraining_tp
        self.use_cache = use_cache
        self.rope_theta = rope_theta
        self.rope_scaling = rope_scaling
        self._rope_scaling_validation()
        self.attention_bias = attention_bias
        self.attention_dropout = attention_dropout
        self.use_mup = use_mup
        self.mup_scale_factor = mup_scale_factor
        self.output_mult = output_mult
        self.input_mult = input_mult

        super().__init__(
            pad_token_id=pad_token_id,
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            tie_word_embeddings=tie_word_embeddings,
            **kwargs,
        )

    def _rope_scaling_validation(self):
        """
        Validate the `rope_scaling` configuration.
        """
        if self.rope_scaling is None:
            return

        if not isinstance(self.rope_scaling, dict) or len(self.rope_scaling) != 2:
            raise ValueError(
                f"`rope_scaling` must be a dictionary with two fields, `type` and `factor`, got {self.rope_scaling}"
            )
        rope_scaling_type = self.rope_scaling.get("type", None)
        rope_scaling_factor = self.rope_scaling.get("factor", None)
        if rope_scaling_type is None or rope_scaling_type not in ["linear", "dynamic"]:
            raise ValueError(
                f"`rope_scaling`'s type field must be one of ['linear', 'dynamic'], got {rope_scaling_type}"
            )
        if rope_scaling_factor is None or not isinstance(rope_scaling_factor, float) or rope_scaling_factor <= 1.0:
            raise ValueError(f"`rope_scaling`'s factor field must be a float > 1, got {rope_scaling_factor}")
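
The `num_key_value_heads` docstring above maps head counts onto attention variants. A minimal sketch of the three cases, assuming the file is importable locally as `configuration_teleflm.py`:

```python
from configuration_teleflm import TeleFLMConfig

# Default: num_key_value_heads falls back to num_attention_heads -> Multi Head Attention.
mha = TeleFLMConfig()
assert mha.num_key_value_heads == mha.num_attention_heads

# One key/value head shared by every query head -> Multi Query Attention.
mqa = TeleFLMConfig(num_attention_heads=32, num_key_value_heads=1)

# Anything in between groups query heads over shared key/value heads -> Grouped Query Attention.
gqa = TeleFLMConfig(num_attention_heads=32, num_key_value_heads=8)
```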
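`_rope_scaling_validation` runs inside `__init__`, so a malformed `rope_scaling` dictionary fails at construction time rather than at inference. A short sketch of what passes and what raises, under the same local-import assumption:

```python
from configuration_teleflm import TeleFLMConfig

# Valid: exactly two fields, a supported type, and a float factor strictly greater than 1.
ok = TeleFLMConfig(rope_scaling={"type": "linear", "factor": 2.0})

# Invalid: a factor <= 1.0 (or an int such as 2, which fails the float check) raises ValueError.
try:
    TeleFLMConfig(rope_scaling={"type": "dynamic", "factor": 0.5})
except ValueError as err:
    print(err)  # `rope_scaling`'s factor field must be a float > 1, got 0.5
```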
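Since `TeleFLM` is not a registered `model_type` inside the `transformers` library, this configuration is meant to be fetched and executed from the Hub as custom code, which requires opting in with `trust_remote_code=True`. A hedged sketch; the repo id `CofeAI/Tele-FLM` is an assumption based on the model name:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "CofeAI/Tele-FLM",       # assumed repo id; substitute the actual repository
    trust_remote_code=True,  # required because TeleFLMConfig lives in the repo, not in transformers
)
print(type(config).__name__)  # TeleFLMConfig
```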