Matt committed on
Commit
417e15c
1 Parent(s): 4574d54

Upload model

README.md ADDED
@@ -0,0 +1,84 @@
+ ---
+ license: apache-2.0
+ tags:
+ - stripedhyena
+ - long context
+ - deep signal processing
+ - hybrid
+ - biology
+ - genomics
+ ---
+
+
+ ## Evo-1 (Phase 1)
+
+ <p align="center">
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/62a1306bbe7fa896d2c8de44/JoEHcvLTUlHoMcgh3mmAz.png" width="70%" />
+ </p>
+
+
+ ### About
+
+ Evo is a biological foundation model capable of long-context modeling and design.
+
+ Evo uses the [StripedHyena architecture](https://github.com/togethercomputer/stripedhyena) to enable modeling of sequences at single-nucleotide, byte-level resolution with near-linear scaling of compute and memory relative to context length.
+ Evo has 7 billion parameters and is trained on OpenGenome, a prokaryotic whole-genome dataset containing ~300 billion tokens.
+
+ Technical details about Evo can be found in our preprint and our accompanying blog posts. Evo was collaboratively developed by the [Arc Institute](https://arcinstitute.org/) and TogetherAI.
+
+ As part of our commitment to open science, we release **weights of 15 intermediate pretraining checkpoints** for phase 1 and phase 2 of pretraining. The checkpoints are available as branches of the corresponding HuggingFace repository.
+
+ **Evo-1 (Phase 1)** is our first model in the Evo family, trained at a context length of 8k.
+
+ | Checkpoint Name | Description |
+ |----------------------------------------|-------------|
+ | `evo-1-8k-base` | A model pretrained with 8,192 context. We use this model as the base model for molecular-scale finetuning tasks. |
+ | `evo-1-131k-base` | A model pretrained with 131,072 context using `evo-1-8k-base` as the initialization. We use this model to reason about and generate sequences at the genome scale. |
+
+ ### Model Architecture
+
+ StripedHyena is a hybrid deep signal processing architecture composed of multi-head attention and gated convolutions arranged in [Hyena](https://arxiv.org/abs/2302.10866) blocks, improving over decoder-only Transformers.
+
+ StripedHyena is designed to leverage the specialization of each of its layer classes, with Hyena layers implementing the bulk of the computation required for sequence processing and attention layers supplementing the ability to perform targeted pattern recall.
+
+ Some highlights of the architecture:
+ - **Efficient autoregressive generation** via a recurrent mode (generation of >500k tokens on a single 80GB GPU)
+ - **Significantly faster training and finetuning** at long context (>3x at 131k)
+ - **Improved scaling laws over state-of-the-art architectures** (e.g., Transformer++) on both natural language and biological sequences
+ - **Robust to training beyond the compute-optimal frontier**, e.g., training well beyond Chinchilla-optimal token counts (see the preprint for details; more to come)
+
+
+ ### How to use Evo
+
+ Example usage is provided in the [standalone repo](https://github.com/evo-design/evo).
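+
+ Below is a minimal, illustrative sketch of loading this checkpoint through the Hugging Face `transformers` Auto classes, following the `auto_map` entries in `config.json`. The repository id and the exact tokenizer interface are assumptions here; the standalone repo remains the authoritative reference.
+
+ ```python
+ # Minimal sketch (not the official example): load Evo via the transformers
+ # Auto classes with remote code enabled. Adjust the repo id as needed.
+ import torch
+ from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "togethercomputer/evo-1-8k-base"  # assumed repo id for this checkpoint
+
+ config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
+ tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     config=config,
+     torch_dtype=torch.bfloat16,  # matches "torch_dtype" in config.json
+     trust_remote_code=True,
+ )
+
+ # Byte-level tokenization: one token per nucleotide character.
+ input_ids = tokenizer("ACGTACGT", return_tensors="pt")["input_ids"]
+ with torch.no_grad():
+     logits = model(input_ids).logits  # next-token logits over the byte vocabulary
+ print(logits.shape)
+ ```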
+
+
+ #### Parametrization for Inference and Finetuning
+
+ One of the advantages of deep signal processing models is their flexibility. Different parametrizations of convolutions can be used depending on the memory, expressivity and causality requirements of pretraining, finetuning or inference workloads.
+
+ The main classes are:
+ - Modal canonical: unconstrained poles ([reference](https://arxiv.org/pdf/2203.14343.pdf), [reference](https://arxiv.org/abs/2310.18780)), or constrained poles ([reference](https://arxiv.org/abs/2206.11893), [reference](https://arxiv.org/pdf/2303.06349.pdf)).
+ - Companion canonical / rational: TBA.
+ - Hypernetworks: hypernetwork ([reference](https://arxiv.org/abs/2102.02611)), modulated hypernetwork ([reference](https://arxiv.org/abs/2302.10866)).
+ - Explicit: modulated explicit ([reference](https://arxiv.org/pdf/2210.09298.pdf)).
+
+ StripedHyena is a mixed precision model. Make sure to keep your `poles` and `residues` in `float32` precision, especially for longer prompts or training.
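+
+ As a hedged illustration of that note, one way to upcast just those parameters after loading (parameter names taken from `model.safetensors.index.json`; treat this as a sketch rather than the supported API):
+
+ ```python
+ # Sketch: keep the Hyena filter poles/residues in float32 while the rest of
+ # the model stays in bfloat16. Names like "backbone.blocks.0.filter.poles"
+ # come from the weight map shipped with this checkpoint.
+ import torch
+
+ def upcast_filter_params(model: torch.nn.Module) -> None:
+     for name, param in model.named_parameters():
+         if name.endswith(("filter.poles", "filter.residues")):
+             param.data = param.data.to(torch.float32)
+
+ upcast_filter_params(model)  # `model` from the loading sketch above
+ ```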
+
+
+
+ ### Disclaimer
+
+ To use StripedHyena outside of the playground, you will need to install custom kernels. Please follow the instructions from the [standalone repository](https://github.com/togethercomputer/stripedhyena).
+
+ ## Cite
+
+ ```
+ @article{nguyen2024sequence,
+ author = {Eric Nguyen and Michael Poli and Matthew G. Durrant and Armin W. Thomas and Brian Kang and Jeremy Sullivan and Madelena Y. Ng and Ashley Lewis and Aman Patel and Aaron Lou and Stefano Ermon and Stephen A. Baccus and Tina Hernandez-Boussard and Christopher Ré and Patrick D. Hsu and Brian L. Hie},
+ journal = {Arc Institute manuscripts},
+ title = {Sequence modeling and design from molecular to genome scale with Evo},
+ url = {https://arcinstitute.org/manuscripts/Evo},
+ year = {2024},
+ }
+ ```
config.json ADDED
@@ -0,0 +1,87 @@
+ {
+ "_commit_hash": "1cc23830f62c268082475776fb449af8428eb703",
+ "_name_or_path": "togethercomputer/evo-1-131k-base",
+ "architectures": [
+ "StripedHyenaModelForCausalLM"
+ ],
+ "attn_layer_idxs": [
+ 8,
+ 16,
+ 24
+ ],
+ "auto_map": {
+ "AutoConfig": "togethercomputer/evo-1-131k-base--configuration_hyena.StripedHyenaConfig",
+ "AutoModelForCausalLM": "togethercomputer/evo-1-131k-base--modeling_hyena.StripedHyenaModelForCausalLM",
+ "AutoTokenizer": "togethercomputer/evo-1-131k-base--tokenizer.ByteTokenizer"
+ },
+ "column_split": false,
+ "column_split_hyena": true,
+ "eps": 1e-06,
+ "final_norm": true,
+ "hidden_size": 4096,
+ "hyena_filter_groups": 1,
+ "hyena_layer_idxs": [
+ 0,
+ 1,
+ 2,
+ 3,
+ 4,
+ 5,
+ 6,
+ 7,
+ 9,
+ 10,
+ 11,
+ 12,
+ 13,
+ 14,
+ 15,
+ 17,
+ 18,
+ 19,
+ 20,
+ 21,
+ 22,
+ 23,
+ 25,
+ 26,
+ 27,
+ 28,
+ 29,
+ 30,
+ 31
+ ],
+ "inference_mode": false,
+ "inner_mlp_size": 10928,
+ "log_intermediate_values": false,
+ "make_vocab_size_divisible_by": 8,
+ "max_seqlen": 8192,
+ "mha_out_proj_bias": true,
+ "mlp_activation": "gelu",
+ "model_parallel_size": 1,
+ "model_type": "stripedhyena",
+ "num_attention_heads": 32,
+ "num_filters": 4096,
+ "num_layers": 32,
+ "pipe_parallel_size": 1,
+ "prefill_style": "fft",
+ "proj_groups": 1,
+ "qkv_proj_bias": true,
+ "rotary_emb_base": 10000,
+ "rotary_emb_scaling_factor": 1,
+ "short_filter_bias": true,
+ "short_filter_length": 3,
+ "smeared_gqa": false,
+ "split_k0": true,
+ "state_size": 8,
+ "tie_embeddings": true,
+ "torch_dtype": "bfloat16",
+ "transformers_version": null,
+ "use_cache": true,
+ "use_flash_attn": true,
+ "use_flash_depthwise": false,
+ "use_flash_rmsnorm": false,
+ "use_flashfft": false,
+ "use_interpolated_rotary_pos_emb": false,
+ "vocab_size": 512
+ }
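
The config encodes the striped layout directly: `attn_layer_idxs` marks the three attention blocks and `hyena_layer_idxs` the remaining 29 Hyena blocks out of `num_layers: 32`. A small sanity-check sketch in plain Python (the local file path is an assumption):

```python
# Sketch: check that the attention and Hyena layer indices in config.json
# partition all 32 layers of the striped architecture.
import json

with open("config.json") as f:  # assumed local path
    cfg = json.load(f)

attn = set(cfg["attn_layer_idxs"])    # {8, 16, 24}
hyena = set(cfg["hyena_layer_idxs"])  # the other 29 blocks
assert attn.isdisjoint(hyena)
assert attn | hyena == set(range(cfg["num_layers"]))
print(f"{len(attn)} attention blocks, {len(hyena)} Hyena blocks")
```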
generation_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "_from_model_config": true,
+ "transformers_version": "4.36.2"
+ }
model-00001-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4a08792f22697584c4b0c6cd1729902bc993ad7396b76f5caf6d7cc2b32ab882
+ size 4980059464
model-00002-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bf18e9010465bff1def520ef5f6124dffde1b36eb2a3359fb6a995afbae284c0
+ size 4929849248
model-00003-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c1d054d7f3ef4286da9e51045016de044738f4e66da332576f6d292c7965ecc4
+ size 3003304856
model.safetensors.index.json ADDED
@@ -0,0 +1,445 @@
+ {
+ "metadata": {
+ "total_size": 12913164672
+ },
+ "weight_map": {
+ "backbone.blocks.0.filter.D": "model-00001-of-00003.safetensors",
+ "backbone.blocks.0.filter.poles": "model-00001-of-00003.safetensors",
+ "backbone.blocks.0.filter.residues": "model-00001-of-00003.safetensors",
+ "backbone.blocks.0.filter.short_filter_bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.0.filter.short_filter_weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.0.mlp.l1.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.0.mlp.l2.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.0.mlp.l3.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.0.out_filter_dense.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.0.out_filter_dense.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.0.post_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.0.pre_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.0.projections.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.0.projections.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.1.filter.D": "model-00001-of-00003.safetensors",
+ "backbone.blocks.1.filter.poles": "model-00001-of-00003.safetensors",
+ "backbone.blocks.1.filter.residues": "model-00001-of-00003.safetensors",
+ "backbone.blocks.1.filter.short_filter_bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.1.filter.short_filter_weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.1.mlp.l1.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.1.mlp.l2.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.1.mlp.l3.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.1.out_filter_dense.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.1.out_filter_dense.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.1.post_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.1.pre_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.1.projections.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.1.projections.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.10.filter.D": "model-00001-of-00003.safetensors",
+ "backbone.blocks.10.filter.poles": "model-00001-of-00003.safetensors",
+ "backbone.blocks.10.filter.residues": "model-00001-of-00003.safetensors",
+ "backbone.blocks.10.filter.short_filter_bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.10.filter.short_filter_weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.10.mlp.l1.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.10.mlp.l2.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.10.mlp.l3.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.10.out_filter_dense.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.10.out_filter_dense.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.10.post_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.10.pre_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.10.projections.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.10.projections.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.11.filter.D": "model-00001-of-00003.safetensors",
+ "backbone.blocks.11.filter.poles": "model-00001-of-00003.safetensors",
+ "backbone.blocks.11.filter.residues": "model-00001-of-00003.safetensors",
+ "backbone.blocks.11.filter.short_filter_bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.11.filter.short_filter_weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.11.mlp.l1.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.11.mlp.l2.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.11.mlp.l3.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.11.out_filter_dense.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.11.out_filter_dense.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.11.post_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.11.pre_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.11.projections.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.11.projections.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.12.filter.D": "model-00001-of-00003.safetensors",
+ "backbone.blocks.12.filter.poles": "model-00001-of-00003.safetensors",
+ "backbone.blocks.12.filter.residues": "model-00001-of-00003.safetensors",
+ "backbone.blocks.12.filter.short_filter_bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.12.filter.short_filter_weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.12.mlp.l1.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.12.mlp.l2.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.12.mlp.l3.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.12.out_filter_dense.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.12.out_filter_dense.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.12.post_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.12.pre_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.12.projections.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.12.projections.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.13.filter.D": "model-00002-of-00003.safetensors",
+ "backbone.blocks.13.filter.poles": "model-00002-of-00003.safetensors",
+ "backbone.blocks.13.filter.residues": "model-00002-of-00003.safetensors",
+ "backbone.blocks.13.filter.short_filter_bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.13.filter.short_filter_weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.13.mlp.l1.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.13.mlp.l2.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.13.mlp.l3.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.13.out_filter_dense.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.13.out_filter_dense.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.13.post_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.13.pre_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.13.projections.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.13.projections.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.14.filter.D": "model-00002-of-00003.safetensors",
+ "backbone.blocks.14.filter.poles": "model-00002-of-00003.safetensors",
+ "backbone.blocks.14.filter.residues": "model-00002-of-00003.safetensors",
+ "backbone.blocks.14.filter.short_filter_bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.14.filter.short_filter_weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.14.mlp.l1.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.14.mlp.l2.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.14.mlp.l3.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.14.out_filter_dense.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.14.out_filter_dense.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.14.post_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.14.pre_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.14.projections.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.14.projections.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.15.filter.D": "model-00002-of-00003.safetensors",
+ "backbone.blocks.15.filter.poles": "model-00002-of-00003.safetensors",
+ "backbone.blocks.15.filter.residues": "model-00002-of-00003.safetensors",
+ "backbone.blocks.15.filter.short_filter_bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.15.filter.short_filter_weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.15.mlp.l1.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.15.mlp.l2.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.15.mlp.l3.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.15.out_filter_dense.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.15.out_filter_dense.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.15.post_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.15.pre_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.15.projections.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.15.projections.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.16.inner_mha_cls.Wqkv.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.16.inner_mha_cls.Wqkv.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.16.inner_mha_cls.out_proj.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.16.inner_mha_cls.out_proj.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.16.inner_mha_cls.rotary_emb.inv_freq": "model-00002-of-00003.safetensors",
+ "backbone.blocks.16.mlp.l1.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.16.mlp.l2.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.16.mlp.l3.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.16.post_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.16.pre_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.17.filter.D": "model-00002-of-00003.safetensors",
+ "backbone.blocks.17.filter.poles": "model-00002-of-00003.safetensors",
+ "backbone.blocks.17.filter.residues": "model-00002-of-00003.safetensors",
+ "backbone.blocks.17.filter.short_filter_bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.17.filter.short_filter_weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.17.mlp.l1.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.17.mlp.l2.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.17.mlp.l3.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.17.out_filter_dense.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.17.out_filter_dense.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.17.post_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.17.pre_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.17.projections.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.17.projections.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.18.filter.D": "model-00002-of-00003.safetensors",
+ "backbone.blocks.18.filter.poles": "model-00002-of-00003.safetensors",
+ "backbone.blocks.18.filter.residues": "model-00002-of-00003.safetensors",
+ "backbone.blocks.18.filter.short_filter_bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.18.filter.short_filter_weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.18.mlp.l1.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.18.mlp.l2.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.18.mlp.l3.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.18.out_filter_dense.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.18.out_filter_dense.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.18.post_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.18.pre_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.18.projections.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.18.projections.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.19.filter.D": "model-00002-of-00003.safetensors",
+ "backbone.blocks.19.filter.poles": "model-00002-of-00003.safetensors",
+ "backbone.blocks.19.filter.residues": "model-00002-of-00003.safetensors",
+ "backbone.blocks.19.filter.short_filter_bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.19.filter.short_filter_weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.19.mlp.l1.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.19.mlp.l2.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.19.mlp.l3.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.19.out_filter_dense.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.19.out_filter_dense.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.19.post_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.19.pre_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.19.projections.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.19.projections.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.2.filter.D": "model-00001-of-00003.safetensors",
+ "backbone.blocks.2.filter.poles": "model-00001-of-00003.safetensors",
+ "backbone.blocks.2.filter.residues": "model-00001-of-00003.safetensors",
+ "backbone.blocks.2.filter.short_filter_bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.2.filter.short_filter_weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.2.mlp.l1.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.2.mlp.l2.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.2.mlp.l3.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.2.out_filter_dense.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.2.out_filter_dense.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.2.post_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.2.pre_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.2.projections.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.2.projections.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.20.filter.D": "model-00002-of-00003.safetensors",
+ "backbone.blocks.20.filter.poles": "model-00002-of-00003.safetensors",
+ "backbone.blocks.20.filter.residues": "model-00002-of-00003.safetensors",
+ "backbone.blocks.20.filter.short_filter_bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.20.filter.short_filter_weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.20.mlp.l1.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.20.mlp.l2.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.20.mlp.l3.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.20.out_filter_dense.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.20.out_filter_dense.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.20.post_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.20.pre_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.20.projections.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.20.projections.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.21.filter.D": "model-00002-of-00003.safetensors",
+ "backbone.blocks.21.filter.poles": "model-00002-of-00003.safetensors",
+ "backbone.blocks.21.filter.residues": "model-00002-of-00003.safetensors",
+ "backbone.blocks.21.filter.short_filter_bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.21.filter.short_filter_weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.21.mlp.l1.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.21.mlp.l2.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.21.mlp.l3.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.21.out_filter_dense.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.21.out_filter_dense.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.21.post_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.21.pre_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.21.projections.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.21.projections.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.22.filter.D": "model-00002-of-00003.safetensors",
+ "backbone.blocks.22.filter.poles": "model-00002-of-00003.safetensors",
+ "backbone.blocks.22.filter.residues": "model-00002-of-00003.safetensors",
+ "backbone.blocks.22.filter.short_filter_bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.22.filter.short_filter_weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.22.mlp.l1.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.22.mlp.l2.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.22.mlp.l3.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.22.out_filter_dense.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.22.out_filter_dense.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.22.post_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.22.pre_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.22.projections.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.22.projections.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.23.filter.D": "model-00002-of-00003.safetensors",
+ "backbone.blocks.23.filter.poles": "model-00002-of-00003.safetensors",
+ "backbone.blocks.23.filter.residues": "model-00002-of-00003.safetensors",
+ "backbone.blocks.23.filter.short_filter_bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.23.filter.short_filter_weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.23.mlp.l1.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.23.mlp.l2.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.23.mlp.l3.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.23.out_filter_dense.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.23.out_filter_dense.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.23.post_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.23.pre_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.23.projections.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.23.projections.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.24.inner_mha_cls.Wqkv.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.24.inner_mha_cls.Wqkv.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.24.inner_mha_cls.out_proj.bias": "model-00002-of-00003.safetensors",
+ "backbone.blocks.24.inner_mha_cls.out_proj.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.24.inner_mha_cls.rotary_emb.inv_freq": "model-00002-of-00003.safetensors",
+ "backbone.blocks.24.mlp.l1.weight": "model-00002-of-00003.safetensors",
+ "backbone.blocks.24.mlp.l2.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.24.mlp.l3.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.24.post_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.24.pre_norm.scale": "model-00002-of-00003.safetensors",
+ "backbone.blocks.25.filter.D": "model-00003-of-00003.safetensors",
+ "backbone.blocks.25.filter.poles": "model-00003-of-00003.safetensors",
+ "backbone.blocks.25.filter.residues": "model-00003-of-00003.safetensors",
+ "backbone.blocks.25.filter.short_filter_bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.25.filter.short_filter_weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.25.mlp.l1.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.25.mlp.l2.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.25.mlp.l3.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.25.out_filter_dense.bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.25.out_filter_dense.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.25.post_norm.scale": "model-00003-of-00003.safetensors",
+ "backbone.blocks.25.pre_norm.scale": "model-00003-of-00003.safetensors",
+ "backbone.blocks.25.projections.bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.25.projections.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.26.filter.D": "model-00003-of-00003.safetensors",
+ "backbone.blocks.26.filter.poles": "model-00003-of-00003.safetensors",
+ "backbone.blocks.26.filter.residues": "model-00003-of-00003.safetensors",
+ "backbone.blocks.26.filter.short_filter_bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.26.filter.short_filter_weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.26.mlp.l1.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.26.mlp.l2.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.26.mlp.l3.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.26.out_filter_dense.bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.26.out_filter_dense.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.26.post_norm.scale": "model-00003-of-00003.safetensors",
+ "backbone.blocks.26.pre_norm.scale": "model-00003-of-00003.safetensors",
+ "backbone.blocks.26.projections.bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.26.projections.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.27.filter.D": "model-00003-of-00003.safetensors",
+ "backbone.blocks.27.filter.poles": "model-00003-of-00003.safetensors",
+ "backbone.blocks.27.filter.residues": "model-00003-of-00003.safetensors",
+ "backbone.blocks.27.filter.short_filter_bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.27.filter.short_filter_weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.27.mlp.l1.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.27.mlp.l2.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.27.mlp.l3.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.27.out_filter_dense.bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.27.out_filter_dense.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.27.post_norm.scale": "model-00003-of-00003.safetensors",
+ "backbone.blocks.27.pre_norm.scale": "model-00003-of-00003.safetensors",
+ "backbone.blocks.27.projections.bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.27.projections.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.28.filter.D": "model-00003-of-00003.safetensors",
+ "backbone.blocks.28.filter.poles": "model-00003-of-00003.safetensors",
+ "backbone.blocks.28.filter.residues": "model-00003-of-00003.safetensors",
+ "backbone.blocks.28.filter.short_filter_bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.28.filter.short_filter_weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.28.mlp.l1.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.28.mlp.l2.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.28.mlp.l3.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.28.out_filter_dense.bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.28.out_filter_dense.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.28.post_norm.scale": "model-00003-of-00003.safetensors",
+ "backbone.blocks.28.pre_norm.scale": "model-00003-of-00003.safetensors",
+ "backbone.blocks.28.projections.bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.28.projections.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.29.filter.D": "model-00003-of-00003.safetensors",
+ "backbone.blocks.29.filter.poles": "model-00003-of-00003.safetensors",
+ "backbone.blocks.29.filter.residues": "model-00003-of-00003.safetensors",
+ "backbone.blocks.29.filter.short_filter_bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.29.filter.short_filter_weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.29.mlp.l1.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.29.mlp.l2.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.29.mlp.l3.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.29.out_filter_dense.bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.29.out_filter_dense.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.29.post_norm.scale": "model-00003-of-00003.safetensors",
+ "backbone.blocks.29.pre_norm.scale": "model-00003-of-00003.safetensors",
+ "backbone.blocks.29.projections.bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.29.projections.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.3.filter.D": "model-00001-of-00003.safetensors",
+ "backbone.blocks.3.filter.poles": "model-00001-of-00003.safetensors",
+ "backbone.blocks.3.filter.residues": "model-00001-of-00003.safetensors",
+ "backbone.blocks.3.filter.short_filter_bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.3.filter.short_filter_weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.3.mlp.l1.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.3.mlp.l2.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.3.mlp.l3.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.3.out_filter_dense.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.3.out_filter_dense.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.3.post_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.3.pre_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.3.projections.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.3.projections.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.30.filter.D": "model-00003-of-00003.safetensors",
+ "backbone.blocks.30.filter.poles": "model-00003-of-00003.safetensors",
+ "backbone.blocks.30.filter.residues": "model-00003-of-00003.safetensors",
+ "backbone.blocks.30.filter.short_filter_bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.30.filter.short_filter_weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.30.mlp.l1.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.30.mlp.l2.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.30.mlp.l3.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.30.out_filter_dense.bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.30.out_filter_dense.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.30.post_norm.scale": "model-00003-of-00003.safetensors",
+ "backbone.blocks.30.pre_norm.scale": "model-00003-of-00003.safetensors",
+ "backbone.blocks.30.projections.bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.30.projections.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.31.filter.D": "model-00003-of-00003.safetensors",
+ "backbone.blocks.31.filter.poles": "model-00003-of-00003.safetensors",
+ "backbone.blocks.31.filter.residues": "model-00003-of-00003.safetensors",
+ "backbone.blocks.31.filter.short_filter_bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.31.filter.short_filter_weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.31.mlp.l1.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.31.mlp.l2.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.31.mlp.l3.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.31.out_filter_dense.bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.31.out_filter_dense.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.31.post_norm.scale": "model-00003-of-00003.safetensors",
+ "backbone.blocks.31.pre_norm.scale": "model-00003-of-00003.safetensors",
+ "backbone.blocks.31.projections.bias": "model-00003-of-00003.safetensors",
+ "backbone.blocks.31.projections.weight": "model-00003-of-00003.safetensors",
+ "backbone.blocks.4.filter.D": "model-00001-of-00003.safetensors",
+ "backbone.blocks.4.filter.poles": "model-00001-of-00003.safetensors",
+ "backbone.blocks.4.filter.residues": "model-00001-of-00003.safetensors",
+ "backbone.blocks.4.filter.short_filter_bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.4.filter.short_filter_weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.4.mlp.l1.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.4.mlp.l2.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.4.mlp.l3.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.4.out_filter_dense.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.4.out_filter_dense.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.4.post_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.4.pre_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.4.projections.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.4.projections.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.5.filter.D": "model-00001-of-00003.safetensors",
+ "backbone.blocks.5.filter.poles": "model-00001-of-00003.safetensors",
+ "backbone.blocks.5.filter.residues": "model-00001-of-00003.safetensors",
+ "backbone.blocks.5.filter.short_filter_bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.5.filter.short_filter_weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.5.mlp.l1.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.5.mlp.l2.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.5.mlp.l3.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.5.out_filter_dense.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.5.out_filter_dense.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.5.post_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.5.pre_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.5.projections.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.5.projections.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.6.filter.D": "model-00001-of-00003.safetensors",
+ "backbone.blocks.6.filter.poles": "model-00001-of-00003.safetensors",
+ "backbone.blocks.6.filter.residues": "model-00001-of-00003.safetensors",
+ "backbone.blocks.6.filter.short_filter_bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.6.filter.short_filter_weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.6.mlp.l1.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.6.mlp.l2.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.6.mlp.l3.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.6.out_filter_dense.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.6.out_filter_dense.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.6.post_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.6.pre_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.6.projections.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.6.projections.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.7.filter.D": "model-00001-of-00003.safetensors",
+ "backbone.blocks.7.filter.poles": "model-00001-of-00003.safetensors",
+ "backbone.blocks.7.filter.residues": "model-00001-of-00003.safetensors",
+ "backbone.blocks.7.filter.short_filter_bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.7.filter.short_filter_weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.7.mlp.l1.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.7.mlp.l2.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.7.mlp.l3.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.7.out_filter_dense.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.7.out_filter_dense.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.7.post_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.7.pre_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.7.projections.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.7.projections.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.8.inner_mha_cls.Wqkv.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.8.inner_mha_cls.Wqkv.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.8.inner_mha_cls.out_proj.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.8.inner_mha_cls.out_proj.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.8.inner_mha_cls.rotary_emb.inv_freq": "model-00001-of-00003.safetensors",
+ "backbone.blocks.8.mlp.l1.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.8.mlp.l2.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.8.mlp.l3.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.8.post_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.8.pre_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.9.filter.D": "model-00001-of-00003.safetensors",
+ "backbone.blocks.9.filter.poles": "model-00001-of-00003.safetensors",
+ "backbone.blocks.9.filter.residues": "model-00001-of-00003.safetensors",
+ "backbone.blocks.9.filter.short_filter_bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.9.filter.short_filter_weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.9.mlp.l1.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.9.mlp.l2.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.9.mlp.l3.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.9.out_filter_dense.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.9.out_filter_dense.weight": "model-00001-of-00003.safetensors",
+ "backbone.blocks.9.post_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.9.pre_norm.scale": "model-00001-of-00003.safetensors",
+ "backbone.blocks.9.projections.bias": "model-00001-of-00003.safetensors",
+ "backbone.blocks.9.projections.weight": "model-00001-of-00003.safetensors",
+ "backbone.embedding_layer.weight": "model-00001-of-00003.safetensors",
+ "backbone.norm.scale": "model-00001-of-00003.safetensors"
+ }
+ }
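
The index maps each parameter name to the safetensors shard that stores it, and `metadata.total_size` records the combined tensor size in bytes. A short sketch for summarizing the shards (plain Python; the local file path is an assumption):

```python
# Sketch: count tensors per shard using the weight map from
# model.safetensors.index.json.
import json
from collections import Counter

with open("model.safetensors.index.json") as f:  # assumed local path
    index = json.load(f)

per_shard = Counter(index["weight_map"].values())
for shard, n_tensors in sorted(per_shard.items()):
    print(f"{shard}: {n_tensors} tensors")
print("total tensor bytes:", index["metadata"]["total_size"])
```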
pytorch_model.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67d8791dd9318b2276d96e01442c626e1bc92430370d15d33c7acfb4e8fa72a7
+ size 16814399082
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {}
tokenizer_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+ "added_tokens_decoder": {},
+ "auto_map": {
+ "AutoTokenizer": [
+ "tokenizer.ByteTokenizer",
+ null
+ ]
+ },
+ "byte_level": true,
+ "clean_up_tokenization_spaces": true,
+ "model_max_length": 1000000000000000019884624838656,
+ "padding_side": "left",
+ "truncation_side": "left"
+ }
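
The tokenizer is byte-level (`byte_level: true`) with no added special tokens, which matches the single-nucleotide resolution described in the README. A tiny illustration of the idea in plain Python, independent of the actual `ByteTokenizer` implementation (which may differ in detail):

```python
# Illustration only: byte-level "tokenization" of a DNA sequence.
# Each character maps to its byte value, well inside the vocab_size of 512.
sequence = "ACGTACGT"
token_ids = list(sequence.encode("utf-8"))
print(token_ids)                  # [65, 67, 71, 84, 65, 67, 71, 84]
print(bytes(token_ids).decode())  # round-trips back to "ACGTACGT"
```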