phoebeklett committed
Commit 1d73c7c
Parent(s): ca22bee

Create README.md

README.md ADDED
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# Model Card for Extended-Mind-MPT-7b

<!-- Provide a quick summary of what the model is/does. -->

Extended Mind MPT-7b, as described in [Supersizing Transformers](https://blog.normalcomputing.ai/posts/2023-09-12-supersizing-transformers/supersizing-transformers.html).

### Model Description

<!-- Provide a longer summary of what this model is. -->

This model implements active externalism for MosaicML's MPT-7b model. The model weights have not been edited; the original architecture and code are by MosaicML.

For more details on active externalism, check out our [blog](https://blog.normalcomputing.ai/posts/2023-09-12-supersizing-transformers/supersizing-transformers.html)!
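
The sketch below shows one way this kind of model is typically loaded and given external memories. It is illustrative only: loading with `trust_remote_code=True` is standard `transformers` usage, but the repository id `normalcomputing/extended-mind-mpt-7b` and the `model.memories` attribute are assumptions here, not a documented interface; check this repository's remote code for the exact parameter names.

```python
# Illustrative sketch only. Assumptions (not confirmed by this card): the
# repository id "normalcomputing/extended-mind-mpt-7b" and the `memories`
# attribute exposed by the model's remote code.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "normalcomputing/extended-mind-mpt-7b"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    trust_remote_code=True,  # pulls in the active-externalism model code
)

# Tokenize some reference text for the model to treat as external memories.
memory_text = "Some reference passage the model should be able to recall."
memory_ids = tokenizer(memory_text, return_tensors="pt").input_ids

# Attaching memories this way is an assumed interface; the remote code
# defines the actual mechanism for passing memories to the model.
model.memories = memory_ids

prompt = "What does the reference passage say?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since active externalism leaves the weights untouched, the external memories can in principle be swapped between prompts without reloading the model.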

- **Developed by:** [Normal Computing](https://huggingface.co/normalcomputing), adapted from [MosaicML](https://huggingface.co/mosaicml)
- **License:** Apache 2.0

## Limitations

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

This model is part of ongoing research at Normal Computing.