BerenMillidge committed
Commit d2446d9 (parent: b9296e1)

Update README.md

Files changed (1): README.md (+15 -0)
README.md CHANGED
@@ -7,6 +7,8 @@ Zamba-7B-v1-phase1 is a hybrid model between Mamba, a state-space model, and tra
 
 Note: the current Huggingface implementation of Zamba performs slower than our internal implementation. We are working to fix this with the Huggingface team.
 
+Our technical report describing the training of Zamba is available [here](https://arxiv.org/abs/2405.16712).
+
 ## Quick start
 
 ### Presequities
@@ -43,6 +45,19 @@ outputs = model.generate(**input_ids, max_new_tokens=100)
 print(tokenizer.decode(outputs[0]))
 ```
 
+## Citation
+
+If you find Zamba useful in your work please cite it as:
+
+```
+@article{glorioso2024zamba,
+title={Zamba: A Compact 7B SSM Hybrid Model},
+author={Glorioso, Paolo and Anthony, Quentin and Tokpanov, Yury and Whittington, James and Pilault, Jonathan and Ibrahim, Adam and Millidge, Beren},
+journal={arXiv preprint arXiv:2405.16712},
+year={2024}
+}
+```
+
 ## Notice
 
 Zamba is a pretrained base model and therefore does not have any moderation mechanism.
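
For reference, the context lines in the second hunk (`outputs = model.generate(**input_ids, max_new_tokens=100)` and `print(tokenizer.decode(outputs[0]))`) come from the README's Quick start generation snippet. The sketch below is a rough, hedged reconstruction of that flow, assuming the standard `transformers` auto classes, the repo id `Zyphra/Zamba-7B-v1-phase1` (inferred from the model name in the README), bfloat16 weights, and a placeholder prompt; the README's own "Presequities" section governs the actual setup.

```python
# Minimal sketch of the generation flow referenced by the diff context above.
# Assumptions: standard transformers auto classes, repo id inferred from the
# model name, bfloat16 weights; follow the README's prerequisites for the
# authoritative setup.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Zyphra/Zamba-7B-v1-phase1"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# `input_ids` holds the full tokenizer output (a BatchEncoding), which is why
# the README unpacks it with `**input_ids` when calling generate.
input_ids = tokenizer("A short prompt about state-space models:", return_tensors="pt").to(model.device)

outputs = model.generate(**input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```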