---
language: en
license: mit
tags:
- causal-lm
datasets:
- The_Pile
---
### Quantized EleutherAI/gpt-neo-2.7B with 8-bit weights
This is a version of [EleutherAI's GPT-Neo](https://huggingface.co/EleutherAI/gpt-neo-2.7B) with 2.7 billion parameters, modified so you can generate **and fine-tune the model in Colab or on an equivalent desktop GPU (e.g. a single 1080Ti)**. Inspired by [GPT-J 8bit](https://huggingface.co/hivemind/gpt-j-6B-8bit).
Here's how to run it: [![colab](https://camo.githubusercontent.com/84f0493939e0c4de4e6dbe113251b4bfb5353e57134ffd9fcab6b8714514d4d1/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667)](https://colab.research.google.com/drive/1lMja-CPc0vm5_-gXNXAWU-9c0nom7vZ9)
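The Colab notebook above shows the repository's own setup. As a rough orientation, the snippet below is a minimal sketch of loading the base EleutherAI checkpoint in 8-bit with `bitsandbytes` through `transformers` and generating text; the prompt and sampling parameters are illustrative assumptions, not taken from the notebook.

```python
# Minimal sketch (not the repository's own code): 8-bit loading of the base
# EleutherAI/gpt-neo-2.7B checkpoint via transformers + bitsandbytes.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_name = "EleutherAI/gpt-neo-2.7B"  # base model; this repo hosts 8-bit weights

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # 8-bit weights
    device_map="auto",
)

prompt = "In a shocking finding, scientists discovered"  # example prompt (assumption)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```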
## Model Description
GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model.
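Because the point of the 8-bit weights is to make fine-tuning feasible on a single consumer GPU, the sketch below shows one plausible setup: small trainable LoRA adapters from the `peft` library on top of the frozen 8-bit weights, with an 8-bit optimizer from `bitsandbytes`. The library choices and hyperparameters here are assumptions for illustration, not the exact recipe from the linked Colab.

```python
# Hedged sketch: parameter-efficient fine-tuning of the 8-bit model with LoRA.
# Library choices (peft, bitsandbytes) and hyperparameters are assumptions,
# not the exact recipe used in the linked Colab notebook.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
import bitsandbytes as bnb

model_name = "EleutherAI/gpt-neo-2.7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # make the quantized model trainable

# Attach small trainable LoRA adapters; the frozen 8-bit weights stay untouched.
lora_config = LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora_config)

# 8-bit optimizer states keep memory within a single-GPU budget.
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-4)

batch = tokenizer("Example training text.", return_tensors="pt").to(model.device)
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```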
## Links
* [EleutherAI](https://www.eleuther.ai)
* [Hivemind](https://training-transformers-together.github.io/)
* [Gustave Cortal](https://twitter.com/gustavecortal)