
GPT-Greentext-125m

A finetuned version of GPT-Neo-125M trained on the 'greentext' dataset (linked below). Also take a look at GPT-Greentext-1.5b, the larger model in this series; it produces better-quality greentexts than this model can. The demo playground is recommended over the inference box, as it uses the largest model in this series.

Training Procedure

This model was trained on the 'greentext' dataset using the HappyTransformer library on Google Colab, for 15 epochs with a learning rate of 1e-2.
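The procedure above can be sketched with HappyTransformer's training API. This is a minimal sketch, not the exact training script: the dataset filename and save path are hypothetical placeholders, and the text file is assumed to contain the greentexts as plain text.

```python
# Fine-tuning sketch; "greentexts.txt" and the save path are placeholders.
from happytransformer import HappyGeneration, GENTrainArgs

# Start from the base GPT-Neo-125M checkpoint
happy_gen = HappyGeneration("GPT-NEO", "EleutherAI/gpt-neo-125M")

# Hyperparameters stated in the card: 15 epochs, learning rate 1e-2
train_args = GENTrainArgs(learning_rate=1e-2, num_train_epochs=15)
happy_gen.train("greentexts.txt", args=train_args)

# Save the finetuned weights
happy_gen.save("GPT-Greentext-125m/")
```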

Biases & Limitations

This model likely carries the same biases and limitations as the GPT-Neo-125M base it was finetuned from, plus heavy additional biases from the greentext dataset. It will likely generate offensive output.

Intended Use

This model is meant for fun, nothing else.

Sample Use

```python
# Import and load the model:
from happytransformer import HappyGeneration
happy_gen = HappyGeneration("GPT-NEO", "DarwinAnim8or/GPT-Greentext-125m")

# Set generation settings:
from happytransformer import GENSettings
args_top_k = GENSettings(no_repeat_ngram_size=2, do_sample=True, top_k=80, temperature=0.4, max_length=150, early_stopping=False)

# Generate a response:
result = happy_gen.generate_text(""">be me
>""", args=args_top_k)

print(result)
print(result.text)  # result.text holds the generated string
```
