Model_1A_Clinton
This model is a fine-tuned version of gpt2-medium on a large corpus of William J. Clinton's second-term discourse on terrorism.
To Prompt the Model
Try entering single words or short phrases, such as "terrorism is", "national security", or "our foreign policy should be", in the dialogue box on the right-hand side of this page, then click 'Compute' and wait for the results. The model takes a few seconds to load on your first prompt.
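If you prefer to prompt the model locally rather than through the hosted widget, the sketch below uses the transformers text-generation pipeline with the repository name GPT-JF/Model_1A_Clinton; the generation settings (max_new_tokens, sampling) are illustrative assumptions, not the card's recommended values.

```python
# Minimal sketch: prompt the model locally with the transformers pipeline.
# Generation settings below are assumptions for illustration only.
from transformers import pipeline

generator = pipeline("text-generation", model="GPT-JF/Model_1A_Clinton")

prompt = "terrorism is"
outputs = generator(prompt, max_new_tokens=50, do_sample=True)
print(outputs[0]["generated_text"])
```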
Intended uses & limitations
This model is intended as an experiment in the utility of LLMs for discourse analysis of a specific corpus of political rhetoric.
Training hyperparameters
The following hyperparameters were used during training (an equivalent configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
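A minimal sketch of a TrainingArguments object matching the hyperparameters listed above, assuming the standard Hugging Face Trainer was used; the output directory is a placeholder, and dataset loading and Trainer setup are omitted.

```python
# Sketch of TrainingArguments reproducing the listed hyperparameters.
# output_dir is an assumed placeholder, not the original training path.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Model_1A_Clinton",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08, as listed above,
    # corresponds to the default optimizer settings.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```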
Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
Model tree for GPT-JF/Model_1A_Clinton
- Base model: openai-community/gpt2-medium