
llama3-8b-schopenhauer

(Model card image: llama_sch.png)

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct, trained on a synthetic dataset of argumentative conversations. The model was built by Raphaaal, vdeva, margotcosson, and basileplus.

Model description

The model has been trained to act as an argumentative expert, following the deterministic rhetorical guidelines described by Schopenhauer in The Art of Being Right. It aims to show how persuasive a model can become when a few simple deterministic argumentative guidelines are introduced.
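For inference, the LoRA adapter can be attached to the base model with `peft`. The sketch below is a minimal, hedged example: it assumes `transformers` and `peft` are installed and that you have access to the gated base weights; the repository ids are those named on this card. Imports are deferred inside the function, so defining it has no dependencies and calling it is what triggers the (large) download.

```python
def load_schopenhauer(adapter_id="basilePlus/llama3-8b-schopenhauer",
                      base_id="meta-llama/Meta-Llama-3-8B-Instruct"):
    """Sketch: load the base model and attach the LoRA adapter.

    Calling this downloads ~16 GB of base weights plus the adapter;
    it is shown here only to illustrate the intended loading pattern.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
    model = PeftModel.from_pretrained(base, adapter_id)
    return model, tokenizer
```

After loading, generation works as with any chat-tuned Llama 3 model (apply the tokenizer's chat template, then call `model.generate`).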

Training and evaluation data

The model has been trained using LoRA on a small synthetic dataset that could be improved in both size and quality. The model has shown strong performance in responding to argumentative conversations with short, punchy answers. No argumentative metric has been implemented; an interesting argument-evaluation benchmark can be found in Cabrio, E., & Villata, S. (Year). Towards a Benchmark of Natural Language Arguments. INRIA Sophia Antipolis, France.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3.0
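The hyperparameters above can be expressed as keyword arguments for `transformers.TrainingArguments` (this mapping to the standard Hugging Face field names is an assumption, not taken from the original training script):

```python
# Hedged sketch: the hyperparameters listed above, mapped onto the
# standard transformers.TrainingArguments field names.
training_kwargs = dict(
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,          # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```

These can then be passed as `TrainingArguments(output_dir=..., **training_kwargs)` when reproducing the run.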

Framework versions

  • PEFT 0.10.0
  • Transformers 4.40.1
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.0
  • Tokenizers 0.19.1