
Built with Axolotl

See axolotl config

axolotl version: 0.4.0

base_model: NousResearch/Hermes-2-Pro-Mistral-7B
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: /workspace/disk2/alexandria/data/text_2_graphs_hermes.jsonl
    type: sharegpt
    conversation: chatml
dataset_prepared_path:
val_set_size: 0.0
output_dir: /workspace/disk2/alexandria/models/t2g_hermes/

sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false

wandb_project: alexandria
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000005

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 0
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero2.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
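
The datasets entry above uses axolotl's sharegpt loader with ChatML conversation formatting. As a rough illustration, a training record would follow the usual ShareGPT layout sketched below; the contents of text_2_graphs_hermes.jsonl are not published, so the field values here are invented.

import json

# Minimal sketch of the ShareGPT-style record layout that axolotl's "sharegpt"
# dataset type expects. Values are purely illustrative, not real training data.
record = {
    "conversations": [
        {"from": "human", "value": "Plaintext passage to convert into a knowledge graph."},
        {"from": "gpt", "value": "{'Entity A': {'relates_to': 'Entity B'}}"},
    ]
}

with open("example.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")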

/workspace/disk2/alexandria/models/t2g_hermes/

This model is a fine-tuned version of NousResearch/Hermes-2-Pro-Mistral-7B, trained on a version of the Project Alexandria dataset to turn input plaintext into knowledge graphs structured as Python dictionaries.
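
The exact dictionary schema is not documented in this card; the snippet below is only a hypothetical illustration of the kind of Python-dictionary knowledge graph the model is trained to emit.

# Hypothetical output shape only; the card does not specify the exact schema,
# so the keys and structure below are assumptions for illustration.
example_graph = {
    "Marie Curie": {
        "field": "radioactivity research",
        "award": "Nobel Prize",
    },
    "Nobel Prize": {
        "awarded_to": "Marie Curie",
    },
}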

Model description

This is a prototype model, trained quickly as a proof of concept. No hyperparameter tuning was performed, and no extensive data cleaning was done beyond filtering out entries that met any of the following criteria (a filtering sketch follows the list):

  • Entries containing refusals
  • Entries with an empty prompt or output
  • Entries containing the string "an error occured"
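
The original filtering script is not included here; the sketch below shows one way such a filter could look, assuming ShareGPT-style JSONL records and simple substring checks. The field names, file names, and refusal markers are assumptions.

import json

# Hypothetical filter over ShareGPT-style records; field names ("conversations",
# "from", "value") and the refusal markers are assumptions, not the original script.
REFUSAL_MARKERS = ("i cannot", "i'm sorry", "as an ai")

def keep(record: dict) -> bool:
    turns = record.get("conversations", [])
    if not turns:
        return False  # no prompt or output at all
    for turn in turns:
        text = turn.get("value", "").strip()
        if not text:
            return False  # empty prompt or output
        lowered = text.lower()
        if "an error occured" in lowered:  # literal error string quoted in the card
            return False
        if turn.get("from") == "gpt" and any(m in lowered for m in REFUSAL_MARKERS):
            return False  # crude refusal check
    return True

with open("text_2_graphs_raw.jsonl") as f_in, open("text_2_graphs_hermes.jsonl", "w") as f_out:
    for line in f_in:
        if keep(json.loads(line)):
            f_out.write(line)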

Intended uses & limitations

The model follows a form of ChatML, with no system prompt. You should prompt the model like this:

<|im_start|>user
Here is a bunch of input text that will be turned into a knowledge graph, though usually your text will be much longer than this single sentence.<|im_end|>
<|im_start|>assistant
(Make sure the assistant marker line above ends with a newline. Do not include this parenthetical note in your prompt.)

Greedy decoding is recommended for generating outputs; see the inference sketch below.
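
Below is a minimal inference sketch with Hugging Face transformers. The repository id is this model's Hub id; the dtype, device placement, and max_new_tokens value are assumptions you may want to adjust.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TearGosling/mistral_hermes2_alexandria_v0_t2g"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

text = "Marie Curie conducted pioneering research on radioactivity and won two Nobel Prizes."
prompt = f"<|im_start|>user\n{text}<|im_end|>\n<|im_start|>assistant\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Greedy decoding, as recommended above.
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
completion = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(completion)  # you may need to trim a trailing <|im_end|> marker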

As noted above, no extensive data cleaning has been done, and the model may at times fail to output a detailed or properly formatted knowledge graph. Since the model has only 7B parameters, it may also miss certain relationships in the input text. As stated before, this model is a prototype.

Training and evaluation data

The data was generated with several large language models.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (the effective batch size is derived after the list):

  • learning_rate: 5e-06
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • total_train_batch_size: 16
  • total_eval_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • num_epochs: 1
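
The effective batch size reported above follows directly from the per-device batch size, the number of GPUs, and gradient accumulation:

# total_train_batch_size = micro_batch_size x num_devices x gradient_accumulation_steps
micro_batch_size = 2
num_devices = 8
gradient_accumulation_steps = 1
total_train_batch_size = micro_batch_size * num_devices * gradient_accumulation_steps
assert total_train_batch_size == 16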

Training results

Framework versions

  • Transformers 4.39.0.dev0
  • Pytorch 2.1.2+cu118
  • Datasets 2.18.0
  • Tokenizers 0.15.0