
**A newer, tested GGUF model similar to this one is available here: https://huggingface.co/shafire/talktoaiQ**

Model trained using AutoTrain (roughly 8 hours on a large GPU server). This card will be updated once a GGUF conversion of this checkpoint is available.
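
Once a GGUF build of this checkpoint is published, it should load with llama.cpp-compatible tooling. Below is a minimal sketch using llama-cpp-python; the file name `talktoaiQ.Q4_K_M.gguf` is a placeholder assumption, not a published artifact.

```python
# Minimal sketch: loading a GGUF build with llama-cpp-python.
# The model_path below is a placeholder; substitute the actual GGUF file
# downloaded from the repo (e.g. via the Hugging Face Hub).
from llama_cpp import Llama

llm = Llama(
    model_path="talktoaiQ.Q4_K_M.gguf",  # placeholder file name
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

# Chat-style request using the model's built-in chat template
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "hi"}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```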

This model was trained with AutoTrain on reflection datasets rewritten with talktoai datasets, using quantum interdimensional math and a new math system of my own design; DNA-derived math patterns were also incorporated into the training. For more information, please visit AutoTrain.

Usage: open-source ideas, math, and related material come from talktoai.org and researchforum.online; this model is covered by Meta's official Llama 3.1 license.


```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

# Build the chat-formatted prompt and generate a reply
input_ids = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=256)
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```