---
license: apache-2.0
pipeline_tag: text-generation
tags:
  - text-generation
  - causal-lm
  - instruction-tuned
  - serverless
library_name: transformers
inference: true
language:
  - en
base_model: automatedstockminingorg/expert-on-investment-valuation-mypricermodel
datasets:
  - automatedstockminingorg/investment-valuation-chunks
---

# Expert on Investment Valuation Model

## Introduction

This model is fine-tuned on data curated specifically for investment valuation. It helps users with insights and explanations of valuation techniques such as the discounted cash flow (DCF) model and comparable company analysis (see the short DCF sketch after the list below).

- Designed for instruction-following text generation and role-play in a financial advisory setting.
- Supports long-context processing for in-depth questions.
- Language support: English.
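
For reference, here is a minimal, hypothetical sketch of the DCF computation this model is trained to explain. The function name and figures are illustrative only and are not drawn from the model or its training data:

```python
# Illustrative only: the DCF technique discounts projected cash flows
# back to present value. PV = sum(CF_t / (1 + r)**t) + TV / (1 + r)**n

def dcf_value(cash_flows, discount_rate, terminal_value=0.0):
    """Present value of annual cash flows plus a discounted terminal value."""
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    pv += terminal_value / (1 + discount_rate) ** len(cash_flows)
    return pv

# Example: five years of projected cash flows at a 10% discount rate
print(dcf_value([100, 110, 121, 133, 146], 0.10, terminal_value=1500))
```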

This repo contains the instruction-tuned version of the model:

- Type: Causal Language Model (instruction-tuned)
- Language: English
- Model Architecture: Transformers

For more details, please refer to our documentation.

## Requirements

To ensure compatibility, use the latest version of transformers.
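
You can install or upgrade it with pip:

```bash
pip install -U transformers
```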

## Quickstart

The snippet below shows how to load the tokenizer and model and generate a response.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "automatedstockminingorg/14b-stockanalyst-14b-stockanalyst"

# Load the model and tokenizer; device_map="auto" places weights on
# available devices and torch_dtype="auto" uses the checkpoint's dtype.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build a chat-style prompt using the model's chat template.
prompt = "Explain the discounted cash flow (DCF) model in investment valuation."
messages = [
    {"role": "system", "content": "You are an expert in investment valuation."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate a response, then strip the prompt tokens from the output
# so only the newly generated text is decoded.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=300
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
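
For interactive use, you may prefer to stream tokens as they are generated. A minimal sketch using transformers' `TextStreamer`, reusing the `model`, `tokenizer`, and `model_inputs` from the snippet above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the prompt text.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(
    **model_inputs,
    max_new_tokens=300,
    streamer=streamer
)
```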