
# Fine-tuned DistilBERT model for stock news classification

This is a HuggingFace model based on DistilBERT, a distilled version of BERT (Bidirectional Encoder Representations from Transformers), for text classification. It was fine-tuned on 50,000 stock news articles using the HuggingFace adapter from Kern AI refinery. Like BERT, DistilBERT encodes both the left and right context of each word in a sentence, allowing it to capture complex semantic and syntactic information.

## Features

- The model handles various text classification tasks and is tailored to sentiment classification of stock and finance news.
- The model accepts either single sentences or sentence pairs as input and outputs a probability distribution over the predefined classes.
- The model can be fine-tuned on custom datasets and labels using the HuggingFace Trainer API or the PyTorch Lightning integration.
- The model is currently supported by the PyTorch framework and can be deployed on various platforms using the HuggingFace Pipeline API or the ONNX Runtime.

## Usage

To use the model, you need to install the HuggingFace Transformers library:

```bash
pip install transformers
```

Then you can load the model and the tokenizer from the HuggingFace Hub:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("KernAI/stock-news-destilbert")
tokenizer = AutoTokenizer.from_pretrained("KernAI/stock-news-destilbert")
```
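The loaded model and tokenizer can also be called directly, without a pipeline. A minimal sketch of a single forward pass (the input sentence is illustrative; the softmax step turns raw logits into the probability distribution the pipeline reports):

```python
# Hedged sketch: calling the model directly instead of using a pipeline.
# The example sentence is illustrative.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("KernAI/stock-news-destilbert")
tokenizer = AutoTokenizer.from_pretrained("KernAI/stock-news-destilbert")

inputs = tokenizer("Shares rallied after the earnings report.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits        # raw, unnormalized class scores
probs = torch.softmax(logits, dim=-1)[0]   # probability distribution over labels
pred_id = int(probs.argmax())
print(model.config.id2label[pred_id], float(probs[pred_id]))
```

This is what the pipeline does under the hood; using it directly is useful when you need the full distribution rather than only the top label.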

To classify a single sentence or a sentence pair, you can use the HuggingFace Pipeline API:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
result = classifier("This is a positive sentence.")
print(result)
# [{'label': 'POSITIVE', 'score': 0.9998656511306763}]
```
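The `score` in the output above is a softmax over the model's raw output logits. A minimal pure-Python sketch of that mapping (the logit values are made up for illustration, chosen only so the result lands near a score like the one above):

```python
# Pure-Python sketch of how a pipeline 'score' arises: a softmax over logits.
# The logit values below are illustrative, not real model outputs.
import math

def softmax(logits):
    m = max(logits)                               # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [-4.2, 4.7]  # illustrative raw scores for [NEGATIVE, POSITIVE]
probs = softmax(logits)
print(probs[1])       # probability assigned to the POSITIVE class
```

Because softmax is monotonic, the predicted label is simply the class with the largest logit; the softmax only rescales the logits into probabilities that sum to 1.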