---
language: en
license: apache-2.0
datasets:
  - ESGBERT/WaterForestBiodiversityNature_2200
tags:
  - ESG
  - environmental
  - forest
---

# Model Card for EnvironmentalBERT-forest

## Model Description

Based on this paper, this is the EnvironmentalBERT-forest language model: a language model trained to better classify forest-related text in the ESG/nature domain.

Using the EnvironmentalBERT-base model as a starting point, the EnvironmentalBERT-forest language model is additionally fine-tuned on a 2.2k forest dataset to detect forest text samples.
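
For reference, the sketch below illustrates the kind of fine-tuning setup described above. It is a minimal, hypothetical example: the base-model repo id, dataset split, and column names ("text", "label") are assumptions, and the training hyperparameters are placeholders rather than the values used for the released checkpoint.

```python
# Minimal fine-tuning sketch (assumptions: base-model repo id, "train" split, "text"/"label" columns)
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base_model = "ESGBERT/EnvironmentalBERT-base"  # assumed repo id of the base model
dataset = load_dataset("ESGBERT/WaterForestBiodiversityNature_2200")

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

def tokenize(batch):
    # "text" is an assumed column name for the sentence samples
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="environmentalbert-forest",  # placeholder settings, not the paper's values
    num_train_epochs=3,
    per_device_train_batch_size=16,
)
trainer = Trainer(model=model, args=args, train_dataset=tokenized["train"])
trainer.train()
```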

## How to Get Started With the Model

See these tutorials on Medium for a guide on model usage, large-scale analysis, and fine-tuning.

It is highly recommended to first classify a sentence as "environmental" or not with the EnvironmentalBERT-environmental model before classifying whether it is "forest" or not.

You can use the model with a pipeline for text classification:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

tokenizer_name = "ESGBERT/EnvironmentalBERT-forest"
model_name = "ESGBERT/EnvironmentalBERT-forest"

# Load the fine-tuned classifier and its tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, model_max_length=512)

pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)  # set device=0 to use a GPU

# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
print(pipe("A large portion of trees in the Amazon is dying each year.", padding=True, truncation=True))
```
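
Following the recommendation above, the snippet below sketches the two-stage setup: screen a sentence with the environmental model first, then apply the forest model only to environmental hits. The cascading logic and the assumed label strings ("environmental"/"none") are illustrative, not the authors' reference pipeline; check each model's config for the exact label names.

```python
from transformers import pipeline

# Stage 1: generic environmental screening; Stage 2: forest-specific classification
env_pipe = pipeline("text-classification", model="ESGBERT/EnvironmentalBERT-environmental")
forest_pipe = pipeline("text-classification", model="ESGBERT/EnvironmentalBERT-forest")

def classify_forest(sentence: str) -> dict:
    env = env_pipe(sentence, padding=True, truncation=True)[0]
    # Label strings below are assumptions; verify them against the model's id2label mapping.
    if env["label"].lower() != "environmental":
        return {"label": "none", "score": env["score"]}
    return forest_pipe(sentence, padding=True, truncation=True)[0]

print(classify_forest("A large portion of trees in the Amazon is dying each year."))
```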

More details can be found in the paper:

```bibtex
@article{Schimanski23ExploringNature,
    title={{Exploring Nature: Datasets and Models for Analyzing Nature-Related Disclosures}},
    author={Tobias Schimanski and Chiara Colesanti Senni and Glen Gostlow and Jingwei Ni and Tingyu Yu and Markus Leippold},
    year={2023},
    journal={Available on SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4665715},
}
```