
Xunzi-Qwen2-1.5B-ud-causal

Model Description

This is a Qwen2 model pretrained on Classical Chinese texts for POS-tagging and dependency-parsing, derived from Xunzi-Qwen2-1.5B-upos and UD_Classical_Chinese-Kyoto.

How to Use

from transformers import pipeline
nlp = pipeline("universal-dependencies", "KoichiYasuoka/Xunzi-Qwen2-1.5B-ud-causal", trust_remote_code=True)
print(nlp("不入虎穴不得虎子"))
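The pipeline emits its dependency analysis in CoNLL-U format. As a minimal sketch of how such output can be post-processed (the sample string below is illustrative CoNLL-U, not an actual run of this model), the tab-separated rows can be turned into (id, form, UPOS, head, deprel) tuples:

```python
def parse_conllu(conllu):
    """Parse CoNLL-U text into (id, form, upos, head, deprel) tuples."""
    rows = []
    for line in conllu.strip().splitlines():
        # Skip comment lines and blank sentence separators
        if line.startswith("#") or not line.strip():
            continue
        cols = line.split("\t")
        # CoNLL-U columns: ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC
        rows.append((int(cols[0]), cols[1], cols[3], int(cols[6]), cols[7]))
    return rows

# Illustrative two-token fragment, not real model output
sample = "1\t不\t_\tADV\t_\t_\t2\tadvmod\t_\t_\n" \
         "2\t入\t_\tVERB\t_\t_\t0\troot\t_\t_"
print(parse_conllu(sample))
```

A HEAD value of 0 marks the root of the dependency tree; other HEAD values index the governing token's ID within the same sentence.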