---
license: apache-2.0
language:
- fa
pipeline_tag: question-answering
tags:
- persian
- persian_qa
- parsbert
metrics:
- accuracy
datasets:
- SajjadAyoubi/persian_qa
---
# ParsBERT for Persian Question Answering

## Model Description

`mansoorhamidzadeh/parsbert-persian-QA` is a fine-tuned version of ParsBERT, a BERT-based model pre-trained on a large Persian text corpus, adapted for extractive question answering in Persian. Fine-tuning on a Persian QA dataset allows the model to return accurate, contextually relevant answers to questions posed in Persian.

## Model Architecture

- **Base Model**: ParsBERT
- **Task**: Question Answering
- **Language**: Persian
- **Number of Parameters**: 110M
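
The stated 110M parameter count can be checked directly from the loaded checkpoint. This is only a quick sanity-check sketch, assuming the model downloads as in the usage example further below:

```python
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained("mansoorhamidzadeh/parsbert-persian-QA")

# Total number of parameters, reported in millions
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e6:.0f}M parameters")
```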

## Intended Use

This model is intended for use in applications requiring natural language understanding and question answering in Persian, such as:

- Persian language chatbots
- Persian information retrieval systems
- Educational tools for Persian language learners

## Dataset

The model was fine-tuned on the SajjadAyoubi/persian_qa dataset, a collection of question-answer pairs drawn from a variety of Persian text sources, covering a diverse range of topics and contexts.
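
The dataset can be inspected with the Hugging Face `datasets` library. The snippet below is a minimal sketch; the exact split names and column layout of SajjadAyoubi/persian_qa should be verified on the Hub.

```python
from datasets import load_dataset

# Load the Persian QA dataset referenced in the model card metadata
dataset = load_dataset("SajjadAyoubi/persian_qa")

print(dataset)               # available splits and their sizes
print(dataset["train"][0])   # inspect one example (field names may differ)
```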


## Usage

To use this model for question answering in Persian, you can load it using the Hugging Face Transformers library. Here’s a quick example:

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("mansoorhamidzadeh/parsbert-persian-QA")
model = AutoModelForQuestionAnswering.from_pretrained("mansoorhamidzadeh/parsbert-persian-QA")

# Create a QA pipeline
qa_pipeline = pipeline("question-answering", model=model, tokenizer=tokenizer)

# Example usage (the Persian strings translate roughly to
# "Context text containing the information relevant to your question." and "What is your question?")
context = "متن زمینه که شامل اطلاعات مرتبط با سوال شما است."
question = "سوال شما چیست؟"
result = qa_pipeline(question=question, context=context)

print(f"Answer: {result['answer']}")
```