---
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- nadsoft/Jordan-Audio
model-index:
- name: hamsa-medium
  results: []
---

# **Hamsa-v0.1-beta**

## Model description

Hamsa (همسة) is a pre-trained automatic speech recognition (ASR) model for Arabic, built on openai/whisper-medium. It reflects NADSOFT's commitment to raising the standard of AI for the Arabic language, a contribution that is especially significant for the Middle East and North Africa (MENA) region and the broader Arab World, where the model aims to capture the region's linguistic nuances and serve the specific needs of its communities.
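
A minimal inference sketch using the Transformers ASR pipeline. The repository id below is an assumption for illustration; replace it with the actual Hugging Face model id of this checkpoint.

```python
from transformers import pipeline

# Load the checkpoint through the ASR pipeline (repo id is assumed).
asr = pipeline(
    "automatic-speech-recognition",
    model="nadsoft/hamsa-v0.1-beta",  # hypothetical repo id, adjust as needed
    chunk_length_s=30,                # Whisper operates on 30-second windows
)

# Transcribe a local Arabic audio file and print the text.
result = asr("sample_arabic_audio.wav")
print(result["text"])
```
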
## Intended uses & limitations

Hamsa is still under active development, and it is important to be aware of its limitations. For example, the model may not accurately transcribe speech from speakers with strong dialectal accents, such as Moroccan Arabic, and it may struggle with noisy recordings.

Hamsa is not a perfect model and should not be used to produce transcripts for legal, medical, or other sensitive contexts.

## Training and evaluation data

- nadsoft/Jordan-Audio
- google/fleurs
- mozilla-foundation/common_voice_11_0

Word error rate (WER): 18.22
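
The figure above is the standard word error rate. As a sketch, a number like this is typically computed with the Hugging Face `evaluate` package; the transcript lists below are placeholders, not the actual evaluation data.

```python
import evaluate

# Load the word error rate metric.
wer_metric = evaluate.load("wer")

references = ["مرحبا بكم في عمان"]   # placeholder ground-truth transcripts
predictions = ["مرحبا بكم في عمان"]  # placeholder model outputs

# `compute` returns WER as a fraction; multiply by 100 to match the card.
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER = {wer:.2f}")
```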

## Training procedure

### Training hyperparameters

- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000 (followed by 4000 additional steps on the NADSOFT data)
- mixed_precision_training: Native AMP
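
As an illustration, these hyperparameters map onto a Transformers `Seq2SeqTrainingArguments` configuration roughly as sketched below; this is an assumption about how the run was set up, not the exact training script, and `output_dir` is a placeholder.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of training arguments mirroring the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="./hamsa-medium",      # placeholder output path
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=10000,                  # plus 4000 further steps on NADSOFT data
    fp16=True,                        # Native AMP mixed-precision training
)
```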