---
language:
- fr
tags:
- music
- rap
- lyrics
- word2vec
library_name: gensim
---

# Word2Bezbar: Word2Vec Models for French Rap Lyrics

## Overview

__Word2Bezbar__ models are __Word2Vec__ models trained on __French rap lyrics__ sourced from __Genius__. Tokenization was done with the __NLTK__ French `word_tokenize` function, after a preprocessing step that removes __French oral contractions__ (a minimal preprocessing sketch is given in the appendix at the end of this card). The training dataset was __323 MB__ in size, corresponding to __77M tokens__.

The model captures the __semantic relationships__ between words in the context of __French rap__, making it a useful tool for studies of __French slang__ and for __music lyrics analysis__.

## Model Details

This is the __small__ model.

| Parameter      | Value |
|----------------|-------|
| Dimensionality | 100   |
| Window size    | 5     |
| Epochs         | 10    |
| Algorithm      | CBOW  |

A minimal training sketch using these parameters is given in the appendix at the end of this card.

## Versions

This model was trained with the following software versions:

| Requirement    | Version |
|----------------|---------|
| Python         | 3.8.5   |
| Gensim library | 4.3.2   |
| NLTK library   | 3.8.1   |

## Installation

1. **Install the required Python libraries**:
```bash
pip install gensim
```

2. **Clone the repository**:
```bash
git clone https://github.com/rapminerz/Word2Bezbar-small.git
```

3. **Navigate to the model directory**:
```bash
cd Word2Bezbar-small
```

## Loading the Model

To load the Word2Bezbar Word2Vec model, use the following Python code:

```python
import gensim

# Load the Word2Vec model
model = gensim.models.Word2Vec.load("word2vec.model")
```

## Using the Model

Once the model is loaded, you can use it as shown below:

1. **Get the words most similar to a given word**
```python
model.wv.most_similar("bendo")
[('binks', 0.8920747637748718), ('bando', 0.8460732698440552), ('hood', 0.8299438953399658), ('tieks', 0.8264378309249878), ('hall', 0.817583441734314), ('secteur', 0.8145656585693359), ('barrio', 0.809047281742096), ('block', 0.793493390083313), ('bâtiment', 0.7826434969902039), ('bloc', 0.7753982543945312)]

model.wv.most_similar("kichta")
[('liasse', 0.878665566444397), ('sse-lia', 0.8552991151809692), ('kishta', 0.8535938262939453), ('kich', 0.7646669149398804), ('skalape', 0.7576569318771362), ('moula', 0.7466527223587036), ('valise', 0.7429592609405518), ('sacoche', 0.7324921488761902), ('mallette', 0.7247079014778137), ('re-pai', 0.7060815095901489)]
```

2. **Find the word that doesn't match in a list of words**
```python
model.wv.doesnt_match(["racli", "gow", "gadji", "fimbi", "boug"])
'boug'

model.wv.doesnt_match(["Zidane", "Mbappé", "Ronaldo", "Messi", "Jordan"])
'Jordan'
```

3. **Compute the similarity between two words**
```python
model.wv.similarity("kichta", "moula")
0.7466528

model.wv.similarity("bonheur", "moula")
0.16985293
```

4. **Or get the vector representation of a word**
```python
model.wv['ekip']
array([ 1.4757039e-01, ... 1.1260221e+00], dtype=float32)
```

## Purpose and Disclaimer

This model is designed for academic and research purposes only. It is not intended for commercial use. The creators of this model do not endorse or promote any specific views or opinions that may be represented in the dataset.

__Please mention @RapMinerz if you use our models.__

## Contact

For any questions or issues, please contact the repository owner, __RapMinerz__, at rapminerz.contact@gmail.com.
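## Appendix: Preprocessing Sketch

The exact preprocessing code is not distributed with this model. The snippet below is only a minimal sketch of the approach described in the overview (NLTK French `word_tokenize` applied after handling French oral contractions); the `ORAL_CONTRACTIONS` list and the `preprocess` helper are hypothetical illustrations, not the code used for training.

```python
import nltk
from nltk.tokenize import word_tokenize

# word_tokenize relies on the Punkt tokenizer data.
nltk.download("punkt")

# Hypothetical examples of French oral contractions; the list actually used
# for training is not published with this card.
ORAL_CONTRACTIONS = {
    "t'as": "tu as",
    "j'suis": "je suis",
    "y'a": "il y a",
}

def preprocess(line):
    """Expand oral contractions, then tokenize with NLTK's French tokenizer."""
    for contraction, expansion in ORAL_CONTRACTIONS.items():
        line = line.replace(contraction, expansion)
    return word_tokenize(line, language="french")

print(preprocess("gros, t'as vu y'a la kichta dans le bendo"))
```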
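## Appendix: Training Sketch

The training corpus is not distributed either, so the snippet below only sketches how a Gensim Word2Vec model with the parameters listed in the Model Details table (CBOW, 100 dimensions, window 5, 10 epochs) could be trained. The `corpus` variable is a placeholder standing in for the real tokenized lyrics.

```python
from gensim.models import Word2Vec

# Placeholder corpus: an iterable of token lists, as produced by the
# preprocessing sketch above. The real corpus (~77M tokens) is not published.
corpus = [
    ["la", "kichta", "dans", "le", "bendo"],
    ["tout", "pour", "la", "moula"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=100,  # dimensionality
    window=5,         # context window size
    sg=0,             # 0 = CBOW, 1 = skip-gram
    epochs=10,
    min_count=1,      # kept low here only because the toy corpus is tiny
    workers=4,
)

# Save in the format that gensim.models.Word2Vec.load("word2vec.model") expects.
model.save("word2vec.model")
```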