---
license: cc-by-3.0
task_categories:
- text-generation
language:
- fr
tags:
- news
- media
- Press
size_categories:
- 100K<n<1M
---
# NEWS FR
An open-access [dataset on BnF / Gallica](https://transfert.bnf.fr/link/3a04ea3f-dbe8-4a4a-a302-913a89c3a7a8) comprises nearly a hundred print newspapers spanning almost 100 years. Unfortunately, only 85% of its text is transcribed accurately.

## DATASET
This dataset compiles 1M online articles from nearly 100 Francophone media outlets. It is intended for research purposes and non-commercial use, and includes 1,140,000 lines for model training plus 63,500 lines each for the test and validation files.

Included with this dataset are scripts to extract and process the article text from the same sources. The scripts are somewhat rough around the edges, but they are functional and commented.

### Format

- **Type**: Text
- **File Extension**: `.txt`

The text has been standardized for consistent formatting and line length. Additionally, the dataset has been filtered using the `langid` library to include only text in French.
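
For reference, such a filter can be sketched as below (the file names and line-by-line granularity are assumptions for illustration; the actual cleaning lives in the provided scripts):

```python
import langid

def keep_french(in_path: str, out_path: str) -> None:
    """Copy only the lines that langid classifies as French."""
    with open(in_path, encoding="utf-8") as src, \
         open(out_path, "w", encoding="utf-8") as dst:
        for line in src:
            text = line.strip()
            if not text:
                continue
            lang, _score = langid.classify(text)  # returns (language code, score)
            if lang == "fr":
                dst.write(text + "\n")

keep_french("raw.txt", "filtered_fr.txt")  # hypothetical file names
```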

### Structure

The dataset is divided into the following splits:

- `train.txt`: 2.2 GB - 1,140,000 rows - 90%
- `test.txt`: 122 MB - 63,500 rows - 5%
- `valid.txt`: 122 MB - 63,500 rows - 5%
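
Since each file is plain text with one record per line, the splits can be loaded with the generic `text` builder of the 🤗 `datasets` library (not a loader shipped with this repo), assuming the files sit in the working directory:

```python
from datasets import load_dataset

dataset = load_dataset(
    "text",
    data_files={
        "train": "train.txt",
        "test": "test.txt",
        "validation": "valid.txt",
    },
)
print(dataset["train"][0]["text"])  # first training line
```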

### Exploring the Dataset

You can use the `explore_dataset.py` script to explore the dataset by randomly displaying a certain number of lines from it. The script creates and saves an index based on the line breaks, enabling faster data retrieval and display.
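
The underlying idea can be sketched as follows (a simplified, hypothetical version; the real logic, including saving the index to disk, is in `explore_dataset.py`):

```python
import random

def build_index(path: str) -> list[int]:
    """Record the byte offset of every line start so lines can be seeked directly."""
    offsets, offset = [], 0
    with open(path, "rb") as f:
        for line in f:
            offsets.append(offset)
            offset += len(line)
    return offsets

def show_random_lines(path: str, offsets: list[int], n: int = 5) -> None:
    """Jump straight to n random lines without scanning the whole file."""
    with open(path, "rb") as f:
        for off in random.sample(offsets, n):
            f.seek(off)
            print(f.readline().decode("utf-8").rstrip("\n"))

offsets = build_index("train.txt")
show_random_lines("train.txt", offsets)
```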

### Additional Information

This dataset is a subset of a larger 10GB French dataset, which also contains several thousand books and theses in French, Wikipedia, as well as several hundred thousand Francophone news articles.

## EXTRACT NEWS FR

The "NEWS FR" module allows for the extraction of online press articles from over a hundred different sources.

## Installation

To set up the module, follow the steps below:

1. **Database Setup**:
    - Create a database and incorporate the two tables present in `database.sql`.

2. **Database Configuration**:
    - Update your MySQL connection information in the `config.py` file.

3. **Dependencies Installation**:
    - Install the required packages with pip:
      ```
      pip install aiohttp mysql-connector-python beautifulsoup4 chardet colorama pyquery
      ```

## Usage

### 1_extract_rss.py:

This script fetches RSS feeds from various media outlets and adds URLs for further extraction.

```bash
python 1_extract_rss.py
```
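
Conceptually, the RSS step boils down to something like this (a simplified sketch using only the standard library; the real script reads its feed list and MySQL settings from `config.py`, and the feed URL below is a placeholder):

```python
import urllib.request
import xml.etree.ElementTree as ET

def fetch_article_urls(feed_url: str) -> list[str]:
    """Download one RSS feed and return the <link> of every <item>."""
    with urllib.request.urlopen(feed_url, timeout=10) as resp:
        tree = ET.fromstring(resp.read())
    # RSS 2.0 layout: channel/item/link holds each article URL.
    return [link.text for link in tree.findall(".//item/link") if link.text]

urls = fetch_article_urls("https://www.example.org/rss")
print(len(urls), "article URLs found")
```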

### 2_extract_news.py:

This script retrieves the sources of articles for subsequent local processing.

```bash
python 2_extract_news.py
```
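
Since the dependencies include `aiohttp`, the retrieval step can be pictured as an async batch download (a sketch; the real script persists the sources to MySQL rather than returning them, and the URL is a placeholder):

```python
import asyncio
import aiohttp

async def fetch_source(session: aiohttp.ClientSession, url: str) -> str:
    """Download the raw HTML of one article for later local processing."""
    async with session.get(url, timeout=aiohttp.ClientTimeout(total=15)) as resp:
        return await resp.text()

async def fetch_all(urls: list[str]) -> list[str]:
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch_source(session, u) for u in urls))

pages = asyncio.run(fetch_all(["https://www.example.org/article-1"]))
```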

### 3_extract_news_txt.py:

This script extracts the text content of press articles and saves it (title + description + text) to a `.txt` file.

```bash
python 3_extract_news_txt.py
```
After completing this step, you can use the Python script located at `/dataset/2_cleaning_txt.py` to standardize the text for your dataset.
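
A generic version of the extraction can be sketched with `beautifulsoup4` (the real script handles source-specific markup; the selectors and file name here are assumptions):

```python
from bs4 import BeautifulSoup

def article_to_txt(html: str) -> str:
    """Assemble title + description + body paragraphs into one text block."""
    soup = BeautifulSoup(html, "html.parser")
    title = soup.title.get_text(strip=True) if soup.title else ""
    meta = soup.find("meta", attrs={"name": "description"})
    description = meta["content"].strip() if meta and meta.get("content") else ""
    paragraphs = [p.get_text(" ", strip=True) for p in soup.find_all("p")]
    return "\n".join(part for part in [title, description, *paragraphs] if part)

with open("article.html", encoding="utf-8") as f:  # hypothetical saved source
    print(article_to_txt(f.read()))
```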
 
### 4_extract_news_url.py:

This script extracts links to other articles from the locally stored article sources. This makes it possible to retrieve large numbers of past articles quickly, rather than only the most recent ones exposed by the RSS feeds.

```bash
python 4_extract_news_url.py
```

After using this script, you'll need to run `2_extract_news.py` again to retrieve the sources of the new articles, as well as `3_extract_news_txt.py` to extract the text from these articles.
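
The link-harvesting idea, sketched generically (the URL heuristic below is an assumption; the real script knows each source's URL patterns):

```python
import re
from bs4 import BeautifulSoup

def extract_article_links(html: str, base: str) -> set[str]:
    """Collect same-site links that look like article URLs from a saved page."""
    soup = BeautifulSoup(html, "html.parser")
    links = set()
    for a in soup.find_all("a", href=True):
        href = a["href"]
        if href.startswith("/"):              # resolve site-relative links
            href = base.rstrip("/") + href
        if href.startswith(base) and re.search(r"/\d{4}/|article", href):
            links.add(href)
    return links
```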

---