---
license: cc-by-3.0
task_categories:
- text-generation
language:
- fr
tags:
- news
- media
- Press
size_categories:
- 100K<n<1M
---

# NEWS FR
There is an open-access [dataset on BnF / Gallica](https://transfert.bnf.fr/link/3a04ea3f-dbe8-4a4a-a302-913a89c3a7a8) comprising nearly a hundred print newspapers spanning almost 100 years.
Unfortunately, only 85% of its text is transcribed accurately.
## DATASET

This dataset compiles 1M online articles from nearly 100 Francophone media outlets. It is intended for research purposes and non-commercial use. It includes 1,140,000 lines for model training and 63,500 lines each for the test and validation files.

Also included are the scripts used to extract and process the article text from the same sources. They are somewhat rough around the edges, but functional and commented.
### Format

- **Type**: Text
- **File Extension**: `.txt`

The text has been standardized for consistent formatting and line length. Additionally, the dataset has been filtered using the `langid` library to include only text in French.
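As an illustration of that filtering step, here is a minimal sketch assuming a simple line-per-example layout (the file names are hypothetical, not the actual pipeline's):

```python
import langid

# Keep only lines that langid classifies as French.
# "raw.txt" and "filtered.txt" are illustrative file names.
with open("raw.txt", encoding="utf-8") as src, open("filtered.txt", "w", encoding="utf-8") as dst:
    for line in src:
        text = line.strip()
        if not text:
            continue
        lang, score = langid.classify(text)  # returns (language code, score)
        if lang == "fr":
            dst.write(text + "\n")
```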
### Structure

The dataset is divided into the following splits:

- `train.txt`: 2.2 GB - 1,140,000 rows - 90%
- `test.txt`: 122 MB - 63,500 rows - 5%
- `valid.txt`: 122 MB - 63,500 rows - 5%
### Exploring the Dataset

You can use the `explore_dataset.py` script to explore the dataset by randomly displaying a number of lines from it. The script builds and saves an index of line-break offsets, enabling faster data retrieval and display, as sketched below.
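The script itself ships alongside the dataset; the core idea is a byte-offset index so random lines can be read without scanning 2.2 GB. A minimal sketch of that technique (file names are illustrative, not the script's actual interface):

```python
import os
import pickle
import random

DATA = "train.txt"        # illustrative path
INDEX = DATA + ".idx"

def build_index(path):
    """Record the byte offset at which each line starts."""
    offsets, pos = [], 0
    with open(path, "rb") as f:
        for line in f:
            offsets.append(pos)
            pos += len(line)
    return offsets

# Build the index once, then reuse it on later runs.
if os.path.exists(INDEX):
    with open(INDEX, "rb") as f:
        offsets = pickle.load(f)
else:
    offsets = build_index(DATA)
    with open(INDEX, "wb") as f:
        pickle.dump(offsets, f)

# Display five random lines via seek() instead of a full scan.
with open(DATA, "rb") as f:
    for off in random.sample(offsets, 5):
        f.seek(off)
        print(f.readline().decode("utf-8", errors="replace").rstrip())
```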
### Additional Information

This dataset is a subset of a larger 10 GB French dataset, which also contains several thousand French books and theses, Wikipedia, and several hundred thousand additional Francophone news articles.
## EXTRACT NEWS FR

The "NEWS FR" module allows for the extraction of online press articles from over a hundred different sources.
## Installation

To set up the module, follow the steps below:
1. **Database Setup**:
   - Create a database and import the two tables defined in `database.sql`.

2. **Database Configuration**:
   - Update your MySQL connection information in the `config.py` file.

3. **Dependencies Installation**:
   - Install the required Python packages with pip:

```bash
pip install aiohttp mysql-connector-python beautifulsoup4 chardet colorama pyquery
```
## Usage

### 1_extract_rss.py:

This script fetches RSS feeds from various media outlets and collects article URLs for further extraction.

```bash
python 1_extract_rss.py
```
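Conceptually, this step downloads each feed and stores the `<item><link>` URLs. A minimal sketch of that idea (the feed list, database credentials, and `url_news` table are assumptions, not the script's actual configuration):

```python
import asyncio
import xml.etree.ElementTree as ET

import aiohttp
import mysql.connector

FEEDS = ["https://www.example-media.fr/rss"]  # hypothetical feed list

async def fetch(session, url):
    async with session.get(url, timeout=aiohttp.ClientTimeout(total=20)) as resp:
        return await resp.read()

async def main():
    # Hypothetical credentials and table; the real script reads them from config.py.
    db = mysql.connector.connect(host="localhost", user="user", password="pass", database="news_fr")
    cur = db.cursor()
    async with aiohttp.ClientSession() as session:
        for feed in FEEDS:
            root = ET.fromstring(await fetch(session, feed))
            for link in root.iterfind(".//item/link"):
                # INSERT IGNORE assumes a UNIQUE key on url.
                cur.execute("INSERT IGNORE INTO url_news (url) VALUES (%s)", (link.text,))
    db.commit()
    db.close()

asyncio.run(main())
```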
### 2_extract_news.py:

This script downloads the HTML source of each collected article for subsequent local processing.
```bash
python 2_extract_news.py
```
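Since pages come from dozens of outlets with unpredictable encodings, this is where `chardet` earns its place in the dependency list. A rough sketch of the download-and-decode idea (URLs and paths are illustrative; the real script pulls its URLs from the database):

```python
import asyncio
import os

import aiohttp
import chardet

async def fetch_html(session, url):
    """Download a page as bytes and decode it with a detected encoding."""
    async with session.get(url, timeout=aiohttp.ClientTimeout(total=20)) as resp:
        raw = await resp.read()
    guess = chardet.detect(raw)  # e.g. {'encoding': 'ISO-8859-1', 'confidence': 0.73}
    return raw.decode(guess["encoding"] or "utf-8", errors="replace")

async def main():
    urls = ["https://www.example-media.fr/article-123.html"]  # hypothetical
    os.makedirs("sources", exist_ok=True)
    async with aiohttp.ClientSession() as session:
        for i, url in enumerate(urls):
            html = await fetch_html(session, url)
            with open(f"sources/{i}.html", "w", encoding="utf-8") as out:
                out.write(html)

asyncio.run(main())
```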
### 3_extract_news_txt.py:

This script extracts the text content of press articles and saves it (title + description + text) to a `.txt` file.
```bash
python 3_extract_news_txt.py
```

After completing this step, you can use the Python script located at `/dataset/2_cleaning_txt.py` to standardize the text for your dataset.
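The extraction itself leans on `beautifulsoup4`. A hedged sketch of the title + description + text idea (the body selector is an assumption; in practice each outlet needs its own rules):

```python
from bs4 import BeautifulSoup

def extract_article(html: str) -> str:
    """Return title + meta description + body paragraphs, one per line."""
    soup = BeautifulSoup(html, "html.parser")
    parts = []
    if soup.title and soup.title.string:
        parts.append(soup.title.string.strip())
    meta = soup.find("meta", attrs={"name": "description"})
    if meta and meta.get("content"):
        parts.append(meta["content"].strip())
    for p in soup.select("article p"):  # illustrative selector
        text = p.get_text(" ", strip=True)
        if text:
            parts.append(text)
    return "\n".join(parts)

with open("sources/0.html", encoding="utf-8") as f:  # hypothetical saved source
    print(extract_article(f.read()))
```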
### 4_extract_news_url.py:

This script extracts links to other articles from the locally saved article sources. This allows rapid retrieval of large numbers of past articles, rather than only the most recent ones.
```bash
python 4_extract_news_url.py
```
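At its core, this step is link harvesting over the saved HTML. A minimal sketch (the base URL, same-domain filter, and file layout are assumptions):

```python
import glob
from urllib.parse import urljoin, urlparse

from bs4 import BeautifulSoup

BASE = "https://www.example-media.fr/"  # hypothetical outlet domain

found = set()
for path in glob.glob("sources/*.html"):
    with open(path, encoding="utf-8") as f:
        soup = BeautifulSoup(f.read(), "html.parser")
    for a in soup.find_all("a", href=True):
        url = urljoin(BASE, a["href"])
        if urlparse(url).netloc == urlparse(BASE).netloc:  # keep same-domain links
            found.add(url)

print(f"{len(found)} candidate article URLs")  # the real script stores these for step 2
```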
After using this script, you'll need to run `2_extract_news.py` again to retrieve the sources of the new articles, and `3_extract_news_txt.py` to extract the text from them.

---