Pclanglais committed on
Commit 851dd5a
1 Parent(s): 187b44e

Update README.md

Files changed (1)
  1. README.md +26 -3
README.md CHANGED
@@ -1,13 +1,36 @@
  ---
  license: cc0-1.0
  language:
  - en
  ---

- **US-Newspapers-Pile** is an agregation of all the digitized archives of US newspaper made available by the Chronicle America digital library.

- Comprising nearly 100 billion words, it is one of the largest open corpus in the English language. All the materials are now part of the public domain and have no intellectual property rights remaining.

  ## Content

- As of 2024, the corpus includes about 20 million individual publications and editions, released from the 18th century to the early 1960s.
  ---
  license: cc0-1.0
+ task_categories:
+ - text-generation
  language:
  - en
+ tags:
+ - ocr
+ pretty_name: United States-Public Domain-Newspapers
  ---

+ **US-PD-Newspapers** is an aggregation of all the archives of US newspapers digitized by the Library of Congress for the Chronicling America digital library.

+ With nearly 100 billion words, it is currently one of the largest open corpora in the English language. All the materials are now part of the public domain and have no intellectual property rights remaining.
 
  ## Content
+ As of January 2024, the collection contains nearly 21 million unique newspaper and periodical editions (98,742,987,471 words) from the [dumps](https://chroniclingamerica.loc.gov/data/ocr/) made available by the Library of Congress, published from the 18th century to 1963. Each parquet file matches one of the 2618 original dump files and keeps its code name. Each file contains the full text of a few thousand editions selected at random, along with a few core metadata fields (edition id, date, word counts…). The metadata can be easily expanded thanks to the LOC APIs and other data services.
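+
+ For a quick look at the data, one of the parquet files can be read with pandas. This is a minimal sketch; the shard name below is hypothetical, since the actual files follow the code names of the original LOC dumps:
+ ```python
+ import pandas as pd
+
+ # Hypothetical shard name: each parquet file keeps the code name of its LOC dump file.
+ df = pd.read_parquet("us_pd_newspapers_example_shard.parquet")
+
+ # List the available columns (the card above mentions edition id, date and word counts)
+ # and preview a few rows of OCR text.
+ print(df.columns)
+ print(df.head())
+ ```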
+
+ This initial aggregation was made possible thanks to the extensive open data program of the Library of Congress.
+
+ The composition of the dataset adheres to the US criteria for the public domain status of collective works (any publication without a copyright renewal). Under the rule of the shorter term, the dataset is also in the public domain in all countries with a Berne author-right model.
+
+ ## Uses
+ The primary use of the collection is for cultural analytics on a wide scale. It has been instrumental in some major digital humanities projects like Viral Texts.
+
+ The collection also aims to expand the availability of open works for the training of Large Language Models. The text can be used for model training and republished without restriction for reproducibility purposes.
+
+ ## License
+ The entire collection is in the public domain everywhere and has been digitized by a US federal entity.
+
+ ## Future developments
+ This dataset is not a one-time work but will continue to evolve significantly in three directions:
+ * Correction of computer-generated errors in the text. All the texts have been transcribed automatically with Optical Character Recognition (OCR) software. The original files have been digitized over a long time period (since the mid-2000s) and OCR quality varies accordingly.
+ * Enhancement of the structure/editorial presentation of the original text. Some parts of the original documents are likely unwanted for large-scale analysis or model training (headers, page counts…). Additionally, some advanced document structures like tables or multi-column layouts are unlikely to be well formatted. Major enhancements could be expected from applying new SOTA layout recognition models (like COLAF) to the original PDF files.
+ * Expansion of the collection to other cultural heritage holdings, especially coming from Hathi Trust, Internet Archive and Google Books.