"American Stories: A Large-Scale Structured Text Dataset of Historical U.S. Newspapers", 2023-08-24:
Existing full text datasets of U.S. public domain newspapers do not recognize the often complex layouts of newspaper scans, and as a result the digitized content scrambles texts from articles, headlines, captions, advertisements, and other layout regions. OCR quality can also be low. This study develops a novel deep learning pipeline for extracting full article texts from newspaper images and applies it to the nearly 20 million scans in the Library of Congress's public domain Chronicling America collection. The pipeline includes layout detection, legibility classification, custom OCR, and association of article texts spanning multiple bounding boxes. To achieve high scalability, it is built with efficient architectures designed for mobile phones.
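The four pipeline stages compose naturally as a data flow. Everything below (the function names, the `Region` schema, the stubbed detector and OCR) is hypothetical and only illustrates how the stages fit together, not the paper's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical region record; field names are illustrative, not the paper's schema.
@dataclass
class Region:
    kind: str        # "article", "headline", "caption", "ad", ...
    article_id: int  # which article this bounding box belongs to
    legible: bool
    text: str

def detect_layout(scan):
    """Stage 1: layout detection (the paper uses a YOLO-style detector).
    Stubbed here with pre-annotated regions."""
    return scan["regions"]

def filter_legible(regions):
    """Stage 2: legibility classification -- drop regions whose scan
    quality is too poor for OCR."""
    return [r for r in regions if r.legible]

def ocr(region):
    """Stage 3: OCR on each legible region (stubbed: returns stored text)."""
    return region.text

def associate_articles(regions):
    """Stage 4: associate article texts spanning multiple bounding boxes."""
    articles = {}
    for r in regions:
        if r.kind == "article":
            articles.setdefault(r.article_id, []).append(ocr(r))
    return {aid: " ".join(parts) for aid, parts in articles.items()}

scan = {"regions": [
    Region("headline", 1, True, "LOCAL NEWS"),
    Region("article", 1, True, "The town council met"),
    Region("article", 1, True, "on Tuesday evening."),
    Region("ad", 0, True, "Buy soap!"),
    Region("article", 2, False, "<illegible>"),
]}

articles = associate_articles(filter_legible(detect_layout(scan)))
print(articles)  # {1: 'The town council met on Tuesday evening.'}
```

The key point the sketch captures: because region types are separated before OCR, advertisements and headlines never get scrambled into article text, which is exactly the failure mode of earlier full-text dumps.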
The resulting American Stories dataset provides high quality data that could be used for pre-training a large language model to achieve better understanding of historical English and historical world knowledge. The dataset could also be added to the external database of a retrieval-augmented language model to make historical information, ranging from interpretations of political events to minutiae about the lives of people's ancestors, more widely accessible.
Furthermore, structured article texts facilitate using transformer-based methods for popular social science applications like topic classification, detection of reproduced content, and news story clustering. Finally, American Stories provides a massive silver quality dataset for innovating multimodal layout analysis models and other multimodal applications.
[Commentary: What's interesting architecturally is that they selected mobile-optimized convolutional models (YOLOv8 and MobileNetV3) for preprocessing to economize on costs, trained them on a single Nvidia A6000 with data annotated by (compensated) undergraduate research assistants, and then ran inference on cloud CPUs. They estimate this saved about an order of magnitude in cost at this stage.
They also mention that the open-source TrOCR Base would have cost 50× more, and commercial solutions would be even more expensive. Still, even with the EfficientOCR framework and a cloud compute budget of $60,000, which probably only Harvard could afford, they were only able to process 40% of the Chronicling America scans (that's why the dataset version is 0.1).
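Taking the quoted figures at face value, the economics work out roughly as follows. This is a back-of-envelope sketch: the scan count, budget, coverage, and 50× multiplier come from the text above, while the per-scan and total costs are my own derivation:

```python
# Back-of-envelope cost estimate from the figures quoted above.
total_scans = 20_000_000          # ~20M scans in Chronicling America
budget = 60_000                   # USD cloud compute budget
processed = 0.40 * total_scans    # 40% of the collection was processed

cost_per_scan = budget / processed       # the paper's efficient pipeline
trocr_per_scan = 50 * cost_per_scan      # "50x more" for TrOCR Base

print(f"per scan: ${cost_per_scan:.4f}")                                  # $0.0075
print(f"full collection: ${total_scans * cost_per_scan:,.0f}")            # $150,000
print(f"full collection with TrOCR: ${total_scans * trocr_per_scan:,.0f}")  # $7,500,000
```

So even at less than a cent per scan, finishing the collection would take another ~$90,000, and the TrOCR route would have been in seven-figure territory, which makes the choice of mobile-grade architectures easy to understand.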
When you are in digital humanities, you make do with what you have!]