
PyPDFLoader

This notebook provides a quick overview for getting started with the PyPDF document loader. For detailed documentation of all DocumentLoader features and configurations, head to the API reference.

Overview

Integration details

| Class | Package | Local | Serializable | JS support |
| --- | --- | --- | --- | --- |
| PyPDFLoader | langchain_community | | | |

Loader features

| Source | Document Lazy Loading | Native Async Support |
| --- | --- | --- |
| PyPDFLoader | | |

Setup

Credentials

No credentials are required to use PyPDFLoader.

Installation

To use PyPDFLoader you need to have the langchain-community and pypdf Python packages installed:

%pip install -qU langchain_community pypdf

Initialization

Now we can instantiate our loader and load documents:

from langchain_community.document_loaders import PyPDFLoader

loader = PyPDFLoader(
    "./example_data/layout-parser-paper.pdf",
)
API Reference: PyPDFLoader

Load

docs = loader.load()
docs[0]
Document(metadata={'source': './example_data/layout-parser-paper.pdf', 'page': 0}, page_content='LayoutParser : A Unified Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1( \x00), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1Allen Institute for AI\nshannons@allenai.org\n2Brown University\nruochen zhang@brown.edu\n3Harvard University\n{melissadell,jacob carlson }@fas.harvard.edu\n4University of Washington\nbcgl@cs.washington.edu\n5University of Waterloo\nw422li@uwaterloo.ca\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model configurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\nefforts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser , an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io .\nKeywords: Document Image Analysis ·Deep Learning ·Layout Analysis\n·Character Recognition ·Open Source library ·Toolkit.\n1 Introduction\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classification [ 11,arXiv:2103.15348v2  [cs.CV]  21 Jun 2021')
print(docs[0].metadata)
{'source': './example_data/layout-parser-paper.pdf', 'page': 0}
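
Each page of the PDF is loaded as a separate Document, with the source path and page number in its metadata. If you want smaller chunks for downstream indexing, you can split the loaded pages further; the following is a minimal sketch, assuming the langchain-text-splitters package is installed, using RecursiveCharacterTextSplitter with illustrative chunk sizes:

from langchain_text_splitters import RecursiveCharacterTextSplitter

# Split the per-page Documents into smaller, overlapping chunks.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# Each chunk keeps the source and page metadata of its parent Document.
print(len(chunks), chunks[0].metadata)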

Lazy Load

pages = []
for doc in loader.lazy_load():
    pages.append(doc)
    if len(pages) >= 10:
        # do some paged operation, e.g.
        # index.upsert(page)

        pages = []
len(pages)
6
print(pages[0].page_content[:100])
print(pages[0].metadata)
LayoutParser : A Unified Toolkit for DL-Based DIA 11
focuses on precision, efficiency, and robustness.
{'source': './example_data/layout-parser-paper.pdf', 'page': 10}
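
In the loop above, the `# index.upsert(page)` comment is a placeholder for whatever per-batch work you need. Below is a minimal sketch of the same pattern with a hypothetical handle_batch function (the function name and its print body are illustrative placeholders, not part of the loader's API):

from langchain_community.document_loaders import PyPDFLoader

loader = PyPDFLoader("./example_data/layout-parser-paper.pdf")

def handle_batch(batch):
    # Hypothetical placeholder: swap in your own indexing or upsert logic.
    print(f"processing {len(batch)} pages starting at page {batch[0].metadata['page']}")

pages = []
for doc in loader.lazy_load():
    pages.append(doc)
    if len(pages) >= 10:
        handle_batch(pages)
        pages = []

if pages:
    # Flush the final partial batch.
    handle_batch(pages)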

API reference

For detailed documentation of all PyPDFLoader features and configurations, head to the API reference: https://python.langchain.com/v0.2/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFLoader.html

