Parsing PDF files in Python with pdfminer (with example code)
When writing crawlers, you sometimes run into sites that only provide their content as PDF, so Scrapy cannot crawl the page content directly; the only option is to parse the PDF. At the moment the main choices are pyPDF and pdfminer. Since pdfminer is said to be better suited to text extraction, and extracting text is exactly what I need, I ended up choosing pdfminer (which also means I know next to nothing about pyPDF).
The first thing to note is that parsing PDFs is a tough job. Even pdfminer does not handle every PDF well, and when a PDF is malformed the results can be poor; even pdfminer's own developers say that "PDF is evil". But that aside, the official documentation is here: http://www.unixuser.org/~euske/python/pdfminer/index.html
I. Installation
1. First download the source package from http://pypi.python.org/pypi/pdfminer/, then install it by running: python setup.py install
2. Test the installation by running: pdf2txt.py samples/simple1.pdf. If the following content is displayed, the installation succeeded:
Hello World H e l l o W o r l d
3. If you need Chinese or Japanese character support, you must compile the CMap resources before installing:
# make cmap
python tools/conv_cmap.py pdfminer/cmap Adobe-CNS1 cmaprsrc/cid2code_Adobe_CNS1.txt
reading 'cmaprsrc/cid2code_Adobe_CNS1.txt'...
writing 'CNS1_H.py'...
...
(this may take several minutes)

# python setup.py install
II. Usage
Because PDF parsing is very time- and memory-consuming, pdfminer uses a strategy called lazy parsing: content is parsed only when it is actually needed, which reduces both time and memory usage. To parse a PDF file you need at least two classes: PDFParser and PDFDocument. PDFParser extracts data from the file, and PDFDocument stores that data. You also need a PDFPageInterpreter to process the page content and a device (such as PDFPageAggregator) to convert it into what you need. PDFResourceManager holds shared resources such as fonts and images.
Figure 1. Relationships between pdfminer classes
Layout mainly includes the following components:
LTPage
Represents an entire page. May contain child objects like LTTextBox, LTFigure, LTImage, LTRect, LTCurve and LTLine.
LTTextBox
Represents a group of text chunks contained in a rectangular area. Note that this box is created by geometric analysis and does not necessarily represent a logical boundary of the text. It contains a list of LTTextLine objects. The get_text() method returns the text content.
LTTextLine
Contains a list of LTChar objects that represent a single text line. The characters are aligned either horizontally or vertically, depending on the text's writing mode. The get_text() method returns the text content.
LTChar
LTAnno
Represents an actual letter in the text as a Unicode string. Note that while an LTChar object has actual boundaries, LTAnno objects do not, as they are "virtual" characters inserted by the layout analyzer according to the relationship between two characters (e.g. a space).
LTFigure
Represents an area used by PDF Form objects. PDF Forms can be used to present figures or pictures by embedding yet another PDF document within a page. Note that LTFigure objects can appear recursively.
LTImage
Represents an image object. Embedded images can be in JPEG or other formats, but currently pdfminer does not pay much attention to graphical objects.
LTLine
Represents a single straight line. Could be used for separating text or figures.
LTRect
Represents a rectangle. Could be used for framing other pictures or figures.
LTCurve
Represents a generic Bézier curve.
The official documentation includes a few demos, but they are too simple. It used to link to a more detailed demo, but that link has long since expired; I eventually found the new address: http://denis.papathanasiou.org/posts/2010.08.04.post.html
This demo is quite detailed. The source code is as follows:
#!/usr/bin/python
import sys
import os
from binascii import b2a_hex

###
### pdf-miner requirements
###

from pdfminer.pdfparser import PDFParser
from pdfminer.pdfdocument import PDFDocument, PDFNoOutlines
from pdfminer.pdfpage import PDFPage
from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.converter import PDFPageAggregator
from pdfminer.layout import LAParams, LTTextBox, LTTextLine, LTFigure, LTImage, LTChar

def with_pdf (pdf_doc, fn, pdf_pwd, *args):
    """Open the pdf document, and apply the function, returning the results"""
    result = None
    try:
        # open the pdf file
        fp = open(pdf_doc, 'rb')
        # create a parser object associated with the file object
        parser = PDFParser(fp)
        # create a PDFDocument object that stores the document structure,
        # supplying the password for initialization
        doc = PDFDocument(parser, pdf_pwd)
        # connect the parser and document objects
        parser.set_document(doc)
        if doc.is_extractable:
            # apply the function and return the result
            result = fn(doc, *args)
        # close the pdf file
        fp.close()
    except IOError:
        # the file doesn't exist or similar problem
        pass
    return result

###
### Table of Contents
###

def _parse_toc (doc):
    """With an open PDFDocument object, get the table of contents (toc) data
    [this is a higher-order function to be passed to with_pdf()]"""
    toc = []
    try:
        outlines = doc.get_outlines()
        for (level, title, dest, a, se) in outlines:
            toc.append((level, title))
    except PDFNoOutlines:
        pass
    return toc

def get_toc (pdf_doc, pdf_pwd=''):
    """Return the table of contents (toc), if any, for this pdf file"""
    return with_pdf(pdf_doc, _parse_toc, pdf_pwd)

###
### Extracting Images
###

def write_file (folder, filename, filedata, flags='w'):
    """Write the file data to the folder and filename combination
    (flags: 'w' for write text, 'wb' for write binary, use 'a' instead of 'w' for append)"""
    result = False
    if os.path.isdir(folder):
        try:
            file_obj = open(os.path.join(folder, filename), flags)
            file_obj.write(filedata)
            file_obj.close()
            result = True
        except IOError:
            pass
    return result

def determine_image_type (stream_first_4_bytes):
    """Find out the image file type based on the magic number comparison of the first 4 (or 2) bytes"""
    file_type = None
    bytes_as_hex = b2a_hex(stream_first_4_bytes)
    if bytes_as_hex.startswith('ffd8'):
        file_type = '.jpeg'
    elif bytes_as_hex == '89504e47':
        file_type = '.png'
    elif bytes_as_hex == '47494638':
        file_type = '.gif'
    elif bytes_as_hex.startswith('424d'):
        file_type = '.bmp'
    return file_type

def save_image (lt_image, page_number, images_folder):
    """Try to save the image data from this LTImage object, and return the file name, if successful"""
    result = None
    if lt_image.stream:
        file_stream = lt_image.stream.get_rawdata()
        if file_stream:
            file_ext = determine_image_type(file_stream[0:4])
            if file_ext:
                file_name = ''.join([str(page_number), '_', lt_image.name, file_ext])
                if write_file(images_folder, file_name, file_stream, flags='wb'):
                    result = file_name
    return result

###
### Extracting Text
###

def to_bytestring (s, enc='utf-8'):
    """Convert the given unicode string to a bytestring, using the standard encoding,
    unless it's already a bytestring"""
    if s:
        if isinstance(s, str):
            return s
        else:
            return s.encode(enc)

def update_page_text_hash (h, lt_obj, pct=0.2):
    """Use the bbox x0,x1 values within pct% to produce lists of associated text within the hash"""
    x0 = lt_obj.bbox[0]
    x1 = lt_obj.bbox[2]
    key_found = False
    for k, v in h.items():
        hash_x0 = k[0]
        if x0 >= (hash_x0 * (1.0 - pct)) and (hash_x0 * (1.0 + pct)) >= x0:
            hash_x1 = k[1]
            if x1 >= (hash_x1 * (1.0 - pct)) and (hash_x1 * (1.0 + pct)) >= x1:
                # the text inside this LT* object was positioned at the same
                # width as a prior series of text, so it belongs together
                key_found = True
                v.append(to_bytestring(lt_obj.get_text()))
                h[k] = v
    if not key_found:
        # the text, based on width, is a new series,
        # so it gets its own series (entry in the hash)
        h[(x0, x1)] = [to_bytestring(lt_obj.get_text())]
    return h

def parse_lt_objs (lt_objs, page_number, images_folder, text=[]):
    """Iterate through the list of LT* objects and capture the text or image data contained in each"""
    text_content = []
    page_text = {}  # k=(x0, x1) of the bbox, v=list of text strings within that bbox width (physical column)
    for lt_obj in lt_objs:
        if isinstance(lt_obj, LTTextBox) or isinstance(lt_obj, LTTextLine):
            # text, so arrange it logically based on its column width
            page_text = update_page_text_hash(page_text, lt_obj)
        elif isinstance(lt_obj, LTImage):
            # an image, so save it to the designated folder, and note its place in the text
            saved_file = save_image(lt_obj, page_number, images_folder)
            if saved_file:
                # use html style <img /> tag to mark the position of the image within the text
                text_content.append('<img src="' + os.path.join(images_folder, saved_file) + '" />')
            else:
                print >> sys.stderr, "error saving image on page", page_number, lt_obj.__repr__
        elif isinstance(lt_obj, LTFigure):
            # LTFigure objects are containers for other LT* objects, so recurse through the children
            text_content.append(parse_lt_objs(lt_obj, page_number, images_folder, text_content))
    for k, v in sorted([(key, value) for (key, value) in page_text.items()]):
        # sort the page_text hash by the keys (x0,x1 values of the bbox),
        # which produces a top-down, left-to-right sequence of related columns
        text_content.append(''.join(v))
    return '\n'.join(text_content)

###
### Processing Pages
###

def _parse_pages (doc, images_folder):
    """With an open PDFDocument object, get the pages and parse each one
    [this is a higher-order function to be passed to with_pdf()]"""
    rsrcmgr = PDFResourceManager()
    laparams = LAParams()
    device = PDFPageAggregator(rsrcmgr, laparams=laparams)
    interpreter = PDFPageInterpreter(rsrcmgr, device)
    text_content = []
    for i, page in enumerate(PDFPage.create_pages(doc)):
        interpreter.process_page(page)
        # receive the LTPage object for this page; layout is an LTPage object
        # which may contain child objects like LTTextBox, LTFigure, LTImage, etc.
        layout = device.get_result()
        text_content.append(parse_lt_objs(layout, (i + 1), images_folder))
    return text_content

def get_pages (pdf_doc, pdf_pwd='', images_folder='/tmp'):
    """Process each of the pages in this pdf file and return a list of strings
    representing the text found in each page"""
    return with_pdf(pdf_doc, _parse_pages, pdf_pwd, *tuple([images_folder]))

a = open('a.txt', 'a')
for i in get_pages('/home/jamespei/nova.pdf'):
    a.write(i)
a.close()
The key part of this code is update_page_text_hash. As you can see, pdfminer is a coordinate-based parsing framework: every component it parses out of a PDF carries the coordinates of its edges. For example, x0 = lt_obj.bbox[0] is the coordinate of the element's left edge, and likewise x1 is its right edge. The code above groups together all elements whose x0 and x1 coordinates lie within 20% of each other, which makes it possible to extract content from specific regions of a PDF file.
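The 20% tolerance test from update_page_text_hash can be isolated into a small predicate; same_column is a name I've introduced for illustration, but the comparison is the same as in the demo:

```python
def same_column(x0, x1, key, pct=0.2):
    """True if (x0, x1) falls within pct of a previously seen (hash_x0, hash_x1)
    pair, i.e. the text sits in (roughly) the same physical column."""
    hash_x0, hash_x1 = key
    return (hash_x0 * (1.0 - pct) <= x0 <= hash_x0 * (1.0 + pct) and
            hash_x1 * (1.0 - pct) <= x1 <= hash_x1 * (1.0 + pct))

# A text box spanning x=100..300 matches a column keyed at (95, 290),
# since 100 and 300 are within 20% of 95 and 290 respectively...
print(same_column(100, 300, (95, 290)))   # True
# ...but not a column keyed at (200, 290): 100 is far outside 160..240.
print(same_column(100, 300, (200, 290)))  # False
```

Note the tolerance is relative to the stored coordinates, so boxes near the left margin (small x0) are grouped much more strictly than boxes further right.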
---------------- Supplement --------------------
Note that when parsing some PDF files, the following error occurs:

pdfminer.pdfdocument.PDFEncryptionError: Unknown algorithm: param={'CF': {'StdCF': {'Length': 16, 'CFM': /AESV2, 'AuthEvent': /DocOpen}}, 'O': '\xe4\xe74\xb86/\xa8)\xa6x\xe6\xa3/U\xdf\x0fWR\x9cPh\xac\xae\x88B\x06_\xb0\x93@\x9f\x8d', 'Filter': /Standard, 'P': -1340, 'Length': 128, 'R': 4, 'U': '|UTX#f\xc9V\x18\x87z\x10\xcb\xf5{\xa7\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', 'V': 4, 'StmF': /StdCF, 'StrF': /StdCF}
Taken literally, this means the PDF is encrypted and therefore cannot be parsed. Yet when you open the PDF directly, no password is requested. The reason is that the file is indeed encrypted, but with an empty password, which is why this error occurs.
To solve this problem, decrypt the file with the qpdf command (make sure qpdf is installed first). To invoke it from Python, just use call:
from subprocess import call
call('qpdf --password=%s --decrypt %s %s' % ('', file_path, new_file_path), shell=True)
Here file_path is the path of the encrypted PDF and new_file_path is the path for the decrypted output. Parsing the decrypted file then works fine.
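The same call can also be written without shell=True by passing qpdf its arguments as a list, which avoids any quoting issues in the file paths. This is a sketch of my own (qpdf_decrypt_cmd and decrypt_pdf are illustrative names; qpdf must be on the PATH):

```python
import subprocess

def qpdf_decrypt_cmd(file_path, new_file_path, password=''):
    """Build the qpdf argument list; an empty password handles
    'encrypted but with a blank password' PDFs like the one above."""
    return ['qpdf', '--password=' + password, '--decrypt', file_path, new_file_path]

def decrypt_pdf(file_path, new_file_path, password=''):
    """Run qpdf and report success (exit code 0)."""
    return subprocess.call(qpdf_decrypt_cmd(file_path, new_file_path, password)) == 0
```

After decrypt_pdf('in.pdf', 'out.pdf') succeeds, 'out.pdf' can be fed to the parsing code above.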
That is all the content of this article. I hope it is helpful for your learning.