Python Network Data Collection study notes, Chapters 1 and 2: Python data collection

Source: Internet
Author: User


If your English is not up to the original, you are stuck with the Chinese version, and the translation from the Posts and Telecommunications Publishing House is really bad.

That was the aside; the notes themselves follow.

I recommend installing a recent Python 3.x release; older versions that ship without pip make installing packages troublesome.

Download: https://www.python.org/downloads/windows/

1.1 Tip: the code in these notes uses urllib.request, which is the Python 3.x form; under Python 2.x, replace it with urllib or urllib2.

1.2.1 The install command is pip install beautifulsoup4; take care not to misspell it as beatifulsoup4.

1.2.2 Use html.read() with BeautifulSoup to read the content of the h1 tag on a web page.

#!/usr/bin/env python
# coding=utf-8
from urllib.request import urlopen  # Python 3.x form; in 2.x drop the .request suffix (see tip 1.1)
from bs4 import BeautifulSoup

html = urlopen("http://www.pythonscraping.com/pages/page1.html")
bsObj = BeautifulSoup(html.read())
print(bsObj.h1.get_text())

1.2.3 Handling errors

The page does not exist: catch it with except HTTPError as e:

The server does not exist: urlopen returns None, so check if html is None

A tag does not exist: catch it with except AttributeError as e:

Wrap the whole thing in a function that returns None when an error occurs.

#!/usr/bin/env python
# coding=utf-8
from urllib2 import urlopen, HTTPError  # Python 2.x; in 3.x use urllib.request.urlopen and urllib.error.HTTPError
from bs4 import BeautifulSoup

def getTitle(url):
    try:
        html = urlopen(url)
    except HTTPError as e:  # the page could not be fetched
        return None
    try:
        bsObj = BeautifulSoup(html.read())
        title = bsObj.body.h1
    except AttributeError as e:  # a tag along the way was missing
        return None
    return title

title = getTitle("http://www.pythonscraping.com/pages/pageee1.html")  # deliberately a page that does not exist
if title == None:
    print("Title could not be found")
else:
    print(title)

2.2 Extracting text based on tag attributes

nameList = bsObj.findAll("span", {"class": "green"})

# Note: the A in findAll must be capitalized.

.get_text() strips the tag markup and returns only the text. For example, print(bsObj.h1.get_text()) prints the content of the h1 tag with the tag itself removed.
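As a quick offline check of this behavior, a minimal sketch (the HTML string below is my own, not from the book):

```python
from bs4 import BeautifulSoup

# A made-up snippet so this runs without fetching any page
html = "<html><body><h1>An <span>Interesting</span> Title</h1></body></html>"
bsObj = BeautifulSoup(html, "html.parser")

print(bsObj.h1)             # prints the tag with its markup intact
print(bsObj.h1.get_text())  # prints only the text: An Interesting Title
```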


 

2.2.1 The difference between find and findAll: findAll takes a limit argument that caps the number of matches returned, while find is simply findAll with limit=1, returning only the first match.
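A small sketch of the difference, using made-up markup rather than the book's page:

```python
from bs4 import BeautifulSoup

# Hypothetical markup, just to show the limit parameter
html = ('<div><span class="green">a</span>'
        '<span class="green">b</span>'
        '<span class="green">c</span></div>')
bsObj = BeautifulSoup(html, "html.parser")

allSpans = bsObj.findAll("span", {"class": "green"})           # every match
twoSpans = bsObj.findAll("span", {"class": "green"}, limit=2)  # at most two matches
oneSpan = bsObj.find("span", {"class": "green"})               # same as limit=1, returns the tag itself

print(len(allSpans), len(twoSpans), oneSpan.get_text())
```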

2.2.2 BeautifulSoup objects

BeautifulSoup object: bsObj itself

Tag object: e.g. bsObj.div.h1

NavigableString object: the text inside a tag

Comment object: an HTML comment, <!-- *** -->
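A sketch of how the four kinds show up in practice (the snippet is made up; checking isinstance against Comment is one common way to pull comments out):

```python
from bs4 import BeautifulSoup, Comment

# A made-up snippet containing all four object kinds
html = "<div><h1>Title</h1><!-- a hidden comment --></div>"
bsObj = BeautifulSoup(html, "html.parser")     # the BeautifulSoup object itself

tag = bsObj.div.h1                             # Tag object
text = tag.string                              # NavigableString: the text inside the tag
comment = bsObj.find(string=lambda s: isinstance(s, Comment))  # Comment object

print(type(tag).__name__, type(text).__name__, type(comment).__name__)
```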

2.2.3 Navigating the tree: child, sibling, and parent tags

Child and descendant tags

from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("http://www.pythonscraping.com/pages/page3.html")
bsObj = BeautifulSoup(html)  # compared with 1.2, .read() can be omitted
# .children yields only direct children; .descendants yields all descendants
for child in bsObj.find("table", {"id": "giftList"}).children:
    print(child)

Sibling tags

from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("http://www.pythonscraping.com/pages/page3.html")
bsObj = BeautifulSoup(html)
# .tr selects the header row; .next_siblings yields every row after it
# .previous_siblings would yield the rows before a given row
# the singular .next_sibling / .previous_sibling return just one tag
for sibling in bsObj.find("table", {"id": "giftList"}).tr.next_siblings:
    print(sibling)

Parent tag

from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("http://www.pythonscraping.com/pages/page3.html")
bsObj = BeautifulSoup(html)
print(bsObj.find("img", {"src": "../img/gifts/img1.jpg"}).parent.previous_sibling.get_text())

2.3 Regular expressions

Only a passing mention here: webmaster tool sites include regular expression testers you can experiment with.
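To see what such a tester exercises, a plain re sketch (pattern and test strings are my own examples, not from the book):

```python
import re

# A simplified pattern for image paths like the ones on the book's demo pages
pattern = re.compile(r"img[0-9]+\.jpg")

print(bool(pattern.search("../img/gifts/img1.jpg")))  # True
print(bool(pattern.search("../img/gifts/logo.png")))  # False
```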

2.4 Regular expressions and BeautifulSoup

from urllib.request import urlopen
from bs4 import BeautifulSoup
import re

html = urlopen("http://www.pythonscraping.com/pages/page3.html")
bsObj = BeautifulSoup(html)
images = bsObj.findAll("img", {"src": re.compile("\.\.\/img\/gifts/img.*\.jpg")})
for image in images:
    print(image["src"])

2.5 Getting attributes

I could not follow the book's introduction here.
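As far as I can tell, the section is about a tag's .attrs dictionary and dictionary-style indexing; a minimal sketch with made-up markup:

```python
from bs4 import BeautifulSoup

# Made-up markup; .attrs gives the whole attribute dictionary
html = '<img src="../img/gifts/img1.jpg" alt="gift">'
img = BeautifulSoup(html, "html.parser").img

print(img.attrs)    # the full attribute dictionary
print(img["src"])   # a single attribute: ../img/gifts/img1.jpg
```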

2.6 Lambda expressions

Not touched yet
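For when I do get to it: findAll also accepts a function in place of a tag name, which is where lambdas come in. A hedged sketch with made-up markup:

```python
from bs4 import BeautifulSoup

html = ('<div><span class="a" id="x">two attrs</span>'
        '<span class="b">one attr</span></div>')
bsObj = BeautifulSoup(html, "html.parser")

# The lambda keeps only tags carrying exactly two attributes
tags = bsObj.findAll(lambda tag: len(tag.attrs) == 2)
for t in tags:
    print(t.get_text())
```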

2.7 There are many other modules besides these; I have previously used urllib and urllib2 to crawl Weibo.

 
