Python 3 practice: getting data from a website (Carbon Market Data, Guangdong) with bs4/BeautifulSoup
Sometimes the data you need sits behind a hidden link: the page itself does not expose the URL, so you have to inspect the page source in the browser (e.g. with the developer tools) to find the real request address.
In the case below, the data is crawled directly from that real link.
It also turns out that the table parsed with "lxml" cannot be read directly with pandas' read_html; that still needs further investigation.
In addition, the crawled data contains many whitespace characters, including "\r", "\n", and "\t",
so this case also shows how to strip those characters from the strings.
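The two cleanup approaches used later differ in how str.replace matches: chaining one replace call per character removes every "\r", "\n", and "\t" wherever it occurs, while a single replace("\r\n\t", "") only removes that exact three-character sequence. A minimal stdlib sketch of the difference:

```python
# Chained replace: removes each whitespace character wherever it occurs.
def clean_chained(s):
    return s.replace('\r', '').replace('\n', '').replace('\t', '')

# Single replace: only removes the literal "\r\n\t" sequence, so any of
# those characters appearing on their own survives.
def clean_single(s):
    return s.replace('\r\n\t', '')

sample = '\r\n\t 12.34 \n'
print(repr(clean_chained(sample)))  # ' 12.34 '
print(repr(clean_single(sample)))   # ' 12.34 \n' -- trailing \n remains
```

This likely explains why the two methods in the script below behave differently on the scraped cells.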
The code is as follows:
# Code based on Python 3.x
# _*_ coding: utf-8 _*_
# __Author__: "LEMON"


from bs4 import BeautifulSoup
import requests
import csv

url2 = 'http://ets.cnemission.com/carbon/portalIndex/markethistory?Top=1'

req = requests.get(url2)
# soup = BeautifulSoup(req.content, 'html5lib')
soup = BeautifulSoup(req.content, 'lxml')
# Parsing with 'lxml' also extracts the data; note that without
# newline='' in open() below, each row in the csv file would be
# followed by a blank line.

table = soup.table
trs = table.find_all('tr')

list1 = []
for tr in trs:
    td = tr.find_all('td')

    # Strip the "\r", "\n" and "\t" characters from each cell.
    # Both methods below produce a csv file, but the file generated
    # by method 1 is smaller; I do not understand why.
    # method 1
    row = [i.text.replace('\r', '').replace('\n', '').replace('\t', '') for i in td]
    # method 2
    # row = [i.text.replace('\r\n\t', '') for i in td]

    list1.append(row)

with open('mkt_guangdong.csv', 'a', errors='ignore', newline='') as f:
    f_csv = csv.writer(f)
    f_csv.writerows(list1)
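Because the script above depends on the live site, here is a self-contained sketch of the same table-walking logic run against an inline HTML snippet (the table contents are made up for illustration, and the built-in 'html.parser' is used so no lxml install is required):

```python
from bs4 import BeautifulSoup

# A tiny stand-in for the market-history page, with the same kind of
# embedded \r, \n and \t noise as the real scraped cells.
html = """
<table>
  <tr><td>Date\r\n</td><td>\tPrice</td></tr>
  <tr><td>2017-05-02</td><td>13.50</td></tr>
</table>
"""

soup = BeautifulSoup(html, 'html.parser')
rows = []
for tr in soup.table.find_all('tr'):
    tds = tr.find_all('td')
    # Same cleanup as method 1 above: strip \r, \n and \t from each cell.
    rows.append([td.text.replace('\r', '').replace('\n', '').replace('\t', '')
                 for td in tds])

print(rows)  # [['Date', 'Price'], ['2017-05-02', '13.50']]
```

Swapping the inline string for `requests.get(url2).content` and the parser for 'lxml' gives the behavior of the full script.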