Python data storage: CSV
CSV (Comma-Separated Values) files store tabular data (numbers and text) in plain text: each record sits on its own line, and the fields within a record are separated by a delimiter, most commonly a comma or a tab. Writing a CSV file with Python's built-in csv module looks like this:
# coding: utf-8
import csv

headers = ['ID', 'UserName', 'Password', 'Age', 'Country']
rows = [(1001, "guobao", "1382_pass", 21, "China"),
        (1002, "Mary", "Mary_pass", 20, "USA"),
        (1003, "Jack", "Jack_pass", 20, "USA"),
        ]
with open('guguobao.csv', 'w') as f:
    f_csv = csv.writer(f)
    f_csv.writerow(headers)
    f_csv.writerows(rows)

Output:

ID,UserName,Password,Age,Country
1001,guobao,1382_pass,21,China
1002,Mary,Mary_pass,20,USA
1003,Jack,Jack_pass,20,USA
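The delimiter is configurable. Below is a minimal sketch, not part of the original example, showing the same kind of data written with a tab delimiter; the filename guguobao.tsv is only an illustration:

import csv

# minimal sketch (assumed filename): write tab-separated values
# by passing delimiter='\t' to csv.writer
headers = ['ID', 'UserName', 'Password']
rows = [(1001, "guobao", "1382_pass"), (1002, "Mary", "Mary_pass")]
with open('guguobao.tsv', 'w') as f:
    f_csv = csv.writer(f, delimiter='\t')
    f_csv.writerow(headers)
    f_csv.writerows(rows)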
- The elements of the rows list can be tuples, as above, or dictionaries, for example:
import csv

headers = ['ID', 'UserName', 'Password', 'Age', 'Country']
rows = [{'ID': 1001, 'UserName': "qiye", 'Password': "qiye_pass", 'Age': 24, 'Country': "China"},
        {'ID': 1002, 'UserName': "Mary", 'Password': "Mary_pass", 'Age': 20, 'Country': "USA"},
        {'ID': 1003, 'UserName': "Jack", 'Password': "Jack_pass", 'Age': 20, 'Country': "USA"},
        ]
with open('qiye.csv', 'w') as f:
    f_csv = csv.DictWriter(f, headers)
    f_csv.writeheader()
    f_csv.writerows(rows)
Next is reading CSV. To read a CSV file you need to create a reader object, for example:
import csv

with open('guguobao.csv', 'r') as f:
    f_csv = csv.reader(f)
    headers = next(f_csv)
    print headers
    for row in f_csv:
        print row
- Here each row is a plain list, so fields are accessed by index, e.g. row[0] for the ID and row[3] for the age. Because index access is easy to mix up, consider using named tuples instead:
from collections import namedtuple
import csv

with open('qiye.csv') as f:
    f_csv = csv.reader(f)
    headings = next(f_csv)
    Row = namedtuple('Row', headings)
    for r in f_csv:
        row = Row(*r)
        print row.UserName, row.Password
        print row

Output:

C:\Python27\python.exe F:/爬虫/5.1.2.py
qiye qiye_pass
Row(ID='1001', UserName='qiye', Password='qiye_pass', Age='24', Country='China')
Mary Mary_pass
Row(ID='1002', UserName='Mary', Password='Mary_pass', Age='20', Country='USA')
Jack Jack_pass
Row(ID='1003', UserName='Jack', Password='Jack_pass', Age='20', Country='USA')

Process finished with exit code 0
- This lets you refer to columns by name, such as row.UserName and row.Password, instead of by index. Besides named tuples, another option is to read each row as a dictionary, as follows:
import csv

with open('qiye.csv') as f:
    f_csv = csv.DictReader(f)
    for row in f_csv:
        print row.get('UserName'), row.get('Password')

Output:

qiye qiye_pass
Mary Mary_pass
Jack Jack_pass
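As a side note, if a CSV file has no header row, csv.DictReader also accepts an explicit fieldnames argument. The sketch below is only an illustration and the filename no_header.csv is hypothetical:

import csv

# minimal sketch: reading a headerless CSV by supplying the column
# names ourselves (the filename 'no_header.csv' is hypothetical)
fieldnames = ['ID', 'UserName', 'Password', 'Age', 'Country']
with open('no_header.csv') as f:
    f_csv = csv.DictReader(f, fieldnames=fieldnames)
    for row in f_csv:
        print row.get('UserName'), row.get('Password')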
Finally, let's use CSV to store the chapter titles and links parsed from the http://seputu.com home page:
# coding: utf-8
from lxml import etree
import requests
import re
import csv

user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
headers = {'User-Agent': user_agent}
r = requests.get('http://seputu.com/', headers=headers)
# parse the page with lxml
html = etree.HTML(r.text)
div_mulus = html.xpath('.//*[@class="mulu"]')  # first find all div tags with class="mulu"
pattern = re.compile(r'\s*\[(.*)\]\s+(.*)')
rows = []
for div_mulu in div_mulus:
    # find the h2 inside the mulu-title div
    div_h2 = div_mulu.xpath('./div[@class="mulu-title"]/center/h2/text()')
    if len(div_h2) > 0:
        h2_title = div_h2[0].encode('utf-8')
        a_s = div_mulu.xpath('./div[@class="box"]/ul/li/a')
        for a in a_s:
            # get the href attribute
            href = a.xpath('./@href')[0].encode('utf-8')
            # get the title attribute
            box_title = a.xpath('./@title')[0]
            match = pattern.search(box_title)
            if match is not None:
                date = match.group(1).encode('utf-8')
                real_title = match.group(2).encode('utf-8')
                content = (h2_title, real_title, href, date)
                print content
                rows.append(content)

headers = ['title', 'real_title', 'href', 'date']
with open('qiye.csv', 'w') as f:
    f_csv = csv.writer(f)
    f_csv.writerow(headers)
    f_csv.writerows(rows)
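To make the regular expression step concrete: the pattern r'\s*\[(.*)\]\s+(.*)' splits an anchor title of the form "[date] title" into a date part and the real title. The string below is only a hypothetical example of that format, not actual data from the site:

import re

# hypothetical title string in the "[date] title" format the pattern expects
box_title = '[2015-01-01 12:00:00] Chapter 1'
pattern = re.compile(r'\s*\[(.*)\]\s+(.*)')
match = pattern.search(box_title)
if match is not None:
    print match.group(1)  # '2015-01-01 12:00:00'
    print match.group(2)  # 'Chapter 1'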