Today I ran into a problem while writing Python: I defined a list-type class variable, but the list needs to be filled with a large number of URLs at initialization, so it has to be built by a function. My first attempt looked like this:
class TianyaSpider(CrawlSpider):
    def init_start():
        url_l = u'http://search.tianya.cn/s?tn=sty&rn=10&pn='
        url_r = u'&s=0&pid=&f=0&h=1&ma=0&q=%B8%DF%BF%BC%D6%BE%D4%B8'
        urls = []
        for i in range(0, 75, 1):
            tem = url_l + str(i) + url_r
            urls.append(tem)
        return urls

    name = 'tianya'
    allowed_domains = ['tianya.cn']
    start_urls = init_start()
This version runs, but it is not clean: if the function definition is moved below the assignment, the name is no longer recognized. I think this is much like Java: the code in the class body is executed when the class is first loaded, so whatever appears there runs at that point, top to bottom. A function written this way is also awkward to call from outside the class. Since this is not idiomatic Python, I wanted to change the approach.
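The top-to-bottom behavior can be demonstrated with a minimal sketch (the class and helper names here are illustrative, not part of the spider):

```python
# A class body is executed top to bottom like a script, so a helper must be
# defined before the line that calls it.
class Works(object):
    def make_list():            # still a plain function while the body runs
        return [1, 2, 3]
    values = make_list()        # fine: make_list is already bound above

try:
    class Broken(object):
        values = make_list2()   # NameError: not yet defined in this body
        def make_list2():
            return [1, 2, 3]
except NameError:
    broken_failed = True        # the class statement never completes
```

Note that `make_list` ends up as an attribute of `Works`, not a module-level function, which is part of why this pattern feels unidiomatic.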
My first idea was to call a class method from inside the class definition to initialize the class variable, so the code became the following:
# -*- coding: utf-8 -*-
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from GaoKao.items import GaokaoItem

class TianyaSpider(CrawlSpider):
    name = 'tianya'
    allowed_domains = ['tianya.cn']
    start_urls = TianyaSpider.init_start()
    count = 0

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        self.count = self.count + 1
        # title = hxs.select("//div[@id='post-title'][@class='fn-clear']/h1[@id='htitle']//*/text()").extract()
        title = hxs.select('//title/text()').extract()
        item = GaokaoItem()
        item['title'] = title[0]
        yield item

    @classmethod
    def init_start(cls):
        url_l = u'http://search.tianya.cn/s?tn=sty&rn=10&pn='
        url_r = u'&s=0&pid=&f=0&h=1&ma=0&q=%B8%DF%BF%BC%D6%BE%D4%B8'
        urls = []
        for i in range(0, 75, 1):
            tem = url_l + str(i) + url_r
            urls.append(tem)
        return urls
But this raises an error saying that TianyaSpider is not defined: apparently the class name is not yet bound to any object at that point. So I tried again:
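The NameError can be reproduced in isolation; while a class body is executing, the class object does not exist yet, so the body cannot refer to the class by its own name (`Spider` and `make` below are illustrative names, not Scrapy's API):

```python
# Referring to a class by name from inside its own body fails, because the
# name is only bound after the class statement finishes.
try:
    class Spider(object):
        @classmethod
        def make(cls):
            return ['u0', 'u1']
        start = Spider.make()   # NameError: 'Spider' is not defined yet
except NameError:
    name_was_unbound = True

# Once the class statement has completed, the name works normally:
class Spider2(object):
    @classmethod
    def make(cls):
        return ['u0', 'u1']

Spider2.start = Spider2.make()  # populate the class attribute afterwards
```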
# -*- coding: utf-8 -*-
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from GaoKao.items import GaokaoItem

class TianyaSpider(CrawlSpider):
    @classmethod
    def init_start(cls):
        url_l = u'http://search.tianya.cn/s?tn=sty&rn=10&pn='
        url_r = u'&s=0&pid=&f=0&h=1&ma=0&q=%B8%DF%BF%BC%D6%BE%D4%B8'
        urls = []
        for i in range(0, 75, 1):
            tem = url_l + str(i) + url_r
            urls.append(tem)
        return urls

    name = 'tianya'
    allowed_domains = ['tianya.cn']
    start_urls = init_start()
    count = 0

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        self.count = self.count + 1
        # title = hxs.select("//div[@id='post-title'][@class='fn-clear']/h1[@id='htitle']//*/text()").extract()
        title = hxs.select('//title/text()').extract()
        item = GaokaoItem()
        item['title'] = title[0]
        yield item
However, this also fails, with an error saying the method cannot be called: inside the class body, init_start is still a raw classmethod object rather than a bound method, so calling it directly raises a TypeError. At this point I still did not know how to use a class method for the initialization.
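For reference, there are common workarounds that avoid calling the method inside the class body at all; the sketch below shows two of them (all names are illustrative, not Scrapy's API):

```python
def _build_urls():
    # Workaround 1: a module-level helper, defined before the class, can be
    # called directly from the class body.
    return ['http://example.com/page%d' % i for i in range(3)]

class SpiderA(object):
    start_urls = _build_urls()

class SpiderB(object):
    @classmethod
    def init_start(cls):
        return _build_urls()

# Workaround 2: keep the classmethod, but populate the attribute after the
# class statement, when the class object exists and the method is bound.
SpiderB.start_urls = SpiderB.init_start()
```

Both keep the URL-building logic in one place without relying on the order of definitions inside the body.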
However, I could restructure the code to use an instance method instead, as follows:
# -*- coding: utf-8 -*-
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from GaoKao.items import GaokaoItem

class TianyaSpider(CrawlSpider):
    name = 'tianya'
    allowed_domains = ['tianya.cn']
    start_urls = []
    count = 0

    def __init__(self):
        CrawlSpider.__init__(self)
        TianyaSpider.start_urls = self.init_start()

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        self.count = self.count + 1
        # title = hxs.select("//div[@id='post-title'][@class='fn-clear']/h1[@id='htitle']//*/text()").extract()
        title = hxs.select('//title/text()').extract()
        item = GaokaoItem()
        item['title'] = title[0]
        yield item

    def init_start(self):
        url_l = u'http://search.tianya.cn/s?tn=sty&rn=10&pn='
        url_r = u'&s=0&pid=&f=0&h=1&ma=0&q=%B8%DF%BF%BC%D6%BE%D4%B8'
        urls = []
        for i in range(0, 75, 1):
            tem = url_l + str(i) + url_r
            urls.append(tem)
        return urls
This succeeds. The reason TianyaSpider is recognized here is that the body of a method is only executed when the method is actually called, and by then the class statement has finished, so the name is already bound.
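The call-time name resolution can be seen in a minimal sketch (`Demo` stands in for the spider class):

```python
# A method body is only executed when the method is called; by then the
# class statement has completed, so the class name is bound in the module
# globals and can be used freely inside the method.
class Demo(object):
    start_urls = []

    def init_start(self):
        # 'Demo' is looked up at CALL time, not at definition time
        Demo.start_urls = ['http://example.com/%d' % i for i in range(2)]
        return Demo.start_urls

d = Demo()
d.init_start()
```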