How to crawl WeChat articles with Python

Source: Internet
Author: User
Tags: openid
This article shares a small, practical Python program that crawls WeChat official account articles through the Sogou portal. If you need something similar, the code below is a useful reference.

I want to develop a website for collecting WeChat articles, but WeChat itself exposes no entry link for crawling, and I could not find one on my own. After reading a lot of material on the internet, I found that the approaches are broadly the same: they all go through Sogou's WeChat search. The following is the author's Python crawling code; if you are interested, please read on.


#!/usr/bin/env python
# coding: utf-8
__author__ = 'haoning'

import time
import json
import sys
import requests
import xml.etree.ElementTree as ET  # the fetched items are XML snippets

reload(sys)
sys.setdefaultencoding("utf-8")

# openid of the target official account on Sogou
# OPENID = 'hangzhou'
OPENID = 'oIWsFtw_-W2DaHwRz1oGWzL-wF9M'
XML_LIST = []

# get the current time in milliseconds (used as the t= cache-buster parameter)
current_milli_time = lambda: int(round(time.time() * 1000))

def get_json(pageIndex):
    """Fetch one result page from Sogou's JSONP endpoint and return it as a dict."""
    global OPENID
    the_headers = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) '
                      'AppleWebKit/537.36 (KHTML, like Gecko) '
                      'Chrome/39.0.2171.95 Safari/537.36',
        'Referer': 'http://weixin.sogou.com/gzh?openid={0}'.format(OPENID),
        'Host': 'weixin.sogou.com'
    }
    url = ('http://weixin.sogou.com/gzhjs?cb=sogou.weixin.gzhcb'
           '&openid={0}&page={1}&t={2}').format(OPENID, pageIndex, current_milli_time())
    print url
    response = requests.get(url, headers=the_headers)
    # TODO: check that the response matches the expected pattern
    response_text = response.text
    # strip the JSONP wrapper sogou.weixin.gzhcb( ... ) to get the bare JSON
    json_start = response_text.index('sogou.weixin.gzhcb(') + 19  # len of the callback prefix
    json_end = response_text.rindex(')')
    json_str = response_text[json_start:json_end]
    # convert json_str to a json object
    return json.loads(json_str)

def add_xml(jsonObj):
    global XML_LIST
    xmls = jsonObj['items']  # each item is an XML snippet describing one article
    XML_LIST.extend(xmls)    # use the new list to extend the original list

# ------------ Main ------------
print 'play it :)'
# fetch the first page to learn the total number of pages
default_json_obj = get_json(1)
total_pages = 0
total_items = 0
if default_json_obj:
    # add the first page's xmls
    add_xml(default_json_obj)
    # get the rest of the items
    total_pages = default_json_obj['totalPages']
    total_items = default_json_obj['totalItems']
    print total_pages
    # iterate over the remaining pages
    if total_pages >= 2:
        for pageIndex in range(2, total_pages + 1):
            add_xml(get_json(pageIndex))  # extend
            print 'load page ' + str(pageIndex)
            print len(XML_LIST)
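The script stops at collecting the raw strings in XML_LIST; the article never shows how they are consumed. As a minimal sketch of a possible next step, continuing from the script above and assuming each collected item is an XML snippet with <title> and <url> children (these element names are hypothetical, not confirmed by the source), the list could be unpacked with the xml.etree.ElementTree module the script already imports:

# Hedged sketch: unpack the collected items with ElementTree.
# ASSUMPTION: each entry looks roughly like
#   <item><title>...</title><url>...</url></item>
# The real Sogou field names may differ.
import xml.etree.ElementTree as ET

def parse_items(xml_list):
    articles = []
    for snippet in xml_list:
        root = ET.fromstring(snippet.encode('utf-8'))
        articles.append({
            'title': root.findtext('title'),  # hypothetical element name
            'url': root.findtext('url')       # hypothetical element name
        })
    return articles

for article in parse_items(XML_LIST):
    print article['title'], article['url']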

The above is the detailed content of this method for crawling WeChat articles with Python. For more information, see the other related articles in the PHP community!
