Several methods to extract domain names from URLs in Python
To extract the domain name from a URL, the first instinct is to use regular expressions, and only then to look for a suitable library. Pure regular-expression parsing has many shortcomings, including the odd forms a host can take in a URL and the ever-growing list of domain suffixes. A Google search turns up several approaches: one combines Python's built-in modules with a regular expression to resolve the domain name; another hands the job directly to a third-party module with a ready-made parser.
The URLs to be parsed:
The code is as follows:

urls = ["http://meiwen.me/src/index.html",
        "http://1000chi.com/game/index.html",
        "http://see.xidian.edu.cn/cpp/html/1429.html",
        "https://docs.python.org/2/howto/regex.html",
        "https://www.google.com.hk/search?client=aff-cs-360chromium&hs=TSj&q=url%E8%A7%A3%E6%9E%90%E5%9F%9F%E5%90%8D&oq=url%E8%A7%A3%E6%9E%90%E5%9F%9F%E5%90%8D&gs_l=serp.3...74418.86867.0.87673.28.25.2.0.0.0.541.2454.2-6j0j1j1.8.0....0...1c.1j4.53.serp..26.2.547.IuHTj4uoyHg",
        "file://D:/code/echarts-2.0.3/doc/example/tooltip.html",
        "http://api.mongodb.org/python/current/faq.html#is-pymongo-thread-safe",
        "https://pypi.python.org/pypi/publicsuffix",
        "http://127.0.0.1:8000"]
Use urlparse + a regular expression
The code is as follows:

import re
from urlparse import urlparse

# common top-level / second-level domain suffixes
topHostPostfix = (
    '.com', '.la', '.io', '.co', '.info', '.net', '.org', '.me', '.mobi',
    '.us', '.biz', '.xxx', '.ca', '.co.jp', '.com.cn', '.net.cn',
    '.org.cn', '.mx', '.tv', '.ws', '.ag', '.com.ag', '.net.ag',
    '.org.ag', '.am', '.asia', '.at', '.be', '.com.br', '.net.br',
    '.bz', '.com.bz', '.net.bz', '.cc', '.com.co', '.net.co',
    '.nom.co', '.de', '.es', '.com.es', '.nom.es', '.org.es',
    '.eu', '.fm', '.fr', '.gs', '.in', '.co.in', '.firm.in', '.gen.in',
    '.ind.in', '.net.in', '.org.in', '.it', '.jobs', '.jp', '.ms',
    '.com.mx', '.nl', '.nu', '.co.nz', '.net.nz', '.org.nz',
    '.se', '.tc', '.tk', '.tw', '.com.tw', '.idv.tw', '.org.tw',
    '.hk', '.co.uk', '.me.uk', '.org.uk', '.vg', '.com.hk')

# match "one label + a known suffix" at the end of the host
regx = r'[^\.]+(' + '|'.join([h.replace('.', r'\.') for h in topHostPostfix]) + r')$'
pattern = re.compile(regx, re.IGNORECASE)

print "--" * 40
for url in urls:
    parts = urlparse(url)
    host = parts.netloc
    m = pattern.search(host)
    res = m.group() if m else host
    print "unknown" if not res else res
The running result is as follows:

meiwen.me
1000chi.com
see.xidian.edu.cn
python.org
google.com.hk
unknown
mongodb.org
python.org
127.0.0.1:8000

The results are acceptable.
Use urllib to resolve the domain name
The code is as follows:

import urllib

print "--" * 40
for url in urls:
    proto, rest = urllib.splittype(url)  # split off the scheme
    res, rest = urllib.splithost(rest)   # split off the host
    print "unknown" if not res else res
The running result is as follows:

meiwen.me
1000chi.com
see.xidian.edu.cn
docs.python.org
www.google.com.hk
unknown
api.mongodb.org
pypi.python.org
127.0.0.1:8000
The www. prefix is also kept, since this returns the full host rather than the registered domain, so further processing is required.
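For instance, a minimal follow-up step to drop a leading www. could look like this (my sketch, not part of the original post):

# strip a leading "www." from the host returned by urllib.splithost
host = "www.google.com.hk"
if host.startswith("www."):
    host = host[len("www."):]
print host  # google.com.hk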
Use the third-party module tld
The code is as follows:

from tld import get_tld

print "--" * 40
for url in urls:
    try:
        print get_tld(url)
    except Exception as e:
        print "unknown"
The running result is as follows:

meiwen.me
1000chi.com
xidian.edu.cn
python.org
google.com.hk
unknown
mongodb.org
python.org
unknown

The results are acceptable.
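As a side note, if I remember the tld API correctly, the try/except can be replaced with the fail_silently keyword (an assumption about the library, not something the original post shows):

from tld import get_tld

for url in urls:
    res = get_tld(url, fail_silently=True)  # returns None instead of raising (assumed keyword)
    print "unknown" if not res else res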
Other available parsing modules (a quick sketch of the latter two follows):
tld
tldextract
publicsuffix
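Here is a short sketch of tldextract and publicsuffix, assuming both are installed via pip; the calls reflect their documented APIs as I recall them, so treat this as an assumption rather than the original author's code:

import tldextract
from publicsuffix import PublicSuffixList

# tldextract splits a URL into subdomain / domain / suffix
ext = tldextract.extract("http://see.xidian.edu.cn/cpp/html/1429.html")
print ext.domain + "." + ext.suffix               # xidian.edu.cn

# publicsuffix works on a bare hostname using the Public Suffix List
psl = PublicSuffixList()
print psl.get_public_suffix("www.google.com.hk")  # google.com.hk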
Can Python be used to capture URLs generated by JS on a webpage?
The most heavy-handed approach is to run the fetched page through a JS interpreter to produce a static page; projects of that kind can be searched for on Google Code.
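As a rough illustration of the idea, using selenium as the "JS interpreter" (my substitution, not what the answer suggested; assumes selenium and a browser driver are installed):

from selenium import webdriver

driver = webdriver.Firefox()        # any JS-capable browser driver works
driver.get("http://example.com/")   # hypothetical page whose links are built by JS
# once the browser has executed the page's JS, the links are plain DOM nodes
links = [a.get_attribute("href") for a in driver.find_elements_by_tag_name("a")]
print links
driver.quit()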
Deep URL crawling in Python
Add:

global allUrl
allUrl = []

and inside "for i in webUrl:" add the following check:

if i in allUrl:
    continue
else:
    allUrl.append(i)
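A list works, but membership tests on a list are O(n); a set does the same deduplication with O(1) lookups (my suggestion, not in the original reply):

allUrl = set()
for i in webUrl:
    if i in allUrl:
        continue
    else:
        allUrl.add(i)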
I spent quite some time testing the OP's code, and have the following questions to discuss with the OP:
1. If an error occurs while fetching a page (a 404, for example) and no handling is set up, the program is interrupted;
2. If you crawl a dynamic website, the site may rewrite the relative paths in the page against the requested URL. For example: on page y there is a relative path /xxx/, which joins with the URL to form url/xxx/, and the website redirects that back to page y; after catchurl is called again the path grows to url/xxx/xxx/..., and so on, until Python can no longer accept the path length. The global allUrl variable cannot solve this.
So consider comparing the fetched page content in the program: if it is the same web page, skip it.
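One way to implement that comparison (my sketch, not the OP's code) is to hash each fetched page body and skip URLs whose content has already been seen:

import hashlib

seenPages = set()

def is_duplicate_page(html):
    # identical page bodies produce identical digests
    digest = hashlib.md5(html).hexdigest()
    if digest in seenPages:
        return True
    seenPages.add(digest)
    return False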