Today the project manager handed me a task: here are 2,000 URLs, check whether each one opens. At first I wanted to refuse, because it would mean writing code, but I had just learned a little Python and figured it would make this easy, so I chose Python and sketched a plan:
1. Put the 2,000 URLs in a txt file, one per line.
2. Use Python to read the file and load each URL into a list.
3. Open each URL while pretending to be a browser (a custom User-Agent).
4. Print a "no problem" message if the page opens normally, and an error message otherwise.
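Step 2 can be sketched on its own: reading the file into a list is a one-liner with a list comprehension. This is just an illustration; the sample file contents below are made up, and the filename `test.txt` is an assumption.

```python
# Write a sample URL file just for this illustration
with open('test.txt', 'w') as f:
    f.write('http://example.com\n\nhttp://example.org\n')

# Read one URL per line, dropping blank lines and trailing newlines
with open('test.txt') as f:
    urls = [line.strip() for line in f if line.strip()]

print(urls)  # → ['http://example.com', 'http://example.org']
```

Using `strip()` instead of `replace('\n', '')` also removes stray spaces and Windows `\r` characters, which otherwise make a URL fail to open.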
The quick-and-dirty code is below. (Because private data was involved, the original screenshots were pixelated.)
    import urllib.request
    import urllib.error
    import time

    opener = urllib.request.build_opener()
    opener.addheaders = [('User-Agent', 'Mozilla/49.0.2')]

    # This is the file that holds your URLs; adjust the name to match yours.
    file = open('test.txt')
    lines = file.readlines()
    aa = []
    for line in lines:
        temp = line.replace('\n', '')  # strip the trailing newline
        aa.append(temp)
    print(aa)

    print('start check:')
    for a in aa:
        tempurl = a
        try:
            opener.open(tempurl)
            print(tempurl + ' no problem')
        except urllib.error.HTTPError:
            print(tempurl + ' = error accessing page')
            time.sleep(2)
        except urllib.error.URLError:
            print(tempurl + ' = error accessing page')
            time.sleep(2)
        time.sleep(0.1)
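For 2,000 URLs, checking them one by one with sleeps in between is slow. A faster variant could use a thread pool and a per-request timeout. This is a minimal sketch under the same urllib stack; the function names `check_url` and `check_all` and the worker count are my own choices, not from the original post.

```python
import concurrent.futures
import urllib.request

def check_url(url, timeout=5):
    """Return (url, 'ok') if the page opens, else (url, 'error')."""
    req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/49.0.2'})
    try:
        urllib.request.urlopen(req, timeout=timeout)
        return (url, 'ok')
    except OSError:
        # URLError and HTTPError are OSError subclasses, so this
        # catches DNS failures, refused connections, and HTTP errors.
        return (url, 'error')

def check_all(urls, workers=20):
    # Check many URLs concurrently; order of results matches the input.
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(check_url, urls))
```

With 20 workers the whole batch finishes roughly 20 times faster than the sequential loop, and the timeout keeps one dead server from stalling the run.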