Python has developed rapidly in recent years and has become one of the hottest programming languages.
So how do we crawl the lyrics of a NetEase Cloud Music singer's Top50 songs?
- First, open NetEase Cloud Music and search for the singer you like, as follows:
The important thing to note here is that http://music.163.com/#/artist?id=1007170 is not the link we actually need; the real link should be http://music.163.com/artist?id=1007170
- After sorting out the link problem, we use BeautifulSoup to crawl the NetEase page.
The core code is as follows:
#encoding=utf-8
import requests
import json
import re
import os
from bs4 import BeautifulSoup

headers = {
    'Referer': 'https://music.163.com',
    'Host': 'music.163.com',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.84 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8'
}

def get_top50(artist_id):
    # the real artist page, without the "#/" part
    url = "http://music.163.com/artist?id=" + str(artist_id)
    s = requests.session()
    s = BeautifulSoup(s.get(url, headers=headers).content, "lxml")
    artist_name = s.title
    # the Top50 song list sits inside the <ul class="f-hide"> element
    main = s.find('ul', {'class': 'f-hide'})
    main = main.find_all('a')
    song = {}
    song['artist_name'] = artist_name.text
    song['list'] = main
    return song
The returned song is a dict with two key-value pairs: artist_name records the singer's name, and list records the song names and href values. To get the Top50 song IDs and links, each item in song['list'] needs to be handled as follows: item['href'] gives the song link, and item.text gives the song name.
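For example, a minimal sketch of how the return value can be unpacked (the loop and variable names here are illustrative, not part of the original code):

song = get_top50(1007170)
print(song['artist_name'])          # singer name
for item in song['list']:           # each item is an <a> tag from the Top50 list
    print(item['href'], item.text)  # e.g. /song?id=xxxxx and the song name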
If the code above reports an error, it is probably because the BeautifulSoup and lxml packages are not installed. In that case, go to the Python installation path, locate the Scripts folder, open cmd in that directory, and run the following commands:
pip install beautifulsoup4
pip install lxml
# Note that the pip version must match the Python version you are using;
# for example, if the local Python is version 3.0 or above, you need to use pip3.
- Get song lyrics
In fact, in the previous step we have already obtained the jump link for each song, but crawling that link directly is rather troublesome, especially since configuring the token value is quite complex. For simplicity, we use the NetEase Cloud lyrics API here; you can refer to http://moonlib.com/606.html for details.
The detailed code is as follows:
def get_lyric(song_id):
    # song_id is the href value, e.g. "/song?id=xxxxx"; take the part after "="
    lst = song_id.split('=')
    song_id = lst[1]
    url = "http://music.163.com/api/song/lyric?id=" + str(song_id) + "&lv=1&kv=1&tv=-1"
    s = requests.session()
    s = BeautifulSoup(s.get(url, headers=headers).content, "lxml")
    json_obj = json.loads(s.text)
    final_lyric = ""
    if "lrc" in json_obj:
        inital_lyric = json_obj['lrc']['lyric']
        # strip the [mm:ss.xx] timestamp tags from the raw lyric text
        regex = re.compile(r'\[.*\]')
        final_lyric = re.sub(regex, '', inital_lyric).strip()
    return final_lyric
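To make the expected input clearer: get_lyric takes the href value obtained in the previous step (of the form /song?id=xxxxx) and returns the plain lyrics with the timestamps removed. A hypothetical call, with a placeholder ID:

lyric = get_lyric("/song?id=1234567")  # placeholder ID; use a real href value from song['list']
print(lyric)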
- Write the obtained lyrics string to a txt file
def convertstrtofile(dir_name, filename, content):
    # skip songs with empty lyrics
    if content == "":
        return
    # remove '/' from the song name so it can be used as a file name
    filename = filename.replace('/', ' ')
    with open(dir_name + "/" + filename + ".txt", 'w') as f:
        f.write(content)
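Putting the three functions together, a minimal driver sketch might look like this (the singer ID 1007170 comes from the example link above; using the singer name as the output folder is an assumption):

if __name__ == '__main__':
    song = get_top50(1007170)        # singer ID from the example link above
    dir_name = song['artist_name']   # assumed: use the singer name as the output folder
    if not os.path.exists(dir_name):
        os.mkdir(dir_name)
    for item in song['list']:
        lyric = get_lyric(item['href'])               # fetch and clean the lyrics
        convertstrtofile(dir_name, item.text, lyric)  # write <song name>.txt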