Developing an Onmyoji applet from 0 to 1 in 24 hours
0. Preface
Anyone who plays Onmyoji knows that two sealing tasks refresh every day at six in the morning and six in the evening. The most painful part of each task is figuring out which instances hold the monsters and mystery clues it asks for. Onmyoji does provide the NetEase Genie for some data queries, but the experience is dismal, so most people turn to a search engine to look up the distribution of monsters and mystery clues instead.
Reaching for a search engine every single time is inconvenient. So I decided to write a small applet for querying monster distributions in Onmyoji, aiming for the fastest possible lookup experience and more time left over for grinding dog food and souls.
It happened that I had last weekend free, so I started writing immediately.
1. Conception and Design (3 hours)

1.1 Conception
The applet's main function is search, so the home page should be as minimal as a search engine's; a search box is definitely needed;
The home page shows hot searches, caching the most frequently searched entries;
Search supports either a full-name match or a single-character match (a rough sketch of the idea follows this list);
Clicking a search result opens the shikigami detail page; the detail page should show the shikigami's illustration, name, rarity, and the locations where it appears, sorted by the number of monsters;
Add a data error-reporting and suggestion feature;
Support personal search history;
As for the applet's name: considering its function, I finally settled on "Shikigami Hunter" (honestly, I only came up with it after development was done);
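The matching logic itself never appears in this article, so purely as an illustration, here is a minimal sketch of what full-name versus single-character matching could look like; the function and the sample names are my own, not the applet's actual code:

[Python]
# coding: utf-8
# Hypothetical sketch of "full match or single-character match";
# illustrative only, not the applet's real search backend.

def search(query, names):
    # Prefer exact full-name matches.
    exact = [n for n in names if n == query]
    if exact:
        return exact
    # Fall back to names containing every character of the query.
    return [n for n in names if all(ch in n for ch in query)]

if __name__ == '__main__':
    names = [u'酒吞童子', u'茨木童子', u'山兔']
    print(search(u'童子', names))  # both 童子 shikigami match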
1.2 Design
With the concept settled, I used my half-baked Photoshop skills to draw a rough mockup of the pages, then laid out the project skeleton, which looks like this:
├── app.js
├── app.json
├── app.wxss
├── pages
│   ├── feedback
│   ├── index
│   ├── my
│   ├── onmyoji
│   ├── statement
│   └── template
│       ├── template.js
│       ├── template.json
│       ├── template.wxml
│       └── template.wxss
├── static
└── utils
Other pages can then pull the templates in directly with import:
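The snippet that originally appeared here did not survive extraction. Below is a minimal reconstruction using the standard WXML import tag, with the relative path inferred from the directory tree above:

[XML]
<!-- e.g. in pages/index/index.wxml: pull in the shared template file -->
<import src="../template/template.wxml" />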
Then, where the template needs to be referenced:
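This snippet was lost as well. In standard WXML, a template is declared once with a name attribute and then referenced with a matching is attribute; the template name monsterCard and its fields below are assumptions of mine, not the original code:

[XML]
<!-- declared once in pages/template/template.wxml -->
<template name="monsterCard">
  <view class="card">
    <image src="{{icon}}" />
    <text>{{name}} x {{count}}</text>
  </view>
</template>

<!-- referenced in the page that imported it -->
<template is="monsterCard" data="{{icon: item.icon, name: item.name, count: item.count}}" />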
Another problem crops up here: writing the template's styles in the template's own wxss file does not work; they have to live in the wxss of the page that calls the template. For example, if my/my.wxml uses the template, the corresponding CSS has to go into my/my.wxss.
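A workaround I can suggest (the article itself doesn't mention it) is WXSS's own @import, which pulls the template's stylesheet into the calling page instead of duplicating the rules by hand:

[CSS]
/* pages/my/my.wxss: import the template's rules into the page using it */
@import "../template/template.wxss";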
4. Crawling Image Resources (2 hours)
The shikigami icons and artwork are basically all on the official Onmyoji website. Collecting them by hand without a crawler is not realistic, so I decisively crawled them and saved them to my own CDN.
Both the full-size artwork and the thumbnails can be found at http://yys.163.com/shishen/index.html. At first I considered fetching the page and extracting the data with Beautiful Soup, but it turned out the shikigami data is loaded asynchronously, which makes things even simpler: the page requests https://g37simulator.webapp.163.com/get_heroid_list, which returns the shikigami information directly as JSON, so a quick crawler does the job:
[Python]
# coding: utf-8
import json
import urllib

import requests
from xpinyin import Pinyin

# The simulator endpoint returns the full shikigami list as JSONP.
url = "https://g37simulator.webapp.163.com/get_heroid_list?callback=jQuery11130959811888616583_1487429691764&rarity=0&page=1&per_page=200&_=1487429691765"

# Strip the JSONP callback wrapper to get plain JSON.
result = requests.get(url).content.replace(
    'jQuery11130959811888616583_1487429691764(', '').replace(')', '')
json_data = json.loads(result)
hellspawn_list = json_data['data']

p = Pinyin()
for k, v in hellspawn_list.iteritems():
    # Name the files after the pinyin of each shikigami's Chinese name.
    file_name = p.get_pinyin(v.get('name'), '')
    print 'id: {0} name: {1}'.format(k, v.get('name'))
    # Full-size artwork.
    big_url = "https://yys.res.netease.com/pc/zt/20161108171335/data/shishen_big/{0}.png".format(k)
    urllib.urlretrieve(big_url, filename='big/{0}@big.png'.format(file_name))
    # Small avatar icon.
    avatar_url = "https://yys.res.netease.com/pc/gw/20160929201016/data/shishen/{0}.png".format(k)
    urllib.urlretrieve(avatar_url, filename='icon/{0}@icon.png'.format(file_name))
After crawling everything, though, I hit a problem: NetEase's official images are full-resolution, uncompressed originals, and serving those from my shoestring CDN would bankrupt me within two days. So the images had to be batch-converted into something smaller yet still presentable. This is where Photoshop's batch processing comes in:
Open Photoshop and load one of the crawled images;
On the menu bar choose "Window", then "Actions";
In the Actions panel, create a new action;
Click the round record button to start recording;
Process the image as usual and export it via "Save for Web";
Click the square stop button to stop recording;
On the menu bar choose File > Automate > Batch, pick the action you just recorded, and configure the input and output folders;
Click OK;
Once the batch run finishes, simply upload all the processed images to the static resource server in one go.
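As an aside, the same shrink step can be scripted instead of done in Photoshop. This is not what I actually did, just a minimal sketch with Pillow, assuming the crawler above saved the originals under big/ and a small/ output folder is acceptable:

[Python]
# coding: utf-8
# A scripted alternative to the Photoshop batch (not the approach used
# above): halve every PNG crawled into big/ and write it to small/.
import os
from PIL import Image

if not os.path.isdir('small'):
    os.makedirs('small')

for name in os.listdir('big'):
    if not name.endswith('.png'):
        continue
    img = Image.open(os.path.join('big', name))
    w, h = img.size
    # LANCZOS resampling keeps the artwork presentable at half size.
    img.resize((w // 2, h // 2), Image.LANCZOS).save(
        os.path.join('small', name), optimize=True)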
5. Data Crawling (4 hours)
The shikigami distribution data floating around the internet is rather messy, with plenty of discrepancies, so I decided on a half-manual, half-automated approach. The crawler outputs JSON like this:
[JSON]
{
    "scene_name": "Explore Chapter 1",
    "team_list": [
        {"name": "Tmall Green 1", "index": 1, "monsters": [
            {"name": "", "count": 1},
            {"name": "小", "count": 2}
        ]},
        {"name": "Tmall Green 2", "index": 2, "monsters": [
            {"name": "Tmall Green", "count": 1},
            {"name": "lamp sick", "count": 2}
        ]},
        {"name": "lamp sick 1", "index": 3, "monsters": [
            {"name": "", "count": 2},
            {"name": "小", "count": 1}
        ]},
        {"name": "lantern sick 2", "index": 4, "monsters": [
            {"name": "Lantern Ghost", "count": 2},
            {"name": "lamp sick", "count": 1}
        ]},
        {"name": "", "index": 5, "monsters": [
            {"name": "jiushenmao", "count": 3}
        ]}
    ]
}
Then I went through it all manually once more. There will inevitably still be omissions, which is exactly why the data error-reporting feature matters.
Actually writing the code for this part took little more than half an hour; all the rest of the time went into checking the data.
Once everything checked out, I wrote a script to import the JSON straight into the database, verified the result, and used Fabric to publish it to the live server for testing.
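The import script is not shown here; below is a minimal sketch under two assumptions of mine: that each scene lives in its own JSON file under data/, and that the database is MongoDB (the article never names one):

[Python]
# coding: utf-8
# Hypothetical import script; the database is not named in the article,
# so MongoDB via pymongo is assumed purely for illustration.
import glob
import json

from pymongo import MongoClient

db = MongoClient('localhost', 27017)['onmyoji']

for path in glob.glob('data/*.json'):
    with open(path) as f:
        scene = json.load(f)
    # One document per scene; re-running the import stays idempotent.
    db.scenes.replace_one(
        {'scene_name': scene['scene_name']}, scene, upsert=True)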
6. Testing (2 hours)
The last step was debugging on a real phone, fixing a few remaining issues, turning off debug mode, and preparing to submit for review.
By then it was already Sunday. Oh no, make that one o'clock on Monday morning.