Crawling web data with Go, saving it to MySQL, and returning JSON

Source: Internet
Author: User

Objective

I have long wanted to study Go, but kept putting it off because of research preparation and an internship; I only occasionally looked at the basic syntax and never applied it to any actual coding. Now that my course load has dropped in my senior year, I decided to use Go to grab some image data from the Internet and then provide an interface for it, supplying network data for my later study of iOS.

I will not introduce Go here; as a beginner I could not explain it clearly anyway, so I will spare myself the chatter.
The main features I want to implement are the following:

    • Crawl image links and related data from a gallery website;

    • Store the fetched data in a MySQL database;

    • Provide a simple JSON interface, so the data can be fetched through a URL.
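As a minimal sketch of the third feature, the program below builds the JSON response using only the standard library. The `Image` struct, its field names, and the `/images.json` route are assumptions for illustration; in the finished program the rows would come from MySQL rather than a literal:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Image is a hypothetical record for one crawled picture; the real
// fields depend on what the crawler actually extracts.
type Image struct {
	Title string `json:"title"`
	URL   string `json:"url"`
}

// imagesJSON encodes a slice of images as the JSON the interface returns.
func imagesJSON(images []Image) ([]byte, error) {
	return json.Marshal(images)
}

func main() {
	http.HandleFunc("/images.json", func(w http.ResponseWriter, r *http.Request) {
		// In the finished program these rows would be read from the database.
		data, err := imagesJSON([]Image{{Title: "demo", URL: "http://example.com/a.jpg"}})
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		w.Write(data)
	})
	fmt.Println("handler registered for /images.json")
	// http.ListenAndServe(":8080", nil) // uncomment to actually serve
}
```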

Preparatory work

Install Go and configure the environment

Since I use OS X myself, I also wrote an article on installing Go on a Mac; if you use a Mac, you can refer to it. Windows users can easily find instructions by searching.

Writing the program

Under $GOPATH/src, create a project folder indiepic as the directory for this program. Every Go program has exactly one package main; create a new Go file indiepic.go under the project folder as the main file:

package main

import "fmt"

func main() {
    fmt.Println("Hello World")
}

Because this file will later start the HTTP service and provide the JSON interface, putting the crawling and database-storage operations in the same file would hurt readability. The crawling also does not need to run on every boot, so it is better organized into a separate package, and main only calls its function when data actually needs to be crawled.
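The idea of crawling only on demand can be sketched with a command-line flag; here `crawl` stands in for the future package's entry function (the name, flag, and messages are placeholders, not from the original article):

```go
package main

import (
	"flag"
	"fmt"
)

// crawl stands in for the crawling package's entry function;
// its name and behavior here are placeholders for illustration.
func crawl() error {
	fmt.Println("fetching and storing image data...")
	return nil
}

func main() {
	// Only crawl when explicitly asked; a normal boot goes straight to serving.
	doCrawl := flag.Bool("crawl", false, "crawl and store data before serving")
	flag.Parse()
	if *doCrawl {
		if err := crawl(); err != nil {
			fmt.Println("crawl failed:", err)
			return
		}
	}
	fmt.Println("starting the HTTP service...")
}
```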

Therefore, create a new folder crawldata in the project folder; this is the package we need. The functions for fetching the data, storing it in the database, and reading it back out are all written under this package.

Under the crawldata folder, create two new files: crawldata.go and database.go. One handles crawling the data, the other handles database access.
The folder structure is as follows:

indiepic
├── README.md
├── crawldata
│   ├── crawldata.go
│   └── database.go
└── indiepic.go
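database.go will need a MySQL connection string. Below is a small sketch of a helper that builds a DSN in the format used by the go-sql-driver/mysql driver; the credentials, host, and database name are placeholders, and the real file would pass the result to sql.Open("mysql", ...):

```go
package main

import "fmt"

// dsn builds a go-sql-driver/mysql connection string. The user, password,
// and database name are supplied by the caller; the host and port here
// are assumed defaults for a local MySQL server.
func dsn(user, pass, dbName string) string {
	return fmt.Sprintf("%s:%s@tcp(127.0.0.1:3306)/%s", user, pass, dbName)
}

func main() {
	// database.go would call sql.Open("mysql", dsn(...)) and expose
	// functions such as InsertImage / QueryImages (hypothetical names).
	fmt.Println(dsn("root", "secret", "indiepic"))
}
```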

The next step is to implement the data-crawling part.
The main site to crawl is http://www.gratisography.com/
