Kai: an object detection cloud service in Go


YOLO (implemented in the darknet framework) is currently one of the most popular object detection algorithms: on a GPU it is both fast and accurate. It is inconvenient to use, however, since only a command-line interface and a simple Python interface are provided. So I wanted to build a RESTful cloud service around darknet, called Kai.

I chose Go not just for concurrency in general, but because synchronization between goroutines is easy to handle, which makes it well suited to building a pipeline. The catch is that darknet is implemented in C, so Go has to call the C functions through cgo wrappers. The goal is to support three basic functions: 1. image detection, 2. video detection, 3. camera detection. For ease of use I modified some of darknet's code and then defined the following functions:

```c
// Set a gpu device
void set_gpu(int gpu);

// Recognize an image
void image_detector(char *datacfg, char *cfgfile, char *weightfile, char *filename,
    float thresh, float hier_thresh, char *outfile);

// Recognize a video
void video_detector(char *datacfg, char *cfgfile, char *weightfile, char *filename,
    float thresh, float hier_thresh, char *outfile);

// Recognize a camera stream
void camera_detector(char *datacfg, char *cfgfile, char *weightfile, int camindex,
    float thresh, float hier_thresh, char *outpath);
```

With these functions in place, the rest is straightforward: use cgo to import the corresponding library and header files:

```go
// #cgo pkg-config: opencv
// #cgo linux LDFLAGS: -ldarknet -lm -L/usr/local/cuda/lib64 -lcuda -lcudart -lcublas -lcurand -lcudnn
// #cgo darwin LDFLAGS: -ldarknet
// #include "yolo.h"
import "C"

// SetGPU sets the gpu device you want
func SetGPU(gpu int) {
	C.set_gpu(C.int(gpu))
}

// ImageDetector recognizes an image
func ImageDetector(dc, cf, wf, fn string, t, ht float64, of ...string) {
	...
}

// VideoDetector recognizes a video
func VideoDetector(dc, cf, wf, fn string, t, ht float64, of ...string) {
	...
}

// CameraDetector recognizes a camera stream
func CameraDetector(dc, cf, wf string, i int, t, ht float64, of ...string) {
	...
}
```

That completes the darknet wrapper package, go-yolo.

Now to the main topic: the implementation of Kai.

Kai's design goals are as follows:

    • Backend based on darknet (training is not supported)
    • Provides RESTful interfaces for image and video detection
    • Supports Amazon S3 download and upload
    • Supports FTP download and upload
    • Supports persisting detection results to MongoDB

The architecture diagram looks like this:

Next is an introduction to Kai's pipeline mechanism. The pipeline is a flow of three stages: download (Download), detection (Yolo), and upload (Upload).
First, a diagram:

The difficulty here is that each of the three steps, download (Download), detection (Yolo), and upload (Upload), can be configured with a different number of goroutines, while the three steps still have to run as one synchronized flow.

    • First, define three buffered channels for synchronization:
```go
// KaiServer represents the server for processing all job requests
type KaiServer struct {
	net.Listener
	logger        *logging.Logger
	config        types.ServerConfig
	listenAddr    string
	listenNetwork string
	router        *Router
	server        *http.Server
	db            db.Storage

	// jobDownBuff is the buffered channel for jobs being downloaded
	jobDownBuff chan types.Job
	// jobTodoBuff is the buffered channel for jobs to do
	jobTodoBuff chan types.Job
	// jobDoneBuff is the buffered channel for finished jobs
	jobDoneBuff chan types.Job
}
```
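As a side note, the way three buffered channels chain the stages together can be sketched with a toy, self-contained pipeline. The names here (`Job`, `RunPipeline`) are illustrative simplifications, not the actual Kai code:

```go
package main

import (
	"fmt"
	"strings"
)

// Job is a stand-in for types.Job; the real struct carries an ID,
// source, destination, status and so on.
type Job struct{ Name string }

// RunPipeline wires three buffered channels in the same shape as
// jobDownBuff -> jobTodoBuff -> jobDoneBuff and returns the names
// of all jobs that made it through the final stage.
func RunPipeline(jobs []Job, buf int) []string {
	downBuff := make(chan Job, buf)
	todoBuff := make(chan Job, buf)
	doneBuff := make(chan Job, buf)

	// Stage 1: "download" each job and hand it to the next stage.
	go func() {
		for _, j := range jobs {
			downBuff <- j // blocks once buf jobs are waiting
		}
		close(downBuff)
	}()

	// Stage 2: "detect" (stand-in for the yolo step).
	go func() {
		for j := range downBuff {
			j.Name = strings.ToUpper(j.Name)
			todoBuff <- j
		}
		close(todoBuff)
	}()

	// Stage 3: "upload" finished jobs.
	go func() {
		for j := range todoBuff {
			doneBuff <- j
		}
		close(doneBuff)
	}()

	var names []string
	for j := range doneBuff {
		names = append(names, j.Name)
	}
	return names
}

func main() {
	fmt.Println(RunPipeline([]Job{{"cat.jpg"}, {"dog.jpg"}}, 2)) // prints [CAT.JPG DOG.JPG]
}
```

Because each stage is a single goroutine reading a FIFO channel, job order is preserved end to end, while the buffer size decouples the stages so a slow upload does not immediately stall a fast download.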
    • The execution flow of the pipeline is as follows:
```go
// Pipeline contains downloading, processing and uploading a job
func Pipeline(logger *logging.Logger, config types.ServerConfig, dbInstance db.Storage,
	jobDownBuff chan types.Job, jobTodoBuff chan types.Job, jobDoneBuff chan types.Job,
	job types.Job) {
	logger.Infof("pipeline-job %+v", job)
	// download a job
	setupAndDownloadJob(logger, config.System, dbInstance, job, jobDownBuff)
	// jobDownBuff -> jobTodoBuff -> jobDoneBuff
	yoloJob(logger, config, dbInstance, jobDownBuff, jobTodoBuff, jobDoneBuff)
	// upload a job
	uploadJob(logger, dbInstance, jobDoneBuff)
}
```
    • Download
```go
// setupAndDownloadJob sets up and downloads jobs into jobDownBuff
func setupAndDownloadJob(logger *logging.Logger, config types.SystemConfig,
	dbInstance db.Storage, job types.Job, jobDownBuff chan<- types.Job) {
	go func() {
		logger.Infof("start setup and download a job: %+v", job)
		newJob, err := SetupJob(logger, job.ID, dbInstance, config)
		// check err before dereferencing newJob
		if err != nil {
			logger.Error("setup-job failed", err)
			return
		}
		job = *newJob
		downloadFunc := downloaders.GetDownloadFunc(job.Source)
		if err := downloadFunc(logger, config, dbInstance, job.ID); err != nil {
			logger.Error("download failed", err)
			job.Status = types.JobError
			job.Details = err.Error()
			dbInstance.UpdateJob(job.ID, job)
			return
		}
		jobDownBuff <- job
	}()
}
```
    • Detection (Yolo)
```go
// yoloJob takes a downloaded job and runs detection on it
func yoloJob(logger *logging.Logger, config types.ServerConfig, dbInstance db.Storage,
	jobDownBuff <-chan types.Job, jobTodoBuff chan types.Job, jobDoneBuff chan types.Job) {
	go func() {
		job, ok := <-jobDownBuff
		if !ok {
			logger.Info("job download buffer is closed")
			return
		}
		logger.Infof("start a yolo job: %+v", job)
		// limit the number of jobs in jobTodoBuff
		jobTodoBuff <- job
		jobTodo, ok := <-jobTodoBuff
		if !ok {
			logger.Info("job todo buffer is closed")
			return
		}
		nGpu := config.System.NGpu
		t := yolo.NewTask(config.Yolo, jobTodo.Media.Cate, nGpu, jobTodo.LocalSource, jobTodo.LocalDestination)
		logger.Debugf("yolo task: %+v", *t)
		yolo.StartTask(t, logger, dbInstance, jobTodo.ID)
		jobDoneBuff <- job
	}()
}
```
    • Upload
```go
// uploadJob uploads a finished job and cleans up temporary files
func uploadJob(logger *logging.Logger, dbInstance db.Storage, jobDoneBuff <-chan types.Job) {
	go func() {
		jobDone, ok := <-jobDoneBuff
		if !ok {
			logger.Info("job done buffer is closed")
			return
		}
		logger.Infof("start a upload job: %+v", jobDone)
		uploadFunc := uploaders.GetUploadFunc(jobDone.Destination)
		if err := uploadFunc(logger, dbInstance, jobDone.ID); err != nil {
			logger.Error("upload failed", err)
			jobDone.Status = types.JobError
			jobDone.Details = err.Error()
			dbInstance.UpdateJob(jobDone.ID, jobDone)
			return
		}
		logger.Info("erasing temporary files")
		if err := CleanSwap(dbInstance, jobDone.ID); err != nil {
			logger.Error("erasing temporary files failed", err)
		}
		jobDone.Status = types.JobFinished
		dbInstance.UpdateJob(jobDone.ID, jobDone)
		logger.Infof("end a job: %+v", jobDone)
	}()
}
```
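The `uploaders.GetUploadFunc(jobDone.Destination)` call above (like `downloaders.GetDownloadFunc`) suggests a dispatch-by-scheme pattern behind the S3 and FTP support. A guess at that pattern, with simplified signatures rather than Kai's real ones, might look like:

```go
package main

import (
	"fmt"
	"strings"
)

// UploadFunc is a simplified stand-in; the real uploaders take a
// logger, a db.Storage and a job ID.
type UploadFunc func(dest string) string

// GetUploadFunc picks an uploader based on the destination's URL
// scheme, which is one way to support both S3 and FTP targets.
func GetUploadFunc(dest string) UploadFunc {
	switch {
	case strings.HasPrefix(dest, "s3://"):
		return func(d string) string { return "s3 upload: " + d }
	case strings.HasPrefix(dest, "ftp://"):
		return func(d string) string { return "ftp upload: " + d }
	default:
		return func(d string) string { return "local copy: " + d }
	}
}

func main() {
	for _, d := range []string{"s3://bucket/out.mp4", "ftp://host/out.mp4"} {
		fmt.Println(GetUploadFunc(d)(d))
	}
}
```

Keeping the scheme switch in one place means the pipeline stages never need to know which storage backend a job uses.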

That covers the project's main mechanisms. If you are interested, take a look at the project homepages below.

Project Link:
Go-yolo:https://github.com/zanlabs/go ...
Kai:https://github.com/zanlabs/kai
