Create a distributed system in 300 lines of code with Mesos, Docker, and Go


http://www.csdn.net/article/2015-07-31/2825348

"Editor's note" Nowadays, for most it players, Docker and Mesos are both familiar and unfamiliar: the familiarity with these two words has undoubtedly become the focus of discussion, and the strangeness is that these two technologies are not widely used in production environments, so many people still don't know what their advantages are, or what to do. Recently, John Walter wrote on Dzone Creating a distributed System in the Lines with Mesos, Docker, and go, about the powerful destructive power of Mesos, Docker, and go, Translated by ONEAPM engineers.

The following is the translation

It is difficult to build a distributed system. It requires scalability, fault tolerance, high availability, consistency, and efficiency. To achieve these goals, distributed systems need many complex components working together in complex ways. For example, Apache Hadoop relies on a highly fault-tolerant file system (HDFS) for high throughput when it processes terabytes of data in parallel on a large cluster.

Previously, each new distributed system, such as Hadoop or Cassandra, needed to build its own underlying infrastructure, including message processing, storage, networking, fault tolerance, and scalability. Fortunately, systems like Apache Mesos simplify the task of building and managing distributed systems by providing operating-system-like management services for the key building blocks of a distributed system. Mesos abstracts away CPU, storage, and other compute resources, so developers can treat the entire data center cluster as one giant machine when writing distributed applications.

Applications built on Mesos are called frameworks, and they solve many problems: Apache Spark, a popular cluster data-analysis tool, and Chronos, a cron-like, fault-tolerant distributed scheduler, are two examples of frameworks built on Mesos. Frameworks can be built in multiple languages, including C++, Go, Python, Java, Haskell, and Scala.

Bitcoin mining is a good example of a distributed-system use case. Bitcoin turns the challenge of generating an acceptable hash into a means of verifying the reliability of transactions. Mining a single block could take a single laptop more than 150 years. As a result, there are many "mining pools" that let miners combine their computing resources to speed up mining. Derek, an intern at Mesosphere, wrote a bitcoin-mining framework (https://github.com/derekchiang/Mesos-Bitcoin-Miner) that exploits cluster resources to do the same thing. The next sections use his code as an example.

A Mesos framework consists of a scheduler and an executor. The scheduler communicates with the Mesos master and decides which tasks to run, while the executor runs on the slaves and performs the actual tasks. Most frameworks implement their own scheduler and use one of the standard executors provided by Mesos, though a framework can also supply a custom executor. In this example, a custom scheduler is written, and the standard command executor is used to run a Docker image containing our bitcoin services.
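Before looking at the scheduler itself, here is a minimal sketch of how such a framework is wired up and handed to Mesos through the mesos-go bindings of the time. This is not the article's code: the framework name and master address are placeholders, and the scheduler is passed as a bare zero value for brevity (real code would fill in the bitcoind RPC credentials).

package main

import (
    proto "github.com/gogo/protobuf/proto"
    mesos "github.com/mesos/mesos-go/mesosproto"
    sched "github.com/mesos/mesos-go/scheduler"
)

func main() {
    // Wire the custom scheduler into a driver that talks to the master.
    driver, err := sched.NewMesosSchedulerDriver(sched.DriverConfig{
        Master: "127.0.0.1:5050", // placeholder master address
        Framework: &mesos.FrameworkInfo{
            User: proto.String(""), // empty: Mesos fills in the current user
            Name: proto.String("bitcoin-miner-framework"), // placeholder name
        },
        Scheduler: &MinerScheduler{}, // the scheduler type defined below
    })
    if err != nil {
        panic(err)
    }
    // Run blocks until the driver is stopped or aborted.
    driver.Run()
}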

The scheduler here has two kinds of tasks to run: one miner-server task and multiple miner-worker tasks. The server communicates with a bitcoin mining pool and assigns blocks to each worker. The workers do the hard work of actually mining bitcoins.

Tasks are actually encapsulated in executors, so running a task means telling the Mesos master to start an executor on one of its slaves. Since the standard command executor is used here, a task can be specified as a binary executable, a bash script, or any other command. Because Mesos supports Docker, an executable Docker image is used in this example. Docker is a technology that lets you package an application together with the dependencies it needs to run.
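As an illustration of the command executor's flexibility, here is a hypothetical task (not from the article) that runs a plain shell command instead of a Docker image; taskID and offer are assumed to come from the surrounding scheduler code:

shellTask := &mesos.TaskInfo{
    Name:    proto.String("example-shell-task"),
    TaskId:  taskID,        // assumed: a *mesos.TaskID built by the scheduler
    SlaveId: offer.SlaveId, // assumed: the offer being accepted
    Command: &mesos.CommandInfo{
        Shell: proto.Bool(true), // run Value through the shell
        Value: proto.String("echo hello from Mesos"),
    },
    Resources: []*mesos.Resource{
        util.NewScalarResource("cpus", 0.1),
        util.NewScalarResource("mem", 32),
    },
}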

In order to use Docker images in Mesos, their Docker registry names are recorded as constants:

const (
    MinerServerDockerImage = "derekchiang/p2pool"
    MinerDaemonDockerImage = "derekchiang/cpuminer"
)

Then define constants specifying the resources required by each task:

const (
    MemPerDaemonTask = 128 // mining shouldn't be memory-intensive
    MemPerServerTask = 256
    CPUPerServerTask = 1   // a miner server does not use much CPU
)

Now define the actual scheduler. It keeps track of the state it needs in order to run correctly:

type MinerScheduler struct {
    // bitcoind RPC credentials
    bitcoindAddr string
    rpcUser      string
    rpcPass      string

    // mutable state
    minerServerRunning  bool
    minerServerHostname string
    minerServerPort     int // the port that miner daemons connect to

    // unique task ids
    tasksLaunched        int
    currentDaemonTaskIDs []*mesos.TaskID
}

The scheduler must implement the following interface:

type Scheduler interface {
    Registered(SchedulerDriver, *mesos.FrameworkID, *mesos.MasterInfo)
    Reregistered(SchedulerDriver, *mesos.MasterInfo)
    Disconnected(SchedulerDriver)
    ResourceOffers(SchedulerDriver, []*mesos.Offer)
    OfferRescinded(SchedulerDriver, *mesos.OfferID)
    StatusUpdate(SchedulerDriver, *mesos.TaskStatus)
    FrameworkMessage(SchedulerDriver, *mesos.ExecutorID, *mesos.SlaveID, string)
    SlaveLost(SchedulerDriver, *mesos.SlaveID)
    ExecutorLost(SchedulerDriver, *mesos.ExecutorID, *mesos.SlaveID, int)
    Error(SchedulerDriver, string)
}

Now let's look at the callback functions one by one:

func (s *MinerScheduler) Registered(_ sched.SchedulerDriver,
    frameworkID *mesos.FrameworkID, masterInfo *mesos.MasterInfo) {
    log.Infoln("Framework registered with Master", masterInfo)
}

func (s *MinerScheduler) Reregistered(_ sched.SchedulerDriver,
    masterInfo *mesos.MasterInfo) {
    log.Infoln("Framework re-registered with Master", masterInfo)
}

func (s *MinerScheduler) Disconnected(sched.SchedulerDriver) {
    log.Infoln("Framework disconnected with Master")
}

Registered is called after the scheduler successfully registers with the Mesos master.

Reregistered is called when the scheduler disconnects from the Mesos master and then registers again, for example when the master restarts.

Disconnected is called when the scheduler loses its connection to the Mesos master. This happens when the master goes down.
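The interface also demands several callbacks that a framework this small has no real use for. A sketch of log-only stubs, consistent with the Scheduler interface above (the exact log messages are illustrative):

func (s *MinerScheduler) OfferRescinded(_ sched.SchedulerDriver, offerID *mesos.OfferID) {
    log.Infoln("Offer rescinded:", offerID)
}

func (s *MinerScheduler) FrameworkMessage(_ sched.SchedulerDriver,
    executorID *mesos.ExecutorID, slaveID *mesos.SlaveID, message string) {
    log.Infoln("Framework message:", message)
}

func (s *MinerScheduler) SlaveLost(_ sched.SchedulerDriver, slaveID *mesos.SlaveID) {
    log.Infoln("Slave lost:", slaveID)
}

func (s *MinerScheduler) ExecutorLost(_ sched.SchedulerDriver,
    executorID *mesos.ExecutorID, slaveID *mesos.SlaveID, status int) {
    log.Infoln("Executor lost:", executorID)
}

func (s *MinerScheduler) Error(_ sched.SchedulerDriver, err string) {
    log.Errorln("Scheduler error:", err)
}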

So far, the callbacks have done nothing but print log messages, because for a simple framework like this, most of them can effectively stay empty. The next callback, however, is the core of every framework and must be written carefully.

ResourceOffers is called when the scheduler receives an offer from the master. Each offer contains a list of resources that the framework can use on the cluster. Resources typically include CPU, memory, ports, and disk. A framework can use some of the resources it is offered, all of them, or none at all.

For each offer, we gather the resources being offered and decide whether to launch a new server task or a new worker task. We could launch as many tasks as will fit into each offer, but since mining bitcoins is CPU-bound, we instead launch one miner task per offer and let it use all of the offer's available CPU resources.

for i, offer := range offers {
    // ... gather the resources being offered and do setup
    if !s.minerServerRunning && mems >= MemPerServerTask &&
        cpus >= CPUPerServerTask && ports >= 2 {
        // ... launch a server task, since no server is running and we
        // have the resources to launch one
    } else if s.minerServerRunning && mems >= MemPerDaemonTask {
        // ... launch a miner, since a server is running and we have the
        // mem to launch one
    }
}
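The gathering step elided at the top of the loop simply sums up what each offer contains. One way to write it (an illustrative sketch using the generated mesosproto getters, not the article's exact code):

var cpus, mems float64
var ports uint64
for _, res := range offer.GetResources() {
    switch res.GetName() {
    case "cpus":
        cpus += res.GetScalar().GetValue()
    case "mem":
        mems += res.GetScalar().GetValue()
    case "ports":
        // ports are offered as ranges, e.g. [31000-32000]
        for _, r := range res.GetRanges().GetRange() {
            ports += r.GetEnd() - r.GetBegin() + 1
        }
    }
}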

For each task, you need to create a corresponding TaskInfo message that contains the information needed to run the task.

s.tasksLaunched++
taskID = &mesos.TaskID{
    Value: proto.String("miner-server-" +
        strconv.Itoa(s.tasksLaunched)),
}

Task IDs are chosen by the framework, and they must be unique within it.

containerType := mesos.ContainerInfo_DOCKER
task = &mesos.TaskInfo{
    Name:    proto.String("task-" + taskID.GetValue()),
    TaskId:  taskID,
    SlaveId: offer.SlaveId,
    Container: &mesos.ContainerInfo{
        Type: &containerType,
        Docker: &mesos.ContainerInfo_DockerInfo{
            Image: proto.String(MinerServerDockerImage),
        },
    },
    Command: &mesos.CommandInfo{
        Shell: proto.Bool(false),
        Arguments: []string{
            // these arguments are passed to run_p2pool.py
            "--bitcoind-address", s.bitcoindAddr,
            "--p2pool-port", strconv.Itoa(int(p2poolPort)),
            "-w", strconv.Itoa(int(workerPort)),
            s.rpcUser, s.rpcPass,
        },
    },
    Resources: []*mesos.Resource{
        util.NewScalarResource("cpus", CPUPerServerTask),
        util.NewScalarResource("mem", MemPerServerTask),
    },
}

The TaskInfo message specifies the important metadata about the task that lets the Mesos node run the Docker container: the name, the task ID, the container information, and the arguments to pass to the container. The resources required by the task are specified here as well.

Now that the TaskInfo has been built, the task can be launched like this:

driver.LaunchTasks([]*mesos.OfferID{offer.Id}, tasks,
    &mesos.Filters{RefuseSeconds: proto.Float64(1)})

The last thing the framework needs to handle is what happens when the miner server shuts down. This is done in the StatusUpdate callback.

Over the life cycle of a task, different types of status updates correspond to its different stages. For this framework, we want to make sure that if the miner server fails for any reason, all the miner daemons are killed so that no resources are wasted.
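For context, the check lives inside the StatusUpdate callback, whose shape follows the Scheduler interface shown earlier (the log line is illustrative):

func (s *MinerScheduler) StatusUpdate(driver sched.SchedulerDriver, status *mesos.TaskStatus) {
    log.Infoln("Status update: task", status.GetTaskId().GetValue(),
        "is in state", status.GetState().String())
    // ... the check below goes here
}

Here is the relevant code: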

if strings.Contains(status.GetTaskId().GetValue(), "server") &&
    (status.GetState() == mesos.TaskState_TASK_LOST ||
        status.GetState() == mesos.TaskState_TASK_KILLED ||
        status.GetState() == mesos.TaskState_TASK_FINISHED ||
        status.GetState() == mesos.TaskState_TASK_ERROR ||
        status.GetState() == mesos.TaskState_TASK_FAILED) {

    s.minerServerRunning = false

    // kill all tasks
    for _, taskID := range s.currentDaemonTaskIDs {
        _, err := driver.KillTask(taskID)
        if err != nil {
            log.Errorf("Failed to kill task %s", taskID)
        }
    }
    s.currentDaemonTaskIDs = make([]*mesos.TaskID, 0)
}

That's it! We have a working distributed bitcoin-mining framework on Apache Mesos, built in only about 300 lines of Go code. It demonstrates how fast and simple writing a distributed system can be with the Mesos framework API.

Original link: Creating a Distributed System in 300 Lines with Mesos, Docker, and Go (Editor: Zhonghao)
