Creating a distributed system in 300 lines of code with Mesos, Docker, and Go


"Summary" Although Docker and Mesos have become buzzwords, but for most people they are still unfamiliar, let's explore the powerful destructive power of Mesos, Docker, and Go, and how to create a bitcoin mining with 300 lines of code System.

Nowadays, for most IT professionals, Docker and Mesos are both familiar and unfamiliar: familiar because the two words have undoubtedly become a focus of discussion, and unfamiliar because neither technology is yet widely used in production environments, so many people still don't know what their advantages are or what they can do. Recently, John Walter wrote Creating a Distributed System in 300 Lines with Mesos, Docker, and Go on DZone, describing how Mesos, Docker, and Go work together. This article was compiled and collated by OneAPM engineers.

Admittedly, it is difficult to build a distributed system that is scalable, fault-tolerant, highly available, consistent, and efficient. To achieve these goals, a distributed system needs many components working together in complicated ways. For example, Apache Hadoop relies on a highly fault-tolerant file system (HDFS) to achieve high throughput when processing terabytes of data in parallel on large clusters.

Previously, each new distributed system, such as Hadoop or Cassandra, had to build its own underlying infrastructure, including message processing, storage, networking, fault tolerance, and scalability. Fortunately, systems like Apache Mesos simplify the task of building and managing distributed systems by providing operating-system-like management services for the key building blocks of distributed systems. Mesos abstracts away CPU, storage, and other computing resources, so developers can treat the entire data center cluster as one giant machine when writing distributed applications.

Applications built on Mesos are called frameworks, and they solve many classes of problems: Apache Spark, a popular cluster data-analysis tool, and Chronos, a cron-like, fault-tolerant distributed scheduler, are two examples of frameworks built on Mesos. Frameworks can be built in multiple languages, including C++, Go, Python, Java, Haskell, and Scala.

Bitcoin mining is a good example of a distributed-system use case. Bitcoin relies on the challenge of generating an acceptable hash to verify the authenticity of transactions. A single laptop might take more than 150 years to mine a single block. As a result, there are many "mining pools" that let miners combine their computing resources to speed up mining. Derek, an intern at Mesosphere, wrote a Bitcoin mining framework (https://github.com/derekchiang/Mesos-Bitcoin-Miner) that exploits cluster resources to do the same thing. The following sections use his code as an example.

A Mesos framework consists of a scheduler and an executor. The scheduler communicates with the Mesos master and decides what tasks to run, while the executor runs on the slaves and performs the actual tasks. Most frameworks implement their own scheduler and use one of the standard executors provided by Mesos, although a framework can also provide a custom executor. In this example, a custom scheduler is written, and the standard command executor is used to run a Docker image containing our Bitcoin services.
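Before diving into the scheduler itself, it helps to see how such a scheduler gets wired to the Mesos master. The following is a minimal sketch using the mesos-go bindings of that era; the flag handling and framework name are assumptions for illustration, and exact signatures may differ across mesos-go versions:

package main

import (
    "flag"

    "github.com/gogo/protobuf/proto"
    log "github.com/golang/glog"
    mesos "github.com/mesos/mesos-go/mesosproto"
    sched "github.com/mesos/mesos-go/scheduler"
)

func main() {
    master := flag.String("master", "127.0.0.1:5050", "Mesos master address")
    flag.Parse()

    driver, err := sched.NewMesosSchedulerDriver(sched.DriverConfig{
        // MinerScheduler is the type defined later in this article;
        // its fields would be populated from flags in a real build.
        Scheduler: &MinerScheduler{},
        Framework: &mesos.FrameworkInfo{
            User: proto.String(""), // Mesos fills in the current user
            Name: proto.String("BTC Mining Framework"),
        },
        Master: *master,
    })
    if err != nil {
        log.Fatalf("Unable to create scheduler driver: %v", err)
    }

    // Run blocks until the framework is stopped or aborted.
    if stat, err := driver.Run(); err != nil {
        log.Fatalf("Framework stopped with status %s and error %v", stat, err)
    }
}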

The scheduler here has two kinds of tasks to run: one miner server task and multiple miner worker tasks. The server communicates with a Bitcoin mining pool and assigns blocks to each worker, and the workers do the actual work of mining bitcoins.

Tasks are actually encapsulated in executors, so running a task means telling the Mesos master to start an executor on one of its slaves. Because the standard command executor is used here, a task can be specified as a binary executable, a bash script, or any other command. Since Mesos supports Docker, a Docker image is used as the executable in this example. Docker is a technology that lets you package an application together with the dependencies it needs to run.

To use Docker images in Mesos, their names must be registered in a Docker registry:

const (
    MinerServerDockerImage = "derekchiang/p2pool"
    MinerDaemonDockerImage = "derekchiang/cpuminer"
)

Then define constants specifying the resources required by each task:

const (
    MemPerDaemonTask = 128 // mining shouldn't be memory-intensive
    MemPerServerTask = 256
    CPUPerServerTask = 1   // a miner server does not use much CPU
)

Now define the actual scheduler, which tracks the state it needs to run correctly:

type MinerScheduler struct {
    // bitcoind RPC credentials
    bitcoindAddr string
    rpcUser      string
    rpcPass      string

    // mutable state
    minerServerRunning  bool
    minerServerHostname string
    minerServerPort     int // the port that miner daemons connect to

    // unique task ids
    tasksLaunched        int
    currentDaemonTaskIDs []*mesos.TaskID
}

The scheduler must implement the following interface:

type Scheduler interface {
    Registered(SchedulerDriver, *mesos.FrameworkID, *mesos.MasterInfo)
    Reregistered(SchedulerDriver, *mesos.MasterInfo)
    Disconnected(SchedulerDriver)
    ResourceOffers(SchedulerDriver, []*mesos.Offer)
    OfferRescinded(SchedulerDriver, *mesos.OfferID)
    StatusUpdate(SchedulerDriver, *mesos.TaskStatus)
    FrameworkMessage(SchedulerDriver, *mesos.ExecutorID,
        *mesos.SlaveID, string)
    SlaveLost(SchedulerDriver, *mesos.SlaveID)
    ExecutorLost(SchedulerDriver, *mesos.ExecutorID, *mesos.SlaveID,
        int)
    Error(SchedulerDriver, string)
}

Now let's look at the callback functions:

func (s *MinerScheduler) Registered(_ sched.SchedulerDriver,
    frameworkId *mesos.FrameworkID, masterInfo *mesos.MasterInfo) {
    log.Infoln("Framework registered with Master ", masterInfo)
}

func (s *MinerScheduler) Reregistered(_ sched.SchedulerDriver,
    masterInfo *mesos.MasterInfo) {
    log.Infoln("Framework Re-Registered with Master ", masterInfo)
}

func (s *MinerScheduler) Disconnected(sched.SchedulerDriver) {
    log.Infoln("Framework disconnected with Master")
}

Registered is called after the scheduler successfully registers with the Mesos master.

Reregistered is called when the scheduler disconnects from the Mesos master and then registers again, for example when the master restarts.

Disconnected is called when the scheduler disconnects from the Mesos master. This happens when the master goes down.

So far, the callbacks only print log messages, because for a simple framework like this most of them can be left effectively empty, as the sketch below shows. However, the next callback is the core of every framework and must be written carefully.
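For completeness, here is a sketch of what the remaining mostly-empty callbacks could look like. These bodies are illustrative assumptions consistent with the Scheduler interface above, not code taken from the original framework:

// OfferRescinded fires when the master withdraws an offer before we use it.
func (s *MinerScheduler) OfferRescinded(_ sched.SchedulerDriver, o *mesos.OfferID) {
    log.Infoln("Offer rescinded: ", o)
}

// FrameworkMessage delivers messages sent by an executor.
func (s *MinerScheduler) FrameworkMessage(_ sched.SchedulerDriver,
    eID *mesos.ExecutorID, sID *mesos.SlaveID, msg string) {
    log.Infoln("Got framework message: ", msg)
}

// SlaveLost fires when a slave drops out of the cluster.
func (s *MinerScheduler) SlaveLost(_ sched.SchedulerDriver, sID *mesos.SlaveID) {
    log.Infoln("Slave lost: ", sID)
}

// ExecutorLost fires when an executor exits abnormally.
func (s *MinerScheduler) ExecutorLost(_ sched.SchedulerDriver, eID *mesos.ExecutorID,
    sID *mesos.SlaveID, status int) {
    log.Infoln("Executor lost: ", eID)
}

// Error reports unrecoverable driver errors.
func (s *MinerScheduler) Error(_ sched.SchedulerDriver, err string) {
    log.Errorln("Error: ", err)
}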

ResourceOffers is called when the scheduler receives an offer from the master. Each offer contains a list of resources that the framework can use on the cluster. Resources typically include CPU, memory, ports, and disk. A framework may use some, all, or none of the resources in an offer.

For each offer, we want to gather all the resources being offered and decide whether to launch a new server task or a new worker task. You could send as many tasks as possible to each offer to probe its maximum capacity, but since Bitcoin mining is CPU-bound, this framework launches one miner task per offer and gives it all the available CPU resources (a sketch of the resource-gathering step follows the loop below).

for i, offer := range offers {
    // ... Gather resources being offered and do setup
    if !s.minerServerRunning && mems >= MemPerServerTask &&
        cpus >= CPUPerServerTask && ports >= 2 {
        // ... Launch a server task since no server is running and we
        // have resources to launch it.
    } else if s.minerServerRunning && mems >= MemPerDaemonTask {
        // ... Launch a miner since a server is running and we have mem
        // to launch one.
    }
}
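The "gather resources" step elided above can be implemented by filtering and summing an offer's scalar resources. Here is a minimal sketch, assuming the mesosutil helper package that ships with mesos-go; the getOfferScalar name is illustrative:

// getOfferScalar sums all scalar resources of a given name
// (e.g. "cpus" or "mem") contained in an offer.
func getOfferScalar(offer *mesos.Offer, name string) float64 {
    resources := util.FilterResources(offer.Resources,
        func(res *mesos.Resource) bool {
            return res.GetName() == name
        })
    sum := 0.0
    for _, res := range resources {
        sum += res.GetScalar().GetValue()
    }
    return sum
}

With a helper like this, the cpus and mems values checked in the loop would be getOfferScalar(offer, "cpus") and getOfferScalar(offer, "mem").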

For each task, you need to create a corresponding TaskInfo message that contains the information needed to run the task.

s.tasksLaunched++
taskID = &mesos.TaskID{
    Value: proto.String("miner-server-" +
        strconv.Itoa(s.tasksLaunched)),
}

Task IDs are chosen by the framework and must be unique within it.
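Worker (daemon) tasks would get IDs under a different prefix, which is what lets the status handler near the end of this article tell servers and workers apart, and tracking the daemon IDs makes them killable later. A sketch consistent with the struct fields above (the exact prefix is an assumption):

s.tasksLaunched++
taskID = &mesos.TaskID{
    Value: proto.String("miner-daemon-" +
        strconv.Itoa(s.tasksLaunched)),
}
// remember daemon task IDs so they can be killed if the server dies
s.currentDaemonTaskIDs = append(s.currentDaemonTaskIDs, taskID)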

containerType := mesos.ContainerInfo_DOCKER
task = &mesos.TaskInfo{
    Name:    proto.String("task-" + taskID.GetValue()),
    TaskId:  taskID,
    SlaveId: offer.SlaveId,
    Container: &mesos.ContainerInfo{
        Type: &containerType,
        Docker: &mesos.ContainerInfo_DockerInfo{
            Image: proto.String(MinerServerDockerImage),
        },
    },
    Command: &mesos.CommandInfo{
        Shell: proto.Bool(false),
        Arguments: []string{
            // these arguments will be passed to run_p2pool.py
            "--bitcoind-address", s.bitcoindAddr,
            "--p2pool-port", strconv.Itoa(int(p2poolPort)),
            "-w", strconv.Itoa(int(workerPort)),
            s.rpcUser, s.rpcPass,
        },
    },
    Resources: []*mesos.Resource{
        util.NewScalarResource("cpus", CPUPerServerTask),
        util.NewScalarResource("mem", MemPerServerTask),
    },
}

The TaskInfo message specifies important metadata about the task that allows the Mesos node to run the Docker container: specifically the name, the task ID, the container information, and the arguments to be passed to the container. The resources the task requires are also specified here. The p2poolPort and workerPort values come from the port resources in the offer, as sketched below.
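Here is a minimal sketch of extracting individual ports from an offer's "ports" range resources; the getPorts helper name is illustrative:

// getPorts flattens an offer's "ports" range resources into a slice of
// individual port numbers; the first two can serve as p2poolPort and
// workerPort.
func getPorts(offer *mesos.Offer) []uint64 {
    var ports []uint64
    for _, res := range offer.Resources {
        if res.GetName() != "ports" {
            continue
        }
        for _, rng := range res.GetRanges().GetRange() {
            for p := rng.GetBegin(); p <= rng.GetEnd(); p++ {
                ports = append(ports, p)
            }
        }
    }
    return ports
}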

Now that the TaskInfo has been built, the task can be launched like this:

driver.LaunchTasks([]*mesos.OfferID{offer.Id}, tasks,
    &mesos.Filters{RefuseSeconds: proto.Float64(1)})

The last thing the framework needs to handle is what happens when the miner server shuts down. This is done in the StatusUpdate function.

Over the life cycle of a task, different status updates correspond to its different stages. For this framework, we want to ensure that if the miner server fails for any reason, all the miner workers are killed so that resources are not wasted. Here is the relevant code:

if strings.Contains(status.GetTaskId().GetValue(), "server") &&
    (status.GetState() == mesos.TaskState_TASK_LOST ||
        status.GetState() == mesos.TaskState_TASK_KILLED ||
        status.GetState() == mesos.TaskState_TASK_FINISHED ||
        status.GetState() == mesos.TaskState_TASK_ERROR ||
        status.GetState() == mesos.TaskState_TASK_FAILED) {

    s.minerServerRunning = false

    // kill all miner daemon tasks, since they depend on the server
    for _, taskID := range s.currentDaemonTaskIDs {
        _, err := driver.KillTask(taskID)
        if err != nil {
            log.Errorf("Failed to kill task %s", taskID)
        }
    }
    s.currentDaemonTaskIDs = make([]*mesos.TaskID, 0)
}

That's it! We have built a working distributed Bitcoin mining framework on Apache Mesos with only about 300 lines of Go code, which demonstrates how quick and simple writing a distributed system can be with the Mesos framework API.

Original link: Creating a Distributed System in 300 Lines with Mesos, Docker, and Go

This article was compiled by OneAPM engineers. To read more technical articles, please visit the OneAPM official technology blog.
