January 24, 2016, 8:00-19:00, Beijing Marriott Hotel (No. 7 Jianguomen South Street, Dongcheng District). The @Container conference is a top-tier container technology conference aimed at front-line developers and operations engineers, organized by the domestic container community DockOne. Focused on practice and exchange, it covers containers, operations, cloud computing, and related technical fields, and strives to interpret container technology comprehensively and from multiple perspectives.
Because Docker itself does not provide cluster management, running a Docker cluster in production is not very practical on its own, so container cluster management tools need to be introduced; the mainstream options are Mesosphere's Marathon and Google's Kubernetes.
either continue to work in and build on the current environment, or, as other companies have done, completely remove the existing facilities and move directly onto modern infrastructure."
Using Mesos for Infrastructure Scheduling
A few months ago Mattermark decided to redesign its infrastructure around Mesos, and set out several specific requirements that the new system had to meet:
An abstraction layer is needed
This article describes the process of installing and using Marathon on OS X. Marathon introduction: Marathon is a lightweight, extensible scheduling framework for long-running services, built by Mesosphere for the Mesos ecosystem. It supports RESTful APIs for creating and managing applications and automates fault-tolerant migration of applications; in principle, anything that can be started as a shell task can be launched and managed on Mesos through Marathon.
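To make the RESTful API mentioned above concrete, here is a minimal sketch that posts an application definition to Marathon's /v2/apps endpoint, assuming a Marathon instance listening on localhost:8080 (its default port); the application id and command are made up for illustration.

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object MarathonAppDemo {
  def main(args: Array[String]): Unit = {
    // Minimal Marathon application definition: a long-lived shell task
    // on 0.1 CPU and 32 MB of memory. The app id "/demo/sleeper" is made up.
    val appJson =
      """{
        |  "id": "/demo/sleeper",
        |  "cmd": "while true; do echo hello; sleep 10; done",
        |  "cpus": 0.1,
        |  "mem": 32,
        |  "instances": 1
        |}""".stripMargin

    val client = HttpClient.newHttpClient()
    // POST /v2/apps creates the application; Marathon is assumed at localhost:8080.
    val request = HttpRequest.newBuilder()
      .uri(URI.create("http://localhost:8080/v2/apps"))
      .header("Content-Type", "application/json")
      .POST(HttpRequest.BodyPublishers.ofString(appJson))
      .build()

    val response = client.send(request, HttpResponse.BodyHandlers.ofString())
    println(s"Marathon responded with ${response.statusCode()}: ${response.body()}")
  }
}
```

The same definition could just as well be sent with any HTTP client; the point is only that an application is a small JSON document describing a command and its resource requirements.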
An algorithm that runs on Hadoop can be expressed in Scala, run on Spark, and gain a severalfold speedup. By contrast, porting algorithms between MPI and Hadoop is much more difficult.
(2) Functional programming. Spark is written in Scala, and its primary supported language is Scala. One reason is that Scala supports functional programming, which keeps the Spark codebase itself concise and also makes programs developed on top of Spark particularly concise. A complete MapReduce job that takes dozens of lines in Hadoop can be expressed in Spark in just a few.
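To make the conciseness claim concrete, here is the classic word count written against Spark's Scala API as a minimal sketch; the input and output paths are placeholders.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount")
    val sc = new SparkContext(conf)

    // The whole MapReduce pipeline is a short chain of functional transformations.
    val counts = sc.textFile("hdfs:///path/to/input")    // placeholder input path
      .flatMap(_.split("\\s+"))                          // split lines into words
      .map(word => (word, 1))                            // emit (word, 1) pairs
      .reduceByKey(_ + _)                                // sum counts per word

    counts.saveAsTextFile("hdfs:///path/to/output")      // placeholder output path
    sc.stop()
  }
}
```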
A simple, easy-to-use API. Alluxio offers a number of easy-to-use APIs. Its native API is a set of java.io-like file input/output interfaces, so applications developed against it do not face a steep learning curve. Alluxio also provides an HDFS-compatible interface: applications that originally used HDFS as the target store can be migrated directly to Alluxio, simply replacing the original hdfs:// scheme with alluxio:// to work as before, so the cost of migration is low.
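A minimal sketch of that scheme swap from a Spark job's point of view, assuming an Alluxio master on its default port 19998; the hostnames and paths are placeholders.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object AlluxioPathSwap {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("AlluxioPathSwap"))

    // Before: reading from HDFS directly (placeholder namenode and path).
    // val lines = sc.textFile("hdfs://namenode:9000/data/input.txt")

    // After: the only change is the URI scheme; Alluxio's HDFS-compatible
    // interface handles the rest (placeholder Alluxio master and path).
    val lines = sc.textFile("alluxio://alluxio-master:19998/data/input.txt")

    println(s"line count = ${lines.count()}")
    sc.stop()
  }
}
```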
protocols; Apache ZooKeeper: a centralized service for process management; Google Chubby: a lock service for loosely coupled distributed systems; LinkedIn Norbert: a cluster manager; OpenMPI: a message-passing framework; Serf: a decentralized solution for service discovery and coordination; Spotify Luigi: a Python package for building complex pipelines of batch jobs, handling dependency resolution, workflow management, visualization, fault handling, command-line integration, and so on; Spring
"Editor's words" in recent years, with the popularization of Mesos in the production environment, so that large-scale cluster management has become simple, and based on mesosframework development of the juice framework, can complete the distribution of distributed tasks, processing, for the improvement of resource utilization has great help, Let's introduce this framewor
Each computing framework manages its own compute cluster. These frameworks often divide jobs into small tasks, which increases cluster utilization and keeps computation close to the data. But because the frameworks are developed independently, resources cannot be shared between them. We want to be able to run multiple application frameworks on the same cluster, and Mesos was designed to make that possible.
The service discovery mechanism automatically maps the address of an application instance to an external gateway address. The "Address" column in the figure is the address used to access WordPress, where the IP is the external gateway IP or domain name. Container service: containers are sandboxed, with no interfaces to one another (much like iPhone apps), carry little performance overhead, and can easily be run on a single machine or across a data center. Most importantly, they do not depend on any particular language, framework, or operating system. Docker
First install git, either from the Ubuntu Software Center or with apt-get. After installing, register an account at https://github.com (mine is Jerrylead) with an email address and password, and then follow the site's getting-started prompts to generate an RSA key.
Note: if a local id_rsa.pub / authorized_keys already exists, back it up first, or generate the original key in DSA form instead, so that the git key and the original key do not conflict.
3 Spark Installation
Download the latest Spark release
Opening the Mesos platform this morning, I found a killed task with mesos_task_id = HYAKUHEI.A318E232-28D9-11E6-BC8F-96BED1F124A2. The name looked very strange and was not something I had launched. I went to Marathon to check: no container for this task was running, so it may already have been deleted. Checking the Mesos logs, I found the task had run on two slave nodes. Logging in to a slave and running docker ps -a, the image name gave me a fright: # docker ps -a CONTAINE
Currently, Apache Spark supports three distributed deployment modes: standalone, Spark on Mesos, and Spark on YARN. The first is similar to the pattern used in MapReduce 1.0, with fault tolerance and resource management implemented inside Spark itself. The latter two are the future trend: partially delegating fault tolerance and resource management to a unified resource management system, that is, letting Spark run on a general-purpose resource management system.
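As a rough illustration of how the three deployment modes look from application code, the only difference is the master URL handed to SparkConf (or to spark-submit --master); the hostnames and ports below are placeholders.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object DeployModes {
  def main(args: Array[String]): Unit = {
    // The same application can target any of the three deployment modes
    // simply by changing the master URL (placeholder hosts and ports shown).
    val standalone = "spark://spark-master:7077" // Spark's built-in standalone cluster
    val mesos      = "mesos://mesos-master:5050" // Spark on Mesos
    val yarn       = "yarn"                      // Spark on YARN (cluster from HADOOP_CONF_DIR)

    val conf = new SparkConf()
      .setAppName("DeployModes")
      .setMaster(standalone) // swap in `mesos` or `yarn` to change modes

    val sc = new SparkContext(conf)
    println(s"running on master: ${sc.master}")
    sc.stop()
  }
}
```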
Myriad started as a new project by eBay, MapR, and Mesosphere; the project was then moved to the Mesos organization ("project development has moved to: https://github.com/mesos/myriad") and later handed over to Apache, quite a journey of project migration! I. Introduction to Myriad (understanding Myriad from the concept). The name "myriad" means countless, or a very large number. The following is excerpted from the official GitHub
At Coding we have more than fifty microservices, large and small. Just as eggs should not all be put in one basket, containers should not all be placed on the same cloud host; they should be spread out in an orderly way, otherwise how could it be called distributed? Our microservices have dependencies on file systems and on one another's networks, strung together like a spider's web, which imposes requirements on where hosts are located and on how the host a microservice is allowed to run on must be configured. So it takes something central to help us
We combine the advantages of both approaches: configuration items that require hot updates are managed through Consul, while most of the fixed configuration is injected through environment variables.
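A minimal sketch of that split, assuming a local Consul agent on its default port 8500; the environment variable name and KV key are made up for illustration. Fixed settings are read once from the environment at startup, while a hot-updatable value is fetched from Consul's KV HTTP API each time it is needed.

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object ConfigDemo {
  private val http = HttpClient.newHttpClient()

  // Fixed configuration: read once from environment variables at startup
  // (the variable name is made up for illustration).
  val dbUrl: String = sys.env.getOrElse("DB_URL", "jdbc:postgresql://localhost/app")

  // Hot-updatable configuration: fetched from the local Consul agent's KV store
  // on every read, so changes take effect without redeploying (key is made up).
  def featureFlag(): String = {
    val req = HttpRequest.newBuilder()
      .uri(URI.create("http://127.0.0.1:8500/v1/kv/app/feature-flag?raw"))
      .GET()
      .build()
    http.send(req, HttpResponse.BodyHandlers.ofString()).body()
  }

  def main(args: Array[String]): Unit = {
    println(s"db url (env):      $dbUrl")
    println(s"feature flag (kv): ${featureFlag()}")
  }
}
```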
Selecting an orchestration tool. When we first started testing containers, developers mostly wrote scripts to control the distribution, creation, and destruction of containers. Before long we found that a unified orchestration tool was needed to standardize operations and avoid reinventing the wheel.
Background Introduction
Sparrow's paper appeared in SOSP 2013, and the author's talk slides can also be found online; it is worth mentioning that the author is quite a pretty girl (ppmm). She had previously published "The Case for Tiny Tasks in Compute Clusters"; I did not read that article carefully, but the group discussed it back when we were looking at Mesos's coarse-grained and fine-grained modes. Looking at her GitHub projects, I found that she and
need to be considered at first), and then develop the corresponding wrapper to deploy the standalone-mode services onto a resource management system such as YARN or Mesos, which is then responsible for the fault tolerance of those services. Currently Spark's standalone mode has no single point of failure (SPOF); this is implemented with ZooKeeper, and the idea is similar to the HBase master single-point-of-failure solution. Comparing Spark
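As a sketch of what that ZooKeeper-backed standalone HA looks like from an application's side, a client can list several candidate masters in its master URL and fail over if the active one dies; this assumes the masters themselves were started with spark.deploy.recoveryMode=ZOOKEEPER, and the hostnames are placeholders.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object HaStandaloneApp {
  def main(args: Array[String]): Unit = {
    // With ZooKeeper-based master recovery enabled on the cluster side,
    // the application lists all candidate masters and transparently
    // reconnects to whichever becomes active (placeholder hostnames).
    val conf = new SparkConf()
      .setAppName("HaStandaloneApp")
      .setMaster("spark://master1:7077,master2:7077")

    val sc = new SparkContext(conf)
    println(s"connected to: ${sc.master}")
    sc.stop()
  }
}
```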