About cloud and continuous integration

I recently came across a "cloud-era operating system" on the internet: Shurenyun (Dataman Cloud). A quick look at its product demo (https://dashboard.shurenyun.com/cluster/listclusters) gave me an instant sense of déjà vu: it looks a lot like the management console I worked on around 2012-2013 in Eisoo's platform development department, for the back end of our cloud storage system at the time. That console also covered cluster management, node management, and so on, and was more complex than this one; it was a back-end system built for IT administrators.

I first worked with OpenStack in 2011, and in the blink of an eye more than four years have passed. The IaaS platforms of that era were OpenStack, CloudStack, Eucalyptus, and OpenNebula, with AWS in a league of its own. I followed OpenStack from the D (Diablo) through G (Grizzly) releases, while OpenStack is now on its L (Liberty) release. It would have been interesting to keep digging into OpenStack, but the company's strategic direction and positioning ultimately settled on traditional private cloud storage, with little interest in internet-style cloud services such as UCloud, Qiniu, UpCloud, and QingCloud. Eisoo was destined to be a relatively traditional and conservative company, though steadiness is also a strength. Personally, it is a pity that I never studied the things I found interesting in more depth while the job gave me the opportunity.

But I digress; back to the point. Having actually experienced cloud computing is no cause for regret. When I first encountered the concept of cloud computing around 2008, I believed it would be useful in the future, and cloud computing has indeed become more and more widely adopted.

The cloud not only pools physical resources, virtualizing compute, storage, and network so they can be used more efficiently, it also saves resources; this, of course, is IaaS.

Now PaaS and SaaS are flourishing as well. As platforms, software, and more are gradually delivered as services, the entire technology landscape is going through a major restructuring.

In the cloud era, service orientation, looser coupling, and evolving architectures will no doubt create many wonderful things, greatly improving people's lives and the environment for entrepreneurship, and making everything simpler and more efficient.

The importance of architecture and testing

I remember that when I had just graduated I took part in a CMMI3 process-improvement effort and was exposed to RUP, the Rational Unified Process. RUP has had a deep influence on how I think about architecture; it essentially emphasizes three things:

    1. Use case and risk driven

    2. Architecture centric

    3. Iterative and incremental

RUP places great emphasis on architecture, advocating an architecture- and risk-driven approach: build an end-to-end prototype, verify the feasibility of the architecture through stress testing, and then develop iteratively and incrementally on top of that prototype, developing, testing, and adjusting the architecture in a continuous loop, as shown:

After the architect completes the framework, developers write code against it, and testers need to validate the architecture as quickly as possible. Designing the architecture in isolation and simply throwing it at the team is not good practice (unless the domain is very clear and mature). In addition, technical architects tend toward perfectionism: at the start they like to aim for a grand, complete design and ignore the evolution and iteration of the architecture. This tendency prevents the product and its users from forming an effective, rapid feedback loop, and the product ends up failing to meet end users' needs.

In fact, the architecture keeps evolving, the code keeps being refactored, and the tests keep being optimized; that is a healthy cycle.

Now take a look at the following excerpts from the Shurenyun documentation (ref.: http://doc.shurenyun.com/get-started/vocabulary.html):

Microservices

Microservices are an emerging application architecture that builds an application from a set of services deployed independently in separate processes, with services communicating through lightweight mechanisms such as REST. Each service scales independently and has clearly defined boundaries; different services can even be implemented in different programming languages and maintained by independent teams.
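
As a concrete illustration of the "lightweight interaction mechanisms, such as REST" mentioned above, here is a minimal sketch of a single microservice exposing one REST endpoint, using only Python's standard library. The service name, port, and data are made up for the example.

    # Minimal sketch of one microservice exposing a REST endpoint over HTTP.
    # The "users" data and the port are illustrative only.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    USERS = {"1": {"id": "1", "name": "alice"}}  # toy in-memory data store

    class UserService(BaseHTTPRequestHandler):
        def do_GET(self):
            # Route: GET /users/<id> returns one user as JSON, 404 otherwise.
            parts = self.path.strip("/").split("/")
            if len(parts) == 2 and parts[0] == "users" and parts[1] in USERS:
                body = json.dumps(USERS[parts[1]]).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        # Each microservice runs in its own process and scales independently.
        HTTPServer(("0.0.0.0", 5001), UserService).serve_forever()

A second service, possibly written in another language, would consume this endpoint with a plain HTTP GET; that is exactly the loose coupling the definition above describes.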

A microservices architecture has many benefits:

    1. It tackles complexity by breaking a huge monolithic application into multiple services. With functionality unchanged, the application is decomposed into multiple manageable services, each with a boundary clearly defined by a REST API. The microservices architecture provides a modular approach for features that are hard to implement well in a single monolithic codebase, and an individual service is easy to develop, understand, and maintain.
    2. It allows each service to be developed by a dedicated team. Developers are free to choose their technology as long as they provide the agreed API. Of course, many companies restrict the technology options to avoid fragmentation. This freedom means developers are not forced to keep using whatever outdated technology the project started with; they can adopt newer technology. Because each service is relatively small, rewriting older code with current technology is not very difficult.
    3. It requires each microservice to be deployable independently. Developers no longer need to coordinate the impact of other services' deployments on their own, which speeds up deployment. The microservices pattern makes continuous deployment possible.
    4. It allows each service to scale independently. You can deploy each service at whatever scale its business volume requires.

Service discovery

The basic idea of service discovery is that any instance of an application can programmatically obtain the details of its current environment, and new instances can join the existing application environment without human intervention. Service discovery tools are typically implemented as a globally accessible registry that stores information about the currently running instances or services. In most cases, to make this setup fault tolerant and scalable, the registry is distributed across multiple nodes.

Service discovery reduces or eliminates the "manual" wiring between components. Normally, when you push an application into production, details such as the database server's host and port or a REST service's URL all have to be configured. In a highly scalable architecture these connections can change dynamically: a new backend can be added, a database node can be taken down, and your application needs to adapt to this dynamic environment.
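
To make the idea of a "globally accessible registry" concrete, here is a deliberately simplified, in-memory sketch of the register/look-up cycle. A real tool (etcd, Consul, ZooKeeper, or the platform's own registry) replicates this state across nodes for fault tolerance, which this toy version does not do.

    # Toy, single-process illustration of a service registry (not fault tolerant).
    # Real service discovery tools keep this state in a replicated store.
    import random

    class ServiceRegistry:
        def __init__(self):
            self._services = {}  # service name -> list of "host:port" instances

        def register(self, name, address):
            # A new instance announces itself; no manual wiring is needed.
            self._services.setdefault(name, []).append(address)

        def deregister(self, name, address):
            # An instance that stops (e.g. a database node going away) is removed.
            if address in self._services.get(name, []):
                self._services[name].remove(address)

        def lookup(self, name):
            # Callers resolve a service at request time instead of hard-coding it.
            instances = self._services.get(name)
            if not instances:
                raise LookupError("no live instance of %s" % name)
            return random.choice(instances)

    registry = ServiceRegistry()
    registry.register("mysql", "10.0.0.5:3306")
    registry.register("wordpress", "10.0.0.6:80")
    print(registry.lookup("mysql"))   # e.g. 10.0.0.5:3306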

Shurenyun provides users with complete service discovery capabilities:

    1. TCP/HTTP
      Depending on the protocol of the service port, either TCP forwarding or HTTP forwarding can be selected.

    2. Internal/External
      Shurenyun provides not only traditional external service discovery but also internal service discovery for distributed services. External service discovery exposes services through the external gateway; for an HTTP service you need to configure a domain name or public IP. Internal service discovery works through the internal proxy, which maps the ports of a multi-instance microservice to a single exposed port (a minimal forwarding sketch follows this list).
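
The internal proxy described above is, at its core, a TCP forwarder that maps several instances of a service onto one well-known port. The following sketch shows that idea with Python's asyncio; the backend addresses and the exposed port 3306 are placeholders, and a production gateway does much more (health checks, dynamic reconfiguration, and so on).

    # Minimal TCP forwarder: one exposed port, several backend instances,
    # round-robin selection. Backend addresses below are placeholders.
    import asyncio
    import itertools

    BACKENDS = itertools.cycle([("10.0.1.11", 3306), ("10.0.1.12", 3306)])

    async def pipe(reader, writer):
        try:
            while data := await reader.read(4096):
                writer.write(data)
                await writer.drain()
        finally:
            writer.close()

    async def handle_client(client_reader, client_writer):
        host, port = next(BACKENDS)                    # pick the next instance
        backend_reader, backend_writer = await asyncio.open_connection(host, port)
        # Copy bytes in both directions until either side closes.
        await asyncio.gather(pipe(client_reader, backend_writer),
                             pipe(backend_reader, client_writer))

    async def main():
        server = await asyncio.start_server(handle_client, "0.0.0.0", 3306)
        async with server:
            await server.serve_forever()

    asyncio.run(main())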

Example

When building a WordPress site, you need to deploy a MySQL server and a WordPress server.

    • MySQL's own service port is 3306 and the protocol type is TCP. The MySQL service needs to be open to internal modules, and the exposed port remains 3306. The application address configuration for MySQL is therefore as follows.

Shurenyun's internal service discovery mechanism automatically maps the address of an application instance to an internal gateway (内部网关) address. The "Address" column in the figure is the address used to access MySQL, where the IP is the internal gateway IP.

    • WordPress itself is a web service; its port defaults to 80 and the protocol type is HTTP. The WordPress service needs to be open to external access.
      If the external gateway is configured with an accessible domain name, the service is reached through that domain name and the exposed port is 80, which cannot be changed.

If the external gateway has no domain name configured, the service is reached by IP and the exposed port can be chosen freely; here it is set to 81 to avoid a port conflict.

Shurenyun's external service discovery mechanism automatically maps the address of an application instance to an external gateway (外部网关) address. The "Address" column in the figure is the address used to access WordPress, where the IP is the external gateway IP or domain name.
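
Outside of Shurenyun, the same port layout can be reproduced locally with plain Docker, which may help clarify what the gateways are doing. This is only a rough local stand-in, not the Shurenyun workflow; the passwords and the internal-gateway IP are placeholders, and it assumes the official mysql and wordpress images plus the Docker SDK for Python.

    # Local approximation of the MySQL + WordPress example using the Docker SDK
    # for Python ("pip install docker"). Passwords and the gateway IP are dummies.
    import docker

    client = docker.from_env()

    # MySQL: internal service, exposed on its native port 3306 (TCP).
    client.containers.run(
        "mysql:5.7",
        name="mysql",
        detach=True,
        environment={"MYSQL_ROOT_PASSWORD": "example",
                     "MYSQL_DATABASE": "wordpress"},
        ports={"3306/tcp": 3306},
    )

    # WordPress: external HTTP service; container port 80 is published on 81
    # to avoid a conflict, mirroring the "no domain name, pick a port" case.
    client.containers.run(
        "wordpress",
        name="wordpress",
        detach=True,
        environment={"WORDPRESS_DB_HOST": "10.0.0.5:3306",   # internal gateway IP
                     "WORDPRESS_DB_USER": "root",
                     "WORDPRESS_DB_PASSWORD": "example"},
        ports={"80/tcp": 81},
    )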

Container Service

Containers are sandboxed and have no interfaces to one another (much like iPhone apps), carry very little performance overhead, and can easily run in any machine or data center. Most importantly, they do not depend on any particular language, framework, or system.

Docker is an open-source application container engine that lets developers package an application together with its dependencies into a portable container and then publish it to any mainstream Linux host.

Shurenyun adopts the currently popular Docker containers and schedules them with Mesos as the cluster scheduling tool, which makes it easy to manage and schedule thousands of containers, start and destroy them within seconds, and achieve large-scale load balancing.

Shurenyun advocates a microservices architecture, containerizing the services of a microservices application as the standard form of cloud application delivery. The benefit is not only a consistent application environment, freeing development and operations from the chore of environment setup, but also easier fault tolerance, high availability, and horizontal scaling.

Next, let's look at how to quickly build a Jenkins continuous integration environment on Shurenyun and what testers gain from it:

Jenkins

Jenkins is a Java-based continuous integration (CI) tool. Deploying Jenkins on Shurenyun enables dynamic scheduling of resources and improves resource utilization, while also making the setup fast.

Here is a look at the architecture and workflow of Jenkins on Shurenyun:

    • Jenkins Master

      It provides the overall Jenkins setup, the web UI, workflow control and customization, and so on. First, the Jenkins master is published through Shurenyun, which manages the process and performs health checks, so that the application recovers automatically after an unexpected crash and the Jenkins master build system stays highly available. Deploying Jenkins with Shurenyun also lets your Jenkins application run in a resource pool, further sharing resources and increasing utilization.

    • Jenkins Slave

      Jenkins uses the cluster resources built on Shurenyun and improves utilization through elastic resource allocation. With the jenkins-mesos-plugin configured, the Jenkins master can request Jenkins slave nodes dynamically when a job is built and return the nodes some time after the build completes.

    • jenkins-mesos-plugin

      The jenkins-mesos-plugin is installed on the Jenkins master as a plugin. Its main purpose is to let the Jenkins master act as a cluster scheduler, so that the cluster resources built on Shurenyun can be scheduled for the Jenkins service.
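
In day-to-day use, once the plugin is configured (see the settings section below), triggering a build is enough to make the master request a slave from Mesos behind the scenes. As a sketch, a build can be queued through Jenkins' standard remote API with the requests library; the Jenkins URL, credentials, and job name below are placeholders.

    # Trigger a Jenkins job over its remote API; the Mesos plugin then provisions
    # a slave for the build. URL, user, API token, and job name are placeholders.
    import requests

    JENKINS = "http://jenkins.example.com:8002"
    JOB = "demo-build"

    session = requests.Session()
    session.auth = ("admin", "api-token")

    # Jenkins protects POSTs with a CSRF crumb when CSRF protection is enabled.
    headers = {}
    crumb_resp = session.get(JENKINS + "/crumbIssuer/api/json")
    if crumb_resp.ok:
        crumb = crumb_resp.json()
        headers[crumb["crumbRequestField"]] = crumb["crumb"]

    resp = session.post("%s/job/%s/build" % (JENKINS, JOB), headers=headers)
    resp.raise_for_status()   # 201 Created means the build was queued
    print("queued:", resp.headers.get("Location"))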

Shurenyun sign-up, sign-in, and cluster creation

Because Shurenyun iterates very quickly, this is not described in detail here; the latest operating instructions are kept in the Shurenyun user manual, which users can refer to as needed.

Deploying Jenkins with Shurenyun

Deploying the Jenkins application is simple; here is how to do it:

Select App Management, click "New app", and follow the steps below to create a new Jenkins app:

    • Fill in the application name: Jenkins
    • Select the cluster: Demo
    • Add the application image address: Testregistry.dataman.io/centos7/mesos-0.23.0-jdk8-jenkins1.628-master
    • Fill in the image version: app.v0.3
    • Select the application mode: Host mode
    • Select the application type: Stateful application
    • Select the host: choose any appropriate IP from the host drop-down list
    • Host/container directory configuration: data mount directory /data/jenkins, container directory /var/lib/jenkins
    • Select the container size: CPU 0.2, memory 512 MB

Advanced Settings:

    • Click "Add environment variable" and fill in the environment variable parameters (a local approximation of all these settings is sketched after this list):

        Key: JAVA_OPTS      Value: -Xmx512M -Xms512M
        Key: JENKINS_PORT   Value: 8002
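
For readers without a Shurenyun account, the form above roughly corresponds to the following plain Docker invocation (again via the Docker SDK for Python). This is only an approximation under two assumptions: the image tag combines the address and version shown above (with the registry name lowercased, as Docker requires), and the private registry may not be reachable from outside.

    # Rough local equivalent of the "New app" form above, using the Docker SDK
    # for Python. The image lives in a private registry and may not be pullable.
    import docker

    client = docker.from_env()

    client.containers.run(
        # registry name lowercased; Docker repository names must be lowercase
        "testregistry.dataman.io/centos7/mesos-0.23.0-jdk8-jenkins1.628-master:app.v0.3",
        name="jenkins",
        detach=True,
        network_mode="host",                      # "Host mode" in the form
        environment={"JAVA_OPTS": "-Xmx512M -Xms512M",
                     "JENKINS_PORT": "8002"},
        volumes={"/data/jenkins": {"bind": "/var/lib/jenkins", "mode": "rw"}},
        nano_cpus=int(0.2 * 1e9),                 # CPU: 0.2
        mem_limit="512m",                         # Memory: 512 MB
    )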

After completing the form, click Create; once the application has been created you can see its deployment status and other information:

Wait a moment and you will see that the app is running:

Open a browser and access Jenkins through the configured internal proxy. The access address is yourip:jenkins_port; the following page shows that the Jenkins app is running successfully.
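
A quick way to confirm the same thing from a script is to poll the address until Jenkins answers; the host below is a placeholder for your internal proxy IP and the JENKINS_PORT value set above.

    # Poll the Jenkins URL until it responds; replace the host with your
    # internal proxy IP and JENKINS_PORT value (8002 in the example above).
    import time
    import requests

    URL = "http://10.0.0.100:8002"   # placeholder for yourip:jenkins_port

    for _ in range(30):
        try:
            r = requests.get(URL, timeout=3)
            print("Jenkins is up, HTTP", r.status_code)
            break
        except requests.ConnectionError:
            time.sleep(5)
    else:
        print("Jenkins did not come up in time")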

Jenkins settings for Shurenyun

If we want to register Jenkins with Mesos as a Mesos framework, we need to configure the plugin after Jenkins has started successfully.

The jenkins-mesos settings fall into three groups. Click "Manage Jenkins" in the top-left corner, then click "Configure System" on the administration page.

    1. Jenkins-to-Mesos cluster settings

    2. Mesos Native Library Path: the path to the Mesos native library, usually /usr/lib/libmesos.so. Simply copying the file there does not work; Mesos must actually be installed.

    3. Mesos Master [Hostname:port]: the Mesos master address and port. For a single master, use the mesos-master-ip:5050 format; for multiple masters, use the ZooKeeper format zk://zk1:2181,zk2:2181,zk3:2181/mesos.
    4. Framework Name: the framework name that appears when you check the Mesos master.
    5. Slave Username: the username the slave runs under.
    6. Checkpointing: whether to enable checkpointing.
    7. On-demand Framework Registration: whether to unregister the framework from the Mesos master when there are no tasks.

    1. Settings for the Mesos slave resources that Jenkins slaves request (allocated by resource)

    2. Label String: the label assigned to the slave.

    3. Maximum number of Executors per Slave: how many tasks each slave can run concurrently.
    4. Mesos Offer Selection Attributes: choose which tagged Mesos slave resources to run on, in the format {"Clustertype": "Tag"}.

    1. Docker image settings for the Jenkins slave

    2. Docker Image: the image used by the Jenkins slave.

    3. Networking - Bridge: the network mode; bridge mode must be selected here.
    4. Port Mapping - Container Port & Host Port: the container-to-host port mapping. This must be set, otherwise Mesos-DNS cannot find the port mapping and fails. By default the host port is left empty (allocated automatically); if set manually, it must be a port above 39000 on the selected Mesos slave.
    5. Volumes: the mount directories; slave.jar and Docker-in-Docker must be mounted.
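
Before filling in these fields, it can help to sanity-check the two values the plugin depends on most: the native library path and the master address. The sketch below does both; the master host is a placeholder and the library path mirrors the default mentioned above, so both may differ on your cluster.

    # Sanity-check the Mesos values used in the plugin configuration:
    # the native library path and the master's HTTP endpoint.
    import os
    import requests

    MESOS_LIB = "/usr/lib/libmesos.so"             # Mesos Native Library Path
    MESOS_MASTER = "http://mesos-master-ip:5050"   # placeholder master address

    if not os.path.exists(MESOS_LIB):
        print("libmesos.so not found - install Mesos on the Jenkins master host")

    # The master exposes its state over HTTP; /master/state.json works on the
    # 0.23.x release used in this setup.
    state = requests.get(MESOS_MASTER + "/master/state.json", timeout=5).json()
    print("Mesos version:", state.get("version"))
    print("activated slaves:", state.get("activated_slaves"))
    print("registered frameworks:", [f["name"] for f in state.get("frameworks", [])])

Once Jenkins registers successfully, its framework name (item 4 in the first group) should appear in the frameworks list printed above.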

Follow-up

The Dockerfiles and startup scripts used here are all open source and have been uploaded to the Dataman Cloud GitHub; if you are interested, you are welcome to take part: https://github.com/Dataman-Cloud/OpenDockerFile/tree/master/jenkins.

With a Jenkins-based continuous integration cluster environment now in place, let's iterate our products quickly and keep bringing users pleasant little surprises.
