Roundup: The best open source data center and cloud computing software of 2015

Tags: kibana, logstash, etcd, glusterfs, apache mesos, joyent, kvm hypervisor, ansible playbook

2015-09-18 | Open Source China

The well-known IT magazine InfoWorld has selected the best open source platform, infrastructure, management, and orchestration software of the year.

Best Open source data center and cloud computing software

You may have heard of Docker containers, a newer technology. Developers love them because they can build containers with scripts, add layers of services, and push them straight from a MacBook Pro to a server for testing. Containers are practical because they are ultra-lightweight compared with virtual machines. Containers and other lightweight ways of delivering services are changing the landscape of operating systems, applications, and management tools. The data center and cloud computing software on this list represent the best of that shift.

Docker Machine, Compose, and Swarm

Docker's open source container technology has been adopted by the major public clouds and is being built into the next version of Windows Server. Docker is a powerful data center automation tool that lets a broad range of developers and operations teams separate applications from the underlying infrastructure.

However, containers are only part of the Docker ecosystem. Docker also provides a range of tools built on the Docker API that let you automate the entire container lifecycle and handle the design and orchestration of your applications.

Machine automates the provisioning of Docker hosts. From the command line, a single command can provision one or more hosts, deploy the Docker Engine on them, and even join them to a Swarm cluster. It supports most hypervisors and cloud platforms; all you need are your access credentials.

Swarm handles clustering and scheduling, and it can integrate with Mesos for more advanced scheduling capabilities. You can use Swarm to create a pool of container hosts, letting applications scale out as demand grows. Applications and all of their dependencies can be defined with Compose, which lets you link containers into a distributed application and launch them as a group. The same Compose description works across platforms, so a developer's configuration can be moved quickly into production.
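
As a rough sketch of what a Compose description looks like (the service names and images here are hypothetical, not from the article), a small two-container application might be defined like this:

    # docker-compose.yml (illustrative; Compose v1 syntax of the period)
    web:
      build: .            # build the web service from a Dockerfile in this directory
      ports:
        - "8000:8000"     # publish the application port to the host
      links:
        - redis           # make the redis container reachable by name
    redis:
      image: redis        # pull the stock Redis image from the registry

Running docker-compose up would then build and start both containers as a group.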

CoreOS and rkt

CoreOS is a thin, lightweight server operating system based on Google's Chromium OS. Rather than installing features with a package manager, it is designed to work with Linux containers. By using containers to extend the thin core, CoreOS lets you quickly deploy applications and run smoothly on cloud infrastructure.

CoreOS's container management tool, fleet, treats a cluster of CoreOS servers as a single unit: it can be used to manage high availability and to deploy containers across the cluster based on available resources. The cross-cluster key/value store etcd handles configuration management and supports service discovery. If a node fails, etcd recovers quickly on a new replica, and it provides a distributed configuration management platform tied to CoreOS's automated update service.
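
For illustration, CoreOS machines are typically bootstrapped with a YAML cloud-config file that enables etcd and fleet; this is a minimal sketch, and the discovery token is a placeholder you would generate yourself:

    #cloud-config
    # Minimal CoreOS cloud-config sketch: join an etcd cluster and start fleet.
    coreos:
      etcd2:
        # placeholder discovery URL; generate a real token before use
        discovery: https://discovery.etcd.io/<token>
        advertise-client-urls: http://$private_ipv4:2379
        initial-advertise-peer-urls: http://$private_ipv4:2380
        listen-client-urls: http://0.0.0.0:2379
        listen-peer-urls: http://$private_ipv4:2380
      fleet:
        public-ip: $private_ipv4
      units:
        - name: etcd2.service
          command: start
        - name: fleet.service
          command: start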

Although CoreOS may be best known for its Docker support, the CoreOS team is developing its own container runtime, rkt, with its own container format, the App Container Image (ACI). rkt is also compatible with Docker containers, and its modular architecture allows different containerization systems (even hardware virtualization) to be plugged in. However, rkt is still early in its development, so it is not yet ready for production use.

RancherOS

As we use containers to extract more and more services from the underlying operating system, we begin to consider what the future operating system will look like. Like our application, the future operating system will be a set of modular services running on thin cores that can be self-configured to provide only the services required by the application.

We can get a glimpse of that future from RancherOS. Combining the Linux kernel with Docker, RancherOS is a minimalist operating system that is ideal for hosting container-based applications on cloud infrastructure. Instead of using standard Linux packaging, RancherOS uses Docker to host Linux user space services and applications in separate container tiers. A low-level Docker instance starts first and hosts the system services in their own containers; the user's applications run in a higher-level Docker instance, separate from the system containers. If one of those containers crashes, the host keeps running.

RancherOS is only about 20MB in size, so it's easy to replicate across a data center. It is also designed to be managed with automation tools rather than by hand, with API-level access for both Docker management tools and Rancher Labs' own cloud infrastructure and management tools.

Kubernetes

Google's Kubernetes container orchestration system is designed to manage and run applications built in Docker and rkt containers. Kubernetes focuses on managing microservices applications, letting you distribute containers across a cluster of hosts while handling scaling and ensuring that managed services run reliably.

Because containers provide an application abstraction layer, Kubernetes is an application-centric management service that supports many modern development patterns and focuses on user intent. That means once you launch an application, Kubernetes manages its containers so they run within the parameters you set, with the Kubernetes scheduler making sure they get the resources they need. Containers are grouped into pods, which are managed by a replication engine that can restart failed containers or add more pods as the application scales up.
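
As a minimal sketch of that declarative style (the name, labels, and image below are hypothetical), a replication controller that keeps three copies of a pod running might be written like this:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: web
    spec:
      replicas: 3                 # keep three pods running at all times
      selector:
        app: web                  # manage the pods carrying this label
      template:                   # pod template used to create replacements
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx        # illustrative container image
              ports:
                - containerPort: 80

Feeding this file to kubectl creates the pods, and the controller replaces any that fail.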

Kubernetes underpins Google's own Container Engine, and it can run on a range of other cloud and data center services, including AWS and Azure, as well as vSphere and Mesos. Containers can be loosely or tightly coupled, so applications that were not designed for a cloud platform-as-a-service (PaaS) environment can be moved to the cloud as a tightly coupled set of containers. Kubernetes also supports rapid deployment of applications to a cluster, giving you the endpoints for a continuous delivery process.

Mesos

Turning a data center into a private or public cloud requires more than a hypervisor; it also requires a new operations layer that can treat data center resources as if they were a single computer, handling resources and scheduling. Apache Mesos, described as a "distributed systems kernel," lets you manage thousands of servers, use containers to host applications and APIs, and supports the development of parallel, scalable applications.

At the core of Mesos is a set of daemons that expose available resources to a central scheduler. Tasks are distributed across nodes to take advantage of available processor and memory. A key design point is that applications can decline the resources offered if they do not meet their requirements. This approach works well for big data applications: you can run Hadoop, the Cassandra distributed database, and Apache's own Spark data processing engine on Mesos. It also supports the Jenkins continuous integration server, letting you run build and test workers in parallel on a server cluster and adjust tasks dynamically based on workload.

Mesos is designed to run on Linux and Mac OS X, and it has recently been ported to Windows to support the development of scalable parallel applications on the Azure platform.

SmartOS and SmartDataCenter

SmartDataCenter is the software that runs Joyent's public cloud; it adds a management platform on top of the thin SmartOS server operating system. SmartOS is a descendant of OpenSolaris that combines Zones containers with the KVM hypervisor. It is an in-memory operating system that boots quickly from a USB flash drive and runs on bare metal servers.

With SmartOS, you can quickly deploy a set of lightweight servers that can be managed programmatically through a set of JSON APIs, with virtual machines delivered and downloaded by the built-in image management tools. Because workloads run in virtual machines, all user space (userland) operations are isolated from the underlying operating system, reducing security risk for both hosts and guests.

SmartDataCenter runs on SmartOS servers: one server acts as a dedicated management node, and the rest of the cluster runs as compute nodes. You can start with the laptop version, a VMware virtual appliance, to try out the management server. In a real data center, you deploy SmartOS to the servers and use ZFS to handle storage, which includes a local image library. Services are deployed as images, with components stored in an object repository.

The combination of SmartDataCenter and SmartOS has been proven in the Joyent public cloud and provides a mature set of tools to help you stand up your own cloud data center. This infrastructure focuses on today's virtual machines, but it also lays the groundwork for containers: a related Joyent project, sdc-docker, exposes an entire SmartDataCenter cluster as a single Docker host controlled by native Docker commands.

Sensu

The key to managing a large data center is not clicking through a server's graphical user interface (GUI) but automation: scripting, forwarding information from sensors and logs, and triggering actions based on information from monitoring tools and services. Sensu is one tool that provides this functionality; it is often described as a "monitoring router."

Checks that run in the data center send information to Sensu, which then routes it to the appropriate handlers over a RabbitMQ-based publish/subscribe architecture. Servers can be distributed, forwarding published check results to the code responsible for processing them. Results can surface as an e-mail, a message in a Slack room, or an entry in Sensu's own dashboard. Message formats are defined in JSON files; mutators reformat data on the fly, and filters route messages to one or more event handlers.
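
For example, a check definition is just a JSON file dropped into Sensu's configuration directory; this is a sketch in which the plugin script, subscriber group, and handler names are assumptions:

    {
      "checks": {
        "disk_usage": {
          "command": "check-disk.rb -w 80 -c 90",
          "subscribers": ["webservers"],
          "interval": 60,
          "handlers": ["default", "slack"]
        }
      }
    }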

Sensu is still a relatively new tool, but a promising one. If you want to automate your data center, you need a tool that not only shows you what is happening but delivers that information when you need it most. A commercial version adds integrations with third-party applications, but the open source version contains most of the functionality you need to manage a data center.

Prometheus

Managing a modern data center is a complex task. Rows upon rows of servers need careful tending, and you need a monitoring system designed to handle thousands of nodes. Monitoring application software poses a special challenge, and that is where Prometheus can help. A service monitoring system designed to deliver alerts to operators, Prometheus can run anywhere from a single laptop to a highly available cluster of monitoring servers.

Time series data is captured and stored, then compared against patterns to identify faults and problems. Applications expose their data on an HTTP endpoint, and the server is configured with a YAML file. A browser-based reporting tool handles data display, and an expression console lets you try out queries. Dashboards can be built with a GUI builder or written as a set of templates, so you can deliver application consoles that are managed through a version control system such as Git.
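
A minimal server configuration sketch looks something like the following; the job name and target are placeholders, and field names have shifted between Prometheus releases:

    # prometheus.yml: scrape one endpoint every 15 seconds
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: 'node'
        static_configs:
          - targets: ['localhost:9100']   # e.g. a host metrics exporter

In the expression console you could then try a query such as rate(http_requests_total[5m]) against whatever metrics the scraped endpoint actually exposes.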

Captured data can be manipulated with expressions, which makes it easy to aggregate data from several sources, for example consolidating data from a series of web endpoints into one store. An experimental alert manager module sends alerts to common collaboration and devops tools, including Slack and PagerDuty. Official client libraries are provided for common languages such as Go and Java, so it is easy to add Prometheus support to applications and services, while third-party options extend Prometheus to Node.js and .NET.

Elasticsearch, Logstash, and Kibana

Running a modern data center generates a lot of data, and you need tools to extract information from it. That is where Elasticsearch, Logstash, and Kibana, a combination often referred to as the ELK stack, come in.

Elasticsearch is designed to handle scalable search tasks, performing searches across many types of content, including structured and unstructured documents. It is built on Apache's Lucene information retrieval library and exposes a set of RESTful JSON APIs. It provides search services for websites such as Wikipedia and GitHub, using distributed indexing with automated load balancing and routing.
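
For instance, a search is just a JSON document sent to an index's _search endpoint; in this sketch the field name and query text are hypothetical:

    {
      "query": {
        "match": {
          "message": "connection timeout"
        }
      },
      "size": 10
    }

The same JSON style runs through the rest of the API, from indexing documents to managing the cluster.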

Modern cloud architectures are built on arrays of physical servers acting as virtual machine hosts, and monitoring thousands of servers requires centralized logging. Logstash collects and filters the logs generated by those servers (and by the applications running on them), using forwarders on each physical and virtual machine. Logstash-formatted data is then sent to Elasticsearch, giving you a search index that can scale quickly as you add more servers.

At a higher level, Kibana adds a visualization layer on top of Elasticsearch, providing a web dashboard for exploring and analyzing the data. Dashboards can be built around custom searches and shared with your team, offering a fast, easy-to-understand source of devops information.

Ansible

Managing server configuration is a critical part of any devops approach to running a modern data center or cloud infrastructure. Configuration management tools simplify cloud-scale system management with a desired-state approach, using descriptions of servers and applications to drive server and application deployments.

Ansible provides minimalist management: it uses SSH to manage UNIX nodes and PowerShell to work with Windows servers, with no agents to deploy. An Ansible playbook describes the desired state of a server or service in YAML; Ansible then pushes the modules that handle configuration to the server and removes them once the service is running. Playbooks can also orchestrate tasks, for example deploying several web endpoints with a single script.
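
As a small sketch of the playbook format (the host group and package are assumptions, not from the article), installing and starting a web server might look like this:

    ---
    - hosts: webservers          # inventory group, defined elsewhere
      become: yes                # escalate privileges on the target nodes
      tasks:
        - name: Install nginx
          apt:
            name: nginx
            state: present
        - name: Ensure nginx is running
          service:
            name: nginx
            state: started

Running it with the ansible-playbook command pushes the relevant modules to each host over SSH and removes them when the run completes.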

You can make module creation and playbook delivery part of the continuous delivery process, using build tools to supply configurations and automate deployment. Ansible can pull in information from cloud service providers to simplify management of virtual machines and networks. Monitoring tools can also trigger additional Ansible deployments automatically, helping manage and scale cloud services along with the resources used by large data platforms such as Hadoop.

Jenkins

Continuous delivery requires not only an orderly approach to development but also tools for managing testing and builds. That is where the Jenkins continuous integration server comes in. Jenkins works with the source control, test tools, and build servers of your choice. It is a flexible tool, originally designed to work with Java, that has since expanded to support web and mobile development and can even build Windows applications.

Perhaps the best way to think of Jenkins is as a switching network: it moves files through the test and build process and responds to signals from the various tools you use, thanks to a library of more than 1,000 plugins. These include integrations with local Git instances and GitHub, so you can extend the continuous development model into your build and delivery processes.

Using an automation tool like Jenkins is not just about pursuing an idea; it is about putting the build process into practice. Once you commit to continuous integration as part of a continuous delivery model, tests and builds run as soon as code lands in a version control branch, and once code reaches the main branch, it is delivered to users.

Node.js and io.js

Modern cloud applications are built with design patterns that differ from the n-tier enterprise and web applications we are familiar with. They are distributed, event-driven collections of services that can scale quickly and support thousands of concurrent users. Node.js is a key technology in this new paradigm; it is used by many major cloud platforms and is easy to install as part of a thin server or container on cloud infrastructure.

A key to Node.js's success is the npm package format, which lets you quickly install extensions to the core Node.js service. These include frameworks such as Express and Seneca, which help you build scalable applications. A central registry handles package distribution, and dependencies are installed automatically.
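
For illustration, an application declares its dependencies in a package.json manifest (the name and version ranges below are hypothetical), and npm install pulls them from the central registry along with their own dependencies:

    {
      "name": "sample-service",
      "version": "1.0.0",
      "description": "Illustrative Node.js service manifest",
      "scripts": {
        "start": "node server.js"
      },
      "dependencies": {
        "express": "^4.13.0",
        "seneca": "^0.6.0"
      }
    }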

Although the io.js fork exposed project management problems, it also let a group of developers aggressively add ECMAScript 6 support to an npm-compatible engine. After the two teams reconciled, the Node.js and io.js codebases were merged, and new releases now come from the io.js code base.

Over the next year, other forks may be folded back into the main branch as well; Microsoft's io.js fork, for example, adds support for its 64-bit Chakra JavaScript engine alongside Google's V8, allowing the Node.js platform to keep evolving and to solidify its position as the preferred host for cloud-scale microservices.

Seneca

The developers of the Seneca microservices framework have a motto: "Build it now, scale it later." It is an apt maxim for anyone who wants to develop microservices, because it lets you start small and gradually add functionality as a service grows.

Essentially, Seneca implements an actor/message design pattern, using Node.js as a switching engine: a message comes in, its content is processed, and the appropriate response is sent, either back to the message's originator or on to another service. By focusing on message patterns that map to business use cases, Seneca makes it easier to quickly build a minimal viable product for your application. A plug-in architecture makes it easy to integrate Seneca with other tools and to quickly add functionality to a service.
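
As a rough illustration of that idea (the property names and values are hypothetical), a Seneca message is simply a JSON object, and services register the property patterns they want to handle:

    {
      "role": "order",
      "cmd": "create",
      "customer": "c-1001",
      "items": [
        { "sku": "widget", "qty": 2 }
      ]
    }

A service that registered interest in the pattern role:order,cmd:create would receive any message like this one, and Seneca would route the response onward.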

As your application's requirements grow or change, you can quickly add new patterns to the codebase or split existing patterns out into separate services. One pattern can also invoke another, making code reuse quick. It is also easy to put Seneca on a message bus and use it as a framework for processing data from IoT devices, since all you have to do is define listening ports and watch for incoming JSON data.

Services may need persistence, and Seneca gives you that option as well: a built-in object-relational mapping layer handles data abstraction, with plug-ins for common databases.

.NET Core and ASP.NET vNext

Microsoft has open-sourced .NET, which amounts to opening up most of the company's web platform code. The new .NET Core version can run on Windows, OS X, and Linux. .NET Core is currently migrating from Microsoft's CodePlex repository to GitHub, and it offers a more modular approach to .NET, letting you install only the functionality you need, when you need it.

ASP.NET 5, the open source version of the web platform, is currently in development and runs on .NET Core. You can use Microsoft's MVC 6 framework as the basis for web applications, and ASP.NET 5 also supports the new SignalR library, which adds support for WebSockets and other real-time communication protocols.

If you plan to use Microsoft's new Nano Server, you will write code against .NET Core, because it is designed for thin environments. The new DNX, the .NET Execution environment, simplifies deploying ASP.NET applications to many platforms and includes tools for packaging code and bootstrapping the runtime on a host. Features are added with the NuGet package manager, so you use only the libraries you need.
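
For illustration, a DNX-era project declares its dependencies and target frameworks in a project.json file; the package names and versions below are only indicative of the beta releases of the time and will differ from build to build:

    {
      "dependencies": {
        "Microsoft.AspNet.Mvc": "6.0.0-beta7",
        "Microsoft.AspNet.Server.Kestrel": "1.0.0-beta7"
      },
      "frameworks": {
        "dnx451": { },
        "dnxcore50": { }
      }
    }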

Microsoft's open source .NET is still young, but the company has pledged to ensure its success. Microsoft's own next-generation server operating system supports open source .NET, which means it has a place in both the data center and the cloud.

GlusterFS

GlusterFS is a distributed file system that aggregates storage servers into one large, parallel network file system. You can even use it in place of HDFS in a Hadoop cluster, or in place of a costly storage area network (SAN) system, or both. While HDFS is fine for Hadoop, a general-purpose distributed file system means you do not have to move data somewhere else in order to analyze it.

In an era of commodity hardware, commodity computing, and ever higher demands on performance and latency, buying a big, expensive EMC SAN and hoping it meets all of your requirements (it will not) is no longer your only viable option. GlusterFS was acquired by Red Hat in 2011.
