GlusterFS and Ceph

Discover GlusterFS and Ceph: articles, news, trends, analysis, and practical advice about GlusterFS and Ceph on alibabacloud.com.

Using GlusterFS as a Kubernetes PersistentVolume/PersistentVolumeClaim Store: Highly Available RabbitMQ, MySQL, and Redis

How to cluster GlusterFS is covered everywhere online; a quick search turns up plenty of material. This setup can be used to make single-node workloads highly available, because even if a node goes down, the Kubernetes master will resurrect the workload on another node. Of course, this runs in my own environment: going through GlusterFS over the network adds some performance loss for data transmission, and the network requirements are...
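A minimal sketch of the static PV/PVC approach mentioned above, assuming a pre-existing GlusterFS volume named gv0 and illustrative peer IPs and sizes (all names are hypothetical):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: glusterfs-cluster
    subsets:
    - addresses:
      - ip: 192.168.1.11
      - ip: 192.168.1.12
      ports:
      - port: 1
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: gluster-pv
    spec:
      capacity:
        storage: 10Gi
      accessModes: ["ReadWriteMany"]
      glusterfs:
        endpoints: glusterfs-cluster
        path: gv0
        readOnly: false
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: gluster-pvc
    spec:
      accessModes: ["ReadWriteMany"]
      resources:
        requests:
          storage: 10Gi
    EOF

A Deployment can then mount gluster-pvc as an ordinary volume, so the pod keeps its data when it is rescheduled to another node.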

Ceph Installation on CentOS (Installing RPM Packages and Their Dependencies)

CentOS is the community edition of Red Hat Enterprise Linux and fully supports RPM-based installation. This Ceph installation uses RPM packages. Although the RPM packages themselves are easy to install, they pull in many dependencies; using the yum tool is more convenient, but it is heavily affected by network bandwidth. In practice, if bandwidth is poor, downloading and installing takes a very long time, which is unacceptable...
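As a hedged illustration of the trade-off described above (package file names are illustrative), installing pre-downloaded RPMs directly versus letting yum resolve the dependency chain over the network:

    rpm -ivh librados2-*.rpm librbd1-*.rpm ceph-common-*.rpm ceph-*.rpm   # fails unless every dependency RPM is already on hand
    yum -y install ceph                                                   # resolves dependencies automatically, but needs bandwidth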

CentOS 7: Installing Ceph

1. Installation environment
The Admin node connects to:
    node1 (MON, OSD): sda is the system disk; sdb and sdc are OSD disks
    node2 (MON, OSD): sda is the system disk; sdb and sdc are OSD disks
    node3 (MON, OSD): sda is the system disk; sdb and sdc are OSD disks
    Client
By default Ceph monitors communicate on port 6789, and OSDs communicate with each other on ports in the range 6800-7300.
2. Preparation (all nodes)
2.1 Modify the...
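A hedged sketch of the per-node network preparation implied by the port description above, assuming CentOS 7 with firewalld (commands are illustrative):

    # open the default monitor and OSD ports on every node
    firewall-cmd --zone=public --add-port=6789/tcp --permanent
    firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
    firewall-cmd --reload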

Ceph Multi-MON and MDS (Distributed File System)

1. Check the current status.
2. Add a MON (mon.node2): ssh to node2 at 172.10.2.172, then:
    vim /etc/ceph/ceph.conf                  # add the mon.node2-related configuration
    ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
    monmaptool --create --add node1 172.10.2.172 --fsid ...
    mkdir -p /var/lib/ceph/mon/...

Ceph deadlock failure under high IO

On a high-performance PC server, Ceph is used for VM image storage. Under stress testing, all virtual machines on the server became inaccessible. Cause: 1. A website service is installed on the virtual machine, and Redis is used as the cache server for that service. When the load is high (around 8,000 accesses per second), all the VMs on the host...

OSD Error after ceph reboot

Here you will hit an error, because the Jewel release of Ceph requires the journal to be owned by ceph:ceph. The error looks like this:
    journalctl -xeu ceph-osd@9.service
    ... k8s-master ceph-osd[2848]: starting osd.9 at :/0 osd_data /var/lib/ceph/osd/ceph-9 /var/lib/ce...
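A hedged sketch of the usual fix, assuming the OSD data directory shown above (OSD id 9; the journal device path is illustrative):

    chown -R ceph:ceph /var/lib/ceph/osd/ceph-9   # give the OSD data directory (and journal file) to ceph:ceph
    chown ceph:ceph /dev/sdb2                      # only if the journal sits on a raw partition
    systemctl restart ceph-osd@9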

How to Choose a GlusterFS Version (2016-07-05 edition)

Gluster 3.7.12 still has major bugs to fix; judging from the mailing-list traffic, the 3.7 series has reported many problems that are still being repaired. By comparison, the 3.6 series is relatively stable: no major bugs have been found recently and no new updates have been released, which suggests it is very stable. The last 3.6 update was in March 2016, while the 3.7 series has been updated three times since March, reaching the current 3.7.12; the community is also...

Causes of File Deletion exceptions in the glusterfs file system and Solutions

Deleting files from a GlusterFS shared directory raises an exception: "Transport endpoint is not connected". Cause: data inconsistency between the GlusterFS storage servers leads to communication problems between them; a volume rebalance failed; a volume sync is needed to ensure data consistency between storage locations. Solution: 1. Check the network configuration of each storage server to ensure normal network communication; 2. Isolate...
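A hedged sketch of the usual first checks for this symptom, assuming a hypothetical volume named gv0:

    gluster peer status                    # confirm every storage server can see its peers
    gluster volume status gv0              # check brick processes and ports
    gluster volume rebalance gv0 start     # re-run a rebalance that previously failed
    gluster volume heal gv0 info           # list entries pending self-heal on replicated volumes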

Cinder: Multiple GlusterFS Volume Backends

Cinder supports a variety of storage backends, which really fits users' needs. Recently our company's internal OpenStack platform added both SATA and SSD GlusterFS volumes, so I looked into configuring multiple Cinder backends. The main work is in cinder.conf. The cinder.conf configuration is as follows: [DEFAULT] enabled_backends = glusterfs1,glusterfs2  # enable two glusterfs volume backends...
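A hedged sketch of how the two backend sections referenced above might look, appended to cinder.conf (the share-list file paths and backend names are illustrative):

    cat >> /etc/cinder/cinder.conf <<'EOF'
    [glusterfs1]
    volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config = /etc/cinder/glusterfs_shares_sata
    volume_backend_name = GLUSTERFS_SATA

    [glusterfs2]
    volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config = /etc/cinder/glusterfs_shares_ssd
    volume_backend_name = GLUSTERFS_SSD
    EOF
    systemctl restart openstack-cinder-volume   # pick up the new backends

Volume types can then be mapped to each backend via volume_backend_name, so users can choose SATA or SSD at volume-creation time.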

Ceph: Multiple MONs and Multiple MDS

1. Check the current status.
2. Add another MON (mon.node2): ssh to node2 at 172.10.2.172, then:
    vim /etc/ceph/ceph.conf                  # add the mon.node2-related configuration
    ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
    monmaptool --create --add node1 172.10.2.172 --fsid e3de7f45-c883-4e2c-a26b-210001a7f3c2 /tmp/monmap
    mkdir -p /var/lib/ceph/mon/...
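A hedged sketch of how the sequence above usually continues, using the monmap and keyring just prepared (ids and unit names are illustrative):

    ceph-mon --mkfs -i node2 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
    systemctl start ceph-mon@node2
    ceph -s                                  # verify that mon.node2 has joined the quorum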

GlusterFS min-free-disk Option: Function Description

Tags: gluster, DHT, min-free-disk, space threshold configuration. In scenarios that use GlusterFS distributed volumes (distribute) or distribute-based composite volumes, many people worry about some bricks filling up while other bricks stay relatively empty, and what happens then. GlusterFS provides a min-free-disk option that lets you configure a threshold for the remaining free space. When the remaining space of a brick is less than...
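A hedged example of setting that threshold, assuming a hypothetical volume named gv0:

    gluster volume set gv0 cluster.min-free-disk 10%    # steer new file placement away from bricks with less than 10% free
    gluster volume get gv0 cluster.min-free-disk        # confirm the current value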

GlusterFS-3.7.18 Release Notes

GlusterFS 3.7.18 fixed 13 bugs on top of 3.7.17, but a number of known issues were also found, and 8 known issues still remain to be processed. Judging from the known issues, even the latest 3.7.18 still has some serious problems to address, such as the inode memory leak under FUSE mounts and a possible VSZ memory leak in the glusterfsd brick process. These are very serious, and online deployments still carry major hidden risks.

[Reprinted] GlusterFS Six Volume Models Explained

This article is reprinted from the blog "soaring water drop": "GlusterFS six volume models explained", a description of the six GlusterFS volume types. First, distributed volumes: files are randomly distributed across the bricks in the volume. Use distributed volumes where you need to scale storage and redundancy is either not important or is provided by other hardware/software layers. (Introduction: distributed volumes; files are randomly distributed...
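A hedged example of creating the distributed volume type described first, with illustrative host names and brick paths:

    gluster volume create dist-vol server1:/data/brick1 server2:/data/brick1   # files are spread across the two bricks
    gluster volume start dist-vol
    gluster volume info dist-vol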

Some Ideas About Ceph Tiering

The Ceph experimental environment has been used within the company for a while. Using the block devices provided by RBD to create virtual machines and to allocate block storage to them has been stable. However, most of the current configuration still uses Ceph defaults, except that the journal has been separated out and written to its own partition. Later, we plan to use...
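As a hedged illustration of the tiering idea in the title (not necessarily the author's exact plan), a cache tier is usually layered over a backing pool roughly like this, with hypothetical pool names:

    ceph osd tier add cold-pool hot-pool                 # attach the SSD pool as a tier of the backing pool
    ceph osd tier cache-mode hot-pool writeback          # serve writes from the cache tier
    ceph osd tier set-overlay cold-pool hot-pool         # route client I/O through the cache tier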

CentOS 7.1: Manual Installation of Ceph

1. Prepare the environment: one CentOS 7.1 host; update the yum sources:
    [root@cgsl ]# yum -y update
2. Install the release key and add it to your system's trusted key list to eliminate security warnings:
    [root@cgsl ]# sudo rpm --import 'https://download.ceph.com/keys/release.asc'
3. To obtain the RPM binary packages, add a Ceph repository under /etc/yum.repos.d/...
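A hedged sketch of step 3, with an illustrative release path on download.ceph.com (adjust the baseurl to the release you actually want):

    cat > /etc/yum.repos.d/ceph.repo <<'EOF'
    [ceph]
    name=Ceph packages for x86_64
    baseurl=https://download.ceph.com/rpm/el7/x86_64/
    enabled=1
    gpgcheck=1
    gpgkey=https://download.ceph.com/keys/release.asc
    EOF
    yum -y install ceph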

Ceph RPMs for RHEL 6

    ceph-0.86-0.el6.x86_64.rpm             09-Oct-2014 10:00   13M
    ceph-0.87-0.el6.x86_64.rpm             29-Oct-2014 13:38   13M
    ceph-common-0.86-0.el6.x86_64.rpm      09-Oct-2014 10:00   5.4M
    ceph-common-0.87-0.el6.x86_64.rpm      29-Oct-2014 13:38   5.4M

Combining Nova and Ceph

First, combining Nova with Ceph.
1. Create a storage pool in Ceph:
    # ceph osd pool create vms 128      # create a pool named vms with 128 placement groups
    pool 'vms' created
    # ceph osd lspools                  # check the pools that have been created
    0 rbd, 1 images, 2 vms
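A hedged sketch of the steps that usually follow, assuming a hypothetical client.nova cephx user and the pools listed above; the nova.conf keys are shown for orientation only:

    ceph auth get-or-create client.nova mon 'allow r' osd 'allow rwx pool=vms, allow rx pool=images'
    # then point Nova's libvirt image backend at the pool, e.g. in nova.conf:
    #   [libvirt]
    #   images_type = rbd
    #   images_rbd_pool = vms
    #   rbd_user = nova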

Overview of OpenStack Ceph

The OpenStack Ceph series is a collection of notes based on the Ceph Cookbook, divided into the following parts:
1. "Ceph Introduction"
2. "Ceph Cluster Operations"
3. "Ceph Block Device Management and OpenStack Configuration"
4. "In-depth Ceph..."

How to Mount Ceph RBD and CephFS in Kubernetes

There are two ways for Kubernetes to mount Ceph RBD. One is the traditional PV/PVC approach: the administrator pre-creates the relevant PV and PVC, and the corresponding Deployment or ReplicationController then mounts the PVC. Since Kubernetes 1.4, there is a more convenient way to create PVs dynamically: the StorageClass. Using a StorageClass, you do not have to create a...
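A hedged sketch of a dynamic-provisioning StorageClass for RBD, applied via kubectl (the monitor address, pool, and secret names are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ceph-rbd
    provisioner: kubernetes.io/rbd
    parameters:
      monitors: 172.10.2.172:6789
      pool: kube
      adminId: admin
      adminSecretName: ceph-admin-secret
      adminSecretNamespace: kube-system
      userId: kube
      userSecretName: ceph-user-secret
    EOF

A PVC that references this StorageClass then gets its PV provisioned automatically, without the administrator creating PVs by hand.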


