glusterfs ceph

Discover GlusterFS and Ceph: articles, news, trends, analysis, and practical advice about GlusterFS and Ceph on alibabacloud.com.

A simple introduction to Ceph distributed storage clusters

When planning a Ceph distributed storage cluster environment, the choice of hardware is very important, because it determines the performance of the whole Ceph cluster. The following summarizes some hardware selection criteria for reference: 1) CPU selection. The Ceph metadata server dynamically redistributes its load and is CPU-sensitive, so the metadata server should have strong processor performance (such …

Detaching Ceph OSD nodes

1. Stop the Ceph OSD process: service ceph stop osd. 2. Rebalance the data in the Ceph cluster. 3. Remove the OSD node once all PGs are active+clean. Ceph cluster status before removal: [[email protected] ~]# ceph osd tree # id weight type name up/down reweight -1 …
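
A minimal sketch of the command sequence commonly used to remove an OSD once rebalancing has finished; the OSD id 3 and the init-script style are placeholders, not taken from the original article:

    ceph osd out 3                 # mark the OSD out so data migrates off it
    service ceph stop osd.3        # stop the OSD daemon on its host
    ceph osd crush remove osd.3    # remove it from the CRUSH map
    ceph auth del osd.3            # delete its authentication key
    ceph osd rm 3                  # finally remove the OSD from the cluster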

Install and deploy Ceph Calamari

According to http://ovirt-china.org/mediawiki/index.php/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2Ceph_Calamari, the original article is as follows: Calamari is a tool for managing and monitoring Ceph clusters, and it provides a REST API. The recommended deployment platform is Ubuntu; this article uses CentOS 6.5. Installation and deployment: obtain the Calamari code: # git clone https://github.com/ceph/calamari.git # g…

Cloud Ceph Classroom: Use Civetweb to build RGW quickly

Transferred from: https://www.ustack.com/blog/civetweb/. Excellent open source projects are changing traditional IT; OpenStack is the loudest name among them and has become the de facto IaaS standard. Ceph is also a great achievement, with its three storage interfaces meeting the diverse needs of the enterprise. UnitedStack has a cloud that combines the benefits of open source projects such as OpenStack and Ceph to b…
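
For context, a minimal ceph.conf sketch of the usual way an RGW instance is pointed at the embedded civetweb frontend; the instance name client.rgw.gateway and the port are placeholders, not taken from the article:

    [client.rgw.gateway]
    rgw frontends = "civetweb port=7480"   # let radosgw serve HTTP itself via civetweb

With civetweb, radosgw no longer needs a separate Apache/FastCGI setup, which is the simplification that made it popular for quickly standing up RGW.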

2. Ceph advanced operations

This section covers the following: adding a monitoring node; adding OSD nodes; removing an OSD node. 1: Adding a monitoring node. Here we reuse the previous environment, so adding a monitoring node is very simple. First get the monitoring node environment ready: change the hosts file and hostname, and update the hosts file on the deploy node. On the deployment node: cd first-ceph/; ceph-deploy new mon2 mon3 // here refers only to w…
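
A minimal sketch of one common way to grow the monitor quorum with ceph-deploy, assuming mon2 and mon3 are already resolvable from the deploy node; exact sub-commands vary between ceph-deploy versions and this is not necessarily the sequence the original post uses:

    cd first-ceph/                       # the cluster's working directory from the excerpt
    ceph-deploy mon create mon2 mon3     # create and start monitor daemons on the new hosts
    ceph -s                              # confirm the new monitors have joined the quorum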

Ceph Storage, umount error

Tag: Ceph storage, umount error. Phenomenon: [[email protected]:~# umount /mnt/ceph-zhangbo; umount: /mnt/ceph-zhangbo: the device is busy (in some cases lsof(8) or fuser(1) can find useful information about the processes that are using the device). Workaround: 1. Following the hint above, use fuser to check what is using the mount point: [[email protected]:~# fuser -m /mnt/…
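
A minimal sketch of the usual way out of a busy umount, reusing the mount point from the excerpt; killing the holding processes is optional and should be done with care:

    fuser -m /mnt/ceph-zhangbo      # list the PIDs that keep the mount point busy
    fuser -km /mnt/ceph-zhangbo     # optionally kill those processes first
    umount /mnt/ceph-zhangbo        # retry the unmount
    umount -l /mnt/ceph-zhangbo     # or detach lazily if it still reports busy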

Ceph Newstore Storage Engine Introduction

As Ceph is used in more and more storage workloads, its performance and tuning strategies have become a topic that users pay close attention to, and one of the key factors affecting performance is the OSD storage engine implementation. The Ceph base component RADOS is a strongly consistent object storage system, and the storage engines supported by its OSD are as follows: the ObjectStore layer encapsul…
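
For orientation, a hypothetical ceph.conf fragment showing how an OSD backend is usually selected; whether NewStore can be enabled this way depends on the release (it was experimental when this article was written), so treat the option names as assumptions:

    [osd]
    # pick the ObjectStore implementation: filestore (the default at the time),
    # keyvaluestore, memstore, or the experimental newstore
    osd objectstore = newstore
    enable experimental unrecoverable data corrupting features = newstore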

Ceph's Crush Map

Editing the crush map: 1. Obtain the crush map; 2. Decompile the crush map; 3. Edit at least one device, bucket, or rule; 4. Recompile the crush map; 5. Re-inject the crush map. Get the crush map: to get the cluster's crush map, execute the command ceph osd getcrushmap -o {compiled-crushmap-filename}. Ceph will output (-o) the crush map to the file you specify; because the crush map is compiled, it must be decompiled before it can be edited. Decompile the crush map: to decompile the crush map, execute the c…
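
The whole edit cycle as a minimal sketch with placeholder file names (the excerpt only shows step 1 explicitly):

    ceph osd getcrushmap -o crushmap.bin         # 1. dump the compiled crush map
    crushtool -d crushmap.bin -o crushmap.txt    # 2. decompile it into editable text
    vi crushmap.txt                              # 3. edit devices, buckets, and rules
    crushtool -c crushmap.txt -o crushmap.new    # 4. recompile the edited map
    ceph osd setcrushmap -i crushmap.new         # 5. inject the new map into the cluster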

Install Ceph on Ubuntu 14.04 Server

ceph1: vi /etc/hosts (on all nodes):
127.0.0.1 localhost
192.168.1.15 ceph1
192.168.1.16 ceph2
192.168.1.17 ceph3
ssh-keygen -q -t rsa -f ~/.ssh/id_rsa -C '' -N ''
vi ~/.ssh/config:
Host ceph2
  Hostname ceph2
  User root
  StrictHostKeyChecking no
Host ceph3
  Hostname ceph3
  User root
  StrictHostKeyChecking no
ssh-copy-id ceph2
ssh-copy-id ceph3
To get the latest ceph-deploy: wget -q -O - 'https://ceph.com/git/?p=ceph.git;a=blob_plai…

Ceph knowledge excerpt (Crush algorithm, PG/PGP)

CRUSH algorithm. 1. The purpose of CRUSH: to distribute data optimally, reorganize data efficiently, flexibly constrain object replica placement, and maximize data safety when hardware fails. 2. The process: in the Ceph architecture, the Ceph client reads and writes RADOS objects stored on the OSDs directly, so Ceph has to go through the mapping (pool, object) → (pool, PG) → OSD set → OSD…
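
The mapping chain can be inspected from the command line; a small sketch with a placeholder pool and object name (the output shape is indicative, not copied from the article):

    ceph osd map rbd myobject
    # prints something like:
    # osdmap e42 pool 'rbd' (0) object 'myobject' -> pg 0.7a1b ... -> up ([2,0,1], p2) acting ([2,0,1], p2)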

Compiling and using ceph-dokan

Compiling and using ceph-dokan. The following was compiled on a Win7 64-bit machine. 1. Download the source code; for compiling, refer to the README.md inside: https://github.com/ketor/ceph-dokan. 2. Download and install TDM-GCC, selecting 32-bit (the default) during installation: https://sourceforge.net/projects/tdm-gcc/files/TDM-GCC%20Installer/tdm-gcc-5.1.0-3.exe/download. 3. Download and install Dokan, select v…

Why GlusterFS fails to mount on CentOS, and how to fix it

Mount command executed: mount target_host:/volume_name current_path. Error messages after executing the mount command: /usr/sbin/start-statd: line 8: systemctl: command not found; mount.nfs: rpc.statd is not running but is required for remote locking; mount.nfs: Either use '-o nolock' to keep locks local, or start statd; mount.nfs: Operation not permitted. In general, using GlusterFS storage requires at least two machines configured to comp…
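
A minimal sketch of the two usual ways around this error, reusing the placeholder names from the excerpt; GlusterFS's built-in NFS server speaks NFSv3, hence the vers=3 option:

    # Option 1: mount over NFS without remote locking, avoiding the rpc.statd dependency
    mount -t nfs -o nolock,vers=3 target_host:/volume_name /mnt/gluster

    # Option 2: use the native GlusterFS FUSE client instead of NFS
    mount -t glusterfs target_host:/volume_name /mnt/gluster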

Common Ceph Operations Commands

1. rbd ls: list the images in Ceph's default pool, rbd.
2. rbd info xxx.img: show detailed information about xxx.img.
3. rbd rm xxx.img: delete xxx.img.
4. rbd cp aaa.img bbb.img: copy the image aaa.img to bbb.img.
5. rbd rename aaa.img bbb.img: rename aaa.img to bbb.img.
6. rbd import aaa.img: import the local aaa.img into the Ceph cluster.
7. rbd export aaa.img: export aaa.img from the Ceph cluster to a l…

Ceph Cache Pool Configuration

0. Introduction. This article describes how to configure cache pool tiering. The role of the cache pool is to provide a scalable cache for Ceph hotspot data, or to be used directly as a high-speed pool. How to create a cache pool: first build a virtual bucket tree out of the SSD disks, then create the cache pool, set its crush mapping rule and related configuration, and finally associate the pool that needs a cache with the cache pool. 1. Build the SSD bucket…
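
The association step usually comes down to a few tier commands; a minimal sketch with placeholder pool names cold-pool (the backing pool) and hot-pool (the SSD cache pool), not taken from the article:

    ceph osd tier add cold-pool hot-pool             # attach hot-pool as a cache tier of cold-pool
    ceph osd tier cache-mode hot-pool writeback      # cache mode: writeback or readonly
    ceph osd tier set-overlay cold-pool hot-pool     # route client I/O through the cache tier
    ceph osd pool set hot-pool hit_set_type bloom    # hit-set tracking needed by the tiering agent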

Record of OSD processes exiting abnormally while a Ceph cluster synchronizes data

Operation: several nodes were added to the Ceph cluster. Anomaly: while the Ceph cluster synchronizes, OSD processes keep going down abnormally (the data does get synchronized after a while). Ceph version: 9.2.1. Log: Jul 25 09:25:57 ceph6 ceph-osd[26051]: 0> 2017-07-25 09:25:57.471502 7f46fe478700 -1 common/HeartbeatMap.cc: In function 'bool ceph::HeartbeatMap::_ch… Jul 25 09:25:5…
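
HeartbeatMap assertions during recovery are often tackled by throttling backfill and recovery so that OSD worker threads stop hitting their timeouts; a hedged ceph.conf sketch of that common mitigation (not necessarily what the original author did):

    [osd]
    osd max backfills = 1           # fewer concurrent backfill operations per OSD
    osd recovery max active = 1     # fewer concurrent recovery ops per OSD
    osd recovery op priority = 1    # deprioritize recovery relative to client I/O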

Installing Ceph on Red Hat

Red Hat 6.2: install and configure Ceph (part 1). 1. Install ceph-deploy. vim /etc/yum.repos.d/ceph.repo:
[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-giant/el6/x86_64
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[…
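
After the repo file is in place, installing the tool itself is typically a single yum command; a trivial sketch for completeness:

    yum install -y ceph-deploy    # pulls ceph-deploy from the ceph.repo defined above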

Monitoring Ceph clusters with Telegraf + InfluxDB + Grafana

Telegraf is a collection/monitoring agent with input plugins for many kinds of data, such as Ceph, Apache, Docker, HAProxy, and system metrics, and it also supports a variety of output plugins such as InfluxDB, Graphite, and so on. InfluxDB is a time-series database and is used for the monitoring scenario. Grafana is a great charting tool. Combining the three involves three main steps: 1. Telegraf is installed on all nodes of the…
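
A minimal telegraf.conf sketch of the Ceph input plugin feeding InfluxDB; the endpoint URL and database name are placeholders, and option names can differ slightly between Telegraf versions:

    [[inputs.ceph]]
      gather_cluster_stats = true           # run cluster-wide commands (needs a mon/admin key)
      socket_dir = "/var/run/ceph"          # where the OSD/MON admin sockets live

    [[outputs.influxdb]]
      urls = ["http://influxdb-host:8086"]  # hypothetical InfluxDB endpoint
      database = "telegraf"                 # database that Grafana will later query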

A Ceph tutorial that does not cover CRUSH is incomplete

As we mentioned earlier, Ceph is a distributed storage service that supports a unified storage architecture. We briefly introduced the basic concepts of Ceph and the components the infrastructure contains, the most important being the underlying RADOS and its two kinds of daemons, OSD and Monitor. We also dug a hole for ourselves in the previous article when we mentioned CRUSH. Yes, our tutorial is incompl…

Deploying a Ceph cluster on Ubuntu 14.04

Note: all operations below are performed on the admin node. 1. Prepare three virtual machines, one as the admin node and the other two as OSD nodes; use the hostname command to change their hostnames to admin, osd0, and osd1 accordingly, and finally modify the /etc/hosts file as shown below:
127.0.0.1 localhost
10.10.102.85 admin
10.10.102.86 osd0
10.10.102.87 osd1
2. Configure password-free access: ssh-keygen // press ENTER to generate a key pair, then ssh-copy-id -i /root/.ssh/id_rsa.pub…
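
A sketch of how such a deployment usually continues from the admin node with ceph-deploy; the OSD data directories are hypothetical and the exact sub-commands depend on the ceph-deploy version in the Ubuntu 14.04 repositories:

    ceph-deploy new admin                          # write ceph.conf with 'admin' as the initial monitor
    ceph-deploy install admin osd0 osd1            # install Ceph packages on all three nodes
    ceph-deploy mon create-initial                 # bootstrap the monitor and gather keys
    ceph-deploy osd prepare osd0:/var/local/osd0 osd1:/var/local/osd1   # hypothetical data dirs
    ceph-deploy osd activate osd0:/var/local/osd0 osd1:/var/local/osd1
    ceph -s                                        # check that the cluster reaches HEALTH_OK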

Ceph intelligent distribution: how CRUSH maps objects to PGs and OSDs

The Ceph CRUSH algorithm (Controlled Replication Under Scalable Hashing) is an algorithm for randomized, controlled data distribution and replication. Basic principle: storage devices typically support striping to increase storage system throughput and improve performance, and the most common way to stripe is RAID, for example RAID0. Data is distributed in stripes across the hard disks in the array, which is how data ends up stored across all the driv…
