ceph gui

Want to know about Ceph GUI? We have a large selection of Ceph GUI related information on alibabacloud.com.

Ceph Storage, umount error

Tags: Ceph storage, umount error. Symptom:
    # umount /mnt/ceph-zhangbo
    umount: /mnt/ceph-zhangbo: device is busy.
            (In some cases useful information about processes that
             use the device is found by lsof(8) or fuser(1))
Workaround: 1. Following the hint above, use fuser to check what is using the mount point: fuser -m /mnt/…
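A minimal sketch of the usual recovery path, assuming the mount point named in the excerpt; fuser's -k switch kills the offending processes, so use it with care:

    fuser -mv /mnt/ceph-zhangbo      # list the processes holding the mount point
    fuser -mk /mnt/ceph-zhangbo      # kill them (or stop the services cleanly instead)
    umount /mnt/ceph-zhangbo         # should now succeed
    # umount -l /mnt/ceph-zhangbo    # last resort: lazy unmount detaches immediately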

Ceph Newstore Storage Engine Introduction

As Ceph is used in more and more storage workloads, its performance and tuning strategy has become a topic that users pay close attention to, and one of the key factors affecting performance is the OSD storage engine implementation. The Ceph base component RADOS is a strongly consistent object storage system, and the storage engines supported by its OSDs are listed below. The ObjectStore layer encapsul…
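A quick, hedged way to see which ObjectStore backend a running OSD uses; osd.0 is a placeholder id and the command must be run on that OSD's host, where its admin socket lives. At the time of the article NewStore was still experimental and was selected via the osd objectstore option in ceph.conf before the OSD is created:

    # Query a local OSD's admin socket for its configured backend
    ceph daemon osd.0 config get osd_objectstore
    # Selecting a backend is done in ceph.conf at OSD creation time, e.g.
    #   [osd]
    #   osd objectstore = newstore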

Ceph's Crush Map

Editing the CRUSH map: 1. get the CRUSH map; 2. decompile the CRUSH map; 3. edit at least one device, bucket, or rule; 4. recompile the CRUSH map; 5. inject the CRUSH map back into the cluster. Getting the CRUSH map: to get the cluster's CRUSH map, execute the command ceph osd getcrushmap -o {compiled-crushmap-filename}. Ceph writes the output (-o) to the file you specify; because the CRUSH map is in compiled form, it must be decompiled before it can be edited. Decompiling the CRUSH map: to decompile the CRUSH map, execute the c…
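The whole cycle described above, sketched with placeholder file names:

    ceph osd getcrushmap -o crushmap.bin         # 1. dump the compiled map
    crushtool -d crushmap.bin -o crushmap.txt    # 2. decompile it to editable text
    vi crushmap.txt                              # 3. edit devices, buckets, rules
    crushtool -c crushmap.txt -o crushmap.new    # 4. recompile
    ceph osd setcrushmap -i crushmap.new         # 5. inject it back into the cluster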

Install CEpH on Ubuntu 14.04 Server

On ceph1, edit /etc/hosts (on all nodes):
    127.0.0.1    localhost
    192.168.1.15 ceph1
    192.168.1.16 ceph2
    192.168.1.17 ceph3
Generate an SSH key and set up password-less login to the other nodes:
    ssh-keygen -q -t rsa -f ~/.ssh/id_rsa -C '' -N ''
    vi ~/.ssh/config
        Host ceph2
            Hostname ceph2
            User root
            StrictHostKeyChecking no
        Host ceph3
            Hostname ceph3
            User root
            StrictHostKeyChecking no
    ssh-copy-id ceph2
    ssh-copy-id ceph3
To get the latest ceph-deploy: wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plai…
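A minimal sketch of what usually follows, assuming ceph-deploy ends up installed on ceph1 and that each storage node has a spare data disk (here /dev/sdb, a placeholder); this is the standard ceph-deploy flow of that era, not necessarily the exact steps of the truncated article:

    ceph-deploy new ceph1 ceph2 ceph3       # write ceph.conf and the initial monitor map
    ceph-deploy install ceph1 ceph2 ceph3   # install Ceph packages on every node
    ceph-deploy mon create-initial          # bootstrap the monitors and gather keys
    ceph-deploy osd prepare ceph2:/dev/sdb ceph3:/dev/sdb
    ceph-deploy osd activate ceph2:/dev/sdb1 ceph3:/dev/sdb1
    ceph-deploy admin ceph1 ceph2 ceph3     # push the admin keyring so ceph -s works everywhere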

Ceph knowledge excerpt (Crush algorithm, PG/PGP)

CRUSH algorithm. 1. Purpose of CRUSH: optimize data distribution, reorganize data efficiently, flexibly constrain the placement of object replicas, and maximize data safety when hardware fails. 2. Process: in the Ceph architecture, the client reads and writes RADOS objects stored on the OSDs directly, so Ceph has to map each request through (pool, object) → (pool, PG) → OSD set → OSD…
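The mapping chain can be observed for a concrete object; the pool and object names below are placeholders, and the output shown in the comment is only approximate:

    ceph osd map rbd my-object          # shows pool id, PG id, and the up/acting OSD set
    #   ... pool 'rbd' (0) object 'my-object' -> pg 0.xxxx -> up [1,4,7] acting [1,4,7]
    ceph osd pool get rbd pg_num        # PG count of the pool
    ceph osd pool get rbd pgp_num       # PGP count used for placement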

Ceph-dokan compiling using

Building and using ceph-dokan. The following was compiled on a 64-bit Windows 7 machine. 1. Download the source code; the build can follow the README.md inside: https://github.com/ketor/ceph-dokan 2. Download and install TDM-GCC, selecting 32-bit (the default) during installation: https://sourceforge.net/projects/tdm-gcc/files/TDM-GCC%20Installer/tdm-gcc-5.1.0-3.exe/download 3. Download and install Dokan, select v…

Common Ceph Operations Commands

1. rbd ls: list the images in Ceph's default pool, rbd. 2. rbd info xxx.img: show detailed information about xxx.img. 3. rbd rm xxx.img: delete xxx.img. 4. rbd cp aaa.img bbb.img: copy image aaa.img to bbb.img. 5. rbd rename aaa.img bbb.img: rename aaa.img to bbb.img. 6. rbd import aaa.img: import the local file aaa.img into the Ceph cluster. 7. rbd export aaa.img: export aaa.img from the Ceph cluster to a l…
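A short round trip with the commands above; image and file names are placeholders, and rbd create (not in the excerpt's list) is added only so there is an image to operate on:

    rbd create test.img --size 1024      # 1 GB image in the default rbd pool
    rbd ls                               # it shows up in the listing
    rbd info test.img
    rbd export test.img /tmp/test.raw    # dump it to a local file
    rbd import /tmp/test.raw test2.img   # re-import it under a new name
    rbd rm test.img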

Ceph Cache Pool Configuration

0. Introduction. This article describes how to configure cache pool tiering. A cache pool provides a scalable cache for Ceph hotspot data, or it can be used directly as a high-speed pool. How to create a cache pool: first build a virtual bucket tree out of the SSD disks, then create the cache pool, set its CRUSH mapping rule and related configuration, and finally associate the cache pool with the pool that needs it. 1. Build the SSD bucket…
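The association step, sketched with placeholder pool names (cold-pool is the existing data pool, hot-pool the SSD-backed cache pool); the sizing values are examples only:

    ceph osd tier add cold-pool hot-pool           # attach the cache pool to the data pool
    ceph osd tier cache-mode hot-pool writeback    # cache mode (writeback or readonly)
    ceph osd tier set-overlay cold-pool hot-pool   # route client traffic through the cache
    ceph osd pool set hot-pool hit_set_type bloom  # required for hit tracking
    ceph osd pool set hot-pool target_max_bytes 107374182400   # ~100 GB cache size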

Ubuntu 14.04 Deployment Ceph Cluster

Note: all of the operations below are performed on the admin node. 1. Prepare three virtual machines, one as the admin node and the other two as OSD nodes; use the hostname command to set the hostnames to admin, osd0 and osd1, and finally modify /etc/hosts as shown below:
    127.0.0.1    localhost
    10.10.102.85 admin
    10.10.102.86 osd0
    10.10.102.87 osd1
2. Configure password-free access:
    ssh-keygen                # press ENTER at each prompt to generate the key pair
    ssh-copy-id -i /root/.ssh/id_rsa.pub …
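Once deployment finishes, a few generic checks (not part of the truncated excerpt) confirm the three-node cluster is healthy:

    ceph -s          # overall health, monitor quorum, PG states
    ceph osd tree    # the OSDs should appear under hosts osd0 and osd1
    ceph df          # pool and raw capacity usage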

ceph-Intelligent Distribution Crush object with PG and OSD

Ceph's CRUSH algorithm (Controlled Replication Under Scalable Hashing) is an algorithm for controlled, pseudo-random data distribution and replication. Basic principle: storage devices generally support striping to raise storage system throughput and improve performance, and the most common way to stripe is RAID, for example RAID 0. Data is laid out in stripes across the disks of the array, which describes how the data ends up stored across all the driv…

Replace the hard drive jumper, the Ceph OSD does not start properly

1. Environment description: Ceph was deployed with Kolla. Because OSD 0 occupied SATA channel 0, the system disk had to be swapped with OSD 0's cabling; after the swap, OSD 0 no longer starts properly. 2. Cause analysis: before the swap, OSD 0's device file was /dev/sda2; after the swap it became /dev/sdc2. At startup the OSD is given --osd-journal /dev/sda2 to specify its journal device, but the journal partition's device name has changed to /dev/s…
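One hedged way out is to reference the journal by a persistent identifier rather than /dev/sdX; the paths below assume a FileStore OSD whose data directory is mounted at /var/lib/ceph/osd/ceph-0 (a Kolla container may mount it elsewhere):

    ls -l /dev/disk/by-partuuid/ | grep sdc2     # find the stable id of the journal partition
    ln -sf /dev/disk/by-partuuid/<partuuid> /var/lib/ceph/osd/ceph-0/journal
    # restart osd.0 afterwards; the symlink now survives device-name changes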

Ceph Automated Automation installation

1. Basic environment: Ubuntu 12.04.5 with OpenSSH installed from the default sources on all nodes; Ceph 0.80.4; ceph-admin is the management and client node, and ceph01, ceph02 and ceph03 are the cluster nodes; gigabit network starting at 192.168.100.11; each cluster node needs 3 hard disks. That is the basic configuration. 2. Deploy the 3-node Ceph environment and install calamari-server with ice,…

Ceph Librados Programmatic access

Introduction. I need direct programmatic access to Ceph's object storage to see the performance difference between going through a gateway and bypassing it. The gateway-based access examples have already been covered; now the test skips the gateway and talks to the Ceph cluster directly with librados. Environment configuration: 1. Ceph cluster: you have a Ceph cluster that i…
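Not librados code itself, but a quick way to exercise the same direct-to-cluster path from the shell: the rados CLI is built on librados, so it is handy for a first comparison against the gateway. Pool and object names are placeholders:

    ceph osd pool create testpool 64              # a scratch pool with 64 PGs
    rados -p testpool put testobj /etc/hosts      # write an object directly, no gateway involved
    rados -p testpool get testobj /tmp/out        # read it back
    rados bench -p testpool 10 write --no-cleanup # 10-second write benchmark
    rados bench -p testpool 10 seq                # sequential read benchmark on the same objects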

Ceph adds OSD process

Suppose an OSD needs to be added on a new host, hostname osd4, IP 192.168.0.110. 1. Create the mount directory and the configuration directory on osd4:
    ssh 192.168.0.110        # from the mon host to the osd4 host
    mkdir /ceph/osd.4
    mkdir /etc/ceph
2. On osd4, format the sda3 partition as ext4 and mount it:
    mkfs.ext4 /dev/sda3
    mount -o user_xattr /dev/sda3 /ceph/osd.4
3. Copy the mon host's id_dsa.pub to the osd4 host for passwo…
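The steps that typically follow (not shown in the truncated excerpt) register and start the new OSD; this is the old manual, sysvinit-style procedure and assumes ceph.conf points osd data at /ceph/osd.$id, so treat it as a sketch rather than the article's exact continuation:

    ceph osd create                          # allocates the next free OSD id (4 here)
    ceph-osd -i 4 --mkfs --mkkey             # initialize the data directory and generate a key
    ceph auth add osd.4 osd 'allow *' mon 'allow rwx' -i /ceph/osd.4/keyring
    ceph osd crush add osd.4 1.0 host=osd4   # weight 1.0 is a placeholder
    service ceph start osd.4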

[Analysis] Ceph programming instance interface Librbd (C++): image creation and data read/write

Currently there are two ways to use Ceph block storage: use QEMU/KVM to interact with Ceph block devices through librbd, which mainly provides block storage devices to virtual machines, as shown in the figure; or use the kernel module to interact with the host…
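The second path (the kernel rbd module) can be sketched from the shell; names and sizes are placeholders, and older kernels may require a format-1 image or disabling newer image features before mapping:

    rbd create vm-disk --size 4096    # 4 GB image in the default pool
    rbd map vm-disk                   # the krbd module exposes it, typically as /dev/rbd0
    mkfs.ext4 /dev/rbd0
    mount /dev/rbd0 /mnt
    echo hello > /mnt/test.txt        # ordinary reads and writes go through the block device
    umount /mnt && rbd unmap /dev/rbd0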

Ceph client cannot connect to cluster problem resolution

1. Problem description: after putting an iptables policy in place today and restarting one of the machines in the cluster, entering ceph -s produces the following:
    # ceph -s
    2015-09-10 13:50:57.688516 7f6a6b8cc700  0 monclient(hunting): authenticate timed out after 300
    2015-09-10 13:50:57.688553 7f6a6b8cc700  0 librados: client.admin authentication error (…) Connection timed ou…
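If the firewall is the culprit, the Ceph daemons' ports have to be reopened on the restarted machine; 6789 (monitor) and 6800-7300 (OSD/MDS daemons) are Ceph's defaults, and the exact iptables policy is environment specific, so treat this as a sketch:

    iptables -A INPUT -p tcp --dport 6789 -j ACCEPT         # monitor
    iptables -A INPUT -p tcp --dport 6800:7300 -j ACCEPT    # OSD / MDS daemons
    service iptables save                                   # persist on RHEL/CentOS-style hosts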

Ceph Introduction of RBD Implementation principle __ceph

RBD is the block device service provided by Ceph, and this article briefly introduces its implementation principle. The official Ceph documentation tells us that Ceph is essentially an object store, and it follows that Ceph block storage is actually assembled by the client out of a number of objects. In other words, for…
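The object layout behind an image can be inspected directly; the image name is a placeholder, and the object prefix differs between image formats (rbd_data.* for format 2, rb.0.* for format 1):

    rbd info test.img                  # note the block_name_prefix field in the output
    rados -p rbd ls | grep rbd_data    # the individual RADOS objects that back the image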

Ceph Translations Rados:a Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters

…error detection and error recovery would put great pressure on the client, the controller and the metadata directory nodes, and would limit scalability. We have designed and implemented RADOS, a reliable, autonomic distributed object store that seeks to push device intelligence into complex clusters of thousands of nodes, handling consistent data access, redundant storage, error detection and error recovery. As part of the Ceph distributed syste…

Ceph Cache Tier

A cache tier is a Ceph server-side caching scheme: a cache layer is simply added in front, clients talk to the cache layer directly, which improves access speed, and a storage layer at the back end actually holds the bulk of the data. The premise of tiered storage is that access to stored data is skewed rather than uniform. A common rule of thumb is the 80/20 principle, i.e. 80% of an application's accesses touch only 20% of the data; that 20…

Ceph Basic Operations Finishing

I. Ceph drive replacement process:
1. Delete the OSD:
    a. stop the OSD daemon:                stop ceph-osd id=x
    b. mark the OSD out:                   ceph osd out osd.x
    c. remove the OSD from the CRUSH map:  ceph osd crush remove osd.x
    d. delete the Ceph authentication key: ceph auth del osd.x
    e. remove the OSD from the cluster:    ceph osd rm osd.x
2. Add an OSD (warning: add after deletion, the OSD…
