K8s cluster: install the Ceph client on each of the K8s nodes above (ceph-deploy install <k8s-node-ip>). Create a dedicated K8s user: ceph auth add client.k8s mon 'allow rwx' osd 'allow rwx'. Then export the new user's keyring with ceph auth get client.k8s -o /etc/ceph/ceph.client.k8s.keyring and place the exported key under /etc/ceph on each node.
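Cleaned up, the steps above might look like the following sketch; the k8s node address is a placeholder, and the user name and caps are as stated in the text:

```
# on the admin node: install the Ceph client on a k8s node
ceph-deploy install <k8s-node-ip>

# create a dedicated user for k8s with rwx caps on mon and osd
ceph auth add client.k8s mon 'allow rwx' osd 'allow rwx'

# export the new user's keyring and place it under /etc/ceph
ceph auth get client.k8s -o /etc/ceph/ceph.client.k8s.keyring
```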
In this mode the client writes directly to the back-end cold-data pool, while on reads Ceph promotes the data from the back end into the cache tier. This model is suitable for immutable data, such as photos and videos on Weibo, DNA data, X-ray images, and so on. The CRUSH algorithm is the core of a Ceph cluster; on the basis of a deep understanding of CRUSH, one can take advantage of the SSD's high performance
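A hedged sketch of how this read-cache pattern maps onto Ceph's cache tiering in read-only mode (the pool names cold-pool and cache-pool are placeholders):

```
# attach a cache pool in front of the cold data pool
ceph osd tier add cold-pool cache-pool

# readonly mode: clients write straight to the backing pool,
# while reads are served (and promoted) via the cache tier
ceph osd tier cache-mode cache-pool readonly
```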
Device exceptions in the cluster (the addition or removal of abnormal OSDs) can leave the replicas of a PG inconsistent, and data recovery is required to bring all replicas back to a consistent state. First, OSD faults and how they are handled: 1. Types of OSD faults: Fault A: a normal OSD stops working because of a device exception, so that after the configured timeout the OSD will be marked out.
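A few standard commands for inspecting and handling a failed OSD (the OSD id 9 and the PG id are placeholders):

```
ceph osd tree            # locate OSDs that are down
ceph health detail       # see which PGs are degraded or inconsistent
ceph osd out osd.9       # mark the failed OSD out so data rebalances
ceph pg repair <pgid>    # ask Ceph to repair an inconsistent PG
```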
attach/mount for pod "index-api-3362878852-pzxm8"/"default". List of unattached/unmounted volumes=[index-api-pv]; skipping pod
E0216 13:59:32.696223    1159 pod_workers.go:183] Error syncing pod 7e6c415a-f40c-11e6-ad11-00163e1625a9, skipping: timeout expired waiting for volumes to attach/mount for pod "index-api-3362878852-pzxm8"/"default". List of unattached/unmounted volumes=[index-api-pv] ...
From the kubelet log we can see that the index-api pod on node 10.46.181.146 is unable to mount its volume.
The use case is simple: I want to use both SSD disks and SATA disks within the same machine and ultimately create pools pointing to either the SSD or the SATA disks. In order to achieve this goal, we need to modify the CRUSH map. My example has 2 SATA disks and 2 SSD disks on each host, and I have 3 hosts in total.
To illustrate the strategy, refer to the following diagram (figure not reproduced in this excerpt):
I. CRUSH Map
CRUSH is very flexible and topology-aware, which is extremely useful in our scenario. We are about to create two different roots: one for the SSD disks and one for the SATA disks.
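A hedged sketch of what the SSD half of the modified CRUSH map could look like, using the old-style decompiled CRUSH map syntax (bucket names, ids, and weights are illustrative; the SATA root and rule are analogous):

```
root ssd {
        id -10                      # illustrative bucket id
        alg straw
        hash 0                      # rjenkins1
        item host1-ssd weight 2.000
        item host2-ssd weight 2.000
        item host3-ssd weight 2.000
}

rule ssd {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}
```

A pool is then pointed at the rule with `ceph osd pool set <pool> crush_ruleset 1` (pre-Luminous syntax).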
Leader election: In Ceph, leader election is a Paxos Lease process, whose purpose differs from Basic Paxos. The latter is used to resolve data-consistency issues, while Paxos Lease elects a Leader that takes on monmap-synchronization tasks and is responsible for selecting a new Leader after the current one goes offline. A Ceph cluster has only one Monitor acting as Leader, namely the one with the lowest rank among all current Monitors. The election process produces a Leader and Peons.
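The "lowest rank wins" rule can be illustrated with a toy shell snippet (the monitor names and ranks here are made up for demonstration, not Ceph output):

```shell
# each line: monitor name and its rank; the leader is the monitor
# with the numerically lowest rank
printf '%s\n' 'mon.c 2' 'mon.a 0' 'mon.b 1' \
  | sort -k2 -n \
  | head -n1 \
  | cut -d' ' -f1
# prints: mon.a
```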
Ceph installation on CentOS (rpm package and dependency installation)
CentOS is the community version of Red Hat Enterprise Linux and fully supports rpm installation. This Ceph installation uses rpm packages. However, although rpm packages are easy to install, they pull in many dependencies; the yum tool is convenient for installation but is heavily affected by network bandwidth.
Enter file in which to save the key (/home/cephuser/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
[email protected]:~$ ssh-copy-id vsm-node1
[email protected]:~$ ssh-copy-id vsm-node2
[email protected]:~$ ssh-copy-id vsm-node3
9. After completion, execute ./install.sh -u root -v 2.2. During installation, the dependency packages are downloaded on the controller node and then copied to the agent nodes for installation.
In the previous article, we introduced the use of ceph-deploy to deploy a Ceph cluster. Next we will briefly introduce Ceph operations.
Block device usage (RBD). A. Create a user ID and a keyring: ceph auth get-or-create client.node01 osd 'allow *' mon 'allow *' > node01.keyring. B. Copy the keyring to the node
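The remaining steps typically look like the following sketch (the image name, size, target node, and device path are illustrative):

```
# B. copy the keyring to the client node
scp node01.keyring node01:/etc/ceph/

# C. create and map an RBD image from the client
rbd create test-img --size 1024 --pool rbd       # 1024 MB image
rbd map test-img --pool rbd --id node01 --keyring /etc/ceph/node01.keyring
mkfs.xfs /dev/rbd0 && mount /dev/rbd0 /mnt
```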
CentOS is the community version of Red Hat Enterprise Linux and fully supports rpm installation. This Ceph installation uses rpm packages. However, although rpm packages are easy to install, they pull in many dependencies; the yum tool is convenient for installation but is heavily affected by network bandwidth. In practice, if the bandwidth is poor, downloading and installing the tools takes a long time, which is unacceptable
KeyValueStore is another storage engine supported by Ceph (the first is FileStore). It originated in the Emperor cycle from the "Add LevelDB support to Ceph cluster backend store" design summit, where I proposed and implemented the prototype system, and it was integrated with ObjectStore in the Firefly version. It has now been merged into Ceph's master branch. KeyValueStore is a
Here you will hit an error, because the Jewel version of Ceph requires that the journal be owned by ceph:ceph. The error is as follows:
journalctl -xeu ceph-osd@9.service
10月 09 09:54:05 k8s-master ceph-osd[2848]: starting osd.9 at :/0 osd_data /var/lib/ceph/osd/ceph-9 /var/lib/ce
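The usual fix is to hand ownership of the OSD data directory and the journal device to ceph:ceph and restart the OSD (a sketch; the OSD id 9 comes from the log above, the journal partition is a placeholder):

```
chown -R ceph:ceph /var/lib/ceph/osd/ceph-9
chown ceph:ceph /dev/<journal-partition>   # the journal's block device
systemctl restart ceph-osd@9.service
```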
Using the ceph osd tree command to inspect the Ceph cluster, you will find two values: weight and reweight.
The weight value corresponds to disk capacity: in general, a 1 TB disk has a value of 1.000 and a 500 GB disk has 0.500.
It is tied to the disk's total capacity and does not change as the disk's available space decreases.
It can be set with the following command
Ceph
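Following the 1.000-per-TB convention above, the weight for a given disk size can be computed with a toy helper (this is not a Ceph command, just arithmetic):

```shell
# CRUSH weight convention: 1.000 per TB, so weight = size_in_GB / 1000
crush_weight() {
  awk -v gb="$1" 'BEGIN { printf "%.3f\n", gb / 1000 }'
}

crush_weight 1000   # 1 TB disk   -> prints 1.000
crush_weight 500    # 500 GB disk -> prints 0.500
```

The computed value is then applied with `ceph osd crush reweight osd.<id> <weight>`; the separate reweight column (a 0-1 override) is adjusted with `ceph osd reweight <id> <value>`.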
The Ceph test environment has been used within the company for a period of time. Using the block devices provided by RBD to create virtual machines and to allocate block storage to virtual machines has been stable. However, most of the current environment configuration uses Ceph's defaults, except that the journal is separated out and written to a dedicated partition. Later, we plan to use
CentOS 7.1 manual installation of Ceph
1. Prepare the environment: a single CentOS 7.1 host; update the yum source.
[root@cgsl ]# yum -y update
2. Install the key and add it to your system's trusted key list to eliminate security warnings.
[root@cgsl ]# sudo rpm --import 'https://download.ceph.com/keys/release.asc'
3. To obtain the rpm binary packages, you need to add a Ceph repository in the /etc/yum.repos.d/ directory.
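A typical repo file might look like the following sketch (the baseurl must match your Ceph release and distro; the gpgkey URL matches the key imported in step 2):

```
[ceph]
name=Ceph packages for x86_64
baseurl=https://download.ceph.com/rpm/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
```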
Red Hat's Ceph and Inktank code repositories were hacked
Red Hat reports that the Ceph community project site and the Inktank download site were hacked last week, and some code may have been compromised.
Last week, Red Hat suffered a very unpleasant incident: both the Ceph community website and the Inktank download website were hacked. The former is the open-source