The Manual to Deploy Storm/Hurricane


OS: Ubuntu 14.04

1. Install dependencies and Hurricane on the nodes of the cluster.

Dependencies:

$ sudo apt-get install binutils libaio1 libboost-system1.54.0 libboost-thread1.54.0 libcrypto++9 libgoogle-perftools4 libjs-jquery libleveldb1 libreadline5 libsnappy1 libtcmalloc-minimal4 libunwind8 python-blinker python-flask python-itsdangerous python-jinja2 python-markupsafe python-pyinotify python-werkzeug xfsprogs libfcgi0ldbl gdebi-core python3-chardet python3-debian python3-six gdisk cryptsetup-bin cryptsetup syslinux liblz4-dev libevent1-dev libsnappy-dev libaio-dev python-setuptools python-boto

The debs of Hurricane:
Copy the debs from a server that has them, for example:
<user>@<server>:~/sndk-ifos-2.0.0.07/
Note:
The download location of this release could not be found on the network or in the release notes.

Install all debs under main/x86_64/ceph/:
$ sudo dpkg -i *.deb
Install radosgw*.deb under main/x86_64/client/:
$ sudo dpkg -i radosgw*.deb
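As a quick sanity check, the installed versions can be verified:
$ ceph --version
$ radosgw --version    # only if the client debs were installed on this node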

2. Use ceph-deploy to deploy the cluster
a) Create the Ceph monitor on the monitor server (rack3-client-6):
$ mkdir ceph-admin && cd ceph-admin
$ ceph-deploy new rack3-client-6
$ ceph-deploy mon create rack3-client-6
$ ceph-deploy gatherkeys rack3-client-6
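Optionally, the new monitor can be verified at this point:
$ sudo ceph mon stat    # should show rack3-client-6 in quorum
$ sudo ceph -s          # overall status (no OSDs exist yet at this stage)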
b) Enable the ZS backend:
Install zs-shim; it needs to be copied from a server that has it, for example:
<user>@<server>:~/sndk-ifos-2.0.0.06/sndk-ifos-2.0.0.06/shim/zs-shim_1.0.0_amd64.deb
Then install this deb:
$ sudo dpkg -i zs-shim_1.0.0_amd64.deb
Next, add the following configuration to the OSD section of ceph.conf:
osd_objectstore = keyvaluestore
enable_experimental_unrecoverable_data_corrupting_features = keyvaluestore
filestore_omap_backend = propdb
keyvaluestore_backend = propdb
keyvaluestore_default_strip_size = 65536
keyvaluestore_backend_library = /opt/sandisk/libzsstore.so
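For reference, a minimal sketch of where these settings sit in ceph.conf; the fsid is a placeholder generated by ceph-deploy new, and the monitor name/address are the ones shown in the status output of step 6:
[global]
fsid = <fsid generated by ceph-deploy new>
mon_initial_members = rack3-client-6
mon_host = 10.242.43.1

[osd]
# ... the six keyvaluestore/propdb settings listed above ...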
c) Update ceph.conf on the cluster:
$ ceph-deploy --overwrite-conf config push rack3-client-6

3. Zap previous OSDs and create new OSDs from the monitor node
$ ceph-deploy disk zap rack6-storage-4:/dev/sdb
$ ceph-deploy osd create rack6-storage-4:/dev/sdb
In this case, OSDs need to be zapped and created on the 3 OSD servers (rack6-storage-4, rack6-storage-5, rack6-storage-6).
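The three servers can be handled with a small loop from the admin node; this sketch assumes every server exposes the same /dev/sdb device used above:
$ for host in rack6-storage-4 rack6-storage-5 rack6-storage-6; do
>     ceph-deploy disk zap ${host}:/dev/sdb
>     ceph-deploy osd create ${host}:/dev/sdb
> done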

4. Create pools and RBD
Create a replicated pool and set its replica size to 3:
$ sudo ceph osd pool create hcn950 1600 1600
$ sudo ceph osd pool set hcn950 size 3
Create EC pools:
$ sudo ceph osd pool create EC1 1400 1400 erasure
$ sudo ceph osd pool create EC2 1400 1400 erasure
Create a 4 TB image and map it to the pool:
$ sudo rbd create image --size 4194304 -p hcn950
$ sudo rbd map image -p hcn950
Then the mapped RBD can be checked:
$ sudo rbd showmapped
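Optionally, the pools and the image can be inspected directly:
$ sudo ceph osd lspools            # list the pools created above
$ sudo rbd info image -p hcn950    # size, order and features of the image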

How to calculate pg_num for a replicated pool and an EC pool?
For a replicated pool:
pg_num = (osd_num * 100) / pool_size
For an EC pool:
pg_num = (osd_num * 100) / (k + m)
k=2, m=1 by default.
Use the command below to get the defaults:
$ sudo ceph osd erasure-code-profile get default
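As a worked example for the cluster in this manual (48 OSDs, per the status output in step 6), using the factor of 100 recommended in the Ceph documentation:
Replicated pool, size 3: pg_num = (48 * 100) / 3 = 1600, which matches the hcn950 pool created above.
EC pool, k=2, m=1: pg_num = (48 * 100) / (2 + 1) = 1600; the EC pools above use a slightly lower 1400.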

5. Start IO on the Hurricane cluster:
First, an IO tool such as fio is needed:
$ sudo apt-get install fio
Write the job file for fio (the [random-writer] section name matches the job name seen in the console log below):
$ cat fio1.fio
[random-writer]
ioengine=libaio
iodepth=4
rw=randwrite
bs=32k
direct=0
size=4096g
numjobs=4
filename=/dev/rbd0
Finally, start fio:
$ sudo fio fio1.fio
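If a shorter, time-bounded run is wanted instead of writing the full size per job, a runtime limit can be added to the job file; runtime and time_based are standard fio job options, and the 60 seconds here is only an illustrative value:
# appended under [random-writer] in fio1.fio
runtime=60
time_based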

6. Check the status of the Hurricane cluster:
fio console log:
$ sudo fio fio1.fio
random-writer: (g=0): rw=randwrite, bs=32k-32k/32k-32k/32k-32k, ioengine=libaio, iodepth=4
...
random-writer: (g=0): rw=randwrite, bs=32k-32k/32k-32k/32k-32k, ioengine=libaio, iodepth=4
fio-2.1.3
Starting 4 processes
Jobs: 4 (f=4): [wwww] [1.6% done] [0KB/28224KB/0KB/s] [0/882/0 iops] [eta 07d:23h:50m:57s]

Cluster status:
$ sudo ceph -s
cluster 2946b6e6-2948-4b3f-ad77-0f1c5af8eed6
health HEALTH_OK
monmap e1: 1 mons at {rack3-client-6=10.242.43.1:6789/0}
election epoch 2, quorum 0 rack3-client-6
osdmap e334: 48 osds: 48 up, 48 in
pgmap v12208: 4464 pgs, 4 pools, 3622 GB data, 1023 kobjects
1609 GB used, 16650 GB / 18260 GB avail
4464 active+clean
client io 20949 kB/s wr, 1305 op/s

$ sudo rados df
pool name      KB           objects   clones  degraded  unfound  rd  rd KB  wr        wr KB
EC1            0            0         0       0         0        0   0      0         0
EC2            0            0         0       0         0        0   0      0         0
hcn950         3799132673   1048329   0       0         0        2   1      17493652  280696934
rbd            0            0         0       0         0        0   0      0         0
total used     1691025408   1048329
total avail    17455972352
total space    19146997760
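The rados df totals are reported in KB and are consistent with the ceph -s summary above: 1691025408 KB is about 1613 GB used (close to the 1609 GB reported by ceph -s, taken at a slightly different moment while IO was running), and 19146997760 KB is exactly the 18260 GB of total space shown there.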

$ sudo ceph osd tree

7. Remove OSDs
First, mark osd.0 out of the distribution:
$ sudo ceph osd out 0
Then, mark osd.0 down:
$ sudo ceph osd down 0
Finally, remove osd.0:
$ sudo ceph osd crush remove osd.0
$ sudo ceph auth del osd.0
$ sudo ceph osd rm 0
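Before the final removal it is also usual to stop the OSD daemon on its host (see step 8 for the start/stop commands), since a daemon that is still running may simply mark itself back up:
$ sudo stop ceph-osd id=0    # run on the host that carries osd.0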

8. Restart OSDs
Refer to: http://ceph.com/docs/master/rados/operations/operating/
To start all daemons of a particular type on a Ceph node, execute the following:
$ sudo start ceph-osd-all
To start a specific daemon instance on a Ceph node, execute the following:
$ sudo start ceph-osd id=0    # start osd.0, for example

To stop all daemons of a particular type on a Ceph node, execute the following:
$ sudo stop ceph-osd-all
To stop a specific daemon instance on a Ceph node, execute the following:
$ sudo stop ceph-osd id=0    # stop osd.0, for example
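On Ubuntu 14.04 the Ceph daemons are managed by Upstart, so an OSD can also be restarted in one step; this is standard Upstart usage:
$ sudo restart ceph-osd id=0    # restart osd.0, e.g. after a ceph.conf change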

9. Rebalance the data when some OSD hits the "near full" warning
$ sudo ceph osd reweight-by-utilization
Then check the status of the cluster:
$ sudo ceph -s
or
$ sudo ceph -w
Recovery IO will be visible while the cluster rebalances.
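If only a single OSD is near full, its weight can also be lowered individually; ceph osd reweight takes a weight between 0 and 1, and the 0.8 here is just an illustrative value:
$ sudo ceph osd reweight 0 0.8    # temporarily reduce the share of data mapped to osd.0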
