1. Introduction to the basic environment
Ubuntu 12.04.5 with the default OpenSSH installation on every node; Ceph 0.80.4. ceph-admin is the management and client node; ceph01, ceph02, and ceph03 are the cluster nodes. Gigabit network: 192.168.100.11. Each cluster node needs 3 hard disks. The above is the basic configuration.
2. Deploy the 3-node Ceph environment with the ICE installer (calamari-server, ceph-deploy). Because this is an offline installation, back up /etc/apt/sources.list first, and make sure each node has a proper hostname.
Every node's hostname and IP must be listed in /etc/hosts on the ceph-deploy node.
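For example (a sketch only; the addresses follow the public-network addresses used later in ceph.conf, so adjust them to your own network), /etc/hosts on each node might contain:

```
160.17.5.11  ceph01
160.17.5.12  ceph02
160.17.5.13  ceph03
```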
The ceph-deploy node needs passwordless (key-based) SSH access to the other nodes:
# ssh-keygen
# ssh-copy-id test-ceph01    (repeat for ceph02 and ceph03)
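The key distribution above can be wrapped in a loop; this dry-run sketch only prints the ssh-copy-id command for each node (node names as used in this guide):

```shell
# Dry run: print the ssh-copy-id command for each cluster node.
for node in test-ceph01 ceph02 ceph03; do
  echo "ssh-copy-id $node"
done
```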
Set the locale:
# export LC_ALL="C"
3. Configure NTP time synchronization
3.1) Build the time synchronization server
Install NTP:
# apt-get install ntp
3.2) Modify ntp.conf (back up the original first)
# vim /etc/ntp.conf
# (Again, the address is an example only.)
broadcast 160.17.5.255
3.3) Start the service
# service ntp start    (on Ubuntu the service is named ntp, not ntpd)
On each client node, point ntp.conf at the server:
server ceph01
Debug/verify synchronization against the server:
# ntpdate -d IP
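Putting the NTP pieces together, here are minimal server-side and client-side ntp.conf fragments (the addresses, as noted above, are examples only):

```
# On the time server (/etc/ntp.conf):
broadcast 160.17.5.255

# On each client (/etc/ntp.conf):
server ceph01
```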
Make sure the firewall is shut down:
# ufw disable
Firewall stopped and disabled on system startup
4. Create a ceph_config directory in root's home directory and extract the ICE package into it.
Then perform the following installation steps:
# python ice_setup.py
# chown -R www-data:www-data /var/log/calamari/
# calamari-ctl initialize
# apt-key list
# ceph-deploy calamari connect ceph01 ceph02 ceph03 ceph04
# ceph-deploy install ceph-admin ceph01 ceph02 ceph03    (install Ceph on the cluster)
# ceph-deploy new test-ceph01    (initialize the mon and generate ceph.conf)
Modify the ceph.conf file:
[global]
fsid = de882364-b3f9-45ce-b625-42acdbba3922
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 160.17.5.11,160.17.5.12,160.17.5.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
public_network = 160.17.5.0/24
cluster_network = 192.168.110.0/24
osd_pool_default_size = 3
osd_pool_default_min_size = 2
osd_pool_default_pg_num = 256
osd_pool_default_pgp_num = 256
osd_max_backfills = 3
osd_recovery_max_active = 5
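The osd_pool_default_pg_num value of 256 is consistent with the common rule of thumb of roughly 100 placement groups per OSD, divided by the replica count and rounded to the nearest power of two. A sketch of that arithmetic for this cluster (3 nodes with 3 disks each, i.e. 9 OSDs, is an assumption taken from section 1):

```shell
# Rule of thumb: pg_num ~= osds * 100 / replicas, rounded to the
# nearest power of two. Here: 9 * 100 / 3 = 300 -> 256.
osds=9; replicas=3; target=100
raw=$(( osds * target / replicas ))
pg=1
while [ $(( pg * 2 )) -le "$raw" ]; do pg=$(( pg * 2 )); done
if [ $(( raw - pg )) -gt $(( pg * 2 - raw )) ]; then pg=$(( pg * 2 )); fi
echo "$pg"   # prints 256
```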
To remove a monitor:
# ceph mon remove ceph01
Then update mon_host in ceph.conf accordingly, e.g.:
mon_host = 192.168.111.11,192.168.111.12,192.168.111.13
# ceph-deploy mon create test-ceph01    (up to 3 mons can be created this way)
# ceph-deploy --overwrite-conf mon create ceph01
# ceph-deploy gatherkeys ceph01 ceph02 ceph03
If the environment has dirty data on the disks, zap them while creating the OSDs:
# ceph-deploy osd create --zap-disk ceph01:/dev/sdc ceph02:/dev/sdc ceph03:/dev/sdc
Force the installation (overwriting the existing config):
# ceph-deploy --overwrite-conf osd create ceph01:/dev/sdb ceph02:/dev/sdb ceph03:/dev/sdb
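After the OSDs are created, two standard Ceph commands can confirm the cluster state from the admin node (a suggested check, not part of the original steps; they must run against the live cluster):

```
# ceph -s         (overall cluster health; should reach HEALTH_OK once PGs are clean)
# ceph osd tree   (shows each OSD mapped to its host)
```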
# pkill salt-minion
# salt-key -L    (list the keys that were found)
# salt-key -a <hostname>    (accept the key so the host joins)
A node must have the salt-minion installed before it can join as a managed host.
Install the object gateway packages:
# apt-get install tgt radosgw
# apt-get install libapache2-mod-fastcgi
First copy the ceph.client.radosgw.keyring key from node ceph01 to every node that needs to be managed, distributing the file from ceph01 into each node's home directory.
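The distribution step can be scripted; this dry-run sketch only prints the copy commands (the keyring path shown is the default location, an assumption):

```shell
# Dry run: print the scp command that would copy the radosgw keyring
# from ceph01 to each managed node's home directory.
keyring=/etc/ceph/ceph.client.radosgw.keyring
for node in ceph02 ceph03; do
  echo "scp $keyring root@$node:~/"
done
```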
# ./gateway.sh
# ./server
Create the pool:
# ./pool.sh
Push the configuration to the selected managed node:
# ceph-deploy --overwrite-conf config push ceph03
Ceph automated installation