- CentOS KVM + Ceph
- I. CentOS 6.5: Installing KVM
- 1. Disable SELinux
- 2. Confirm that Intel virtualization is supported
- 3. Install the required packages
- 4. Set up a bridged network
- 5. Run a KVM instance (this step only verifies that the environment was installed successfully)
- 6. Connect to KVM
- II. CentOS: Installing Ceph (Firefly release)
- Preparing the machines
- Installing the management machine
- Installing additional nodes
- III. KVM with Ceph
- Create an OSD pool (a container for block devices)
- Grant the account read/write permissions on the pool
- Create an image in the pool with qemu-img
- Verify that the image was created successfully
- Create a virtual machine with KVM
CentOS KVM + Ceph
CentOS 7 cannot install Ceph from the common repositories; package dependencies are not met.
CentOS 6 can, but its stock kernel does not support RBD, so the kernel needs updating:
rpm --import http://elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-lt -y
I. CentOS 6.5: Installing KVM
1. Disable SELinux
vi /etc/selinux/config (set SELINUX=disabled)
reboot
2. Confirm that Intel virtualization is supported
egrep '(vmx|svm)' --color=always /proc/cpuinfo
Empty output means virtualization is not supported.
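The check above can be wrapped in a small script. `has_virt` is a hypothetical helper name, not part of any tool; it just applies the same egrep test to whatever CPU-flags text you hand it:

```shell
#!/bin/sh
# Return "supported" if the CPU flags contain vmx (Intel VT-x) or svm (AMD-V).
# has_virt is a hypothetical helper; feed it the contents of /proc/cpuinfo.
has_virt() {
    if echo "$1" | egrep -q '(vmx|svm)'; then
        echo "supported"
    else
        echo "not supported"
    fi
}

# Typical usage on a real host:
# has_virt "$(cat /proc/cpuinfo)"
```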
3. Install the required packages
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*
yum install virt-manager libvirt qemu-kvm openssh-askpass kvm python-virtinst
service libvirtd start
chkconfig libvirtd on
The emulator is generally qemu-kvm, but may be something else.
/usr/libexec/qemu-kvm -M ? lists the supported machine types. (I hit a case where booting a VM complained that machine type rhel6.5 was not supported; running virsh edit ... and changing rhel6.5 in the machine attribute to pc got it past.)
/usr/libexec/qemu-kvm -drive format=? lists the supported disk formats. (rbd must be among them; if it is not, a build with RBD support needs to be installed. From source: git clone git://git.qemu.org/qemu.git; ./configure --enable-rbd. The build may not produce qemu-kvm but rather qemu-system-x86_64 or the like, in which case point the emulator entry in the domain XML at the compiled executable.)
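The RBD-support check above can be scripted. This is a minimal sketch: `supports_rbd` is a hypothetical helper that only inspects the text you pass it, so you would feed it the real output of `/usr/libexec/qemu-kvm -drive format=? 2>&1` on the host:

```shell
#!/bin/sh
# Check whether a qemu binary's "-drive format=?" output lists rbd.
# supports_rbd is a hypothetical helper; pass it the captured output, e.g.:
#   supports_rbd "$(/usr/libexec/qemu-kvm -drive format=? 2>&1)"
supports_rbd() {
    case "$1" in
        *rbd*) echo yes ;;
        *)     echo no ;;
    esac
}
```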
4. Set up a bridged network
yum install bridge-utils
vi /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE="br0"
NM_CONTROLLED="no"
ONBOOT=yes
TYPE=Bridge
BOOTPROTO=none
IPADDR=192.168.0.100
PREFIX=24
GATEWAY=192.168.0.1
DNS1=8.8.8.8
DNS2=8.8.4.4
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System br0"
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
NM_CONTROLLED="no"
ONBOOT=yes
TYPE="Ethernet"
UUID="73CB0B12-1F42-49B0-AD69-731E888276FF"
HWADDR=00:1e:90:f3:f0:02
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
BRIDGE=br0
/etc/init.d/network restart
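After the restart, the two files above can be sanity-checked before debugging connectivity. `check_bridge_cfg` is a hypothetical helper (not a system tool); it only verifies that eth0 is enslaved to br0 and that br0 is declared as a bridge:

```shell
#!/bin/sh
# Verify that the eth0 config enslaves the NIC to br0 and that the
# br0 config declares TYPE=Bridge. Arguments: ifcfg-eth0 path, ifcfg-br0 path.
check_bridge_cfg() {
    grep -qi '^BRIDGE=br0' "$1" || { echo "eth0 not enslaved to br0"; return 1; }
    grep -qi '^TYPE=Bridge' "$2" || { echo "br0 is not TYPE=Bridge"; return 1; }
    echo "bridge config looks sane"
}

# Typical usage:
# check_bridge_cfg /etc/sysconfig/network-scripts/ifcfg-eth0 \
#                  /etc/sysconfig/network-scripts/ifcfg-br0
```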
5. Run a KVM instance (this step only verifies that the environment was installed successfully)
virt-install --connect qemu:///system -n vm10 -r <memory-in-MB> --vcpus=2 --disk path=/var/lib/libvirt/images/vm10.img,size=12 -c /dev/cdrom --vnc --noautoconsole --os-type linux --os-variant debiansqueeze --accelerate --network=bridge:br0 --hvm
6. Connect to KVM
If you need to install a GUI:
yum -y groupinstall "Desktop" "Desktop Platform" "X Window System" "Fonts"
II. CentOS: Installing Ceph (Firefly release)
Preparing the machines
One management machine: admin
One monitor: node1
Two OSD data machines: node2, node3
One client that uses Ceph RBD: ceph-client
Create a ceph user on every machine and give it sudo rights; all subsequent commands are run as that user.
Disable SELinux on all machines.
Disable "Defaults requiretty" on all machines (sudo visudo).
Check firewall settings (e.g. iptables) on all machines; inter-node communication uses ports such as 22, 6789, and 6800+, so make sure it is not being rejected.
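The firewall requirement above can be captured in a script that only prints the candidate rules for review instead of applying them. This is a sketch, not the definitive port list for every Ceph release; the 6800:7100 OSD range is an assumption based on the ports mentioned in the text:

```shell
#!/bin/sh
# Print (do not execute) iptables rules opening the ports the text mentions:
# 22 (SSH), 6789 (ceph-mon), and an assumed 6800:7100 OSD range.
# Review the output, then pipe it to sh on each node if it looks right.
ceph_fw_rules() {
    echo "iptables -A INPUT -p tcp --dport 22 -j ACCEPT"
    echo "iptables -A INPUT -p tcp --dport 6789 -j ACCEPT"
    echo "iptables -A INPUT -p tcp --dport 6800:7100 -j ACCEPT"
}

# Typical usage:
# ceph_fw_rules            # inspect
# ceph_fw_rules | sudo sh  # apply
```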
Installing the management machine
Add the repo:
sudo vim /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
I replaced the baseurl with baseurl=http://ceph.com/rpm-firefly/el6/noarch
Run sudo yum update && sudo yum install ceph-deploy
Allow admin to log on to the other machines with an SSH key:
ssh-keygen
ssh-copy-id ceph@node1
ssh-copy-id ceph@node2
ssh-copy-id ceph@node3
Installing additional nodes
On the admin machine:
mkdir my-cluster
cd my-cluster
Initialize the configuration:
ceph-deploy new node1
This creates the configuration file ceph.conf in the current directory. Edit ceph.conf and:
Add osd pool default size = 2, because we only have 2 OSDs.
Add rbd default format = 2, which sets the RBD image format to 2 and enables the image clone feature.
Add journal dio = false.
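The three additions above can be appended in one step. A minimal sketch, assuming ceph.conf is the file that ceph-deploy new just created in the current my-cluster directory:

```shell
#!/bin/sh
# Append the three settings described above to a ceph.conf.
# Argument: path to the ceph.conf created by "ceph-deploy new node1".
append_ceph_conf() {
    cat >> "$1" <<'EOF'
osd pool default size = 2
rbd default format = 2
journal dio = false
EOF
}

# Typical usage, from inside my-cluster:
# append_ceph_conf ceph.conf
```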
Install Ceph on all nodes:
ceph-deploy install admin node1 node2 node3
Initialize the monitor node:
ceph-deploy mon create-initial node1
Initialize the OSD nodes:
ssh node2
sudo mkdir /var/local/osd0
exit
ssh node3
sudo mkdir /var/local/osd1
exit
ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1
ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
(These two commands should finish within a few seconds. If nothing happens for a long time, up to the 300-second timeout, consider whether a firewall is interfering.)
Copy the configuration to each node:
ceph-deploy admin admin-node node1 node2 node3
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
ceph health
ceph status
You want to see the active+clean state.
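The health check can be made scriptable for polling. `health_ok` is a hypothetical helper that only inspects the text you pass it, so you would feed it the real output of ceph health:

```shell
#!/bin/sh
# Return "ok" when a health string reports HEALTH_OK, or when a status
# string reports all PGs active+clean; otherwise "not-ready".
# health_ok is a hypothetical helper, e.g.: health_ok "$(ceph health)"
health_ok() {
    case "$1" in
        *HEALTH_OK*|*active+clean*) echo ok ;;
        *)                          echo not-ready ;;
    esac
}
```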
III. KVM with Ceph
Create an OSD pool (a container for block devices)
ceph osd pool create libvirt-pool 128 128
Grant the account read/write permissions on the pool
Suppose the account we use is libvirt (the admin account has all permissions by default and needs no setup):
ceph auth get-or-create client.libvirt mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=libvirt-pool'
Create an image in the pool with qemu-img
qemu-img create -f rbd rbd:libvirt-pool/new-libvirt-image 10G
At this step I hit "Unknown file format 'rbd'", because low versions of qemu-img do not support RBD.
But my qemu-img version number was high enough, so I suspect my package was compiled without the RBD option. I had no choice but to install a lower-numbered build, http://ceph.com/packages/qemu-kvm/centos/x86_64/qemu-img-0.12.1.2-2.355.el6.2.cuttlefish.x86_64.rpm, which does support it.
Verify that the image was created successfully
rbd -p libvirt-pool ls
Create a virtual machine with KVM
1. Create a virtual machine with the virsh command or virt-manager
An ISO or image is required under /var/lib/libvirt/images/.
I named mine "test". Choose an ISO in the CD-ROM drive, e.g. debian.iso. Do not select a hard drive.
2. virsh edit test
Modify the VM's configuration to use RBD storage.
Find
<devices>
and add the following below it:
<disk type='network' device='disk'>
<source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
</source>
<target dev='vda' bus='virtio'/>
</disk>
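When scripting VM definitions, the fragment above can be generated rather than hand-edited. `rbd_disk_xml` is a hypothetical helper name; it emits exactly the fragment shown, parameterized by pool/image name and target device:

```shell
#!/bin/sh
# Emit the libvirt <disk> fragment shown above for an RBD-backed disk.
# Arguments: pool/image name (e.g. libvirt-pool/new-libvirt-image),
# target device (e.g. vda).
rbd_disk_xml() {
    cat <<EOF
<disk type='network' device='disk'>
  <source protocol='rbd' name='$1'>
  </source>
  <target dev='$2' bus='virtio'/>
</disk>
EOF
}

# Typical usage:
# rbd_disk_xml libvirt-pool/new-libvirt-image vda
```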
3. Create an account for accessing Ceph
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
<usage type='ceph'>
<name>client.libvirt secret</name>
</usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
<the UUID of the secret is output here>
Save the key of the libvirt user:
ceph auth get-key client.libvirt | sudo tee client.libvirt.key
Set the value using the UUID from above:
sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.libvirt.key) && rm client.libvirt.key secret.xml
virsh edit test
Add the auth element:
...
</source>
<auth username='libvirt'>
<secret type='ceph' uuid='9ec59067-fdbc-a6c0-03ff-df165c0587b8'/>
</auth>
4. Boot the virtual machine and install the OS
Again virsh edit test
Change the configured boot device from cdrom to hd.
5. Reboot the virtual machine when installation finishes; it now boots from vda, i.e. from RBD.
6. Next you can use rbd snap, rbd clone, virsh, and guestfs to create and use virtual machine templates.