oVirt Usage FAQ: A Preliminary Summary
2015/9/28 18:52
"Q1": When you execute the virsh command, you will be prompted for user authentication (please enter your authentication name), see the error prompt appears and configure the VDSM service, The use of SASL has a relationship, how to solve? A: Use the tool "Saslpasswd2 - set a user ' S sasl password" to create the user. This is what happens when the problem:# virsh listplease enter your authentication name: please enter your password: error: failed to reconnect to the Hypervisorerror: no valid connectionerror: authentication failed: failed to step SASL negotiation: -1 (SASL ( -1): generic failure: all-whitespace username.) Let's create a user:# saslpasswd2 -a libvirt myusername password: mypasswordagain (for verification): mypassword where the,-a parameter follows appname, Here we need to specify is the Libvirt service reason is: VDSM when joining Ovirt will use SASL again encryption Libvirt test again: # virsh listplease enter your authentication name: myusernameplease enter your password: id name state---------------------------------------------------- 1 tvm-test-template running 2 tvm-test-clone running 3 tvm-test-clone-from-snapshot running 4 testpool001 running 5 testpool007 running 6 testpool006 running meets expectations. "Q2": Perform the restart operation for VM on Ovirt interface, the Ovirt Web interface has the change of prompt state, but the VM console does not restart, what is this? A:VM There is no agent installed, under Linux is: Ovirt-guest-agent installation ovirt-guest-agent on the VM first installed ovirt-release35.rpm this yum source. # yum -y install http://plain.resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm# Yum -y install ovirt-guest-agent Start Service # service ovirt-guest-agent start# Chkconfig ovirt-guest-agent on "Q3": When cloning a VM, the disk waits for a long time not ready a: state: Cloned object, attaching a large capacity disk (2T). Check the process running on host, find qemu-img, check if it is stuck, end manually. 
"Q4": How is cloud-init used? A:cloud-init is used to set the hostname, time zone, authentication, network, and custom scripts for VM startup. 1) Install Cloud-init# yum -y install cloud-init on VM to check boot start # chkconfig --list |grep &NBSP;CLOUD2) test shutdown VM, select Menu "Run only once"-"initial run"-"Use cloud-init", set hostname and other lettersInterest. Click OK and the VM starts, and the Cloud-init service will be used to automatically set the underlying information for the VM during the boot process. Look at the log (/var/log/cloud-init-output.log ) show:cloud-init v. 0.7.5 running ' init-local ' In conjunction with previous articles found in the network, mention: cloud-init on rhev searches for a floppy drivecontaining a user-data.txt file Reference: http://people.redhat.com/mskinner/rhug/q3.2014/ Cloud-init.pdf My understanding is: 1) Normally when we create a VM, we use "run only once" to enable the boot option, similar to Personal speculation also cloud-init function is injected into the VM at this time in a similar floppy drive (the ovirt-guest-agent in the VM is temporarily mounted on startup?). ), and the VM reads the USER-DATA data during the boot process. 2) viewing directory /var/lib/cloud/instance3) view file datasource# cat datasource datasourceconfigdrive : datasourceconfigdrive [local,ver=2][source=/dev/sr1]# ll /dev/cd*lrwxrwxrwx. 1 root root 3 sep 16 2015 /dev/cdrom -> sr0lrwxrwxrwx. 
&NBSP;1&NBSP;ROOT&NBSP;ROOT&NBSP;3&NBSP;SEP&NBSP;16&NBSP;&NBSP;2015&NBSP;/DEV/CDROM1&NBSP;->&NBSP;SR1 can thus be judged , a new device appears: Cdrom1, let's mount a look inside the content: # mount /dev/sr1 /mntmount: block device /dev/sr1 is write-protected, mounting read-only# tree /mnt//mnt/└── openstack ├── content │ └── 0000 └── latest ├── meta_data.json └── user_data3 directories, 3 files# cat /mnt/openstack/latest/user_data #cloud-configssh_pwauth: truetimezone: Asia/Shanghaidisable_root: 0output: all: ' >> /var/log /cloud-init-output.log ' user: rootpassword: yourpasswdchpasswd: expire: falseruncmd:- ' sed -i '/^datasource_list: /d ' /etc/cloud/cloud.cfg; echo ' Datasource_ list: ["Nocloud", "configdrive"] " >> /etc/cloud/cloud.cfg" # cat / Mnt/openstack/latest/meta_data.json { "Launch_index" : "0", "Availability_zone" : "Nova", "Network-interfaces" : "auto eth0\niface eth0 inet static\n address 10.0.200.101\n netmask 255.255.255.0\n gateway 10.0.200.254 \n dns-nameservers 10.0.200.253\nauto eth1\niface eth1 inet static\n address 10.0.201.101\n netmask 255.255.255.0\n dns-nameservers 10.0.200.253\n ", " name " : " Cloud-init ", " Network_config " : { "Content_path" : "/content/0000", "path" : "/etc/ Network/interfaces " }, " hostname " : " cloud-init ", " uuid "&NBSP;: "72be0e3f-10a7-433e-b6b3-a9daded7948f", "meta" : { " Essential " : " false ", &NBsp; "Role" : "Server", "Dsmode" : "local" } the above 2 files, instance name, hostname, network and other information, are the content we configure in the Ovirt Web interface. 
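To illustrate what the VM actually receives, here is a small sketch (not oVirt or cloud-init code) that parses the "network-interfaces" field of a meta_data.json like the one above into a per-interface dict. The sample JSON below is abbreviated from the listing above; the parser function is a hypothetical helper for illustration only.

```python
import json

# Abbreviated sample of the meta_data.json shown above.
meta = json.loads("""{
  "hostname": "cloud-init",
  "network-interfaces": "auto eth0\\niface eth0 inet static\\n  address 10.0.200.101\\n  netmask 255.255.255.0\\n  gateway 10.0.200.254\\nauto eth1\\niface eth1 inet static\\n  address 10.0.201.101\\n  netmask 255.255.255.0\\n"
}""")

def parse_interfaces(text):
    """Parse Debian-style interfaces(5) stanzas into {iface: settings}."""
    ifaces = {}
    current = None
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "iface":          # e.g. "iface eth0 inet static"
            current = parts[1]
            ifaces[current] = {"method": parts[3]}
        elif current and parts[0] in ("address", "netmask", "gateway"):
            ifaces[current][parts[0]] = parts[1]
    return ifaces

print(parse_interfaces(meta["network-interfaces"]))
```

Running this shows eth0 with its static address and gateway and eth1 without a gateway, matching what was entered in the oVirt "Run Once" dialog.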
Combining this with the cloud-init documentation, see the "NoCloud" section: http://cloudinit.readthedocs.org/en/latest/topics/datasources.html#no-cloud

A rough translation: data sources such as NoCloud and NoCloudNet allow the user to provide user-data and meta-data to an instance without running a network service (even without activating the network at all). The files meta-data and user-data can be provided to a local VM at boot via a vfat or iso9660 filesystem.

"Q5": When using the GlusterFS service, error: "glusterfs: failed to get the 'volume file' from server".

A: Check the Gluster versions first and keep them consistent. The Gluster service enabled on the hosts may be the latest release from the official site, while oVirt's default install uses ovirt-3.5-dependencies.repo, which pulls an older client. Manually install a matching glusterfs/3.7 client from the official site:

    # wget http://download.gluster.org/pub/gluster/glusterfs/3.7/latest/centos/epel-6/x86_64/glusterfs-3.7.4-2.el6.x86_64.rpm
    # wget http://download.gluster.org/pub/gluster/glusterfs/3.7/latest/centos/epel-6/x86_64/glusterfs-libs-3.7.4-2.el6.x86_64.rpm
    # wget http://download.gluster.org/pub/gluster/glusterfs/3.7/latest/centos/epel-6/x86_64/glusterfs-client-xlators-3.7.4-2.el6.x86_64.rpm
    # wget http://download.gluster.org/pub/gluster/glusterfs/3.7/latest/centos/epel-6/x86_64/glusterfs-fuse-3.7.4-2.el6.x86_64.rpm
    # rpm -ivh *.rpm

"Q6": If you do not use oVirt to manage GlusterFS but configure GlusterFS yourself, what should you do? How is the data domain mounted, and what optimizations are applied?
A: First, oVirt's optimization does the following. After optimization, the volume configuration is adjusted to:

    Options Reconfigured:
    diagnostics.count-fop-hits: on
    diagnostics.latency-measurement: on
    storage.owner-gid: 36
    storage.owner-uid: 36
    cluster.server-quorum-type: server
    cluster.quorum-type: auto
    network.remote-dio: enable
    cluster.eager-lock: enable
    performance.stat-prefetch: off
    performance.io-cache: off
    performance.read-ahead: off
    performance.quick-read: off
    auth.allow: *
    user.cifs: enable
    nfs.disable: off
    performance.readdir-ahead: on

Second, every host in the cluster (not only the host specified in "New Domain") must be able to resolve the Gluster node name -> IP mapping; configure /etc/hosts or A records on the DNS server.

Third, a firewall example. These are the rules oVirt enables after the gluster firewall service is configured:

    # rpc.statd
    -A INPUT -p tcp --dport 111 -j ACCEPT
    -A INPUT -p udp --dport 111 -j ACCEPT
    # glusterd
    -A INPUT -p tcp -m tcp --dport 24007 -j ACCEPT
    # gluster swift
    -A INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
    # portmapper
    -A INPUT -p tcp -m tcp --dport 38465 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 38466 -j ACCEPT
    # nfs
    -A INPUT -p tcp -m tcp --dport 38467 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 38469 -j ACCEPT
    # status
    -A INPUT -p tcp -m tcp --dport 39543 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 55863 -j ACCEPT
    # nlockmgr
    -A INPUT -p tcp -m tcp --dport 38468 -j ACCEPT
    -A INPUT -p udp -m udp --dport 963 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 965 -j ACCEPT
    # ports for gluster volume bricks (default 100 ports)
    -A INPUT -p tcp -m tcp --dport 24009:24108 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 49152:49251 -j ACCEPT

"Example" Configure a storage domain on GlusterFS. Cluster: node72, node73, node86 (the example provides a 3-replica volume as the data domain).

"Data disk partition" If the device holding the partition is already mounted, unmount it and remove the existing filesystem first.
    # yum install lvm2 xfsprogs -y
    # pvcreate /dev/sdb
    # vgcreate vg0 /dev/sdb
    # lvcreate -l 100%FREE -n lv01 vg0
    # mkfs.xfs -f -i size=512 /dev/vg0/lv01
    # blkid /dev/vg0/lv01
    /dev/vg0/lv01: UUID="58a47793-3202-45ab-8297-1c867b6fdd68" TYPE="xfs"
    # mkdir /data
    # cat << '_EOF' >> /etc/fstab
    UUID=58a47793-3202-45ab-8297-1c867b6fdd68 /data xfs defaults 0 0
    _EOF
    # mount -a
    # df -h | grep data
    /dev/mapper/vg0-lv01   16T   33M   16T   1% /data

"Preparation" Install the service:

    # yum install glusterfs-server
    # service glusterd start
    # chkconfig glusterd on

Adjust the firewall to allow the storage network:

    -A INPUT -s 192.168.25.0/24 -j ACCEPT

Configure the cluster:

    # gluster peer probe 192.168.25.72
    # gluster peer probe 192.168.25.73

Create the brick directory on each cluster node:

    # mkdir -p /data/gv1/brick1

"Provide the data domain" Create volume gv1 as the primary data domain:

    # gluster volume create gv1 replica 3 transport tcp 192.168.25.86:/data/gv1/brick1 192.168.25.72:/data/gv1/brick1 192.168.25.73:/data/gv1/brick1

"Start"

    # gluster volume start gv1

"View status"

    # gluster volume info
    Volume Name: gv1
    Type: Replicate
    Volume ID: 32b1866c-1743-4dd9-9429-6ecfdfa168a2
    Status: Started
    Number of Bricks: 1 x 3 = 3
    Transport-type: tcp
    Bricks:
    Brick1: 192.168.25.86:/data/gv1/brick1
    Brick2: 192.168.25.72:/data/gv1/brick1
    Brick3: 192.168.25.73:/data/gv1/brick1

--- Configure the volume, taking gv1 as an example:

    gluster volume set gv1 diagnostics.count-fop-hits on
    gluster volume set gv1 diagnostics.latency-measurement on
    gluster volume set gv1 storage.owner-gid 36
    gluster volume set gv1 storage.owner-uid 36
    gluster volume set gv1 cluster.server-quorum-type server
    gluster volume set gv1 cluster.quorum-type auto
    gluster volume set gv1 network.remote-dio enable
    gluster volume set gv1 cluster.eager-lock enable
    gluster volume set gv1 performance.stat-prefetch off
    gluster volume set gv1 performance.io-cache off
    gluster volume set gv1 performance.read-ahead off
    gluster volume set gv1 performance.quick-read off
    gluster volume set gv1 auth.allow \*
    gluster volume set gv1 user.cifs enable
    gluster volume set gv1 nfs.disable off

--- Mount the volume on one node to test gv1:

    # mount.glusterfs 192.168.25.86:/gv1 /mnt
    # df -h /mnt
    Filesystem          Size  Used Avail Use% Mounted on
    192.168.25.72:/gv1   16T   39M   16T   1% /mnt

"Q7": Performing "Add Storage Connection" fails with the error: "There was a problem trying to mount the target".

A: When filling in the "Path", make sure there is no space at the end, otherwise the mount will fail. You can find the cause by viewing /var/log/vdsm.log on the node performing the mount. For example, the log shows:

    Storage.StorageServer.MountConnection::(connect) mount failed: (32, ';mount.nfs: access denied by server while mounting 192.168.20.93:/data/ovirt/iso \n')

"Error" Path: "192.168.20.93:/data/ovirt/iso " (a space follows "iso")
"Correct" Path: "192.168.20.93:/data/ovirt/iso" (no trailing space)

Assuming the ISO domain is already mounted and we need to add an OS image to it, here is a tip: on the NFS server (192.168.20.93), find the path where the ISO domain stores images:

    # pwd
    /data/ovirt/iso/62a1b5e0-730f-47db-8057-3ed0fda7b83a/images/11111111-1111-1111-1111-111111111111

We can cd directly into this directory, upload the OS file there, and fix the permissions:

    # chown -R 36:36 .

Then go back to the web UI and view the images in the ISO domain.
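The trailing-space pitfall in Q7 is easy to guard against in tooling. Below is a tiny sketch (a hypothetical validator, not part of oVirt) that checks a storage-connection path before it is submitted; the checks mirror the failure mode shown in the vdsm.log excerpt above.

```python
# Hypothetical validator for an NFS storage-connection path:
# a trailing space makes mount.nfs fail with "access denied by server",
# so reject whitespace and require the "server:/export" form.
def check_export_path(path):
    problems = []
    if path != path.strip():
        problems.append("leading/trailing whitespace")
    if ":" not in path:
        problems.append("missing 'server:/export' separator")
    return problems

print(check_export_path("192.168.20.93:/data/ovirt/iso "))  # -> ['leading/trailing whitespace']
print(check_export_path("192.168.20.93:/data/ovirt/iso"))   # -> []
```

An empty result means the path is at least well-formed; it does not prove the server actually exports it.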