Cinder-scheduler and cinder-volume then create a blank volume; this follows the same flow as a normal volume creation, so it is not repeated here. Next, analyze the data-recovery process. First, the relevant information can be seen in the cinder-api log. Note that the volume_id and backup_id in the log are consistent with the output of the earlier backup-restore command. Now let's look at how cinder-backup restores the data. cinder-backup performs the restore operation; the log is /opt/stack/logs/c-vol.log.
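For reference, a minimal sketch of driving the restore from the CLI; the backup ID is a placeholder and the log path follows the text above:

# Restore the backup into a new blank volume (backup ID is a placeholder)
cinder backup-restore <backup-id>
# Follow the restore progress in the log
tail -f /opt/stack/logs/c-vol.log | grep -i restore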
Start the restore operation.
features many functions. VMDK is VMware's virtual disk format; support for it means that VMware virtual machine disks can run directly on KVM. The next section describes Storage Pools of the LVM type.
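As a quick, hedged illustration (not from the original text; the file names are placeholders), a VMDK image can be inspected or converted with qemu-img before being used under KVM:

# Inspect a VMware disk image
qemu-img info disk1.vmdk
# Optionally convert it to qcow2 for use with KVM
qemu-img convert -f vmdk -O qcow2 disk1.vmdk disk1.qcow2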
memory, so KVM is responsible for mapping the guest's physical memory to the actual machine memory (PA to MA). We will not go into the concrete implementation here; readers who are interested can look up the details. One more point worth noting: memory can also be overcommitted, that is, the total memory of all virtual machines may exceed the host's physical memory. However, this must be fully tested, otherwise performance will suffer. In the next section we discuss how KVM imp
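One hedged way to observe overcommit in practice (standard Linux and libvirt tools; the domain name is a placeholder) is to compare the host's physical memory with the memory allocated to the guests:

# Physical memory available on the host
free -m
# Maximum memory allocated to one guest (domain name is a placeholder)
virsh dominfo kvm-guest-1 | grep -i 'max memory'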
As can be seen, the VNI of vxlan-100 is 100 and the corresponding VTEP network interface is eth1. This gives the vxlan100 structure at this point. In the next section we will deploy an instance to vxlan100_net and analyze the connectivity of the network.
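A simple way to confirm the VNI and the VTEP device (standard iproute2 command; the interface name follows the text above):

# The detailed output includes "vxlan id 100 ... dev eth1"
ip -d link show vxlan-100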
We discussed the theory of VXLAN and completed the relevant configuration in ML2. Today you will create vxlan100_net through the Web UI and observe the changes in the node network structure. Open the menu Admin -> Networks and click the "Create Network" button; the Create page is displayed. For Provider Network Type select "VXLAN"; the Segmentation ID is the VNI, set it to 100. Click "Create Network" and vxlan100 is created successfully. Click the vxlan100 link to go to the network configuration page; there is no subnet yet.
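The same network could also be created from the command line; a minimal sketch using the OpenStack client (admin credentials are required for provider attributes, and the network name follows the text above):

openstack network create vxlan100_net \
    --provider-network-type vxlan \
    --provider-segment 100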
Bridge to Open vSwitch, you first need to install the Open vSwitch agent. Modify the devstack local.conf and rerun ./stack.sh; devstack will automatically download and install Open vSwitch. You can then modify the ML2 configuration file /etc/neutron/plugins/ml2/ml2_conf.ini to use openvswitch as the mechanism driver. Both the control node and the compute node need to install and configure Open vSwitch as described above. After the Neutron services restart, you can see neutron-openvswitch-agent
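As a minimal sketch (exact option names can vary between releases), the relevant settings look roughly like this, followed by a quick check of the agent:

# local.conf (devstack), an assumption based on the standard devstack variable:
#   Q_AGENT=openvswitch
# /etc/neutron/plugins/ml2/ml2_conf.ini:
#   [ml2]
#   mechanism_drivers = openvswitch
# After restarting the Neutron services, verify the agent:
neutron agent-list | grep -i openvswitch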
In the previous section we configured and tested LBaaS; today we focus on how Neutron uses HAProxy to implement load balancing. Running ip netns on the control node, we find that Neutron has created a new namespace qlbaas-xxx. The namespace corresponds to the pool "web servers" we created; its naming format is qlbaas-<pool id>. You can view its settings through ip a. The VIP 172.16.100.11 is already configured on the namespace interface. The corresponding configuration of the interface can also be fo
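A short sketch of how to inspect this (the pool ID is a placeholder):

# List the LBaaS namespaces created by Neutron
ip netns list | grep qlbaas
# Show the interfaces and the VIP inside the namespace
ip netns exec qlbaas-<pool-id> ip a
# List the processes running in the namespace (should include haproxy)
ip netns pids qlbaas-<pool-id>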
, because the traffic sent to the external network is forwarded through the virtual router on the network node, br-ex is only placed on the network node (devstack-controller).
Understand the various network devices in the Open vSwitch environment
In an Open vSwitch environment, a packet sent from an instance to the physical NIC will roughly pass through the following types of devices (the sketch after the list shows how to inspect them):
Tap interface, named tapXXXX.
Linux bridge, named qbrXXXX.
veth pair, named qvbXXXX and qvoXXXX.
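On a compute node these devices can be listed with standard tools (a hedged sketch; assumes bridge-utils and Open vSwitch are installed):

# The qbrXXXX bridges with their tapXXXX and qvbXXXX interfaces
brctl show
# The qvoXXXX end of each veth pair is attached to the OVS integration bridge br-int
ovs-vsctl show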
public key into the instance's ~/.ssh/authorized_keys file.
SSH to the instances: use -i cloud.key to specify the private key and log in as the ubuntu user to "web1" and "web2" (see the example below). You can log in to the instances directly, without a password. Note: for the sake of demonstration, here we
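For illustration (the IP addresses are placeholders; the key file and user name follow the text above):

ssh -i cloud.key ubuntu@<web1-ip>
ssh -i cloud.key ubuntu@<web2-ip>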
Communication between instances in different VLANs can be achieved through a router. The next question to explore is how an instance communicates with the external network. The external network here refers to a network outside the tenant network; a tenant network is created and maintained by Neutron, while the external network is not created by Neutron. In the case of a private cloud, the external network usually refers to the corporate intranet; if it is a public cloud, the external network
The IP assigned to cirros-vm3 is 172.16.101.103. cirros-vm3 is scheduled to the compute node, and its virtual network card is connected to br-int.
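A quick, hedged way to verify the attachment on the compute node (standard Open vSwitch command):

# The tap port of cirros-vm3 should appear among the ports of br-int
ovs-vsctl list-ports br-int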
hope everyone can master it. In the next section we begin to learn about cross-host container networks.
In addition to overlay, Docker has developed another driver to support cross-host container networks: macvlan. Macvlan itself is a Linux kernel module that allows multiple MAC addresses to be configured on the same physical network card, that is, multiple interfaces, each of which can be configured with its own IP. Macvlan is essentially a NIC virtualization technology, so it is not surprising that Docker uses it to implement container networks. The biggest advantage of macvlan
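A minimal sketch of creating a macvlan network with Docker (the subnet, gateway, parent interface, and network name are assumptions for illustration):

docker network create -d macvlan \
    --subnet=172.16.86.0/24 \
    --gateway=172.16.86.1 \
    -o parent=eth0 mac_net1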
respective ml2_conf.ini. After the Neutron services start normally, all nodes will be running neutron-linuxbridge-agent. With the linuxbridge mechanism driver configured, the next section examines the current network state. With the prac
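For reference, a hedged sketch of the relevant ml2_conf.ini setting and a way to verify the agents (option names may vary slightly by release):

# /etc/neutron/plugins/ml2/ml2_conf.ini (excerpt):
#   [ml2]
#   mechanism_drivers = linuxbridge
# Check that the agent is running on every node:
neutron agent-list | grep -i linuxbridge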
To support cross-host container communication, Docker provides the overlay driver, allowing users to create overlay networks based on VXLAN. VXLAN encapsulates layer-2 data in UDP for transport; it provides the same Ethernet layer-2 service as VLAN, but with greater scalability and flexibility. More detailed information about VXLAN can be found in the relevant chapters of CloudMan's 5-minute play
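A minimal sketch of creating such a network (the network name is a placeholder; in current Docker, overlay networks require swarm mode or, in older releases, an external key-value store):

# One way to enable overlay networks: initialize swarm mode
docker swarm init
# Create a VXLAN-based overlay network
docker network create -d overlay ov_net1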
Directory for the series "5 Days to Play with C# Parallel and Multithreaded Programming". The first three days are listed below; the final update will include the full directory:
5 Days to Play with C# Parallel and Multithreaded Programming, Day 1: Meeting Parallel
5 Days to Play with C# Parallel and Multithreaded Programming, Day 2: Parallel Collections and PLINQ
5 Days to Play with C# Parallel and
add unnecessary parts to it.
Some seem very concise, but solving the problem by that method did not achieve the goal.
I'm going to summarize some of the articles found on the web.
Some steps are mentioned in some articles but not in others.
If this is sequential logic, then the key to solving the problem is how to extract the necessary steps and then implement them in a certain order.
To sum up, there is this step, which everyone mentions:
1.sudo syst
, the load balancer selects pool member WEB1 after receiving the request and sets the destination IP of the packet to WEB1's address, 172.16.100.9.
4. Before forwarding the packet to WEB1, the load balancer modifies the source IP of the packet to its own VIP address, 172.16.100.11. The purpose is to ensure that WEB1 sends the response data back to the load balancer.
5. WEB1 receives the request packet.
The image on the right shows the data flow of the Web server's response:
1. WEB1 sends the packets to the load balancer.
2. After the load balancer receives the data sent by WEB1, it modifies the destination IP to the client's address, 10.10.10.4. At the same time, the source