A problem was found last week, after the VMware mechanism driver implementation was completed. Testing showed that the network created by Neutron and the virtual machine created by Nova ended up on different ESXi hosts; of course, the problem does not occur when there is only one ESXi host in the cluster.
Switching to nova-network, the same problem appeared. The bug was reported on Launchpad.
Reading the current nova/virt/vmwareapi code, the error unfolds roughly like this:
1. When creating a virtual machine, Nova obtains vif_infos through _get_vif_infos().
2. Inside _get_vif_infos(), the get_network_ref() method in vif.py is invoked to obtain network_ref.
3. If Neutron NSX is not in use, ensure_vlan_bridge() is called.
4. ensure_vlan_bridge() checks whether the network to be created (a port group on an ESXi host) already exists in the cluster. The check is done against one designated ESXi host in the cluster, say ESXi A. If the network does not exist, it is created.
5. After the creation succeeds, network_util.get_network_with_the_name() is called again to look the network up. The new network should now be found, but this is where the bug is: the new network is still not found, so network_ref comes back empty.
6. As a result, the newly created virtual machine is not joined to the newly created network (port group).
```python
def ensure_vlan_bridge(session, vif, cluster=None, create_vlan=True):
    """Create a vlan and bridge unless they already exist."""
    vlan_num = vif['network'].get_meta('vlan')
    bridge = vif['network']['bridge']
    vlan_interface = CONF.vmware.vlan_interface

    network_ref = network_util.get_network_with_the_name(session, bridge,
                                                         cluster)
    if network_ref and network_ref['type'] == 'DistributedVirtualPortgroup':
        return network_ref

    if not network_ref:
        # Create a port group on the vSwitch associated with the
        # vlan_interface corresponding physical network adapter on the
        # ESX host.
        vswitch_associated = _get_associated_vswitch_for_interface(
            session, vlan_interface, cluster)
        network_util.create_port_group(session, bridge,
                                       vswitch_associated,
                                       vlan_num if create_vlan else 0,
                                       cluster)
        # The bug is here: the port group created in the previous step
        # is not found by this query.
        network_ref = network_util.get_network_with_the_name(session,
                                                             bridge,
                                                             cluster)
    elif create_vlan:
        # Get the vSwitch associated with the physical adapter
        vswitch_associated = _get_associated_vswitch_for_interface(
            session, vlan_interface, cluster)
        # Get the vlan id and vswitch corresponding to the port group
        _get_pg_info = network_util.get_vlanid_and_vswitch_for_portgroup
        pg_vlanid, pg_vswitch = _get_pg_info(session, bridge, cluster)

        # Check if the vswitch associated is proper
        if pg_vswitch != vswitch_associated:
            raise exception.InvalidVLANPortGroup(
                bridge=bridge, expected=vswitch_associated,
                actual=pg_vswitch)

        # Check if the vlan id is proper for the port group
        if pg_vlanid != vlan_num:
            raise exception.InvalidVLANTag(bridge=bridge, tag=vlan_num,
                                           pgroup=pg_vlanid)
    return network_ref
```
As for why the query for the newly created network fails, I have not figured it out yet.
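Since the root cause of the failed lookup was unclear, one generic mitigation (not the fix eventually used here) would be to re-query a few times, in case the vSphere inventory simply lags behind the create call. A minimal sketch; get_network_with_retry is a hypothetical helper, and the iterator below is a stub standing in for network_util.get_network_with_the_name:

```python
import time

def get_network_with_retry(lookup, retries=5, delay=1.0):
    """Poll a lookup callable until it returns a network ref,
    or give up after `retries` attempts."""
    for _ in range(retries):
        network_ref = lookup()
        if network_ref is not None:
            return network_ref
        time.sleep(delay)
    return None

# Usage with a stub: the "network" only becomes visible on the
# third query, as if the inventory were catching up.
results = iter([None, None, {'name': 'br100'}])
ref = get_network_with_retry(lambda: next(results), retries=5, delay=0)
print(ref)  # → {'name': 'br100'}
```

This only papers over a timing problem, of course; if the lookup is failing for a logical reason, no number of retries will help.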
Then I thought: since ESXi standard vSwitches do not support distributed port groups, we could create the port group (that is, the network) on the standard vSwitch of every ESXi host in the cluster, so that the earlier scheduling failure would no longer occur.
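The idea above can be sketched as follows. FakeHost and ensure_port_group_on_all_hosts are hypothetical names; the real code would go through the vSphere API (HostNetworkSystem.AddPortGroup on each host) rather than a plain method call:

```python
class FakeHost:
    """Stand-in for an ESXi host object; records created port groups."""
    def __init__(self, name):
        self.name = name
        self.port_groups = []

    def create_port_group(self, pg_name, vswitch_name, vlan_id):
        if pg_name not in self.port_groups:
            self.port_groups.append(pg_name)

def ensure_port_group_on_all_hosts(hosts, bridge, vswitch_name, vlan_id):
    # A standard vSwitch port group is host-local, so the same port
    # group must be created on every ESXi host in the cluster for the
    # scheduler to be free to place the VM on any of them.
    for host in hosts:
        host.create_port_group(bridge, vswitch_name, vlan_id)

hosts = [FakeHost('esxi-a'), FakeHost('esxi-b')]
ensure_port_group_on_all_hosts(hosts, 'br100', 'vSwitch0', 100)
```

Creating the port group idempotently on each host also means it does not matter which host the scheduler later picks for the VM.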
I tried modifying the get_host_ref() code in vm_util.py to return all ESXi hosts in the cluster. But a number of functions call get_host_ref(), so without also changing their logic, using the suds client to call the vSphere API to fetch object properties was bound to raise errors.
At that point, through my own carelessness, I had not analyzed the logs closely and concluded that multiple vSphere objects could not be fetched in one session, so I judged the approach of creating the port group on each ESXi host to be infeasible.
Then I decided not to give up so easily: if this approach could not be made to work, there was no better alternative anyway. Reading the logs calmly, I found that the error occurred before execution ever reached the create_port_group step, so starting from the point of the error, I fixed the code logic step by step.
Finally, after creating the network and creating a virtual machine, everything was OK.