VSM Add New OSD Process: Code Analysis
This article analyzes: the background process that gets the available devices; the add_new_osd page Add button; the check of whether a device path is an available disk's path; and the add_new_osd page Submit button.
VSM Add new OSD Process
Get available devices in the background
|
Select an available device and the related OSD information
|
Click the Add button to add the OSD information to be deployed to the item list
|
Click the Submit button to add the devices in the item list to the Ceph cluster
Below we analyze the background process that obtains the available devices, and the background processes triggered by clicking the Add button and the Submit button.
Background process to get the available devices: code analysis
When a server is chosen in the Select Server field, the JS function changeServer is triggered.
Virtual-storage-manager-2.2.0\source\vsm-dashboard\static\dashboard\js\addosd.js
function changeServer() {
    //reset the Add OSD form
    resetForm();
    server = Server.create();
    //update the upload field post URL
    //$formFileUpload.action = "/dashboard/vsm/devices-management/add_new_osd2/?service_id=" + server.server_id;
    //update the OSD form data
    postData("get_available_disks", {"server_id": server.node_id});
}
changeServer gets the available disks through postData.
URL:
/dashboard/vsm/devices-management/get_available_disks/
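For reference, the request that postData issues to this URL can be reproduced outside the browser. The sketch below is illustrative only: the base URL, the server_id value, and the use of the requests library are assumptions, and session/CSRF handling (normally done by the browser) is ignored here.

import json
import requests  # any HTTP client would do; requests is assumed here for brevity

# Illustrative only: the dashboard normally sends this via its postData JS helper.
url = "http://localhost/dashboard/vsm/devices-management/get_available_disks/"
payload = {"server_id": 1}  # node_id of the selected server (example value)

# The view reads the raw request body with json.loads(request.body),
# so the payload is sent as a JSON string in the body.
resp = requests.post(url, data=json.dumps(payload))
print(resp.json())  # list of available disks for that server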
Code: Source\vsm-dashboard\vsm_dashboard\dashboards\vsm\devices-management\views.py:get_available_disks
def get_available_disks(request):
    search_opts = json.loads(request.body)
    server_id = search_opts["server_id"]
    ret = vsmapi.get_available_disks(request, search_opts={
        "server_id": server_id,
        "result_mode": "get_disks",
    })
    disks = ret['disks']
    if disks is None:
        disks = []
    disk_data = json.dumps(disks)
    return HttpResponse(disk_data)
views.py:get_available_disks then calls source\vsm-dashboard\vsm_dashboard\api\vsm.py:get_available_disks
Code:
Source\vsm-dashboard\vsm_dashboard\api\vsm.py:get_available_disks
def get_available_disks(request, search_opts):
    return vsmclient(request).devices.get_available_disks(search_opts=search_opts)
The call then goes into vsmclient, to Virtual-storage-manager-2.2.0\source\python-vsmclient\vsmclient\v1\devices.py:DeviceManager.get_available_disks.
In DeviceManager.get_available_disks, the available disks are obtained from the VSM-API via a REST request.
vsm\vsm\api\v1\devices.py:Controller.get_available_disks
def get_available_disks(self, req):
    context = req.environ['vsm.context']
    server_id = req.GET.get('server_id', None)
    body = {'server_id': server_id}
    disk_list = self.scheduler_api.get_available_disks(context, body)
    return {"available_disks": disk_list}
Controller.get_available_disks then calls
source\vsm\vsm\scheduler\api.py:API.get_available_disks -> source\vsm\vsm\scheduler\rpcapi.py:SchedulerAPI.get_available_disks
def get_available_disks(self, ctxt, body=None):
    return self.call(ctxt, self.make_msg('get_available_disks', body=body))
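As a mental model for how these rpcapi calls reach the manager on the other side, the sketch below shows the idea in plain Python. It is a deliberate simplification and not the actual VSM/oslo RPC machinery; the class and method names here are invented for illustration.

# Conceptual sketch only -- the real RPC layer goes through a message broker.
class FakeRpcClient(object):
    def __init__(self, manager):
        self.manager = manager          # stand-in for the remote-side manager object

    def make_msg(self, method, **kwargs):
        return {'method': method, 'args': kwargs}

    def call(self, ctxt, msg):
        # the broker delivers msg to the manager process, which invokes the named method
        return getattr(self.manager, msg['method'])(ctxt, **msg['args'])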
This lets the VSM-API process remotely invoke get_available_disks in the scheduler process, entering
source\vsm\vsm\scheduler\manager.py:SchedulerManager.get_available_disks. In SchedulerManager.get_available_disks:
1. Get server_id first.
2. Query the server's status from the DB by server_id.
3. If the server's status is Active, call source\vsm\vsm\agent\rpcapi.py:AgentAPI.get_available_disks.
The SchedulerManager.get_available_disks function:
def get_available_disks(self, context, body):
    server_id = body['server_id']
    server = db.init_node_get_by_id(context, id=server_id)
    status = server['status']
    if status == 'Active':
        res = self._agent_rpcapi.get_available_disks(context, server['host'])
        return res
The AgentAPI.get_available_disks function:
def get_available_disks(self, context, host):
    topic = rpc.queue_get_for(context, self.topic, host)
    res = self.call(context,
                    self.make_msg('get_available_disks'),
                    topic, version='1.0', timeout=6000)
    return res
AgentAPI.get_available_disks makes a remote call to get_available_disks in the agent process, entering Virtual-storage-manager-2.2.0\source\vsm\vsm\agent\manager.py:AgentManager.get_available_disks
def get_available_disks(self, context):
    #LOG.info('333333333')
    available_disk_name = self.ceph_driver.get_available_disks(context)
    LOG.info('available_disk_name=====%s' % available_disk_name)
    devices = db.device_get_all_by_service_id(context, self._service_id)
    dev_used_by_ceph = [dev.journal for dev in devices]
    available_disk_info_list = []
    name_by_path_dict = self.ceph_driver.get_disks_name_by_path_dict(available_disk_name)
    name_by_uuid_dict = self.ceph_driver.get_disks_name_by_uuid_dict(available_disk_name)
    for disk in available_disk_name:
        by_path_name = name_by_path_dict.get(disk, '')
        by_uuid_name = name_by_uuid_dict.get(disk, '')
        if not disk in dev_used_by_ceph and not by_path_name in dev_used_by_ceph \
                and not by_uuid_name in dev_used_by_ceph:
            available_disk_info_list.append({
                'disk_name': disk,
                'by_path': by_path_name,
                'by_uuid': by_uuid_name,
            })
    LOG.info('available_disk_info_list=====%s' % available_disk_info_list)
    return available_disk_info_list
1. Perform "Blockdev--report" to get the block device
2. Remove a bare device such as/DEV/SDA
3. Perform "mount-l" to get the mounted device
4. Removing mounted Devices
5. Performing "PVs--rows" Get a device that does LVM
6. Removal of LVM-made devices
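The driver implementation itself is not reproduced in this article; the following is only a rough sketch of how the six steps above could be combined. The helper run_cmd and all parsing details are assumptions for illustration, not the actual VSM code.

import subprocess

def run_cmd(cmd):
    # assumed helper: run a shell command and return its stdout as a list of lines
    out = subprocess.check_output(cmd, shell=True)
    return out.decode().splitlines()

def sketch_get_available_disks():
    # steps 1-2: list block devices from "blockdev --report" (header line skipped);
    # the extra filtering of bare devices (step 2 above) is omitted in this sketch
    lines = run_cmd("blockdev --report")
    devices = set(line.split()[-1] for line in lines[1:] if line.strip())
    # steps 3-4: drop devices that are already mounted
    mounted = set(line.split()[0] for line in run_cmd("mount -l")
                  if line.startswith("/dev/"))
    # steps 5-6: drop devices that belong to LVM physical volumes
    # ("pvs --rows" prints one row per field; the "PV" row lists the devices)
    pv_rows = [line.split() for line in run_cmd("pvs --rows")
               if line.split() and line.split()[0] == "PV"]
    pvs = set(pv_rows[0][1:]) if pv_rows else set()
    return sorted(devices - mounted - pvs)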
add_new_osd page Add button code Analysis
After filling in the elements required on the page to add an OSD, clicking the Add button triggers the JS checkOSDForm function.
Virtual-storage-manager-2.2.0\source\vsm-dashboard\static\dashboard\js\addosd.js
function checkOSDForm() {
    //check that the required fields are not null
    if ($ctrlJournalDevice.value == ""
        || $ctrlDataDevice.value == ""
        || $ctrlWeight.value == ""
        || $ctrlStorageGroup.value == "") {
        showTip("error", "The fields marked with '*' should not be empty");
        return false;
    }
    //check whether the device path is available or not
    var path_data = {
        "server_id": Server.create().node_id,
        "journal_device_path": $ctrlJournalDevice.value,
        "data_device_path": $ctrlDataDevice.value
    }
    //post the data and check;
    //if the check result is successful, add the OSD
    postData("check_device_path", path_data);
}
1. Check whether the device path is the path of an available disk.
2. If condition 1 is met, put the information just filled in on the page into the item list.
3. If condition 1 is not met, report an error.
Checking whether the device path is an available disk's path: process analysis
Let's look at the process of checking whether the device path is an available disk's path.
checkOSDForm checks the device path through postData.
URL:
/dashboard/vsm/devices-management/check_device_path/
Code:
Source\vsm-dashboard\vsm_dashboard\dashboards\vsm\devices-management\views.py:check_device_path
def check_device_path(request):
    search_opts = json.loads(request.body)
    server_id = search_opts["server_id"]
    data_device_path = search_opts["data_device_path"]
    journal_device_path = search_opts["journal_device_path"]
    if data_device_path == journal_device_path:
        status_json = {"status": "Failed",
                       'message': 'data_device_path and journal_device_path can not be the same hard disk'}
    else:
        ret = vsmapi.get_available_disks(request, search_opts={
            "server_id": server_id,
            "path": [data_device_path, journal_device_path]
        })
        if ret["ret"] == 1:
            status_json = {"status": "OK"}
        else:
            status_json = {"status": "Failed", 'message': ret.get('message')}
    status_data = json.dumps(status_json)
    return HttpResponse(status_data)
views.py:check_device_path calls source\vsm-dashboard\vsm_dashboard\api\vsm.py:get_available_disks
Code:
Source\vsm-dashboard\vsm_dashboard\api\vsm.py:get_available_disks
def get_available_disks(request, search_opts):
    return vsmclient(request).devices.get_available_disks(search_opts=search_opts)
The call then goes into vsmclient, to Virtual-storage-manager-2.2.0\source\python-vsmclient\vsmclient\v1\devices.py:DeviceManager.get_available_disks.
In the DeviceManager.get_available_disks function:
1. Get the available disks from the VSM-API via a REST request.
2. If result_mode == 'get_disks', return the available disks (this is the "get available devices" process described earlier).
3. In path mode, check whether each given path is among the available disks' paths.
def get_available_disks(self, search_opts=None):
    """Get a list of available disks."""
    if search_opts is None:
        search_opts = {}
    qparams = {}
    for opt, val in search_opts.iteritems():
        if val:
            qparams[opt] = val
    query_string = "?%s" % urllib.urlencode(qparams) if qparams else ""
    resp, body = self.api.client.get("/devices/get_available_disks%s" % (query_string))
    body = body.get("available_disks")
    result_mode = search_opts.get('result_mode')
    if result_mode == 'get_disks':
        return {'disks': body}
    ret = {"ret": 1}
    message = []
    paths = search_opts.get("path")
    disks = []
    for disk in body:
        disk_name = disk.get('disk_name', '')
        by_path = disk.get('by_path', '')
        by_uuid = disk.get('by_uuid', '')
        if disk_name:
            disks.append(disk_name)
        if by_path:
            disks.append(by_path)
        if by_uuid:
            disks.append(by_uuid)
    if paths:
        unaviable_paths = [path for path in paths if path not in disks]
        if unaviable_paths:
            message.append('There is no %s' % (','.join(unaviable_paths)))
    if message:
        ret = {"ret": 0, 'message': ','.join(message)}
    return ret
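The two calling modes can be illustrated as follows. This is only a usage sketch: "client" is assumed to be an already constructed vsmclient instance, and the server_id, paths, and returned values are made-up examples.

# 1. result_mode='get_disks' returns the raw list of available disks.
disks = client.devices.get_available_disks(
    search_opts={"server_id": 1, "result_mode": "get_disks"})
# -> {'disks': [{'disk_name': '/dev/sdb', 'by_path': '...', 'by_uuid': '...'}, ...]}

# 2. path mode checks whether the given paths are among the available disks.
check = client.devices.get_available_disks(
    search_opts={"server_id": 1,
                 "path": ["/dev/sdb", "/dev/sdc"]})
# -> {'ret': 1} when all paths are available,
#    {'ret': 0, 'message': 'There is no /dev/sdc'} otherwise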
add_new_osd page Submit button Code Analysis
Clicking the Submit button triggers the JS addOSD() function.
Virtual-storage-manager-2.2.0\source\vsm-dashboard\static\dashboard\js\addosd.js
function addOSD() {
    var osd_list = [];
    var osd_items = $(".osd-item");
    if (osd_items.length == 0) {
        showTip("error", "Please add the OSD");
        return false;
    }
    for (var i = 0; i < osd_items.length; i++) {
        var osd = {
            "server_name": osd_items[i].children[1].innerHTML,
            "storage_group_name": osd_items[i].children[3].innerHTML,
            "osd_location": osd_items[i].children[4].innerHTML,
            "weight": osd_items[i].children[2].innerHTML,
            "journal": osd_items[i].children[5].innerHTML,
            "data": osd_items[i].children[6].innerHTML
        }
        osd_list.push(osd);
    }
    var post_data = {"disks": []};
    //generate the server data
    var server_list = [];
    for (var i = 0; i < osd_list.length; i++) {
        var isExsitServer = false;
        for (var j = 0; j < server_list.length; j++) {
            if (osd_list[i].server_name == server_list[j].server_name) {
                isExsitServer = true;
                break;
            }
        }
        if (isExsitServer == false) {
            server = {
                "server_name": osd_list[i].server_name,
                "osdinfo": []
            };
            server_list.push(server);
        }
    }
    //generate the OSD data
    for (var i = 0; i < osd_list.length; i++) {
        for (var j = 0; j < server_list.length; j++) {
            if (osd_list[i].server_name == server_list[j].server_name) {
                var osd = {
                    "storage_group_name": osd_list[i].storage_group_name,
                    "osd_location": osd_list[i].osd_location,
                    "weight": osd_list[i].weight,
                    "journal": osd_list[i].journal,
                    "data": osd_list[i].data
                }
                server_list[j].osdinfo.push(osd);
            }
        }
    }
    //exe add osd
    post_data.disks = server_list;
    console.log(post_data);
    postData("add_new_osd_action", post_data);
}
1. Organize the OSD items into per-server OSD data (an illustrative payload is shown below).
2. Execute add_new_osd_action.
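To make the data flow concrete, the post_data built by addOSD and received by add_new_osd_action has roughly the shape below; the concrete values are made up for illustration.

# Illustrative post_data (values are examples only):
post_data = {
    "disks": [
        {
            "server_name": "server-1",
            "osdinfo": [
                {
                    "storage_group_name": "capacity",
                    "osd_location": "rack1",
                    "weight": "1.0",
                    "journal": "/dev/sdb1",
                    "data": "/dev/sdc",
                },
            ],
        },
    ],
}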
addOSD posts this data through postData.
URL:
/dashboard/vsm/devices-management/add_new_osd_action/
Code:
Source\vsm-dashboard\vsm_dashboard\dashboards\vsm\devices-management\views.py:add_new_osd_action
from vsm_dashboard.api import vsm as vsmapi
...
def add_new_osd_action(request):
    data = json.loads(request.body)
    print 'data----7777==', data
    #vsmapi.add_new_disks_to_cluster(request, data)
    vsmapi.add_batch_new_disks_to_cluster(request, data)
    status_json = {"status": "OK"}
    status_data = json.dumps(status_json)
    return HttpResponse(status_data)
From the import at the top of views.py, vsmapi is source\vsm-dashboard\vsm_dashboard\api\vsm.py, so the call goes to vsm.py:add_batch_new_disks_to_cluster.
def add_batch_new_disks_to_cluster(request, body):
    return vsmclient(request).osds.add_batch_new_disks_to_cluster(body)
The call then goes into vsmclient, to source\python-vsmclient\vsmclient\v1\osds.py:OsdManager.add_batch_new_disks_to_cluster.
In the OsdManager.add_batch_new_disks_to_cluster function, the OSDs are submitted to the VSM-API in bulk via a REST request.
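The client-side code is not reproduced here; assuming it mirrors the DeviceManager pattern shown above, it would look roughly like the sketch below. The endpoint path and method body are assumptions, not the actual OsdManager code.

# Rough sketch only, following the DeviceManager pattern (endpoint path assumed):
def add_batch_new_disks_to_cluster(self, body):
    # POST the per-server disk list to the VSM-API
    resp, ret = self.api.client.post(
        "/osds/add_batch_new_disks_to_cluster", body=body)
    return ret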
vsm\vsm\api\v1\osds.py:Controller.add_batch_new_disks_to_cluster
def add_batch_new_disks_to_cluster(self, req, body):
    context = req.environ['vsm.context']
    LOG.info("batch_osd_add body= %s" % body)
    ret = self.scheduler_api.add_batch_new_disks_to_cluster(context, body)
    LOG.info("batch_osd_add ret= %s" % ret)
    return ret
Controller.add_batch_new_disks_to_cluster then calls
source\vsm\vsm\scheduler\api.py:API.add_batch_new_disks_to_cluster -> source\vsm\vsm\scheduler\rpcapi.py:SchedulerAPI.add_batch_new_disks_to_cluster
def add_batch_new_disks_to_cluster(self, ctxt, body=None):
    return self.call(ctxt, self.make_msg('add_batch_new_disks_to_cluster', body=body))
The VSM-API process thus remotely invokes add_batch_new_disks_to_cluster in the scheduler process, entering
source\vsm\vsm\scheduler\manager.py:SchedulerManager.add_batch_new_disks_to_cluster. In SchedulerManager.add_batch_new_disks_to_cluster,
it takes the OSDs to be added for each server and adds them by calling SchedulerManager.add_new_disks_to_cluster.
def add_batch_new_disks_to_cluster(self, context, body):
    """
    :param context:
    :param body: {"disks": [
                      {'server_id': '1', 'osdinfo': [{'storage_group_id': ...,
                                                      'weight': ...,
                                                      'journal': ...,
                                                      'data': ...}, {}]},
                      {'server_id': '2', 'osdinfo': [{'storage_group_id': ...,
                                                      'weight': ...,
                                                      'journal': ...,
                                                      'data': ...}, {}]},
                  ]}
    :return:
    """
    disks = body.get('disks', [])
    try:
        for disk_in_same_server in disks:
            self.add_new_disks_to_cluster(context, disk_in_same_server)
    except:
        return {"message": "data error"}
    return {"message": "success"}
def add_new_disks_to_cluster(self, context, body):
    server_id = body.get('server_id', None)
    server_name = body.get('server_name', None)
    if server_id is not None:
        server = db.init_node_get_by_id(context, id=server_id)
    elif server_name is not None:
        server = db.init_node_get_by_host(context, host=server_name)
    self._agent_rpcapi.add_new_disks_to_cluster(context, body, server['host'])
    new_osd_count = int(server["data_drives_number"]) + len(body['osdinfo'])
    values = {"data_drives_number": new_osd_count}
    self._conductor_rpcapi.init_node_update(context,
                                            server["id"],
                                            values)
In SchedulerManager.add_new_disks_to_cluster, the following is done:
1. Query the server information from the database.
2. Call source\vsm\vsm\agent\rpcapi.py:AgentAPI.add_new_disks_to_cluster to add the OSDs to the Ceph cluster.
3. Call source\vsm\vsm\conductor\rpcapi.py:ConductorAPI.init_node_update to update the server's OSD count in the database.
Let's analyze source\vsm\vsm\agent\rpcapi.py:AgentAPI.add_new_disks_to_cluster, the part that adds the OSDs to the Ceph cluster.
def add_new_disks_to_cluster(self, context, body, host):
    topic = rpc.queue_get_for(context, self.topic, host)
    res = self.call(context,
                    self.make_msg('add_new_disks_to_cluster',
                                  body=body),
                    topic,
                    version='1.0', timeout=6000)
In AgentAPI.add_new_disks_to_cluster, a synchronous remote call invokes add_new_disks_to_cluster in the vsm-agent process, entering source\vsm\vsm\agent\manager.py:AgentManager.add_new_disks_to_cluster.
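The body of AgentManager.add_new_disks_to_cluster is not listed in this article. Purely as an orientation aid, a handler of that shape would receive the per-server body and drive the OSD creation for each entry, roughly along the lines of the hypothetical sketch below; the helper _add_osd and its arguments are invented for illustration and are not the real code.

# Hypothetical shape only -- the real AgentManager.add_new_disks_to_cluster differs:
def add_new_disks_to_cluster(self, context, body):
    for osd in body.get('osdinfo', []):
        # each entry carries storage group, weight, location, journal and data device
        self._add_osd(context,
                      data_device=osd['data'],
                      journal_device=osd['journal'],
                      storage_group=osd['storage_group_name'],
                      weight=osd['weight'],
                      osd_location=osd['osd_location'])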