Use Ganglia to Monitor MongoDB Clusters

A few days ago I published a post on monitoring a Storm cluster with Ganglia. This article describes how to use Ganglia to monitor MongoDB clusters, so that Ganglia remains our single, unified monitoring system.
1. The Ganglia Extension Mechanism
To monitor MongoDB clusters with Ganglia, you first need to understand Ganglia's extension mechanism. Ganglia's plug-in system offers two ways to extend its monitoring capabilities:

1) Feed data in through the gmetric command (in-band plug-in).

This is the most common approach: a cron job periodically runs the gmetric command to push data into gmond for unified monitoring. It is simple and works well for a small number of metrics, but managing large numbers of custom metrics this way quickly becomes unwieldy.
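As a concrete (hypothetical) illustration, a cron entry could publish the count of established TCP connections to gmond once a minute via gmetric; the metric name and gmetric path below are examples, not part of the original setup:

```shell
# illustrative crontab entry: every minute, push a custom metric into gmond
* * * * * /usr/local/ganglia/bin/gmetric --name=tcp_established \
      --value=$(netstat -ant | grep -c ESTABLISHED) \
      --type=uint32 --units=connections
```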

2) Write additional monitoring modules against gmond's C or Python interface.

Since ganglia 3.1.x, gmond exposes C and Python interfaces through which you can write custom data-collection modules and plug them directly into gmond to monitor your own applications.
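Before writing the MongoDB module, it helps to see the minimal shape such a Python module takes. The sketch below is illustrative only (the metric name and callback are invented): gmond calls metric_init(params) once at startup, then polls each descriptor's call_back with the metric name every collection interval.

```python
# Minimal sketch of a gmond python module (hypothetical metric).
import random

def my_callback(name):
    """Return the current value for the requested metric."""
    return random.randint(0, 100)

def metric_init(params):
    """Called by gmond at startup; must return a list of metric descriptors."""
    return [{
        'name': 'example_random',       # must match the metric name in the pyconf file
        'call_back': my_callback,       # function gmond polls for values
        'time_max': 60,
        'value_type': 'uint',
        'units': 'widgets',
        'slope': 'both',
        'format': '%u',
        'description': 'Example random metric',
        'groups': 'example'
    }]

def metric_cleanup():
    """Called when gmond shuts the module down."""
    pass

if __name__ == '__main__':
    # debug loop: print each metric once, the way gmond would read it
    for d in metric_init({}):
        print('%s = %s' % (d['name'], d['call_back'](d['name'])))
```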
2. Monitoring MongoDB with Python Scripts
We monitor the MongoDB cluster with a Python script, since Python makes the module easy to extend: whenever new monitoring information is needed, we simply add it to the corresponding .py script. This keeps the module convenient, scalable, and easy to port.
2.1 Environment Configuration
To extend Ganglia monitoring with Python scripts, first check whether the file modpython.so exists. This is the dynamic library through which Ganglia calls Python; to develop Ganglia plug-ins via the Python interface, this module must be compiled and installed. modpython.so lives in the lib (or lib64)/ganglia/ directory under the Ganglia installation directory. If it is present, you can go on to write the scripts below; if not, you need to recompile and install gmond, passing the --with-python option during configuration.
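A quick way to check, with paths following the layout used in this article:

```shell
# check whether the python module DSO was built
ls /usr/local/ganglia/lib64/ganglia/modpython.so

# if it is missing, rebuild gmond with python support enabled
./configure --with-python --prefix=/usr/local/ganglia
make && make install
```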
2.2 Writing the Monitoring Scripts
Open the etc/gmond.conf file under the Ganglia installation directory. On the monitored client you will see the line include ("/usr/local/ganglia/etc/conf.d/*.conf"), which tells the gmond service to scan that directory for monitoring configuration files. We therefore place our monitoring configuration script in the etc/conf.d/ directory and name it XX.conf (or, for Python modules, XX.pyconf); the configuration script for monitoring MongoDB will be named mongodb.pyconf.
1) Examine the modpython.conf file
modpython.conf is located in the etc/conf.d/ directory. Its content is as follows:

 
 
    modules {
      module {
        name = "python_module"    # main module name
        path = "modpython.so"     # dynamic library ganglia needs to run python extension scripts
        params = "/usr/local/ganglia/lib64/ganglia/python_modules"   # directory where the python scripts are stored
      }
    }

    include ("/usr/local/ganglia/etc/conf.d/*.pyconf")   # path where the extension configuration scripts live
So, to extend Ganglia to monitor MongoDB via Python, we place a configuration script and a Python script in the corresponding directories and then restart the Ganglia service. The following sections describe how to write each script.
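In concrete terms, the deployment amounts to something like the following (paths as described above; the service command varies by distribution):

```shell
# copy the config script and the python module into the directories gmond scans
cp mongodb.pyconf /usr/local/ganglia/etc/conf.d/
cp mongodb.py /usr/local/ganglia/lib64/ganglia/python_modules/

# restart gmond so the new module is loaded
service gmond restart
```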
2) Create the mongodb.pyconf script
Note that root privileges are required to create and edit this script, which is stored in the conf.d directory. The MongoDB parameters it collects are listed below.
 
 
    modules {
      module {
        # module name; must match the name of the python script stored under
        # the path given by "/usr/lib64/ganglia/python_modules"
        name = "mongodb"
        language = "python"    # the module is written in python
        # parameter list: all params are passed as a dict (map) to the
        # metric_init(params) function of the python script
        param server_status {
          value = "/path/to/mongo --host <host> --port 27017 --quiet --eval 'printjson(db.serverStatus())'"
        }
        param rs_status {
          value = "/path/to/mongo --host <host> --port 27017 --quiet --eval 'printjson(rs.status())'"
        }
      }
    }

    # list of metrics to collect; any metric defined in the module can be added here
    collection_group {
      collect_every = 30
      time_threshold = 90    # maximum interval between sends
      metric {
        name = "mongodb_opcounters_insert"   # name of the metric in the module
        title = "Inserts"                    # title displayed in the web interface
      }
      metric {
        name = "mongodb_opcounters_query"
        title = "Queries"
      }
      metric {
        name = "mongodb_opcounters_update"
        title = "Updates"
      }
      metric {
        name = "mongodb_opcounters_delete"
        title = "Deletes"
      }
      metric {
        name = "mongodb_opcounters_getmore"
        title = "Getmores"
      }
      metric {
        name = "mongodb_opcounters_command"
        title = "Commands"
      }
      metric {
        name = "mongodb_backgroundFlushing_flushes"
        title = "Flushes"
      }
      metric {
        name = "mongodb_mem_mapped"
        title = "Memory-mapped Data"
      }
      metric {
        name = "mongodb_mem_virtual"
        title = "Process Virtual Size"
      }
      metric {
        name = "mongodb_mem_resident"
        title = "Process Resident Size"
      }
      metric {
        name = "mongodb_extra_info_page_faults"
        title = "Page Faults"
      }
      metric {
        name = "mongodb_globalLock_ratio"
        title = "Global Write Lock Ratio"
      }
      metric {
        name = "mongodb_indexCounters_btree_miss_ratio"
        title = "BTree Page Miss Ratio"
      }
      metric {
        name = "mongodb_globalLock_currentQueue_total"
        title = "Total Operations Waiting for Lock"
      }
      metric {
        name = "mongodb_globalLock_currentQueue_readers"
        title = "Readers Waiting for Lock"
      }
      metric {
        name = "mongodb_globalLock_currentQueue_writers"
        title = "Writers Waiting for Lock"
      }
      metric {
        name = "mongodb_globalLock_activeClients_total"
        title = "Total Active Clients"
      }
      metric {
        name = "mongodb_globalLock_activeClients_readers"
        title = "Active Readers"
      }
      metric {
        name = "mongodb_globalLock_activeClients_writers"
        title = "Active Writers"
      }
      metric {
        name = "mongodb_connections_current"
        title = "Open Connections"
      }
      metric {
        name = "mongodb_connections_current_ratio"
        title = "Percentage of Connections Used"
      }
      metric {
        name = "mongodb_slave_delay"
        title = "Replica Set Slave Delay"
      }
      metric {
        name = "mongodb_asserts_total"
        title = "Asserts per Second"
      }
    }
As you can see, this configuration file follows the same syntax as gmond.conf; refer to the gmond.conf documentation for more details.
3) Create the mongodb.py script
The mongodb.py file goes in the lib64/ganglia/python_modules directory. You will find many Python scripts already in this directory, monitoring disks, memory, the network, MySQL, Redis, and so on; they make good references for writing mongodb.py. Opening a few of them, you will see that each defines a function metric_init(params). As mentioned above, the parameters defined in mongodb.pyconf are passed to this metric_init function.

 
 
    #!/usr/bin/env python
    import json
    import os
    import re
    import socket
    import string
    import time
    import copy

    NAME_PREFIX = 'mongodb_'
    PARAMS = {
        'server_status': '/path/to/mongo --host <host> --port 27017 --quiet --eval "printjson(db.serverStatus())"',
        'rs_status': '/path/to/mongo --host <host> --port 27017 --quiet --eval "printjson(rs.status())"'
    }
    METRICS = {
        'time': 0,
        'data': {}
    }
    LAST_METRICS = copy.deepcopy(METRICS)
    METRICS_CACHE_TTL = 3

    def flatten(d, pre='', sep='_'):
        """Flatten a dict (i.e. dict['a']['b']['c'] => dict['a_b_c'])"""
        new_d = {}
        for k, v in d.items():
            if type(v) == dict:
                new_d.update(flatten(d[k], '%s%s%s' % (pre, k, sep)))
            else:
                new_d['%s%s' % (pre, k)] = v
        return new_d

    def get_metrics():
        """Return all metrics"""
        global METRICS, LAST_METRICS
        if (time.time() - METRICS['time']) > METRICS_CACHE_TTL:
            metrics = {}
            for status_type in PARAMS.keys():
                # get raw metric data
                o = os.popen(PARAMS[status_type])
                # clean up
                metrics_str = ''.join(o.readlines()).strip()  # convert to string
                metrics_str = re.sub('\w+\((.*)\)', r"\1", metrics_str)  # remove functions
                # convert to flattened dict
                try:
                    if status_type == 'server_status':
                        metrics.update(flatten(json.loads(metrics_str)))
                    else:
                        metrics.update(flatten(json.loads(metrics_str), pre='%s_' % status_type))
                except ValueError:
                    metrics = {}

            # update cache
            LAST_METRICS = copy.deepcopy(METRICS)
            METRICS = {
                'time': time.time(),
                'data': metrics
            }
        return [METRICS, LAST_METRICS]

    def get_value(name):
        """Return a value for the requested metric"""
        # get metrics
        metrics = get_metrics()[0]
        # get value
        name = name[len(NAME_PREFIX):]  # remove prefix from name
        try:
            result = metrics['data'][name]
        except StandardError:
            result = 0
        return result

    def get_rate(name):
        """Return change over time for the requested metric"""
        # get metrics
        [curr_metrics, last_metrics] = get_metrics()
        # get rate
        name = name[len(NAME_PREFIX):]  # remove prefix from name
        try:
            rate = float(curr_metrics['data'][name] - last_metrics['data'][name]) / \
                   float(curr_metrics['time'] - last_metrics['time'])
            if rate < 0:
                rate = float(0)
        except StandardError:
            rate = float(0)
        return rate

    def get_opcounter_rate(name):
        """Return change over time for an opcounter metric"""
        master_rate = get_rate(name)
        repl_rate = get_rate(name.replace('opcounters_', 'opcountersRepl_'))
        return master_rate + repl_rate

    def get_globalLock_ratio(name):
        """Return the global lock ratio"""
        try:
            result = get_rate(NAME_PREFIX + 'globalLock_lockTime') / \
                     get_rate(NAME_PREFIX + 'globalLock_totalTime') * 100
        except ZeroDivisionError:
            result = 0
        return result

    def get_indexCounters_btree_miss_ratio(name):
        """Return the btree miss ratio"""
        try:
            result = get_rate(NAME_PREFIX + 'indexCounters_btree_misses') / \
                     get_rate(NAME_PREFIX + 'indexCounters_btree_accesses') * 100
        except ZeroDivisionError:
            result = 0
        return result

    def get_connections_current_ratio(name):
        """Return the percentage of connections used"""
        try:
            result = float(get_value(NAME_PREFIX + 'connections_current')) / \
                     float(get_value(NAME_PREFIX + 'connections_available')) * 100
        except ZeroDivisionError:
            result = 0
        return result

    def get_slave_delay(name):
        """Return the replica set slave delay"""
        # get metrics
        metrics = get_metrics()[0]
        # no point checking my optime if I'm not replicating
        if 'rs_status_myState' not in metrics['data'] or metrics['data']['rs_status_myState'] != 2:
            result = 0
        # compare my optime with the master's
        else:
            master = {}
            slave = {}
            try:
                for member in metrics['data']['rs_status_members']:
                    if member['state'] == 1:
                        master = member
                    if member['name'].split(':')[0] == socket.getfqdn():
                        slave = member
                result = max(0, master['optime']['t'] - slave['optime']['t']) / 1000
            except KeyError:
                result = 0
        return result

    def get_asserts_total_rate(name):
        """Return the total number of asserts per second"""
        return float(reduce(lambda memo, obj: memo + get_rate('%sasserts_%s' % (NAME_PREFIX, obj)),
                            ['regular', 'warning', 'msg', 'user', 'rollovers'], 0))

    def metric_init(lparams):
        """Initialize metric descriptors"""
        global PARAMS
        # set parameters
        for key in lparams:
            PARAMS[key] = lparams[key]
        # define descriptors
        time_max = 60
        groups = 'mongodb'
        descriptors = [
            {
                'name': NAME_PREFIX + 'opcounters_insert',
                'call_back': get_opcounter_rate,
                'time_max': time_max,
                'value_type': 'float',
                'units': 'Inserts/Sec',
                'slope': 'both',
                'format': '%f',
                'description': 'Inserts',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'opcounters_query',
                'call_back': get_opcounter_rate,
                'time_max': time_max,
                'value_type': 'float',
                'units': 'Queries/Sec',
                'slope': 'both',
                'format': '%f',
                'description': 'Queries',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'opcounters_update',
                'call_back': get_opcounter_rate,
                'time_max': time_max,
                'value_type': 'float',
                'units': 'Updates/Sec',
                'slope': 'both',
                'format': '%f',
                'description': 'Updates',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'opcounters_delete',
                'call_back': get_opcounter_rate,
                'time_max': time_max,
                'value_type': 'float',
                'units': 'Deletes/Sec',
                'slope': 'both',
                'format': '%f',
                'description': 'Deletes',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'opcounters_getmore',
                'call_back': get_opcounter_rate,
                'time_max': time_max,
                'value_type': 'float',
                'units': 'Getmores/Sec',
                'slope': 'both',
                'format': '%f',
                'description': 'Getmores',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'opcounters_command',
                'call_back': get_opcounter_rate,
                'time_max': time_max,
                'value_type': 'float',
                'units': 'Commands/Sec',
                'slope': 'both',
                'format': '%f',
                'description': 'Commands',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'backgroundFlushing_flushes',
                'call_back': get_rate,
                'time_max': time_max,
                'value_type': 'float',
                'units': 'Flushes/Sec',
                'slope': 'both',
                'format': '%f',
                'description': 'Flushes',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'mem_mapped',
                'call_back': get_value,
                'time_max': time_max,
                'value_type': 'uint',
                'units': 'MB',
                'slope': 'both',
                'format': '%u',
                'description': 'Memory-mapped Data',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'mem_virtual',
                'call_back': get_value,
                'time_max': time_max,
                'value_type': 'uint',
                'units': 'MB',
                'slope': 'both',
                'format': '%u',
                'description': 'Process Virtual Size',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'mem_resident',
                'call_back': get_value,
                'time_max': time_max,
                'value_type': 'uint',
                'units': 'MB',
                'slope': 'both',
                'format': '%u',
                'description': 'Process Resident Size',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'extra_info_page_faults',
                'call_back': get_rate,
                'time_max': time_max,
                'value_type': 'float',
                'units': 'Faults/Sec',
                'slope': 'both',
                'format': '%f',
                'description': 'Page Faults',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'globalLock_ratio',
                'call_back': get_globalLock_ratio,
                'time_max': time_max,
                'value_type': 'float',
                'units': '%',
                'slope': 'both',
                'format': '%f',
                'description': 'Global Write Lock Ratio',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'indexCounters_btree_miss_ratio',
                'call_back': get_indexCounters_btree_miss_ratio,
                'time_max': time_max,
                'value_type': 'float',
                'units': '%',
                'slope': 'both',
                'format': '%f',
                'description': 'BTree Page Miss Ratio',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'globalLock_currentQueue_total',
                'call_back': get_value,
                'time_max': time_max,
                'value_type': 'uint',
                'units': 'Operations',
                'slope': 'both',
                'format': '%u',
                'description': 'Total Operations Waiting for Lock',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'globalLock_currentQueue_readers',
                'call_back': get_value,
                'time_max': time_max,
                'value_type': 'uint',
                'units': 'Operations',
                'slope': 'both',
                'format': '%u',
                'description': 'Readers Waiting for Lock',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'globalLock_currentQueue_writers',
                'call_back': get_value,
                'time_max': time_max,
                'value_type': 'uint',
                'units': 'Operations',
                'slope': 'both',
                'format': '%u',
                'description': 'Writers Waiting for Lock',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'globalLock_activeClients_total',
                'call_back': get_value,
                'time_max': time_max,
                'value_type': 'uint',
                'units': 'Clients',
                'slope': 'both',
                'format': '%u',
                'description': 'Total Active Clients',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'globalLock_activeClients_readers',
                'call_back': get_value,
                'time_max': time_max,
                'value_type': 'uint',
                'units': 'Clients',
                'slope': 'both',
                'format': '%u',
                'description': 'Active Readers',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'globalLock_activeClients_writers',
                'call_back': get_value,
                'time_max': time_max,
                'value_type': 'uint',
                'units': 'Clients',
                'slope': 'both',
                'format': '%u',
                'description': 'Active Writers',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'connections_current',
                'call_back': get_value,
                'time_max': time_max,
                'value_type': 'uint',
                'units': 'Connections',
                'slope': 'both',
                'format': '%u',
                'description': 'Open Connections',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'connections_current_ratio',
                'call_back': get_connections_current_ratio,
                'time_max': time_max,
                'value_type': 'float',
                'units': '%',
                'slope': 'both',
                'format': '%f',
                'description': 'Percentage of Connections Used',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'slave_delay',
                'call_back': get_slave_delay,
                'time_max': time_max,
                'value_type': 'uint',
                'units': 'Seconds',
                'slope': 'both',
                'format': '%u',
                'description': 'Replica Set Slave Delay',
                'groups': groups
            },
            {
                'name': NAME_PREFIX + 'asserts_total',
                'call_back': get_asserts_total_rate,
                'time_max': time_max,
                'value_type': 'float',
                'units': 'Asserts/Sec',
                'slope': 'both',
                'format': '%f',
                'description': 'Asserts',
                'groups': groups
            }
        ]
        return descriptors

    def metric_cleanup():
        """Cleanup"""
        pass

    # the following code is for debugging and testing
    if __name__ == '__main__':
        descriptors = metric_init(PARAMS)
        while True:
            for d in descriptors:
                print (('%s = %s') % (d['name'], d['format'])) % (d['call_back'](d['name']))
            print ''
            time.sleep(METRICS_CACHE_TTL)
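The parsing technique the script relies on, stripping mongo shell wrappers such as ISODate(...) with a regular expression, parsing the JSON, then flattening the nested document, can be tried out in isolation (the sample document below is invented for illustration):

```python
import json
import re

def flatten(d, pre='', sep='_'):
    """Flatten a nested dict: {'mem': {'mapped': 80}} -> {'mem_mapped': 80}."""
    new_d = {}
    for k, v in d.items():
        if type(v) == dict:
            new_d.update(flatten(v, '%s%s%s' % (pre, k, sep)))
        else:
            new_d['%s%s' % (pre, k)] = v
    return new_d

# invented sample resembling a fragment of db.serverStatus() output;
# ISODate(...) is a mongo shell wrapper that plain JSON parsers reject
raw = '{ "localTime" : ISODate("2024-01-01"), "mem" : { "mapped" : 80, "virtual" : 2048 }, "ok" : 1 }'

# strip shell function wrappers, keeping only their arguments, then parse and flatten
cleaned = re.sub(r'\w+\((.*)\)', r'\1', raw)
metrics = flatten(json.loads(cleaned))
print(metrics['mem_mapped'])    # -> 80
```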
Every Python extension script must implement the functions metric_init(params) and metric_cleanup().
metric_init() is called when the module is initialized and must return a metric descriptor dict, or a list of such dicts; mongodb.py returns a list.

Each metric descriptor dict has the following form:

    d = {
        'name': '<your_metric_name>',    # must match the name in the pyconf file
        'call_back': <call_back_function>,
        'time_max': int(<your_time_max>),
        'value_type': '<string | uint | float | double>',
        'units': '<your_units>',
        'slope': '<zero | positive | negative | both>',
        'format': '<your_format>',
        'description': '<your_description>'
    }
metric_cleanup() is called when the module shuts down; it returns nothing.
4) View the monitoring statistics on the web front-end
Once the scripts are in place, restart the gmond service; the new MongoDB graphs will appear in the Ganglia web interface.
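If the graphs do not appear, you can check whether gmond is collecting the metrics by querying its XML dump directly (8649 is gmond's default tcp_accept_channel port; adjust if yours differs):

```shell
# dump gmond's state and look for the mongodb metrics
nc localhost 8649 | grep mongodb_
```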
