The online part of our real-time pipeline is built on Storm. To monitor data delay, the Storm topology writes the log timestamp into Redis as it processes each log, and Zabbix then monitors the delay from there. Because new jobs go online frequently, configuring the monitoring items by hand becomes tedious, so to free up manpower this needs to be automated.
We had previously used Zabbix's LLD (low-level discovery) feature, with its built-in macro variables, for network-interface and partition monitoring. Newer versions of Zabbix also support custom LLD. The implementation steps are as follows:
1. Define a discovery rule (a UserParameter key) in the template that invokes a script and returns JSON in the format Zabbix specifies (i.e. returns custom macro variables), and configure the discovery rule correctly (filters, etc.).
The required data format can be seen in the official Zabbix documentation, and also in the agent log on a live host:
143085:20141127:000548.967 requested [vfs.fs.discovery]
143085:20141127:000548.967 sending back [{"data":[{"{#FSNAME}":"\/","{#FSTYPE}":"rootfs"},{"{#FSNAME}":"\/proc\/sys\/fs\/binfmt_misc","{#FSTYPE}":"binfmt_misc"},{"{#FSNAME}":"\/data","{#FSTYPE}":"ext4"}]}]
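As a minimal sketch of producing this format yourself (the filesystem entries below are just examples), using json.dumps guarantees the quoting and escaping are always valid:

```python
import json

def build_lld(entries):
    # Wrap a list of macro dicts in the {"data": [...]} envelope
    # that Zabbix LLD expects.
    return json.dumps({"data": entries})

payload = build_lld([
    {"{#FSNAME}": "/", "{#FSTYPE}": "rootfs"},
    {"{#FSNAME}": "/data", "{#FSTYPE}": "ext4"},
])
print(payload)
```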
For example, the key that returns the JSON data in our setup:
UserParameter=storm.delay.discovery,python2.6 /apps/sh/zabbix_scripts/storm/storm_delay_discovery.py
and verify the accuracy of the returned data with:
zabbix_get -s 127.0.0.1 -k storm.delay.discovery
The content of storm_delay_discovery.py is as follows:
#!/usr/bin/python
# Scans Redis for Storm delay hashes and prints Zabbix LLD JSON (Python 2).
import redis

_hashtables = []
_continue = True
_alldict = {}
_alllist = []

class RedisException(Exception):
    def __init__(self, errorlog):
        self.errorlog = errorlog

    def __str__(self):
        return "error log is %s" % (self.errorlog)

def scan_one(cursor, conn):
    # Run one SCAN step, collecting keys that belong to Storm delay hashes.
    try:
        cursor_next, keys = conn.scan(cursor)
        for line in keys:
            if (line.startswith("com-vip-storm") or line.startswith("stormdelay_")) \
                    and str(line) != "stormdelay_riskcontroll":
                _hashtables.append(line)
        return cursor_next
    except Exception, e:
        raise RedisException(str(e))

def scan_all(conn):
    # Iterate SCAN until the cursor wraps back around to 0.
    try:
        cursor1 = scan_one('0', conn)
        global _continue
        while _continue:
            cursor2 = scan_one(cursor1, conn)
            if int(cursor2) == 0:
                _continue = False
            else:
                cursor1 = cursor2
                _continue = True
    except Exception, e:
        raise RedisException(str(e))

def hget_fields(conn, hashname):
    # Build one macro dict per hash field, so every field becomes
    # its own discovered entity.
    for field in conn.hkeys(hashname):
        onedict = {}
        onedict["{#STORMHASHNAME}"] = hashname
        onedict["{#STORMHASHFIELD}"] = field
        _alllist.append(onedict)

if __name__ == '__main__':
    try:
        r = redis.StrictRedis(host='xxxx', port=xxx, db=0)
        scan_all(r)
        for hashtable in _hashtables:
            hget_fields(r, hashtable)
        _alldict["data"] = _alllist
        # Zabbix requires double-quoted JSON; a dict's str() uses single quotes.
        print str(_alldict).replace("'", '"')
    except Exception, e:
        print -1
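On Python 3 with a recent redis-py, the same discovery logic can be sketched more compactly; scan_iter hides the cursor bookkeeping and json.dumps handles quoting. The key-prefix filter mirrors the script above; the Redis host/port are left to the caller:

```python
import json

def interesting(key):
    # Same key filter as the discovery script above.
    return (key.startswith("com-vip-storm") or key.startswith("stormdelay_")) \
        and key != "stormdelay_riskcontroll"

def build_storm_lld(hash_fields):
    # hash_fields: iterable of (hashname, field) pairs -> Zabbix LLD JSON.
    data = [{"{#STORMHASHNAME}": h, "{#STORMHASHFIELD}": f}
            for h, f in hash_fields]
    return json.dumps({"data": data})

def discover(conn):
    # conn: a redis.StrictRedis client; scan_iter yields keys lazily.
    pairs = []
    for key in conn.scan_iter():
        if interesting(key):
            for field in conn.hkeys(key):
                pairs.append((key, field))
    return build_storm_lld(pairs)
```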
2. Set Item/graph/trigger Prototypes:
Here, take item prototypes as an example (you also need to define the key); the key's parameters are macro variables.
For example: Free inodes on {#FSNAME} (percentage) ---> vfs.fs.inode[{#FSNAME},pfree]
In our case, the item key uses the macro variables returned by the discovery script:
storm_delay[hget,{#STORMHASHNAME},{#STORMHASHFIELD}]
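On the agent side, a key like this would typically be backed by a flexible UserParameter that receives the expanded macros as positional arguments. The script name and path below are purely illustrative, not from the original setup:

```
# Hypothetical agent config: Zabbix expands the prototype's macros per
# discovered entry and passes them through as $1..$3.
UserParameter=storm_delay[*],python2.6 /apps/sh/zabbix_scripts/storm/storm_delay_get.py $1 $2 $3
```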
3. Link the template containing the LLD rule to the host.
Finally, adding and updating screens for monitoring can also be automated with the screen.create / screenitem.update API calls.
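The Zabbix API is plain JSON-RPC 2.0, so the requests behind those two calls can be sketched as below. The auth token, screen name, and ids are placeholders, and sending the body over HTTP is left to whatever client you prefer:

```python
import json

def rpc(method, params, auth, req_id=1):
    # Build one Zabbix API JSON-RPC 2.0 request body.
    return {
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "auth": auth,
        "id": req_id,
    }

# Create a 1x2 screen for a newly discovered job (names/ids are examples).
create_req = rpc("screen.create",
                 {"name": "storm-delay", "hsize": 1, "vsize": 2},
                 auth="PLACEHOLDER_TOKEN")

# Point an existing screen cell at a different graph.
update_req = rpc("screenitem.update",
                 {"screenitemid": "1", "resourceid": "123"},
                 auth="PLACEHOLDER_TOKEN")

body = json.dumps(create_req)
```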
This article is from the "Food and Light Blog" blog, please make sure to keep this source http://caiguangguang.blog.51cto.com/1652935/1583536