Environment: Linux, Python 3
Function: simulate the Linux curl command and collect information from a URL
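As a minimal sketch of the idea, a single urlopen call already behaves like a bare `curl <url>` GET. The helper name `fetch` and the timeout value are illustrative choices, not part of the original examples; the `data:` URL just lets the sketch run without a network connection.

```python
import urllib.request

def fetch(url, timeout=10):
    """Fetch a URL and return its body as text, roughly like `curl <url>`.

    The helper name and the timeout are illustrative, not from the original.
    """
    with urllib.request.urlopen(url, timeout=timeout) as response:
        return response.read().decode()

# A data: URL carries its own payload, so no server is needed to try this.
print(fetch("data:text/plain,hello"))  # hello
```
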
Example 1: Collect information from the HBase cluster management page
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Collect the required information from a URL: capture the requests per
second and the region count from the HBase cluster management page."""
import re
import urllib.request

pagehandler = urllib.request.urlopen("http://127.0.0.1:60010/master-status?filter=general#baseStats")
content = pagehandler.read().decode()
# (.*?) matches any characters except newline, non-greedily; re.S makes .
# match all characters including line breaks
result = re.findall(r'.*Total(.*?)Used Heap.*', content, re.S)
# Extract the numeric values: the cluster's requests per second and region
# count. Note that the extracted values are strings.
msg = re.findall(r'<td>(\d+)</td>', result[0])
print(msg)
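Because the snippet above needs a live HBase master to run, the same two-stage regex extraction can be checked offline against a canned fragment of the page. The sample HTML below is invented for illustration; only its `Total ... Used Heap` framing and `<td>` cells matter.

```python
import re

def extract_stats(content):
    """Pull the <td> numbers that sit between 'Total' and 'Used Heap'."""
    result = re.findall(r'.*Total(.*?)Used Heap.*', content, re.S)
    return re.findall(r'<td>(\d+)</td>', result[0])

# Invented fragment mimicking the master-status table layout.
sample = "<div>Total <td>120</td><td>35</td> Used Heap</div>"
print(extract_stats(sample))  # ['120', '35']
```
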
Example 2: Collect queue-backlog data from the Kafka management interface
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""On the Kafka management page, the Lag column is the number of blocked
messages. That value cannot be collected directly from the information the
URL returns; it has to be computed as logsize - offset."""
import json
import urllib.request

pagehandler = urllib.request.urlopen("http://127.0.0.1:8086/group/test_group")
content = pagehandler.read().decode()
m = json.loads(content)
topic_dict = {}
for i in m['offsets']:
    blocking_num = 0
    # print(i['topic'], i['offset'], i['logsize'])
    blocking_num += (i['logsize'] - i['offset'])  # compute the queue backlog
    if i['topic'] in topic_dict:
        # accumulate the results in the dictionary as topic_name: blocking_num
        topic_dict[i['topic']] += blocking_num
    else:
        topic_dict[i['topic']] = blocking_num
# print(topic_dict)
for key in topic_dict:
    if topic_dict[key] > 3000:
        print("topic:", key, ", blocking msg num:", topic_dict[key])
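The logsize - offset arithmetic can likewise be exercised without a running Kafka monitor. The JSON payload below is made up, but it follows the shape the loop above expects: an 'offsets' list whose entries carry topic, offset, and logsize fields.

```python
import json

def blocked_by_topic(payload):
    """Sum (logsize - offset) per topic from a consumer-group JSON payload."""
    data = json.loads(payload)
    topic_dict = {}
    for i in data['offsets']:
        # accumulate the backlog per topic
        topic_dict[i['topic']] = topic_dict.get(i['topic'], 0) + (i['logsize'] - i['offset'])
    return topic_dict

# Invented payload mirroring the /group/<name> response shape.
sample = json.dumps({"offsets": [
    {"topic": "test_topic", "partition": 0, "offset": 100, "logsize": 150},
    {"topic": "test_topic", "partition": 1, "offset": 200, "logsize": 260},
    {"topic": "other", "partition": 0, "offset": 10, "logsize": 10},
]})
print(blocked_by_topic(sample))  # {'test_topic': 110, 'other': 0}
```
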
[Python] Examples of using urllib to collect information from a page