[Experience Exchange] Deploy cAdvisor + InfluxDB + Grafana Docker monitoring on Mesos Marathon

Source: Internet
Author: User
Tags: docker, docker ps, docker run, grafana, influxdb, cadvisor, mesos, marathon

Google cAdvisor is a great tool for monitoring Docker containers, but by default it only shows real-time data and does not store history. To keep and display historical data and build custom dashboards, you can integrate cAdvisor with InfluxDB and Grafana. Brian Christner describes this deployment in his article "How to setup Docker monitoring".

Brian's approach deploys everything by manually running docker run commands. To be able to deploy automatically on the Mesos Marathon platform, I made some modifications to his method; my deployment process follows.

Note:

Readers should already know the basic operations of Mesos, Marathon, InfluxDB and especially Grafana.

1. Setting up shared storage

To give the InfluxDB database and the Grafana configuration persistent storage, so that historical data survives redeployment, I use NFS4 as shared storage: the /data directory of the InfluxDB container and the /var/lib/grafana directory of the Grafana container are mapped to the NFS4 share.

1.1 NFS4 server (/etc/exports):

/var/nfsshare 172.31.17.0/24(rw,sync,no_root_squash,no_all_squash)

1.2 Mesos slaves (/etc/fstab):

172.31.17.74:/var/nfsshare /var/nfsshare nfs defaults 0 0
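Putting the two fragments together: the first line belongs in /etc/exports on the server, the second in /etc/fstab on each slave. A minimal sketch that assembles the slave-side mount commands (server IP and paths taken from the lines above; the sketch prints the commands to run as root):

```shell
# NFS settings taken from the two configuration lines above
NFS_SERVER="172.31.17.74"
NFS_EXPORT="/var/nfsshare"
MOUNT_POINT="/var/nfsshare"

# Commands to run as root on each Mesos slave
echo "mkdir -p $MOUNT_POINT"
echo "mount -t nfs4 $NFS_SERVER:$NFS_EXPORT $MOUNT_POINT"

# /etc/fstab entry so the mount survives reboots
FSTAB_LINE="$NFS_SERVER:$NFS_EXPORT $MOUNT_POINT nfs defaults 0 0"
echo "$FSTAB_LINE"
```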

2. Pull the image file

Pull the following images on each Mesos slave:

tutum/influxdb

google/cadvisor

grafana/grafana
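If the slaves can reach Docker Hub directly, a short loop prints the pull command for all three images (the deployment below actually serves them from a private registry at 172.31.17.36:5000, so adjust the image names accordingly):

```shell
# The three images this setup needs
IMAGES="tutum/influxdb google/cadvisor grafana/grafana"

# Print the pull command for each image; drop the 'echo' to actually pull
for img in $IMAGES; do
  echo "docker pull $img"
done
```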

3. Set up DNS or hosts

172.31.17.34 influxdb.gkkxd.com
172.31.17.34 cadvisor-1.gkkxd.com
172.31.17.34 cadvisor-2.gkkxd.com
172.31.17.34 cadvisor-3.gkkxd.com
172.31.17.34 grafana.gkkxd.com

4. Deploying InfluxDB
    • InfluxDB needs only one instance;
    • The UI is published via a marathon-lb virtual host;
    • Data port 8086 is published via a service port to the slaves where marathon-lb runs;
    • The service port must be fixed, e.g. 28086, so that cAdvisor and Grafana can connect to it;
    • The data directory /data is mapped to the NFS4 shared directory;
{
  "id": "influxdb",
  "instances": 1,
  "cpus": 0.5,
  "mem": 128,
  "constraints": [["hostname", "LIKE", "slave[1-3]"]],
  "labels": {
    "HAPROXY_GROUP": "external",
    "HAPROXY_0_VHOST": "influxdb.gkkxd.com"
  },
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "172.31.17.36:5000/influxdb:latest",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8083, "hostPort": 0, "servicePort": 0, "protocol": "tcp" },
        { "containerPort": 8086, "hostPort": 0, "servicePort": 28086, "protocol": "tcp" }
      ]
    },
    "volumes": [
      { "containerPath": "/etc/localtime", "hostPath": "/etc/localtime", "mode": "RO" },
      { "containerPath": "/data", "hostPath": "/var/nfsshare/influxdb", "mode": "RW" }
    ]
  }
}
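A definition like this is submitted through Marathon's REST API; a minimal sketch, assuming the JSON above is saved as influxdb.json and Marathon listens on master1:8080 (the sketch prints the call rather than executing it):

```shell
MARATHON_URL="http://master1:8080"
APP_JSON="influxdb.json"

# Deploy the app; Marathon answers 201 Created on success
DEPLOY_CMD="curl -X POST -H 'Content-Type: application/json' $MARATHON_URL/v2/apps -d @$APP_JSON"
echo "$DEPLOY_CMD"
```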

Open the firewall port on the marathon-lb hosts (a one-shot Marathon app that adds the port and then deletes itself):

{
  "id": "influxdb-fw",
  "instances": 2,
  "cpus": 0.2,
  "mem": 128,
  "cmd": "firewall-cmd --add-port=28086/tcp && sleep 3 && curl -X DELETE master1:8080/v2/apps/influxdb-fw",
  "constraints": [["hostname", "LIKE", "slave[4-5]"]]
}

5. Create a monitoring database

Open http://influxdb.gkkxd.com, set Host and Port to influxdb.gkkxd.com and 28086, respectively:

Create a separate database for each mesos slave, respectively: cadvisor_1, cadvisor_2, cadvisor_3 ...
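The same databases can be created from the command line instead of the admin UI; a sketch assuming an InfluxDB 0.9-style /query endpoint (older builds of the tutum/influxdb image expose a different API, so adjust to your version):

```shell
INFLUX_URL="http://influxdb.gkkxd.com:28086"

# Print one CREATE DATABASE call per slave; drop the 'echo' to execute
for db in cadvisor_1 cadvisor_2 cadvisor_3; do
  echo "curl -X POST '$INFLUX_URL/query' --data-urlencode 'q=CREATE DATABASE $db'"
done
```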

6. Deploying Cadvisor
    • Each Mesos slave deploys one instance;
    • The UI is published via a marathon-lb virtual host;
    • storage_driver is set to influxdb;
{
  "id": "cadvisor-6",
  "instances": 1,
  "cpus": 0.5,
  "mem": 128,
  "constraints": [["hostname", "LIKE", "slave[6]"]],
  "labels": {
    "HAPROXY_GROUP": "external",
    "HAPROXY_0_VHOST": "cadvisor-6.gkkxd.com"
  },
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "172.31.17.36:5000/cadvisor:latest",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0, "servicePort": 0, "protocol": "tcp" }
      ]
    },
    "volumes": [
      { "containerPath": "/etc/localtime", "hostPath": "/etc/localtime", "mode": "RO" },
      { "containerPath": "/rootfs", "hostPath": "/", "mode": "RO" },
      { "containerPath": "/var/run", "hostPath": "/var/run", "mode": "RW" },
      { "containerPath": "/sys", "hostPath": "/sys", "mode": "RO" },
      { "containerPath": "/var/lib/docker", "hostPath": "/var/lib/docker", "mode": "RO" },
      { "containerPath": "/cgroup", "hostPath": "/cgroup", "mode": "RO" }
    ]
  },
  "args": ["-storage_driver", "influxdb", "-storage_driver_host", "influxdb.gkkxd.com:28086", "-storage_driver_db", "cadvisor_6"]
}
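For comparison, this is roughly the hand-run docker run invocation from Brian Christner's article that the Marathon definition replaces, reconstructed from the volumes and args above (a sketch, printed rather than executed):

```shell
# Equivalent manual cAdvisor start on slave 6
RUN_CMD="docker run -d --name=cadvisor -p 8080:8080 \
  -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys:ro \
  -v /var/lib/docker:/var/lib/docker:ro \
  google/cadvisor:latest \
  -storage_driver=influxdb \
  -storage_driver_host=influxdb.gkkxd.com:28086 \
  -storage_driver_db=cadvisor_6"
echo "$RUN_CMD"
```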

View the cAdvisor UI:

http://cadvisor-6.gkkxd.com

7. Deploying Grafana
    • Only one instance needs to be deployed;
    • The UI is published via a marathon-lb virtual host;
    • The data directory /var/lib/grafana is mapped to NFS4 shared storage so the configuration persists;
{
  "id": "grafana",
  "instances": 1,
  "cpus": 0.5,
  "mem": 128,
  "constraints": [["hostname", "LIKE", "slave[4-5]"]],
  "labels": {
    "HAPROXY_GROUP": "external",
    "HAPROXY_0_VHOST": "grafana.gkkxd.com"
  },
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "172.31.17.36:5000/grafana:latest",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 3000, "hostPort": 0, "servicePort": 0, "protocol": "tcp" }
      ]
    },
    "volumes": [
      { "containerPath": "/etc/localtime", "hostPath": "/etc/localtime", "mode": "RO" },
      { "containerPath": "/var/lib/grafana", "hostPath": "/var/nfsshare/grafana", "mode": "RW" }
    ]
  }
}

8. Create a data analysis diagram

Open the Grafana UI:

http://grafana.gkkxd.com/

8.1 Setting up the data source:
    • Type: InfluxDB
    • URL: http://influxdb.gkkxd.com:28086
    • Access: direct
    • Database: select one slave's database, e.g. cadvisor_1
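The same data source can also be registered through Grafana's HTTP API instead of the UI; a sketch assuming the default admin:admin credentials (the payload mirrors the settings above; printed rather than executed):

```shell
GRAFANA_URL="http://grafana.gkkxd.com"
PAYLOAD='{"name":"cadvisor_1","type":"influxdb","url":"http://influxdb.gkkxd.com:28086","access":"direct","database":"cadvisor_1"}'

# Print the API call; drop the 'echo' to execute it
echo "curl -X POST -u admin:admin -H 'Content-Type: application/json' $GRAFANA_URL/api/datasources -d '$PAYLOAD'"
```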

Create graph:

9. Other questions

9.1 How to set alarms

Prometheus could be integrated in the future; I will run the relevant tests.

9.2 How to get Docker container names on Mesos

When creating a monitoring graph for an app instance in Grafana, we often need to filter the data with a "WHERE container_name = <container name>" condition. However, Docker containers deployed by Mesos Marathon are named in the form mesos-<uuid> (visible with docker ps), with no obvious feature to identify which app they belong to.

The following method can be used to view the Docker container name for an app ID:

Open the Mesos administration page:

http://master1:5050

In the Mesos task list, click Sandbox next to the app you want to inspect, then open stdout to see the Docker container name for that app ID:
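Alternatively, the names can be read directly on the slave, since all Marathon-launched containers carry the mesos- prefix; a sketch (the --format flag assumes a reasonably recent Docker; printed rather than executed):

```shell
# List Mesos-launched containers with their names and images
PS_CMD="docker ps --filter name=mesos- --format '{{.Names}}\t{{.Image}}'"
echo "$PS_CMD"
```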
