After the initial build-out of the Kubernetes cluster architecture, and with a few monitoring components in place, we can already:
- Graphically monitor the status and resource usage of each node and pod
- Scale a ReplicaSet up or down with kubectl scale
- View the log of an individual pod with kubectl logs or the dashboard (a few example commands follow this list)
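As a quick recap of those day-to-day operations, here is a minimal sketch; the ReplicaSet, Deployment and pod names are hypothetical placeholders.

```
# Scale a ReplicaSet (or the Deployment that owns it) to three replicas
kubectl scale rs weblogic-rs --replicas=3          # hypothetical ReplicaSet name
kubectl scale deployment weblogic --replicas=3     # or scale via the Deployment

# Tail the log of a single pod
kubectl logs -f weblogic-rs-x7k2p                  # hypothetical pod name
```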
However, the number of nodes in a distributed architecture is often very large; a typical production environment may have dozens or even hundreds of minion nodes, so a centralized log collection and management system is needed. My early idea was to write the WebLogic logs out to shared storage through a Volume plugin, but this approach has several problems (a sketch of the rejected approach follows this list):
- We extend the Docker image in WebLogic single-domain mode, which means that every pod uses the same log path and file name inside its container (/u01/oracle/user_projects/domains/base_domain/servers/adminserver/logs/adminserver.log), so mapping that path to shared storage through a volumeMount causes file conflicts between replicas.
- It cannot attach pod and container metadata to the log entries
- It cannot collect runtime information from the other nodes in the cluster
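For reference, this is roughly what the rejected approach looked like: a minimal, hypothetical pod spec (pod name, image and NFS backend are all placeholders) that mounts shared storage over the WebLogic log directory. Because every replica writes adminserver.log under the same mounted path, the files collide.

```
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: weblogic-admin              # hypothetical pod name
spec:
  containers:
  - name: weblogic
    image: weblogic-12c:latest      # hypothetical single-domain WebLogic image
    volumeMounts:
    - name: weblogic-logs
      # every replica writes adminserver.log under this same path,
      # so replicas collide on the same file in shared storage
      mountPath: /u01/oracle/user_projects/domains/base_domain/servers/adminserver/logs
  volumes:
  - name: weblogic-logs
    nfs:                            # hypothetical shared-storage backend
      server: 192.168.0.10
      path: /exports/weblogic-logs
EOF
```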
A platform-level solution is therefore still needed. The official Kubernetes documentation, https://kubernetes.io/docs/concepts/cluster-administration/logging/, describes several logging approaches and gives a reference architecture for cluster-level logging:
Kubernetes recommends the node-level logging agent pattern and packages two implementations of it: Stackdriver Logging for Google Cloud Platform, and Elasticsearch. Both use fluentd as the agent that runs on each node (the log agent).
Using a node-level logging agent is the most common and encouraged approach for a Kubernetes cluster, because it creates only one agent per node, and it doesn't require any changes to the applications running on the node. However, node-level logging only works for applications' standard output and standard error.
Kubernetes doesn't specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: Stackdriver Logging for use with Google Cloud Platform, and Elasticsearch. You can find more information and instructions in the dedicated documents. Both use fluentd with custom configuration as an agent on the node.
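To make the pattern concrete, here is a condensed sketch of fluentd deployed as a node-level agent: a DaemonSet that puts one fluentd pod on every node, tailing the container logs under /var/log and /var/lib/docker/containers. This is only an illustration; the API version and image tag are assumptions for a 1.6-era cluster, and the full manifests actually used later in this guide live in the Kubernetes source tree under cluster/addons/fluentd-elasticsearch.

```
cat <<'EOF' | kubectl apply -f -
apiVersion: extensions/v1beta1           # DaemonSet API group in 1.6-era releases
kind: DaemonSet
metadata:
  name: fluentd-es
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
    spec:
      containers:
      - name: fluentd-es
        image: gcr.io/google_containers/fluentd-elasticsearch:1.22   # tag may differ per release
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: dockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: dockercontainers
        hostPath:
          path: /var/lib/docker/containers
EOF
```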
Okay, here is our guide through the pitfalls.
1. Preparatory work
- Clone the Kubernetes source on GitHub to the local master node.
git clone https://github.com/kubernetes/kubernetes
- Configure a ServiceAccount. The downloaded fluentd image uses SSL to connect to the API Server, so unless you intend to modify and rebuild the image, the ServiceAccount must be configured. Configuration guide:
http://www.cnblogs.com/ericnie/p/6894688.html
- Configure DNS. The Kibana component relies on DNS to resolve the elasticsearch-logging service; if DNS is not configured, change the address in kibana-controller.yaml to the service's fixed cluster IP (see the snippet after this list). Configuration guide:
http://www.cnblogs.com/ericnie/p/6897142.html
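Once the repository is cloned, the logging add-on manifests can be found inside it; the directory listing and the ELASTICSEARCH_URL variable below reflect a 1.6-era source tree, so file names may differ slightly in other releases.

```
# The logging add-on manifests live in the cloned source tree
ls kubernetes/cluster/addons/fluentd-elasticsearch/
# es-controller.yaml  es-service.yaml  fluentd-es-ds.yaml  kibana-controller.yaml  kibana-service.yaml

# Without cluster DNS, point Kibana at the elasticsearch-logging service's fixed IP:
kubectl get svc elasticsearch-logging -n kube-system    # note the CLUSTER-IP
# then, before applying kibana-controller.yaml, change
#   - name: ELASTICSEARCH_URL
#     value: http://elasticsearch-logging:9200
# to http://<cluster-ip>:9200
```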