GlusterFS + logstash + elasticsearch + kibana 3 + Redis Log Collection Storage System Deployment 01

Source: Internet
Author: User
Tags: syslog, kibana, logstash, glusterfs, gluster

Driven by the company's data-security and analysis needs, we researched and deployed an integrated log-management stack built on GlusterFS + Logstash + Elasticsearch + Kibana 3 + Redis. This series covers the installation and configuration process and day-to-day usage.

One, GlusterFS distributed file system deployment

Description: the company wants unified collection and management of website business logs and system logs. After researching MFS, FastDFS, and other distributed file systems, we settled on GlusterFS: it offers high scalability, high performance, high availability, and elastic scaling, and its no-metadata-server design leaves GlusterFS with no single point of failure. Official site: www.gluster.org

1. System environment preparation

OS: CentOS 6.4
Servers: 192.168.10.101, 192.168.10.102, 192.168.10.188, 192.168.10.189
Client: 192.168.10.103

Add the EPEL and GlusterFS repositories. EPEL contains GlusterFS, but an older, relatively stable version; this test uses the latest version, 3.5.0:

rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/latest/CentOS/glusterfs-epel.repo

2. Deployment process

Server-side installation:

yum -y install glusterfs glusterfs-fuse glusterfs-server
chkconfig glusterd on
service glusterd start

Server configuration: the four storage nodes form one cluster. The following runs on the first node; executing it on any one node is enough.

gluster peer probe 192.168.10.102
Probe successful
gluster peer probe 192.168.10.188
Probe successful
gluster peer probe 192.168.10.189
Probe successful

View the node information for the cluster:

gluster peer status
Number of Peers: 3

Hostname: 192.168.10.102
Uuid: b9437089-b2a1-4848-af2a-395f702adce8
State: Peer in Cluster (Connected)

Hostname: 192.168.10.188
Uuid: ce51e66f-7509-4995-9531-4c1a7dbc2893
State: Peer in Cluster (Connected)

Hostname: 192.168.10.189
Uuid: 66d7fd67-e667-4f9b-a456-4f37bcecab29
State: Peer in Cluster (Connected)

Create a volume named test-volume with /data/gluster as the brick directory and a replica count of 2 (first create the directory on all nodes):

sh cmd.sh "mkdir /data/gluster"
gluster volume create test-volume replica 2 192.168.10.101:/data/gluster 192.168.10.102:/data/gluster 192.168.10.188:/data/gluster 192.168.10.189:/data/gluster
Creation of volume test-volume has been successful. Please start the volume to access data.

Start the volume:

gluster volume start test-volume
Starting volume test-volume has been successful

View volume status:

gluster volume info
Volume Name: test-volume
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.10.101:/data/gluster
Brick2: 192.168.10.102:/data/gluster
Brick3: 192.168.10.188:/data/gluster
Brick4: 192.168.10.189:/data/gluster

3. Client installation and configuration

Install:

yum -y install glusterfs glusterfs-fuse

Mount (any node can serve as the mount target; the FUSE mount is the recommended way):

mount -t glusterfs 192.168.10.102:/test-volume /mnt/

Or mount via NFS (note that the remote rpcbind service must be running, otherwise a server-side failure can hang the client):

mount -t nfs -o mountproto=tcp,vers=3 192.168.10.102:/test-volume /mnt/

Mount automatically at boot:

echo "192.168.10.102:/test-volume /mnt glusterfs defaults,_netdev 0 0" >> /etc/fstab
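Before pointing services at the mount, it helps to confirm the path is actually mounted. A minimal sketch, assuming a Linux client with /proc/mounts (the default mount point /mnt matches the commands above):

```shell
# Check whether a given path appears in /proc/mounts before using it.
MOUNTPOINT="${MOUNTPOINT:-/mnt}"
if grep -qs " $MOUNTPOINT " /proc/mounts; then
    MOUNT_STATE="mounted"
else
    MOUNT_STATE="not mounted"
fi
echo "$MOUNTPOINT is $MOUNT_STATE"
```

Scripts that write logs can run this first and refuse to start when the volume is absent, instead of silently writing to the local disk under the mount point.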
4. Testing

Check file correctness:

dd if=/dev/zero of=/mnt/1.img bs=1M count=1000    # generate a test file on the mount client
cp /data/navy /mnt/    # copy a file into the storage for the failover test

Failover behavior: with a glusterfs-fuse mount, even if one target server fails, use is not affected at all. With NFS, pay attention to the mount options, otherwise a server-side failure can easily hang the file system and take down the service.

# stop the storage services on one of the nodes
service glusterd stop
service glusterfsd stop
# on the mount client, delete the test file
rm -fv /mnt/navy
# at this moment, checking on the stopped node shows navy has not been deleted from its brick; now restart the service:
service glusterd start
# a few seconds later, navy is automatically deleted on that node. Newly created files behave the same way.

5. Common operation and maintenance commands:
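The dd/cp correctness check above can be wrapped into a repeatable script. The sketch below uses temporary files so it runs anywhere; to test the real volume, point DEST at the gluster mount (e.g. /mnt):

```shell
# Write a file, copy it, and compare checksums, mimicking the dd/cp test.
SRC=$(mktemp)
DEST=$(mktemp -d)    # replace with the gluster mount for a real run
dd if=/dev/zero of="$SRC" bs=1024 count=16 2>/dev/null
cp "$SRC" "$DEST/1.img"
if cmp -s "$SRC" "$DEST/1.img"; then
    RESULT="copies match"
else
    RESULT="copies differ"
fi
echo "$RESULT"
rm -f "$SRC" "$DEST/1.img"
```

Running it against the mount during the node-failure drill confirms that writes still round-trip intact while one brick is down.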
Delete a volume:

gluster volume stop test-volume
gluster volume delete test-volume

Remove a machine from the cluster:

gluster peer detach 192.168.10.102

Allow only the 192.168.10.* network to access glusterfs:

gluster volume set test-volume auth.allow 192.168.10.*

Add new machines to the volume (since the replica count is 2, add machines in multiples of 2: 4, 6, 8, ...):

gluster peer probe 192.168.10.105
gluster peer probe 192.168.10.106
gluster volume add-brick test-volume 192.168.10.105:/data/gluster 192.168.10.106:/data/gluster

Shrink a volume (gluster needs to migrate the data elsewhere before the bricks are removed):

# start the removal, which migrates the data
gluster volume remove-brick test-volume 192.168.10.101:/data/gluster 192.168.10.102:/data/gluster start
# view migration status
gluster volume remove-brick test-volume 192.168.10.101:/data/gluster 192.168.10.102:/data/gluster status
# commit after migration completes
gluster volume remove-brick test-volume 192.168.10.101:/data/gluster 192.168.10.102:/data/gluster commit

Migrate a volume (move 192.168.10.101's data to 192.168.10.107; first join 192.168.10.107 to the cluster):

gluster peer probe 192.168.10.107
gluster volume replace-brick test-volume 192.168.10.101:/data/gluster 192.168.10.107:/data/gluster start
# view migration status
gluster volume replace-brick test-volume 192.168.10.101:/data/gluster 192.168.10.107:/data/gluster status
# commit after data migration completes
gluster volume replace-brick test-volume 192.168.10.101:/data/gluster 192.168.10.107:/data/gluster commit
# if machine 192.168.10.101 has failed and cannot run, force the commit, then ask gluster to synchronize immediately
gluster volume replace-brick test-volume 192.168.10.101:/data/gluster 192.168.10.102:/data/gluster commit -force
gluster volume heal test-volume full
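The shrink and migrate workflows both run the same command three times with a different final verb (start, status, commit). A dry-run sketch that prints the sequence rather than executing it, with the brick paths taken from the volume created earlier:

```shell
# Print (not execute) the remove-brick sequence: start, then status, then commit.
VOL="test-volume"
BRICKS="192.168.10.101:/data/gluster 192.168.10.102:/data/gluster"
PLAN=""
for ACTION in start status commit; do
    echo "gluster volume remove-brick $VOL $BRICKS $ACTION"
    PLAN="$PLAN$ACTION "
done
```

Piping the output to a shell (after reviewing it) is one way to avoid typos in the long brick lists; commit should of course only run once status reports the migration complete.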

Two, log collection system deployment

Description of the solution and the role of each component:

Logstash: collects and forwards system logs, and integrates a wide range of log plugins, which greatly helps log query and analysis efficiency. Typically one instance runs as a "shipper" for collection and another as an "indexer" for forwarding: the Logstash shipper collects logs and forwards them to Redis; the Logstash indexer reads the data from Redis and forwards it to Elasticsearch.

Redis: a database that buffers the pipeline; the Logstash shipper pushes logs into Redis, and the Logstash indexer reads them back out.

Elasticsearch: an open-source search engine framework. Initial deployment is simple and easy to use, but it needs tuning later on; for specifics see the Logstash section of http://chenlinux.com/categories.html#logstash-ref. Clustering across multiple data nodes increases efficiency.

Kibana: an open-source web front end that presents the data indexed into Elasticsearch.

Virtual server preparation:

192.168.10.143  logstash shipper
192.168.10.144  logstash indexer, redis
192.168.10.145  elasticsearch, kibana 3

1. Install JDK 1.7 on all three hosts (Oracle JDK 1.7+ is recommended; verify with java -version), then set the Java environment variables, e.g. in ~/.bashrc:

JAVA_HOME=/usr/java/jdk1.7.0_55
PATH=$PATH:$JAVA_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib
JRE_HOME=$JAVA_HOME/jre
export JAVA_HOME PATH CLASSPATH JRE_HOME

source ~/.bashrc

2. Install Redis (192.168.10.144):

wget http://download.redis.io/releases/redis-2.6.16.tar.gz
tar -zxf redis-2.6.16.tar.gz
cd redis-2.6.16
make && make install
./src/redis-server ./redis.conf

Start the Redis client to verify the installation:

./src/redis-cli
> keys *    # list all keys
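Before wiring the Logstash shipper to Redis, it is worth probing that Redis actually answers. A hedged sketch that falls back gracefully when redis-cli is not on the machine (REDIS_HOST here is an example default; on this setup it would be 192.168.10.144):

```shell
# Probe Redis with PING; report "unreachable" or a missing client instead of failing.
REDIS_HOST="${REDIS_HOST:-127.0.0.1}"
if command -v redis-cli >/dev/null 2>&1; then
    REDIS_STATE=$(redis-cli -h "$REDIS_HOST" ping 2>/dev/null || echo "unreachable")
    [ -n "$REDIS_STATE" ] || REDIS_STATE="unreachable"
else
    REDIS_STATE="redis-cli not installed"
fi
echo "redis: $REDIS_STATE"
```

A healthy server replies PONG; anything else means the shipper's events would pile up or be lost, so fix connectivity first.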
3. Install Elasticsearch (192.168.10.145):

wget http://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.13.zip
unzip elasticsearch-0.90.13.zip

Elasticsearch can be used straight out of the unpacked directory, which is very convenient. To see it in action, start the ES service: switch to the elasticsearch directory and run the elasticsearch binary under bin:

cd elasticsearch-0.90.13
bin/elasticsearch -f

Access the default port 9200:

curl -X GET http://localhost:9200

4. Install Logstash (192.168.10.143, 192.168.10.144):

wget http://download.elasticsearch.org/logstash/logstash/logstash-1.2.1-flatjar.jar

Logstash is usable as soon as it is downloaded. For the command-line parameters, refer to the Logstash flags documentation; the main ones are:

agent           # run in agent mode
-f configfile   # specify the configuration file
web             # run the built-in web service
-p PORT         # specify the port, default 9292

5. Install Kibana (192.168.10.145). The latest Logstash has Kibana built in, and you can also deploy Kibana separately. Kibana 3 is a pure JavaScript + HTML client, so it can be deployed on any HTTP server:

wget http://download.elasticsearch.org/kibana/kibana/kibana-latest.zip
unzip kibana-latest.zip
cp -r kibana-latest /var/www/html

Edit config.js to configure the Elasticsearch address and index; change the relevant line to:

elasticsearch: "http://192.168.10.145:9200",

6. Consolidated configuration.

On 192.168.10.143, configure the Logstash shipper to collect logs:

vim /etc/logstash_shipper.conf
input {
  file {
    type => "linux-syslog"
    path => [ "/var/log/lastlog", "/var/log/syslog" ]
  }
}
output {
  redis {
    host => "192.168.10.144"
    port => "6379"
    data_type => "list"
    key => "syslog"
  }
}

Start the Logstash shipper:

nohup java -jar logstash-1.2.1-flatjar.jar agent -f /etc/logstash_shipper.conf &

After about 10 seconds it outputs the following:

Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behavior, please let us know!
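The shipper configuration can also be generated from a script rather than typed by hand, which keeps the shipper fleet consistent. A sketch that writes the same input/output stanzas shown above to a temporary file and does a crude sanity check:

```shell
# Write the shipper configuration to a temp file and count its option assignments.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
input {
  file {
    type => "linux-syslog"
    path => [ "/var/log/lastlog", "/var/log/syslog" ]
  }
}
output {
  redis {
    host => "192.168.10.144"
    port => "6379"
    data_type => "list"
    key => "syslog"
  }
}
EOF
ASSIGNMENTS=$(grep -c "=>" "$CONF")
echo "wrote $CONF with $ASSIGNMENTS option assignments"
```

In a real deployment the file would be written to /etc/logstash_shipper.conf; the grep is only a smoke test that the heredoc survived intact, not a substitute for Logstash's own config validation.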
For more information on plugin milestones, see http://logstash.net/docs/1.2.2/plugin-milestones {:level=>:warn}
Using milestone 2 output plugin 'redis'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.2.2/plugin-milestones {:level=>:warn}

On 192.168.10.144, configure the Logstash indexer as follows:

vim /etc/logstash_indexer.conf
input {
  redis {
    host => "192.168.10.144"
    data_type => "list"
    port => "6379"
    key => "syslog"
    type => "redis-input"
  }
}
output {
  elasticsearch {
    host => "192.168.10.145"
    port => "9300"
  }
}

Start the Logstash indexer:

nohup java -jar logstash-1.2.1-flatjar.jar agent -f /etc/logstash_indexer.conf &

Its startup output is the same as above.

7. Log in to http://192.168.10.145/kibana to access the dashboard.
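Once the indexer is running, you can verify that events actually reach Elasticsearch. Since the ES host may not be reachable from wherever this is read, the sketch below only prints the curl commands to run (the host/port come from the indexer configuration above; the _search query-string form is standard ES URI search):

```shell
# Print the curl commands used to confirm syslog events arrived in Elasticsearch.
ES="http://192.168.10.145:9200"
for URL in "$ES" "$ES/_search?q=type:linux-syslog"; do
    echo "curl -s -X GET '$URL'"
done
```

The first call should return the cluster banner JSON; the second should return hits whose _source documents are the shipped syslog lines.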

Three, integrating distributed file storage with the log collection system. The logic diagram is as follows:

Mount GlusterFS on the web servers, and have the applications write structured logs to the GlusterFS store.
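The log-writing idea can be sketched as a small writer that appends structured (key=value) lines under the mount. LOGDIR defaults to a temporary directory here for illustration; on the web servers it would be a directory under the GlusterFS mount (e.g. /mnt), and the field names are examples:

```shell
# Append one structured log line to app.log under LOGDIR, then show it.
LOGDIR="${LOGDIR:-$(mktemp -d)}"    # in production: a directory on the gluster mount
mkdir -p "$LOGDIR"
TS=$(date -u +%Y-%m-%dT%H:%M:%SZ)
printf '%s host=%s level=info msg="app started"\n' "$TS" "$(hostname)" >> "$LOGDIR/app.log"
LAST_LINE=$(tail -n 1 "$LOGDIR/app.log")
echo "$LAST_LINE"
```

Because every web server appends into the shared volume, the Logstash shipper can tail one set of files there instead of an agent per host, which is the point of combining the two systems.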

