Preliminary Discussion of ELK: Elasticsearch Usage Summary


2016/9/12

First, installation

1. JDK and environment variables
Elasticsearch requires JDK 1.7 or above; JDK 1.8 is recommended. Configure JAVA_HOME in the environment variables.

2. Install Elasticsearch
There are two ways to install; caching the RPM package in a local Yum source is recommended.
1) Use the RPM directly:
    wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/rpm/elasticsearch/2.4.0/elasticsearch-2.4.0.rpm
2) Use the Yum repository:
    vim /etc/yum.repos.d/elasticsearch.repo
    [elasticsearch-2.x]
    name=Elasticsearch repository for 2.x packages
    baseurl=https://packages.elastic.co/elasticsearch/2.x/centos
    gpgcheck=1
    gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
    enabled=1

    yum install elasticsearch
3) Start the service:
    chkconfig elasticsearch on
    service elasticsearch start

3. Adjust the configuration
    mkdir -p /data/elasticsearch
    chown elasticsearch:elasticsearch /data/elasticsearch
    grep ^[^#] /etc/elasticsearch/elasticsearch.yml
    cluster.name: es-test
    node.name: node-1
    path.data: /data/elasticsearch
    path.logs: /var/log/elasticsearch
    service elasticsearch restart

Second, using the REST API

1. What it can do
- Check your cluster, node, and index health, status, and statistics
- Administer your cluster, node, and index data and metadata
- Perform CRUD (Create, Read, Update, and Delete) and search operations against your indexes
- Execute advanced search operations such as paging, sorting, filtering, scripting, aggregations, and many others

The general form of an API call with curl is:
    curl -X<REST Verb> <Node>:<Port>/<Index>/<Type>/<ID>

2. Management
1) Health status:
    curl 'localhost:9200/_cat/health?v'
    epoch      timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
    1473669150 16:32:30  es-test green           1         1      0   0    0    0        0             0                  -                100.0%
The health status is one of green, yellow, or red. Green means everything is good (the cluster is fully functional); yellow means all data is available but some replicas are not yet allocated (the cluster is still fully functional); red means some data is not available for whatever reason.
2) List nodes:
    curl 'localhost:9200/_cat/nodes?v'
    host      ip        heap.percent ram.percent load node.role master name
    127.0.0.1 127.0.0.1            6          16 0.00 d         *      node-1
3) List indices:
    curl 'localhost:9200/_cat/indices?v'
    health status index pri rep docs.count docs.deleted store.size pri.store.size

3. CRUD operations
1) Create an index:
    curl -XPUT 'localhost:9200/customer?pretty'
    {
      "acknowledged" : true
    }
This creates an index named "customer"; the ?pretty parameter tells Elasticsearch to pretty-print the JSON response. List the indices again:
    curl 'localhost:9200/_cat/indices?v'
    health status index    pri rep docs.count docs.deleted store.size pri.store.size
    yellow open   customer   5   1          0            0       650b           650b
Compare this with the previous result. Note that the health here is yellow because we currently have only one ES node, so the replicas cannot be allocated and the index is not highly available.
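The index stays yellow on a single node because its replica shards have nowhere to be allocated. A minimal sketch of one possible way to make such a test index report green (test environments only, not part of the original walkthrough) is to drop its replica count to zero through the index _settings endpoint, using the "customer" index from the step above:

    # Test environments only: remove replicas so every shard can be
    # allocated on the single node and the index health turns green.
    curl -XPUT 'localhost:9200/customer/_settings?pretty' -d '
    {
      "index": { "number_of_replicas": 0 }
    }'
    # The customer line in _cat/indices should now report green.
    curl 'localhost:9200/_cat/indices?v'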
2) Create a document
Index a doc into the "customer" index above, with type "external" and id 1:
    curl -XPUT 'localhost:9200/customer/external/1?pretty' -d '
    {
      "name": "John Doe"
    }'
3) Get the document:
    curl -XGET 'localhost:9200/customer/external/1?pretty'
4) Re-index (replace) the document:
    curl -XPUT 'localhost:9200/customer/external/1?pretty' -d '
    {
      "name": "Kelly Doe"
    }'
5) Index a document without specifying an ID; a random ID is generated:
    curl -XPOST 'localhost:9200/customer/external?pretty' -d '
    {
      "name": "Calvin Kern"
    }'
6) Delete the index:
    curl -XDELETE 'localhost:9200/customer?pretty'
7) Update document data:
    curl -XPOST 'localhost:9200/customer/external/1/_update?pretty' -d '
    {
      "doc": { "name": "Eric Mood" }
    }'
Update and add a new field at the same time:
    curl -XPOST 'localhost:9200/customer/external/1/_update?pretty' -d '
    {
      "doc": { "name": "Eric Mood", "age": 110 }
    }'
8) Deleting the whole index is too drastic; to delete only a single document:
    curl -XDELETE 'localhost:9200/customer/external/1?pretty'

4. Viewing data
1) Import the test data set:
    wget https://github.com/bly2k/files/blob/master/accounts.zip?raw=true -O accounts.zip
    unzip accounts.zip
    curl -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary "@accounts.json"
    curl 'localhost:9200/_cat/indices?v'
2) Search
View all data, way one:
    curl 'localhost:9200/bank/_search?q=*&pretty'
Way two:
    curl -XPOST 'localhost:9200/bank/_search?pretty' -d '
    {
      "query": { "match_all": {} }
    }'
View specific data (must match age 40, must not match state "ID"):
    curl -XPOST 'localhost:9200/bank/_search?pretty' -d '
    {
      "query": {
        "bool": {
          "must": [
            { "match": { "age": "40" } }
          ],
          "must_not": [
            { "match": { "state": "ID" } }
          ]
        }
      }
    }'
Filter data:
    curl -XPOST 'localhost:9200/bank/_search?pretty' -d '
    {
      "query": {
        "bool": {
          "must": { "match_all": {} },
          "filter": {
            "range": {
              "balance": {
                "gte": 20000,
                "lte": 30000
              }
            }
          }
        }
      }
    }'
Aggregate data (group accounts by state):
    curl -XPOST 'localhost:9200/bank/_search?pretty' -d '
    {
      "size": 0,
      "aggs": {
        "group_by_state": {
          "terms": {
            "field": "state"
          }
        }
      }
    }'
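The terms aggregation above only counts accounts per state. As a small follow-on sketch in the same pattern (it mirrors the nested-aggregation example from the official tutorial cited below; the "state" and "balance" fields come from the same accounts.json test data), an avg aggregation can be nested inside it to compute the average balance per state:

    # Group accounts by state and compute the average balance in each group.
    curl -XPOST 'localhost:9200/bank/_search?pretty' -d '
    {
      "size": 0,
      "aggs": {
        "group_by_state": {
          "terms": { "field": "state" },
          "aggs": {
            "average_balance": {
              "avg": { "field": "balance" }
            }
          }
        }
      }
    }'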
Remarks: for more detail, please refer to the official documentation.

Third, references

1. Official website:
https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-repositories.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-dir-layout.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/_exploring_your_cluster.html
