Basic usage of Elasticsearch and cluster construction


I. Introduction

Both Elasticsearch and Solr are Lucene-based search engines, but Elasticsearch supports distribution natively, while Solr has offered a distributed version, SolrCloud, only since release 4.0, and Solr's distributed support requires ZooKeeper.

Here is a detailed comparison of Elasticsearch and Solr:

II. Basic Usage

An Elasticsearch cluster can contain multiple indexes (indices); each index can contain multiple types; each type contains multiple documents; and each document contains multiple fields. It is a document-oriented store, somewhat like a NoSQL database.

Compared with a traditional relational database, the ES concepts map roughly as follows:

Relational DB  -> Databases -> Tables -> Rows      -> Columns
Elasticsearch  -> Indices   -> Types  -> Documents -> Fields
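Following that analogy, a single relational row becomes one JSON document in ES; for example (the field names here are illustrative, echoing the fields used in the mapping code later in this article):

```json
{
  "id": 1,
  "imsi": "460001234567890",
  "deviceId": "device-01",
  "time": "2015-08-01T12:00:00"
}
```

Such a document lives under a particular index and type, just as a row lives in a particular database and table.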

Basic usage, from creating a client through adding, deleting, and querying data:

1. Create Client

public ElasticsearchService(String ipAddress, int port) {
    client = new TransportClient()
            .addTransportAddress(new InetSocketTransportAddress(ipAddress, port));
}

The client here is a TransportClient.

A comparison of the two ES clients:

TransportClient: a lightweight client that connects to the ES cluster over sockets, using a Netty thread pool. It does not itself join the cluster; it only forwards requests to it.

Node client: the client node is itself an ES node and joins the cluster just like any other Elasticsearch node. Frequently opening and closing such node clients creates "noise" in the cluster.

2. Create/delete index and type information

// Create index
public void createIndex() {
    client.admin().indices().create(new CreateIndexRequest(indexName)).actionGet();
}

// Clears all indexes
public void deleteIndex() {
    IndicesExistsResponse indicesExistsResponse = client.admin().indices()
            .exists(new IndicesExistsRequest(new String[] { indexName })).actionGet();
    if (indicesExistsResponse.isExists()) {
        client.admin().indices().delete(new DeleteIndexRequest(indexName)).actionGet();
    }
}

// Delete a type under the index
public void deleteType() {
    client.prepareDelete().setIndex(indexName).setType(typeName).execute().actionGet();
}

// Define the mapping for the index type
public void defineIndexTypeMapping() {
    try {
        XContentBuilder mapBuilder = XContentFactory.jsonBuilder();
        mapBuilder.startObject()
            .startObject(typeName)
                .startObject("properties")
                    .startObject(idFieldName).field("type", "long").field("store", "yes").endObject()
                    .startObject(seqNumFieldName).field("type", "long").field("store", "yes").endObject()
                    .startObject(imsiFieldName).field("type", "string").field("index", "not_analyzed").field("store", "yes").endObject()
                    .startObject(imeiFieldName).field("type", "string").field("index", "not_analyzed").field("store", "yes").endObject()
                    .startObject(deviceIdFieldName).field("type", "string").field("index", "not_analyzed").field("store", "yes").endObject()
                    .startObject(ownAreaFieldName).field("type", "string").field("index", "not_analyzed").field("store", "yes").endObject()
                    .startObject(teleOperFieldName).field("type", "string").field("index", "not_analyzed").field("store", "yes").endObject()
                    .startObject(timeFieldName).field("type", "date").field("store", "yes").endObject()
                .endObject()
            .endObject()
        .endObject();
        PutMappingRequest putMappingRequest = Requests.putMappingRequest(indexName)
                .type(typeName).source(mapBuilder);
        client.admin().indices().putMapping(putMappingRequest).actionGet();
    } catch (IOException e) {
        log.error(e.toString());
    }
}

A custom index mapping for the type is defined here. By default, ES handles data-type mapping automatically: integers are mapped to long, floating-point numbers to double, strings to string, times to date, and true/false to boolean.

Note: for strings, ES applies "analyzed" processing by default, i.e., the index is built after word segmentation, stop-word removal, and other analysis. If a string needs to be indexed as a whole, set the field with Field("index", "not_analyzed").
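Expressed as the JSON that the XContentBuilder above produces, the difference between an analyzed and a not_analyzed string field looks roughly like this (the type and field names are illustrative; ES 1.x mapping syntax):

```json
{
  "myType": {
    "properties": {
      "description": { "type": "string" },
      "deviceId": { "type": "string", "index": "not_analyzed", "store": "yes" }
    }
  }
}
```

Here description is tokenized before indexing, while deviceId is indexed as a single term, so exact-match filters and terms aggregations see the whole value.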

Detailed reference:

3. Index data

    // Bulk index data
    public void indexHotSpotDataList(List

ES supports both bulk and single-document indexing.
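For comparison, the REST _bulk endpoint accepts newline-delimited JSON, one action line followed by one source line per document (the index, type, and field names here are hypothetical):

```
POST /myindex/_bulk
{ "index": { "_type": "myType", "_id": "1" } }
{ "deviceId": "device-01", "time": "2015-08-01T12:00:00" }
{ "index": { "_type": "myType", "_id": "2" } }
{ "deviceId": "device-02", "time": "2015-08-01T12:05:00" }
```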

4. Query for Data

// Get a small amount of data, 100 at a time
private List<Integer> getSearchData(QueryBuilder queryBuilder) {
    List<Integer> ids = new ArrayList<>();
    SearchResponse searchResponse = client.prepareSearch(indexName)
            .setTypes(typeName).setQuery(queryBuilder).setSize(100)
            .execute().actionGet();
    SearchHits searchHits = searchResponse.getHits();
    for (SearchHit searchHit : searchHits) {
        Integer id = (Integer) searchHit.getSource().get("id");
        ids.add(id);
    }
    return ids;
}

// Get a lot of data
private List<Integer> getSearchDataByScrolls(QueryBuilder queryBuilder) {
    List<Integer> ids = new ArrayList<>();
    // Get 100000 data at a time
    SearchResponse scrollResp = client.prepareSearch(indexName)
            .setSearchType(SearchType.SCAN).setScroll(new TimeValue(60000))
            .setQuery(queryBuilder).setSize(100000).execute().actionGet();
    while (true) {
        for (SearchHit searchHit : scrollResp.getHits().getHits()) {
            Integer id = (Integer) searchHit.getSource().get(idFieldName);
            ids.add(id);
        }
        scrollResp = client.prepareSearchScroll(scrollResp.getScrollId())
                .setScroll(new TimeValue(600000)).execute().actionGet();
        if (scrollResp.getHits().getHits().length == 0) {
            break;
        }
    }
    return ids;
}

The QueryBuilder here holds the query conditions. ES supports paged queries to fetch data, and a large amount of data can also be fetched in one pass, which requires scroll search.
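The scroll loop above follows a general pattern: fetch a batch, collect its hits, request the next batch with the scroll id, and stop when a batch comes back empty. A self-contained sketch of just that control flow, with a stub standing in for the ES client:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class ScrollPattern {
    // Stub standing in for client.prepareSearchScroll(...): hands out fixed-size batches.
    static class BatchSource {
        private final Iterator<Integer> it;
        private final int batchSize;

        BatchSource(List<Integer> data, int batchSize) {
            this.it = data.iterator();
            this.batchSize = batchSize;
        }

        List<Integer> nextBatch() {
            List<Integer> batch = new ArrayList<>();
            while (it.hasNext() && batch.size() < batchSize) {
                batch.add(it.next());
            }
            return batch;
        }
    }

    // Same control flow as the scroll query: accumulate hits until an empty batch arrives.
    static List<Integer> collectAll(BatchSource source) {
        List<Integer> ids = new ArrayList<>();
        while (true) {
            List<Integer> batch = source.nextBatch();
            if (batch.isEmpty()) {
                break; // ES signals the end of a scroll with an empty hits array
            }
            ids.addAll(batch);
        }
        return ids;
    }

    public static void main(String[] args) {
        BatchSource source = new BatchSource(java.util.Arrays.asList(1, 2, 3, 4, 5), 2);
        System.out.println(collectAll(source)); // [1, 2, 3, 4, 5]
    }
}
```

The real loop differs only in that each batch comes from prepareSearchScroll and the scroll id must be passed back each time.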

5. Aggregation (facet) query

// Get the data distribution <device ID, count> for each device in the device list over a time period
public Map<String, String> getDeviceDistributedInfo(String startTime, String endTime, List<String> deviceList) {
    Map<String, String> resultsMap = new HashMap<>();
    QueryBuilder deviceQueryBuilder = getDeviceQueryBuilder(deviceList);
    QueryBuilder rangeBuilder = getDateRangeQueryBuilder(startTime, endTime);
    QueryBuilder queryBuilder = QueryBuilders.boolQuery().must(deviceQueryBuilder).must(rangeBuilder);
    TermsBuilder termsBuilder = AggregationBuilders.terms("deviceIdAgg")
            .size(Integer.MAX_VALUE).field(deviceIdFieldName);
    SearchResponse searchResponse = client.prepareSearch(indexName)
            .setQuery(queryBuilder).addAggregation(termsBuilder)
            .execute().actionGet();
    Terms terms = searchResponse.getAggregations().get("deviceIdAgg");
    if (terms != null) {
        for (Terms.Bucket entry : terms.getBuckets()) {
            resultsMap.put(entry.getKey(), String.valueOf(entry.getDocCount()));
        }
    }
    return resultsMap;
}

Aggregation queries support statistical analysis, such as the distribution of data over a given month, or the maximum, minimum, sum, and average of a certain field.
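The JSON that such a terms aggregation sends looks roughly like this (the aggregation and field names are illustrative; in ES 1.x, "size": 0 means no limit on the number of returned terms, matching the Integer.MAX_VALUE in the Java code above):

```json
{
  "query": { "match_all": {} },
  "aggs": {
    "deviceIdAgg": {
      "terms": { "field": "deviceId", "size": 0 }
    }
  }
}
```

The response then contains one bucket per distinct deviceId value with its document count.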

Detailed reference:

III. Cluster Configuration

Configuration file: elasticsearch.yml

Cluster name and node name:

#cluster.name: elasticsearch

#node.name: "Franz Kafka"

Whether to participate in master election, and whether to store data:

#node.master: true

#node.data: true

Number of shards and replicas

#index.number_of_shards: 5
#index.number_of_replicas: 1

The minimum number of nodes required for a master election; this must be set to half the total number of nodes in the cluster plus one, i.e. n/2+1:

#discovery.zen.minimum_master_nodes: 1
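The n/2+1 rule is plain integer arithmetic over the number of master-eligible nodes; a small sketch:

```java
public class Quorum {
    // Majority quorum for master election: half the master-eligible nodes, plus one.
    static int minimumMasterNodes(int masterEligibleNodes) {
        return masterEligibleNodes / 2 + 1;
    }

    public static void main(String[] args) {
        System.out.println(minimumMasterNodes(3)); // a 3-node cluster needs 2
        System.out.println(minimumMasterNodes(5)); // a 5-node cluster needs 3
    }
}
```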

Discovery Ping timeout time, congested network, poor network condition set a point higher


Note: keep the total number of nodes n in the cluster an odd number!
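Putting the settings above together, a sketch of elasticsearch.yml for one node of a hypothetical three-node cluster might be:

```yaml
cluster.name: my-es-cluster             # same on all three nodes
node.name: "node-1"                     # unique per node
node.master: true
node.data: true
index.number_of_shards: 5
index.number_of_replicas: 1
discovery.zen.minimum_master_nodes: 2   # 3/2 + 1
discovery.zen.ping.timeout: 10s         # raised for a congested network
```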

IV. Elasticsearch Plug-ins

1. elasticsearch-head, a cluster management tool for Elasticsearch. Install: ./elasticsearch-1.7.1/bin/plugin -install mobz/elasticsearch-head

2. elasticsearch-sql: query Elasticsearch using SQL syntax. Install: ./bin/plugin -u releases/download/1.3.5/ sql

GitHub address: https://

3. elasticsearch-bigdesk, a cluster monitoring tool for Elasticsearch; it can be used to view the various states of the ES cluster.

Install: ./bin/plugin -install lukas-vlcek/bigdesk


4. The elasticsearch-servicewrapper plug-in runs Elasticsearch as a system service.

After downloading the plugin from https://, unzip it and copy the service directory into the bin directory of the Elasticsearch installation.

You can then install, start, and stop Elasticsearch by executing the following statement:

sh elasticsearch install

sh elasticsearch start

sh elasticsearch stop




