A Brief Analysis of the Elasticsearch Client (TransportClient) Source Code


Questions

Studying the source code with concrete questions in mind makes the learning more efficient.

1. When only one node of the ES cluster is configured, can the client automatically discover all the nodes in the cluster? How are they discovered?

2. How does the ES client achieve load balancing?

3. After an Elasticsearch node fails, how does the client remove it?

4. What is the difference between the two node-detection modes of the client (SimpleNodeSampler and SniffNodesSampler)?

Core classes
  • TransportClient: the client's external API class
  • TransportClientNodesService: maintains the list of cluster nodes
  • ScheduledNodeSampler: periodically checks and maintains the healthy nodes
  • NettyTransport: handles data transmission
  • NodeSampler: node sniffer
Client initialization code
// 1
Settings.Builder builder = Settings.settingsBuilder()
        .put("cluster.name", clusterName)
        .put("client.transport.sniff", true);
Settings settings = builder.build();
// 2
TransportClient client = TransportClient.builder().settings(settings).build();
// 3
for (TransportAddress transportAddress : transportAddresses) {
    client.addTransportAddress(transportAddress);
}

1. The basic configuration parameters are assembled in builder style;

2. build() constructs the client: it creates the TransportClient, initializes the ThreadPool, constructs the TransportClientNodesService, starts the scheduled sampler task, and selects the sniffing type;

3. The available cluster addresses are added; in this example only one node of the cluster is added.

Building the client: the build() call

A brief note on dependency injection: Guice is Google's open-source dependency injection framework for Java; see the Guice documentation for details.
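For readers who have not used Guice, here is a minimal, self-contained sketch of constructor injection; the GreetingService, GreetingModule and Greeter names are purely illustrative and are not taken from Elasticsearch:

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;
import com.google.inject.Singleton;

// hypothetical service interface and implementation, for illustration only
interface GreetingService {
    String greet(String name);
}

@Singleton
class SimpleGreetingService implements GreetingService {
    @Override
    public String greet(String name) {
        return "hello " + name;
    }
}

// the module declares which implementation backs which interface
class GreetingModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(GreetingService.class).to(SimpleGreetingService.class);
    }
}

// the consumer declares its dependencies in its constructor, the same pattern
// TransportClientNodesService uses for the cluster name and thread pool
class Greeter {
    private final GreetingService service;

    @Inject
    Greeter(GreetingService service) {
        this.service = service;
    }

    String run() {
        return service.greet("elasticsearch");
    }
}

public class GuiceDemo {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new GreetingModule());
        Greeter greeter = injector.getInstance(Greeter.class);
        System.out.println(greeter.run());
    }
}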

Initialize TransportClientNodesService

During construction, modules.createInjector instantiates TransportClientNodesService and injects it into TransportClient. Most of the APIs exposed by TransportClient are in fact proxied through TransportClientNodesService.

Guice injection through annotations

The cluster name and thread pool are injected through constructor annotations. The following code then selects the node sampler type: either sniff all nodes in the same cluster (SniffNodesSampler) or only track the nodes listed in the configuration (SimpleNodeSampler):

if (this.settings.getAsBoolean("client.transport.sniff", false)) {
    this.nodesSampler = new SniffNodesSampler();
} else {
    this.nodesSampler = new SimpleNodeSampler();
}

Features:

SniffNodesSampler: the client discovers the other nodes in the cluster and creates a full connection to each of them (what a "full connection" is will be explained later);
SimpleNodeSampler: pings only the nodes in listedNodes; the difference is that only a light connection is created.

TransportClientNodesService maintains three lists of nodes:

// 1: nodes that are added, to be used for discovery
private volatile List<DiscoveryNode> listedNodes = Collections.emptyList();
// 2
private volatile List<DiscoveryNode> nodes = Collections.emptyList();
// 3
private volatile List<DiscoveryNode> filteredNodes = Collections.emptyList();

1. listedNodes: the nodes explicitly added from the configuration;

2. nodes: the nodes that actually participate in requests;

3. filteredNodes: nodes that were filtered out and cannot process requests (for example, nodes belonging to a different cluster).
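These three lists can also be observed from application code. Below is a small sketch that assumes the 2.x TransportClient accessors listedNodes(), connectedNodes() and filteredNodes(); the NodeListDump helper itself is illustrative:

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.cluster.node.DiscoveryNode;

// print the three node lists maintained by TransportClientNodesService
public final class NodeListDump {
    private NodeListDump() {}

    public static void dump(TransportClient client) {
        for (DiscoveryNode node : client.listedNodes()) {       // nodes added via addTransportAddress
            System.out.println("listed:    " + node);
        }
        for (DiscoveryNode node : client.connectedNodes()) {    // nodes that requests are routed to
            System.out.println("connected: " + node);
        }
        for (DiscoveryNode node : client.filteredNodes()) {     // nodes filtered out (e.g. wrong cluster name)
            System.out.println("filtered:  " + node);
        }
    }
}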

How does the Client achieve load balancing?

Each execute operation retrieves a node from the nodes list, selecting it via a simple round-robin. The core code is as follows:

private final AtomicInteger randomNodeGenerator = new AtomicInteger();
...
private int getNodeNumber() {
    int index = randomNodeGenerator.incrementAndGet();
    if (index < 0) {
        index = 0;
        randomNodeGenerator.set(0);
    }
    return index;
}
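To make the round-robin concrete, here is a simplified, illustrative sketch of how the counter maps onto the node list; it mirrors the idea of nodes.get(index % nodes.size()) used when a request is executed, but the RoundRobinPicker class itself is not part of Elasticsearch:

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// illustrative sketch: map a monotonically increasing counter onto the node list
final class RoundRobinPicker<T> {
    private final AtomicInteger counter = new AtomicInteger();

    T pick(List<T> nodes) {
        if (nodes.isEmpty()) {
            throw new IllegalStateException("no nodes available");
        }
        int index = counter.incrementAndGet();
        if (index < 0) {                 // guard against int overflow, as the ES code does
            index = 0;
            counter.set(0);
        }
        return nodes.get(index % nodes.size());
    }
}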

The request is then written through the Netty channel. The core code is as follows:

public void sendRequest(final DiscoveryNode node, final long requestId, final String action,
                        final TransportRequest request, TransportRequestOptions options)
        throws IOException, TransportException {
    // 1
    Channel targetChannel = nodeChannel(node, options);

    if (compress) {
        options = TransportRequestOptions.builder(options).withCompress(true).build();
    }

    byte status = 0;
    status = TransportStatus.setRequest(status);

    ReleasableBytesStreamOutput bStream = new ReleasableBytesStreamOutput(bigArrays);
    boolean addedReleaseListener = false;
    try {
        bStream.skip(NettyHeader.HEADER_SIZE);
        StreamOutput stream = bStream;
        // only compress if asked, and, the request is not bytes, since then only
        // the header part is compressed, and the "body" can't be extracted as compressed
        if (options.compress() && (!(request instanceof BytesTransportRequest))) {
            status = TransportStatus.setCompress(status);
            stream = CompressorFactory.defaultCompressor().streamOutput(stream);
        }

        // we pick the smallest of the 2, to support both backward and forward compatibility
        // note, this is the only place we need to do this, since from here on, we use the serialized version
        // as the version to use also when the node receiving this request will send the response with
        Version version = Version.smallest(this.version, node.version());

        stream.setVersion(version);
        stream.writeString(action);

        ReleasablePagedBytesReference bytes;
        ChannelBuffer buffer;
        // it might be nice to somehow generalize this optimization, maybe a smart "paged" bytes output
        // that create paged channel buffers, but its tricky to know when to do it (where this option is
        // more explicit).
        if (request instanceof BytesTransportRequest) {
            BytesTransportRequest bRequest = (BytesTransportRequest) request;
            assert node.version().equals(bRequest.version());
            bRequest.writeThin(stream);
            stream.close();
            bytes = bStream.bytes();
            ChannelBuffer headerBuffer = bytes.toChannelBuffer();
            ChannelBuffer contentBuffer = bRequest.bytes().toChannelBuffer();
            buffer = ChannelBuffers.wrappedBuffer(NettyUtils.DEFAULT_GATHERING, headerBuffer, contentBuffer);
        } else {
            request.writeTo(stream);
            stream.close();
            bytes = bStream.bytes();
            buffer = bytes.toChannelBuffer();
        }
        NettyHeader.writeHeader(buffer, requestId, status, version);
        // 2
        ChannelFuture future = targetChannel.write(buffer);
        ReleaseChannelFutureListener listener = new ReleaseChannelFutureListener(bytes);
        future.addListener(listener);
        addedReleaseListener = true;
        transportServiceAdapter.onRequestSent(node, requestId, action, request, options);
    } finally {
        if (!addedReleaseListener) {
            Releasables.close(bStream.bytes());
        }
    }
}

The most important steps are 1 and 2; the code in between serializes the request and does the necessary bookkeeping.

1: obtain a connection (channel) to the target node;

2: write the data through that connection.

Two new questions arise at this point:

1. When is the nodes list populated?

2. When are the connections created?

When is the nodes data written?

The core is the call to doSample(). The code is as follows:

protected void doSample() {
    // the nodes we are going to ping include the core listed nodes that were added
    // and the last round of discovered nodes
    Set<DiscoveryNode> nodesToPing = Sets.newHashSet();
    for (DiscoveryNode node : listedNodes) {
        nodesToPing.add(node);
    }
    for (DiscoveryNode node : nodes) {
        nodesToPing.add(node);
    }

    final CountDownLatch latch = new CountDownLatch(nodesToPing.size());
    final ConcurrentMap<DiscoveryNode, ClusterStateResponse> clusterStateResponses = ConcurrentCollections.newConcurrentMap();
    for (final DiscoveryNode listedNode : nodesToPing) {
        threadPool.executor(ThreadPool.Names.MANAGEMENT).execute(new Runnable() {
            @Override
            public void run() {
                try {
                    if (!transportService.nodeConnected(listedNode)) {
                        try {
                            // if its one of the actual nodes we will talk to, not to listed nodes, fully connect
                            if (nodes.contains(listedNode)) {
                                logger.trace("connecting to cluster node [{}]", listedNode);
                                transportService.connectToNode(listedNode);
                            } else {
                                // its a listed node, light connect to it...
                                logger.trace("connecting to listed node (light) [{}]", listedNode);
                                transportService.connectToNodeLight(listedNode);
                            }
                        } catch (Exception e) {
                            logger.debug("failed to connect to node [{}], ignoring...", e, listedNode);
                            latch.countDown();
                            return;
                        }
                    }
                    // the core is here: at initialization there may be only one configured node,
                    // and that address is used to send a cluster state request ("cluster:monitor/state")
                    transportService.sendRequest(listedNode, ClusterStateAction.NAME,
                            headers.applyTo(Requests.clusterStateRequest().clear().nodes(true).local(true)),
                            TransportRequestOptions.builder().withType(TransportRequestOptions.Type.STATE).withTimeout(pingTimeout).build(),
                            new BaseTransportResponseHandler<ClusterStateResponse>() {

                                @Override
                                public ClusterStateResponse newInstance() {
                                    return new ClusterStateResponse();
                                }

                                @Override
                                public String executor() {
                                    return ThreadPool.Names.SAME;
                                }

                                @Override
                                public void handleResponse(ClusterStateResponse response) {
                                    /* the callback returns information about all nodes in the cluster, e.g.:
                                       {"version":27,"state_uuid":"YSI9d_HiQJ-FFAtGFCVOlw","master_node":"TXHHx-XRQaiXAxtP1EzXMw","blocks":{},
                                        "nodes":{
                                          "7":{"name":"es03","transport_address":"1.1.1.1:9300","attributes":{"data":"false","master":"true"}},
                                          "6":{"name":"common02","transport_address":"1.1.1.2:9300","attributes":{"master":"false"}},
                                          "5":{"name":"es02","transport_address":"1.1.1.3:9300","attributes":{"data":"false","master":"true"}},
                                          "4":{"name":"common01","transport_address":"1.1.1.4:9300","attributes":{"master":"false"}},
                                          "3":{"name":"common03","transport_address":"1.1.1.5:9300","attributes":{"master":"false"}},
                                          "2":{"name":"es01","transport_address":"1.1.1.6:9300","attributes":{"data":"false","master":"true"}},
                                          "1":{"name":"common04","transport_address":"1.1.1.7:9300","attributes":{"master":"false"}}},
                                        "metadata":{"cluster_uuid":"_na1x_","templates":{},"indices":{}},
                                        "routing_table":{"indices":{}},
                                        "routing_nodes":{"unassigned":[]}} */
                                    clusterStateResponses.put(listedNode, response);
                                    latch.countDown();
                                }

                                @Override
                                public void handleException(TransportException e) {
                                    logger.info("failed to get local cluster state for {}, disconnecting...", e, listedNode);
                                    transportService.disconnectFromNode(listedNode);
                                    latch.countDown();
                                }
                            });
                } catch (Throwable e) {
                    logger.info("failed to get local cluster state info for {}, disconnecting...", e, listedNode);
                    transportService.disconnectFromNode(listedNode);
                    latch.countDown();
                }
            }
        });
    }

    try {
        latch.await();
    } catch (InterruptedException e) {
        return;
    }

    HashSet<DiscoveryNode> newNodes = new HashSet<>();
    HashSet<DiscoveryNode> newFilteredNodes = new HashSet<>();
    for (Map.Entry<DiscoveryNode, ClusterStateResponse> entry : clusterStateResponses.entrySet()) {
        if (!ignoreClusterName && !clusterName.equals(entry.getValue().getClusterName())) {
            logger.warn("node {} not part of the cluster {}, ignoring...",
                    entry.getValue().getState().nodes().localNode(), clusterName);
            newFilteredNodes.add(entry.getKey());
            continue;
        }
        // collect all the data nodes here and write them into the nodes list
        for (ObjectCursor<DiscoveryNode> cursor : entry.getValue().getState().nodes().dataNodes().values()) {
            newNodes.add(cursor.value);
        }
    }
    nodes = validateNewNodes(newNodes);
    filteredNodes = Collections.unmodifiableList(new ArrayList<>(newFilteredNodes));
}

doSample() is invoked at two points:

1. client.addTransportAddress(transportAddress);

2. ScheduledNodeSampler: by default, every node is sampled every 5 seconds.
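The periodic part can be pictured with a small sketch; the real ScheduledNodeSampler schedules itself on the Elasticsearch ThreadPool, while this illustrative PeriodicSampler uses a plain ScheduledExecutorService:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// sketch of the idea behind ScheduledNodeSampler: re-run the sampler at a fixed interval
// (5 seconds is the default client.transport.nodes_sampler_interval)
final class PeriodicSampler {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    void start(Runnable sampleOnce, long intervalSeconds) {
        scheduler.scheduleWithFixedDelay(sampleOnce, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
    }

    void stop() {
        scheduler.shutdownNow();
    }
}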

When is the connection created?

Connections are also created inside doSample(), and are ultimately established by NettyTransport.

Here we can see that for a listed node a light connection is created, i.e. only a single lightweight channel; otherwise a full connection is created, which consists of the following channel types:

  • recovery: used for data recovery; 2 channels by default;
  • bulk: used for bulk requests; 3 channels by default;
  • reg (medium): typical search and single-document index requests; 6 channels by default;
  • state (high): for example, sending the cluster state; 1 channel by default;
  • ping: pings between nodes; 1 channel by default.

The corresponding code is:

public void start() {
    List<Channel> newAllChannels = new ArrayList<>();
    newAllChannels.addAll(Arrays.asList(recovery));
    newAllChannels.addAll(Arrays.asList(bulk));
    newAllChannels.addAll(Arrays.asList(reg));
    newAllChannels.addAll(Arrays.asList(state));
    newAllChannels.addAll(Arrays.asList(ping));
    this.allChannels = Collections.unmodifiableList(newAllChannels);
}
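When a request is sent, a channel of the matching type is picked from the corresponding array. The following is a simplified, illustrative sketch of that idea; the real selection logic lives in NettyTransport's NodeChannels and differs in detail:

import java.util.concurrent.ThreadLocalRandom;

// illustrative sketch (not the actual NodeChannels code): pick one channel out of the
// array that matches the request type; any channel of the right type can carry the request
final class ChannelPicker<C> {
    private final C[] recovery;
    private final C[] bulk;
    private final C[] reg;
    private final C[] state;
    private final C[] ping;

    ChannelPicker(C[] recovery, C[] bulk, C[] reg, C[] state, C[] ping) {
        this.recovery = recovery;
        this.bulk = bulk;
        this.reg = reg;
        this.state = state;
        this.ping = ping;
    }

    C pick(String type) {
        C[] pool;
        switch (type) {
            case "recovery": pool = recovery; break;
            case "bulk":     pool = bulk;     break;
            case "reg":      pool = reg;      break;
            case "state":    pool = state;    break;
            case "ping":     pool = ping;     break;
            default: throw new IllegalArgumentException("unknown channel type: " + type);
        }
        // pick any channel from the matching pool
        return pool[ThreadLocalRandom.current().nextInt(pool.length)];
    }
}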

 
