[Command line] Using curl to query the public egress IP    July 22, 2016 14:27:02
Whether at home, in the office, or on a company host, machines are very often on an intranet, that is, they reach the Internet through NAT, and sometimes you need to look up the public IP. If a browser is available, you can search the keyword "ip" on Baidu or Google to get the public IP. What if you only have the command line? The procedure is as follows.
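A minimal sketch: several public services return your egress IP as plain text, so a single curl call is enough (these endpoints are common examples and may change over time):
curl ifconfig.me
curl icanhazip.com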
The warmer API provided by Elasticsearch can be used to register, delete, or get a warmer with a particular name. Typically, a warmer contains a request that loads a large amount of index data (for example, a sort on a particular field used in searches, or a query that uses aggregation functions such as sum, min, or max) to achieve a warm-up effect. A concrete invocation example follows (a warmer named warmer_1 is defined for the index named test):
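A hedged sketch of the calls (ES 1.x warmers API; the aggregation field "price" is hypothetical):
# Register warmer_1 on index test; the body is an ordinary search request
curl -XPUT 'localhost:9200/test/_warmer/warmer_1' -d '{
  "query": { "match_all": {} },
  "aggs": { "max_price": { "max": { "field": "price" } } }
}'
# Get or delete the warmer by the same name
curl -XGET 'localhost:9200/test/_warmer/warmer_1'
curl -XDELETE 'localhost:9200/test/_warmer/warmer_1'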
: "1.6.0",
build_hash: "cdd3ac4dde4f69524ec0a14de3828cb95bbb86d0",
build_timestamp: "2015-06-09t13:36:34z",
Build_snapshot:false,
lucene_version: "4.10.4"
},
tagline: "You Know, for Search"
}
Interface
ES externally provides a standard REST API, and all cluster operations go through it: view cluster, node, and index status and statistics; manage clusters, nodes, indexes, and types; perform CRUD operations (create, read, update, delete) on indexes; and perform advanced search functions
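For example (a sketch assuming the default host and port; the field name in the search is hypothetical), health checks and searches are plain HTTP calls:
curl -XGET 'http://localhost:9200/_cluster/health?pretty'
curl -XGET 'http://localhost:9200/_search?q=user:kimchy&pretty'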
1. Terminology
Cluster: the cluster as a whole
Node: a node in the cluster
Index: similar to a database in MySQL
Type: similar to a table in MySQL
Document: the content (a record)
Shard: a data fragment; by default a single shard holds at most 2,147,483,519 documents
Replicas: the number of copies of the data; replicas also serve queries, they are not just backups
2. Installation
(a) Elasticsearch depends on a JRE to run; for JRE installation see: Ubuntu Install JRE
(b) Download address: https://www.elastic.co/
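A hedged sketch of the remaining steps, assuming the 1.6.0 tarball seen in the version output above (the exact download URL may differ):
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.6.0.tar.gz
tar -xzf elasticsearch-1.6.0.tar.gz
cd elasticsearch-1.6.0
./bin/elasticsearch -d        # start as a daemon
curl localhost:9200           # verify it is running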
Installation is simple:
curl -L http://toolbelt.treasure-data.com/sh/install-redhat.sh | sh
After the installation is complete, edit the configuration file:
# vim /etc/td-agent/td-agent.conf
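A minimal sketch of what td-agent.conf might contain; the log path and tag are hypothetical, and a real ELK setup would route the match to Elasticsearch rather than stdout:
<source>
  @type tail                              # tail a log file as the event source
  path /var/log/messages
  pos_file /var/log/td-agent/messages.pos # remember the read position across restarts
  tag system.messages
  format none
</source>
<match system.**>
  @type stdout                            # print matched events (swap for an ES output in practice)
</match>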
Start the fluentd service:
# service td-agent start
III. Installing and Deploying Kibana 3
Kibana 3 is a Web UI front-end tool developed using HTML and JavaScript.
Download: wget http://download.elasticsearch.org/kibana/kibana/kibana-latest.zip
Unpack: unzip kibana-latest.zip
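Kibana 3 is purely static files, so the remaining steps are just pointing it at Elasticsearch and serving the directory; a sketch (the port is arbitrary):
# config.js in the unpacked directory holds the Elasticsearch address;
# by default it is built from the page's own hostname, roughly:
#   elasticsearch: "http://"+window.location.hostname+":9200",
cd kibana-latest && python -m SimpleHTTPServer 8000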
# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: es-test
node.name: node-1
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
# service elasticsearch restart
II. Using the REST API
1. What it is capable of:
Check your cluster, node, and index health, status, and statistics
Administer your cluster, node, and index data and metadata
Perform CRUD (create, read, update, and delete) and search operations against your indexes
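The CRUD operations map directly onto HTTP verbs; a sketch with hypothetical index, type, and document names:
curl -XPUT 'localhost:9200/customer/external/1' -d '{"name": "John Doe"}'   # create/update
curl -XGET 'localhost:9200/customer/external/1?pretty'                      # read
curl -XDELETE 'localhost:9200/customer/external/1'                          # delete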
This article is a translation of the official documentation plus the author's own understanding. At the time of translation, the Elasticsearch (hereinafter ES) version was 1.2.2. Please support the original: http://www.cnblogs.com/donlianli/p/3836768.html
I. Changes to the statistics-related commands
The command formats for cluster state (cluster_state), node information (nodes_info), node statistics (nodes_stats), and index information (indices_stats) have been unified
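The unified forms look like this (a sketch assuming a default local node; myindex is hypothetical):
curl -XGET 'localhost:9200/_cluster/state?pretty'
curl -XGET 'localhost:9200/_nodes?pretty'
curl -XGET 'localhost:9200/_nodes/stats?pretty'
curl -XGET 'localhost:9200/myindex/_stats?pretty'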
Recently I have been working on log analysis, using Logstash + Elasticsearch + Kibana to implement log import, filtering, and visual management. The official documentation is not detailed enough, and most of the articles online are either written for Linux systems or copy someone else's configuration and cannot actually be run. It took a lot of effort to get these three pieces working together, so here is a write-up of the experience. Without further ado, on to the subject
Elasticsearch. No rows of columnar data are required. This is a completely different way of thinking about data, and it is one reason Elasticsearch can perform complex full-text search. Elasticsearch uses JSON (JavaScript Object Notation) as the serialization format for documents. JSON is supported by most languages and has become a standard format in the NoSQL world. It is simple, concise, and easy to read
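A document is simply a JSON object, and indexing one is a single PUT; a sketch with hypothetical index, type, and field names:
curl -XPUT 'localhost:9200/megacorp/employee/1' -d '{
  "first_name": "John",
  "last_name":  "Smith",
  "about":      "I love to go rock climbing"
}'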
curl(), file_get_contents(), and snoopy.class.php are three tools commonly used to fetch or capture remote pages. My default is still snoopy.class.php, because it is reasonably efficient and does not require any special server configuration support, so it can be used on an ordinary virtual host. file_get_contents() is less efficient and often fails; curl() is highly efficient and supports multi-threaded use, but it requires the curl extension to be enabled on the server
PHP curl can be used to crawl web pages and analyze web data. It is simple and easy to use; rather than describing each function in detail, here is the code:
Only a few of the main functions are kept. To implement a mock login, session capture may be involved, and the form parameters are supplied by the pages before and after.
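At the command line, the same mock-login flow looks like this with curl (URL, form fields, and credentials are all hypothetical):
curl -c cookies.txt -d 'user=alice&pass=secret' 'http://example.com/login'   # POST the form, save the session cookie
curl -b cookies.txt 'http://example.com/member/home'                         # reuse the cookie on later pages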
The main function of libcurl is to connect and communicate with different servers using different protocols
1. A Brief Description of Elasticsearch
a. Elasticsearch is a Lucene-based search server with distributed multi-user capabilities. Elasticsearch is an open source project (Apache License) developed in Java. Based on a RESTful web interface, it provides real-time search and is stable, reliable, fast, high-performance, and easy to install and use. Its scale-out capability is very strong, and there is no need to restart the service
curl is an open source file transfer tool that works with URL syntax from the command line
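For example (the URL is a placeholder), downloading a page to a file is a one-liner, with -L following any redirects:
curl -L 'http://www.example.com/' -o page.html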
This article implements a batch curl example in PHP.
The code is as follows:
1Header("Content-type:text/html;charset=utf8");23/*get all a labels for two pages first*/4//Initialize two simple handle handles5$ch 1=curl_init ();6$ch 2=curl_init ();7Curl_setopt_array ($ch 1,Array(8Curlopt_url = ' http://www.sina.com.cn
NLog.Targets.ElasticSearch
The corresponding nlog.config file looks like this (note the bold font): this lets us freely output non-exception logs to Elasticsearch, such as the logs we record for Web API requests. devlogging is the index name configured in the configuration file; we also use NLog to write a file log.
Search: a REST-based request can query by ID: http://localhost:9200/ for example: http://192.168.0.103:9200/devlogging/lo
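A hedged sketch of such a by-ID lookup (the type name and document ID here are hypothetical, since the excerpt's URL is cut off):
curl -XGET 'http://192.168.0.103:9200/devlogging/logevent/1?pretty'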
Routing Documents to Shards
When you index a document, it is stored on a single primary shard. How does Elasticsearch know which shard the document belongs to? When you create a new document, how does it know whether it should be stored on shard 1 or shard 2? The process cannot be random, because we will need to retrieve the document later. In fact, it is determined by a simple algorithm:
shard = hash(routing) % number_of_primary_shards
The routing value is an arbitrary string, which defaults to the document's _id
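Supplying an explicit routing value therefore pins related documents to one shard; a sketch with hypothetical index, type, and routing names:
curl -XPUT 'localhost:9200/myindex/mytype/1?routing=user123' -d '{"msg": "hello"}'
curl -XGET 'localhost:9200/myindex/mytype/1?routing=user123'   # the same routing value must be supplied on reads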
}, "_ttl" : { "enabled" : true, "store" : true, "default" : "5000" } } } }‘
Unfortunately, there are several problems:
1. Windows does not have a curl runtime environment by default.
Workaround: download a curl-7.17.0-win32-nossl package and put curl.exe in the script directory.
2. Switching to the script directory at the command line and running create_indices.bat reports an error.
Workaround: through the error me
same network segment use this attribute to distinguish between different clusters; nodes with the same cluster.name group together to build one cluster
node.name: node-1   // node name; by default a name is randomly picked from a name list; it must not be repeated
node.master: true   // whether the node is eligible to be elected master; defaults to true. The first machine in the cluster is master by default, and if it goes down a new master is elected
node.data: true     // whether the node stores index data