Kibana is an open source analytics and visualization platform designed to work with Elasticsearch.
You use Kibana to search, view, and interact with data stored in Elasticsearch indices.
You can easily perform advanced data analysis and visualize your data in a variety of charts, tables, and maps.
Kibana makes it easy to understand large amounts of data. Its simple, browser-based interface enables you to quickly create and share dynamic dashboards that show real-time changes to Elasticsearch queries.
1. Installing Kibana
2. Kibana Configuration
3. Accessing Kibana
Kibana is a web application that you access through port 5601. For example: localhost:5601 or http://YOURDOMAIN.com:5601
When you access Kibana, the Discover page loads with the default index pattern selected. The time filter is set to the last 15 minutes and the search query is set to match-all (*).
3.1. Check the Kibana status
For example, http://192.168.101.5:5601/api/status returns the status information in JSON format.
4. Connecting Kibana to Elasticsearch
Before you start using Kibana, you need to tell Kibana which Elasticsearch indices you want to explore. The first time you visit Kibana, you are prompted to define an index pattern that matches the names of one or more of your indices.
(Tip: By default, Kibana connects to the Elasticsearch instance running on localhost. To connect to a different Elasticsearch instance, modify the elasticsearch.url setting in kibana.yml and restart Kibana.)
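The relevant setting looks like the following minimal sketch (for a 6.x-era kibana.yml; the host and port are placeholders to replace with your own):

```yaml
# kibana.yml -- point Kibana at a remote Elasticsearch instance
# (hypothetical address; use your own Elasticsearch host and port)
elasticsearch.url: "http://192.168.101.5:9200"
```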
To configure the Elasticsearch indices that you want to access with Kibana:
1. Access the Kibana UI. For example, localhost:5601 or http://YOURDOMAIN.com:5601
2. Specify an index pattern that matches one or more of your Elasticsearch indices. When you specify your index pattern, every matching index is displayed.
(Note: * matches zero or more characters; a pattern without a wildcard matches an index by its exact name)
3. Click "Next step", then select the field containing the timestamp you want to use for time-based comparisons. If your index does not contain time-based data, select the "I don't want to use the Time Filter" option.
4. Click the "Create index pattern" button to add the index pattern. The first index pattern is automatically configured as the default; later, when you have multiple index patterns, you can choose which one to set as the default. (Hint: Management > Index Patterns)
Now Kibana is connected to your Elasticsearch data. Kibana displays a read-only list of the fields in the indices that match the configured pattern.
You can explore your data interactively from the Discover page. By default you can access every document in every index that matches the selected index pattern. You can submit query requests, filter the search results, and view document data. You can also see the number of documents matching a query request, as well as field value statistics. If the selected index pattern has a time field configured, the distribution of documents over time is displayed in a histogram at the top of the page.
5.1. Set the time filter
5.2. Search for data
You can enter query criteria in the search box to query the indices matched by the current index pattern. When querying, you can use the Kibana standard query language (based on the Lucene query syntax) or the full JSON-based Elasticsearch Query DSL. As an experimental feature, the Kibana query language offers autocomplete and a simplified syntax, which you can enable from the Options menu in the query bar.
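To illustrate the difference between the two styles, here is a sketch of the same search written as a Lucene-style query string and as an equivalent Elasticsearch Query DSL body (the field names are hypothetical examples from a web server log):

```python
import json

# The Lucene-style string you would type into the Kibana search bar.
lucene_query = "status:200 AND extension:php"

# The same search expressed in the full JSON-based Elasticsearch Query DSL.
dsl_body = {
    "query": {
        "bool": {
            "must": [
                {"match": {"status": 200}},
                {"match": {"extension": "php"}},
            ]
        }
    }
}

print(json.dumps(dsl_body, indent=2))
```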
When you submit a query request, the histogram, document table, and field list are updated to reflect the search results. The total number of hits (matching documents) is displayed in the toolbar. The first 500 hits are shown in the document table, in reverse chronological order by default, so the most recent documents appear first. You can reverse the sort order by clicking the "Time" column.
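Under the hood this corresponds to a search request along the lines of the following sketch (the time field name "@timestamp" is an assumption; your index may use a different field):

```python
# A Discover-style request body: match everything, return the first
# 500 hits, newest documents first.
search_body = {
    "query": {"match_all": {}},
    "size": 500,
    "sort": [{"@timestamp": {"order": "desc"}}],
}

# Reversing the sort order (clicking the "Time" column) simply flips
# "desc" to "asc" in the sort clause.
search_body["sort"][0]["@timestamp"]["order"] = "asc"
```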
5.2.1. Lucene query syntax
The Kibana query language is based on the Lucene query syntax. Here are some tips that might help you:
- To perform a free-text search, simply enter a text string. For example, if you are searching web server logs, you could enter the keyword "safari" to search all fields for "safari"
- To search for a value in a specific field, prefix the value with the name of the field. For example, entering "status:200" finds all documents whose status field has the value 200
- To search a range of values, use the bracketed range syntax, [START_VALUE TO END_VALUE]. For example, to find documents with a 4xx status code, enter status:[400 TO 499]
- To specify more complex query criteria, use the Boolean operators AND, OR, and NOT. For example, to find documents whose status code is 4xx and whose extension field is php or html, enter status:[400 TO 499] AND (extension:php OR extension:html)
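The last expression above can also be sent to Elasticsearch directly, wrapped in a query_string query so that Elasticsearch parses it with the same Lucene syntax used in the search bar; a sketch:

```python
# The Lucene expression from the tips above, wrapped in a query_string
# query for use in a raw Elasticsearch search request.
body = {
    "query": {
        "query_string": {
            "query": "status:[400 TO 499] AND (extension:php OR extension:html)"
        }
    }
}
```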
5.2.2. Kibana query Syntax Enhancements
New and simpler syntax
If you are familiar with Kibana's old Lucene query syntax, this new syntax should feel familiar. The fundamentals remain the same; we have simply improved a few things to make the query language easier to use.
response:200 matches documents whose response field has the value 200
A string wrapped in quotation marks is a phrase search. For example, message:"quick brown fox" searches for the phrase "quick brown fox" in the message field. Without the quotation marks, the query matches all documents containing those words regardless of their order. With the quotation marks, "quick brown fox" matches but "quick fox brown" does not. (Note: the quotation marks make the phrase match as a single unit)
The query parser no longer splits on spaces. Multiple search terms must be separated by explicit Boolean operators. Note that the Boolean operators are case-insensitive.
In Lucene, response:200 extension:php is equivalent to response:200 and extension:php. This matches documents whose response field matches 200 and whose extension field matches php.
If we change the operator to or, then response:200 or extension:php matches documents whose response field matches 200 or whose extension field matches php.
By default, and has higher precedence than or.
response:200 and extension:php or extension:css matches documents whose response is 200 and extension is php, or documents whose extension is css regardless of response.
Parentheses can change this precedence:
response:200 and (extension:php or extension:css) matches documents whose response is 200 and whose extension is either php or css
There is also a shorthand form:
response:(200 or 404) matches documents whose response field is 200 or 404. Multiple string values work too, for example: tags:(success and info and security)
You can also use not:
not response:200 matches documents whose response field is not 200
response:200 and not (extension:php or extension:css) matches documents whose response is 200 and whose extension is neither php nor css
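In Query DSL terms, that last expression corresponds roughly to a bool query; a sketch (the field names are taken from the examples above):

```python
# response:200 and not (extension:php or extension:css) expressed as a
# bool query: one required clause plus a must_not listing the excluded
# values.
body = {
    "query": {
        "bool": {
            "must": [{"match": {"response": 200}}],
            "must_not": [
                {"match": {"extension": "php"}},
                {"match": {"extension": "css"}},
            ],
        }
    }
}
```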
Range queries are a little different from Lucene:
Instead of byte:>1000, we write byte > 1000
>=, <, and <= are also valid operators
response:* matches all documents in which the response field exists
Wildcard queries are also possible. machine.os:win* matches documents whose machine.os field starts with win, so values such as "windows 7" and "windows 10" match.
Wildcards also let us search more than one field at a time. For example, suppose we have the two fields machine.os and machine.os.keyword and we want to search both for "windows 10"; we can write machine.os*:"windows 10"
5.2.3. Refreshing search results
5.3. Filter by field
The field list above controls which fields are displayed. Another way to add a filter is from the document data view, using the small book-like icon.
Filters can also be deleted.
We can also edit a filter's underlying DSL query directly.
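For example, a filter pinning response to 200 might look like the following when opened for editing (a minimal sketch; the field name comes from the query examples earlier in this section, and the exact shape varies by Kibana version):

```python
import json

# A minimal filter DSL fragment: a phrase match on a single field,
# the kind of query a simple field filter produces.
filter_dsl = {"query": {"match_phrase": {"response": 200}}}

print(json.dumps(filter_dsl, indent=2))
```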
5.4. View document Data
5.5. View the document context
5.6. View field Data statistics
Visualize allows you to create visualizations of the data in your Elasticsearch indices. You can then build dashboards to display the related visualizations.
Kibana visualizations are based on Elasticsearch queries. By using a series of Elasticsearch aggregations to extract and process your data, you can create charts that show the trends, spikes, and dips you need to know about.
6.1. Create a visualization
To create a visualization:
Step 1: Click the "Visualize" button in the left navigation bar
Step 2: Click the "Create New Visualization" button or the plus (+) button
Step 3: Select a visualization type
Step 4: Specify a search query to retrieve the data for the visualization
Step 5: Select the Y-axis aggregation in the visualization builder, for example sum, average, or count
Step 6: Set up the X-axis
See here for more details
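The Y-axis metric and X-axis bucket in the steps above map directly onto an Elasticsearch aggregation; a sketch, assuming hypothetical "@timestamp" and "bytes" fields:

```python
# Y-axis: a metric aggregation (the average of the "bytes" field).
# X-axis: a bucket aggregation (a date histogram on "@timestamp").
vis_body = {
    "size": 0,  # only the aggregated buckets are needed, not the raw hits
    "aggs": {
        "per_hour": {
            "date_histogram": {"field": "@timestamp", "interval": "1h"},
            "aggs": {"avg_bytes": {"avg": {"field": "bytes"}}},
        }
    },
}
```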
A Kibana dashboard displays a collection of saved visualizations and searches. You can arrange, resize, and edit the dashboard content, and then save the dashboard so you can share it.
7.1. Build a Dashboard
Step 1: Click "Dashboard" in the navigation bar
Step 2: Click the "Create New Dashboard" or plus (+) button
Step 3: Click the "Add" button
Step 4: To add a visualization, select one from the list of visualizations, or click the "Add New Visualization" button to create a new one
Step 5: To add a saved query, click the "Saved Search" tab and select one from the list
Step 6: When you have finished adding and adjusting the dashboard's contents, go to the top menu bar, click "Save", and enter a name
By default, Kibana dashboards use a light theme. To use a dark theme, click "Options" and select "Use dark theme". To make the dark theme the default, go to Management > Advanced Settings and set dashboard:defaultDarkTheme to on.
Elasticsearch console log:
[2018-08-15T14:48:26,874][INFO ][o.e.c.m.MetaDataCreateIndexService] [px524ts] [.monitoring-kibana-6-2018.08.15] creating index, cause [auto(bulk api)], templates [.monitoring-kibana], shards /, mappings [doc]
Kibana console log:
log [03:26:53.605] [info][license][xpack] Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active
9. Other related
Logstash