After setting up ELK, you will sometimes find in Kibana that a field's statistics use the wrong data type. This comes down to the Elasticsearch mapping. Although Logstash can convert data types such as float, int, or string, it has no double type, and even if you convert, the data written to ES is still stored according to the types defined in the ES mapping. So next we will learn to modify the ES mapping. Mapping serves many other purposes as well, such as defining whether a field is analyzed (tokenized); related index settings control the number of shards and replicas, and so on.
1) What is Mapping
An ES mapping is very similar to a data type in a statically typed language: if you declare a variable as int, it can only hold int data later. Likewise, a mapping field of type number can only store numeric data.
Compared with data types in a programming language, a mapping carries extra meaning: it not only tells ES what type of value a field holds, it also tells ES how to index the data and whether the data is searchable.
When a query does not return the data you expect, there is a good chance your mapping is the problem. When in doubt, check your mapping first.
2) Anatomy of a Mapping
A mapping involves one or more analyzers, and an analyzer is in turn made up of one or more filters. When ES indexes a document, it passes the content of each field to the corresponding analyzer, which passes it on to its filters.
The job of a filter is easy to understand: a filter is a way of transforming data. It takes a string as input and returns another string; a method that lowercases a string is a good example of a filter.
An analyzer is an ordered sequence of filters. Performing analysis means calling each filter in turn, and the final result is what ES stores and indexes.
In summary, the role of a mapping is to run a series of instructions that turn the input data into searchable index terms.
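As a loose illustration of the idea above (not Elasticsearch's actual internals), an analyzer can be modeled as a chain of string-transforming filters applied in order:

```python
# Illustrative sketch only: filters are string -> string transforms,
# and an "analyzer" applies them in sequence.

def lowercase_filter(text):
    """A filter: takes a string, returns a lowercased string."""
    return text.lower()

def strip_filter(text):
    """Another filter: trims surrounding whitespace."""
    return text.strip()

def analyze(text, filters):
    """An 'analyzer': applies its filters in order; the result is
    what would then be stored and indexed."""
    for f in filters:
        text = f(text)
    return text

print(analyze("  Parking LOT  ", [strip_filter, lowercase_filter]))
# -> "parking lot"
```

The order matters: each filter receives the output of the previous one, which mirrors how analysis is described above.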
3) Converting a mapping data type in practice
We use Logstash to write data into an index; at this point the index gets the default mapping, which looks like this:
{
  "parking_total": {
    "mappings": {
      "parking_total": {
        "properties": {
          "@timestamp": {
            "type": "date",
            "format": "strict_date_optional_time||epoch_millis"
          },
          "@version": {},
          "active": {},
          "host": {},
          "kafka": {
            "properties": {
              "consumer_group": {},
              "msg_size": {},
              "offset": {},
              "partition": {},
              "topic": {}
            }
          },
          "logdate": {},
          "message": {},
          "path": {},
          "total": {},
          "type": {}
        }
      }
    }
  }
}
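For reference, a minimal Logstash output section that writes to such an index might look like the following. The host and index name are illustrative, and option names vary slightly across Logstash versions:

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]   # illustrative ES address
    index => "parking_total"      # illustrative index name
  }
}
```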
We are going to modify the two fields total and active. An existing field's type cannot be changed in place, so we need to delete the index before modifying the mapping. Let's do that here.
Next, we create the mapping for this index. A mapping can exist even while the index holds no data; in fact, deleting the data does not delete the mapping. And if no fields are defined explicitly at creation time, the mapping is generated automatically with default types.
{"Mappings":
{
"Parking_total": {
"Properties": {
"@timestamp": {
' Type ': ' Date '
},
"Message": {
' Type ': ' String '
},
"Total": {
' type ': ' Double '
},
"Active": {
' type ': ' Double '
}
}
}
}
}
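As a sketch, the create-index body above can also be built programmatically, so the field types stay in one place. The function name and index name here are illustrative; the generated body would then be sent with a PUT request when creating the index:

```python
import json

def build_mapping(doc_type, field_types):
    """Build the JSON body for index creation with explicit field types.
    field_types maps field name -> ES type string (e.g. "double")."""
    props = {field: {"type": t} for field, t in field_types.items()}
    return json.dumps({"mappings": {doc_type: {"properties": props}}}, indent=2)

body = build_mapping("parking_total", {
    "@timestamp": "date",
    "message": "string",
    "total": "double",
    "active": "double",
})
print(body)
# The body could then be used when creating the index, e.g.:
#   curl -XPUT 'http://localhost:9200/parking_total' -d "$body"
```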
Re-import the data, and this time the fields are stored with the new types.
By:v
The mapping of Elasticsearch