In classification and clustering we often need to compute the distance between two samples. This is straightforward for continuous (numeric) features, but for nominal (categorical) features it is hard to define a distance. Even if we map the categories to numbers, e.g. {'A', 'B', 'C'} to [0, 1, 2], we cannot reasonably say that the distance between A and B, or between B and C, is 1 while the distance between A and C is 2. One-hot encoding addresses exactly this distance-measure problem: it makes the distance between every pair of categories the same. The method maps each category to a vector, e.g. {'A', 'B', 'C'} to [1,0,0], [0,1,0], [0,0,1]; note that the Euclidean distance between any two categories is now identical.
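This claim is easy to verify directly with NumPy (a minimal check, not part of the original article): the Euclidean distance between any two distinct one-hot vectors is always sqrt(2).

```python
import numpy as np

# Map each category to a one-hot vector
onehot = {'A': np.array([1.0, 0.0, 0.0]),
          'B': np.array([0.0, 1.0, 0.0]),
          'C': np.array([0.0, 0.0, 1.0])}

def dist(u, v):
    # Euclidean distance between two vectors
    return np.linalg.norm(u - v)

# All pairwise distances are equal (sqrt(2)), unlike the integer
# encoding [0, 1, 2], where dist(A, C) = 2 but dist(A, B) = 1.
print(dist(onehot['A'], onehot['B']))
print(dist(onehot['B'], onehot['C']))
print(dist(onehot['A'], onehot['C']))
```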
Now we explain the use of the OneHotEncoder class:
import numpy as np
from sklearn.preprocessing import OneHotEncoder

data = np.array([[1, 0, 3.25],
                 [0, 0, 5.2],
                 [2, 1, 3.6]])
# categorical_features and n_values belong to the older scikit-learn API
enc = OneHotEncoder(categorical_features=np.array([0, 1]), n_values=[3, 2])
enc.fit(data)
data = enc.transform(data).toarray()
print(data)
Running this produces:
[[0.   1.   0.   1.   0.   3.25]
 [1.   0.   0.   1.   0.   5.2 ]
 [0.   0.   1.   0.   1.   3.6 ]]
categorical_features lists the column indices that should be one-hot encoded; n_values gives the number of categories in each of those columns, i.e. how many new columns each original column expands into. Both parameters can be left unspecified and fit_transform used directly, in which case the program counts the number of categories in each column itself. This only works correctly for integers, however: floating-point values are truncated to integers before counting, so 3.5 and 3.6 both become 3 and are treated as the same category. If you do specify these two parameters, the categorical columns must already be encoded as consecutive integers {0, 1, 2, 3, 4, ...}, not arbitrary values such as {1, 10, 100, 200, ...}; otherwise an array index out-of-bounds error occurs.
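To make the column layout and the out-of-bounds condition concrete, here is a pure-NumPy sketch of the same expansion. The helper one_hot_columns is hypothetical (written for this illustration, not part of scikit-learn); it mimics the old encoder's layout, where the one-hot blocks come first and the untouched columns are appended at the end:

```python
import numpy as np

def one_hot_columns(data, cols, n_values):
    """Expand the integer columns `cols` into one-hot blocks of widths
    `n_values`; remaining columns are appended unchanged at the end.
    Hypothetical helper mimicking the old OneHotEncoder layout."""
    data = np.asarray(data, dtype=float)
    blocks = []
    for col, n in zip(cols, n_values):
        idx = data[:, col].astype(int)  # floats are truncated, as in the text
        if idx.min() < 0 or idx.max() >= n:
            # Categories must be coded {0, ..., n-1}, or indexing fails
            raise IndexError("category codes must lie in {0, ..., n-1}")
        block = np.zeros((data.shape[0], n))
        block[np.arange(data.shape[0]), idx] = 1.0
        blocks.append(block)
    rest = np.delete(data, cols, axis=1)
    return np.hstack(blocks + [rest])

data = np.array([[1, 0, 3.25],
                 [0, 0, 5.2],
                 [2, 1, 3.6]])
result = one_hot_columns(data, [0, 1], [3, 2])
print(result)
```

Feeding it codes like {1, 10, 100} with n_values=[3] raises the IndexError, which is the same failure mode the text warns about.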
Introduction to one-hot encoding with OneHotEncoder