Recently I have been researching one thing: the extreme learning machine (ELM).
In practice I often run into two kinds of problems: one is classification, the other is regression. Put simply, classification assigns a label to a bunch of numbers, while regression maps an input to a continuous number.
The data we deal with here generally has fairly high dimensionality. For both kinds of problems, the simplest approach is weighting: dimensions that strongly affect the final result get large weights, and dimensions with little effect get small weights. In fact, those low-influence dimensions are not completely useless to the model we build; at the very least, they help guarantee the stability and robustness of the whole model.
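To make the weighting idea concrete, here is a toy sketch of my own construction (the data, weights, and variable names are all hypothetical, not from the original notes): each feature is scaled by an importance weight before fitting a linear model, so influential dimensions dominate the fit while low-weight dimensions still contribute a little.

```python
import numpy as np

# Synthetic data: feature 0 drives the target strongly, feature 2 weakly,
# and feature 1 not at all.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = 5.0 * X[:, 0] + 0.1 * X[:, 2] + rng.normal(scale=0.01, size=100)

# Hand-picked importance weights: large for the dominant dimension,
# small (but nonzero) for the weak one.
weights = np.array([1.0, 0.0, 0.2])
Xw = X * weights                      # down-weight the weak dimensions

# Ordinary least squares on the re-weighted features.
coef, *_ = np.linalg.lstsq(Xw, y, rcond=None)
```

The weak dimension is shrunk rather than dropped outright, which matches the point above: it still carries a little information that stabilizes the model.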
Up to now I have not explained what ELM (Extreme Learning Machine) actually is, because it is still quite controversial. As far as my own experiments go, none of the ELM variants produce results as good as the original ELM, and even the best results do not surpass SVR (support vector regression).
The network structure of ELM is the same as that of a two-layer BP network; the difference lies in how the connection weights between the neurons are computed.
The network structure diagram is as follows:
[figure: ELM network structure (image not preserved)]
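A minimal sketch of the standard ELM training procedure may help here. This assumes the textbook formulation (input weights and biases drawn at random and never updated; output weights solved in closed form by least squares); the function names and hyperparameters are my own choices, not from the original notes.

```python
import numpy as np

def elm_train(X, y, n_hidden=30, seed=0):
    """Train a single-hidden-layer ELM for regression."""
    rng = np.random.default_rng(seed)
    # Random input weights and biases: fixed, never trained.
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)               # hidden-layer activations
    # Output weights solved analytically via the Moore-Penrose pseudoinverse.
    beta = np.linalg.pinv(H) @ y
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass: hidden activations times the learned output weights."""
    return np.tanh(X @ W + b) @ beta

# Usage: fit a one-dimensional regression target (a sine curve).
X = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_train(X, y, n_hidden=30)
y_hat = elm_predict(X, W, b, beta)
```

The closed-form solve for `beta` is the whole trick: unlike a BP network, there is no iterative gradient descent, which is why ELM training is so fast.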