Yesterday I went to have some fun at GE's family day. It must be nice to enjoy the employee welfare of a big industrial company ..
Back on topic: I finally have a clearer understanding of the core idea of kernel methods.
At this stage, my personal take is this: it is enough for a kernel function to compute, directly from the original data, the inner product of two data vectors in the feature space.
What does that buy you? It lets you use a linear method to discover non-linear relationships in the data, while still preserving the generalization performance of the learning machine ~
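A tiny sketch of this idea (my own illustration, not from the PPT): for 2-D inputs, the degree-2 polynomial kernel k(x, z) = (x . z)^2 equals the ordinary inner product of the explicit feature map phi(x) = (x1^2, sqrt(2) x1 x2, x2^2), so we get the feature-space inner product without ever constructing the feature vectors.

```python
import math

def phi(x):
    # Explicit feature map for the degree-2 polynomial kernel in 2-D:
    # phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2)
    x1, x2 = x
    return (x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2)

def kernel(x, z):
    # Degree-2 polynomial kernel: k(x, z) = (x . z)^2,
    # computed entirely in the original input space
    dot = x[0] * z[0] + x[1] * z[1]
    return dot * dot

x, z = (1.0, 2.0), (3.0, 0.5)
lhs = kernel(x, z)                                 # input-space computation
rhs = sum(a * b for a, b in zip(phi(x), phi(z)))   # feature-space inner product
print(lhs, rhs)  # the two values agree
```

The same trick scales to feature spaces too large to enumerate (even infinite-dimensional ones, as with the Gaussian kernel), which is exactly why a linear machine applied through a kernel can capture non-linear structure.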
The day before yesterday I also put together a PPT, Kernel Methods Quick Start, and discussed it with everyone. I think it went quite well ~
It mostly follows the second chapter of Kernel Methods for Pattern Analysis. It is fairly introductory, but reading it should give people new to this field an intuitive feel for the core method. I'm putting it here for your convenience ~
Why is it so painful for me ..
Update 2008.10.21: An important point missing from the PPT: most linear learning machines have a dual representation, and a key property of the dual representation is that the data enters only through the entries of the Gram matrix!
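To make that point concrete, here is a minimal sketch (my own example) of the perceptron in its dual form: the learned weights are coefficients alpha over the training points, and the training loop never touches the raw data, only the Gram matrix K with K[i][j] = k(x_i, x_j). Swapping in a different kernel function to build K is all it takes to "kernelize" the algorithm.

```python
def train_dual_perceptron(K, y, epochs=10):
    # Dual-form perceptron: instead of a weight vector w, learn one
    # coefficient alpha[i] per training point. Note the data x_i never
    # appears here -- only Gram-matrix entries K[j][i] = k(x_j, x_i).
    n = len(y)
    alpha = [0.0] * n
    for _ in range(epochs):
        for i in range(n):
            # Margin of point i under the current dual hypothesis
            s = sum(alpha[j] * y[j] * K[j][i] for j in range(n))
            if y[i] * s <= 0:        # mistake (or untouched point): update
                alpha[i] += 1.0
    return alpha

# Toy 1-D dataset with a linear kernel k(x, z) = x*z
xs = [-2.0, -1.0, 1.0, 2.0]
ys = [-1, -1, 1, 1]
K = [[a * b for b in xs] for a in xs]   # Gram matrix

alpha = train_dual_perceptron(K, ys)
preds = [sum(alpha[j] * ys[j] * K[j][i] for j in range(len(ys)))
         for i in range(len(ys))]
print(all(p * t > 0 for p, t in zip(preds, ys)))  # all points classified correctly
```

The same dual structure shows up in SVMs and ridge regression, which is why "data only ever appears inside the Gram matrix" is the key enabling property for kernel methods.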