Machine Learning Foundations (Coursera, NTU) Homework 3, Q13-15: C++ implementation


Hello everyone, I am mac Jiang. Today I want to share my C++ implementation of questions 13-15 of homework 3 of the Coursera NTU Machine Learning Foundations course. Many bloggers have already posted Python implementations, but articles giving a C++ implementation are noticeably rarer, so here I offer one C++ approach. Although my code produces the correct answers, some of the ideas or details may still be wrong; if you spot a mistake, please leave a comment so I can correct it, thank you! As always, I provide this code not so that you can pass the quiz with it, but as a reference for students who are stuck; I hope this article is of some help to your studies!

The source of this article: http://blog.csdn.net/a1015553840/article/details/51084922


1. Question 13

(1) Test instructions: We are given the target function f(x1,x2) = sign(x1^2 + x2^2 - 0.6), which classifies points into two classes using a circle. The task is to randomly generate 1000 points on X = [-1,1] x [-1,1], label each point with f(x1,x2), and then add 10% noise (for binary labels, noise means flipping the y value of 10% of the samples). Without any feature transform, run linear regression directly on this data, then use the learned parameters as a linear classifier. What E_in does this produce, averaged over 1000 runs?

(2) Analysis: First, randomly generate the training samples and add the noise. How to generate random samples and noise in C++ was covered in my earlier posts, so I will not go over it again; a minimal sketch is included below for reference.
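
For readers who have not seen the earlier posts, here is a minimal sketch of the sampling-plus-noise step, assuming the same rand()-based scheme used in the full program further down; the helper name samplePoint is mine, purely for illustration:

#include <cstdlib>

int sign(double v) { return v <= 0 ? -1 : 1; }

// Draw one labeled training point: (x1, x2) uniform on [-1,1] x [-1,1],
// label it with the target f(x1,x2) = sign(x1^2 + x2^2 - 0.6),
// then flip the label with probability 0.1 (the 10% noise).
void samplePoint(double &x1, double &x2, int &y) {
    x1 = 2.0 * rand() / RAND_MAX - 1.0;
    x2 = 2.0 * rand() / RAND_MAX - 1.0;
    y = sign(x1 * x1 + x2 * x2 - 0.6);
    if (double(rand()) / RAND_MAX < 0.1)
        y = -y;
}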

Second, run linear regression on the training samples to obtain the weight vector w.

Finally, use the resulting linear-regression parameters w as the parameters of a binary classifier: the prediction is sign(w*x), and comparing it with y under the 0/1 error gives the error rate E_in. A sketch of this step follows.
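
As a sketch of that evaluation step (the function name zeroOneError and the dynamic-size Eigen types are my own choices for illustration; the full program below uses fixed-size matrices instead):

#include <Eigen/Dense>

// 0/1 error of the linear classifier sign(w . x) against the labels y.
double zeroOneError(const Eigen::MatrixXd &X, const Eigen::VectorXd &y,
                    const Eigen::VectorXd &w) {
    Eigen::VectorXd s = X * w;                  // raw regression outputs
    int errors = 0;
    for (int i = 0; i < X.rows(); i++)
        if ((s(i) <= 0 ? -1 : 1) != int(y(i)))  // sign(w . x) vs. true label
            errors++;
    return double(errors) / X.rows();           // error rate = E_in
}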

Hint: Linear regression needs the pseudo-inverse, which in turn requires inverting a matrix. Writing your own matrix-inversion routine is fairly involved, so it is easier to call an existing C++ library. Commonly used options include Eigen, CUDA libraries, and so on; I use Eigen. If even that feels too complicated, you can solve the problem directly in MATLAB, where these operations are much simpler.

Many people have already written about using Eigen; a good introduction is: http://blog.csdn.net/augusdi/article/details/12907341. In fact we only need a handful of very basic operations, the most important being the matrix inverse and the generalized (pseudo-) inverse; they are worth a look. The sketch below shows the one formula we actually rely on.
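
Concretely, the normal-equation solution w = (X^T X)^-1 X^T y needs only transpose() and inverse(); a minimal sketch (my own illustration, again with dynamic-size types for brevity):

#include <Eigen/Dense>

// Closed-form linear regression via the pseudo-inverse (normal equations):
// w = (X^T X)^-1 X^T y.
Eigen::VectorXd regressionWeights(const Eigen::MatrixXd &X, const Eigen::VectorXd &y) {
    return (X.transpose() * X).inverse() * X.transpose() * y;
    // A numerically more stable alternative offered by Eigen:
    // return X.colPivHouseholderQr().solve(y);
}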

The code for question 13 is almost identical to that of question 14: just remove the feature-transform step (so the regression runs on the raw (1, x1, x2) features and w has 3 components instead of 6). It is therefore not attached separately here.

(3) Answer: 0.5


2. Question 14

(1) Test instructions: In question 13, using linear regression directly for classification worked very poorly: a 50% error rate, which has no practical value. But if we first apply a feature transform, the accuracy becomes much higher. The transform itself is simple and was covered in class, so I will not restate the derivation; the sketch below spells out the mapping for completeness.
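
The second-order polynomial transform used here (it mirrors the transform function in the full program below) maps each point to a 6-dimensional feature vector:

// z = (1, x1, x2, x1*x2, x1^2, x2^2)
void featureTransform(double x1, double x2, double z[6]) {
    z[0] = 1.0;      // bias term
    z[1] = x1;
    z[2] = x2;
    z[3] = x1 * x2;  // cross term
    z[4] = x1 * x1;  // squared terms let a linear model express a circle
    z[5] = x2 * x2;
}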

(2) Analysis: The full implementation follows. The (Xtest, ytest) parts are needed only for question 15; for brevity, everything is written together.

#include "stdafx.h" #include <iostream> #include a common matrix operation library using namespace Eigen under <eigen/eigen>//c++; using namespace std; #define X1MIN-1//define the maximum minimum value for the first dimension # define X1max 1#define x2min-1//define the maximum minimum value for the second dimension # define X2max 1#define N 1000//defines the number of samples int sign (double x) {if (x <= 0) Return-1;else return 1;} void Getranddata (matrix<double,n,3> &X,Matrix<double,N,1> &y) {//In (X1min,x1max) X (X2min, X2max) interval initialization point for (int i = 0; i < N; i++) {X (i,0) = 1.0; X (i,1) = double (x1max-x1min) * RAND ()/rand_max-(x1max-x1min)/2.0; X (i,2) = double (x2max-x2min) * RAND ()/rand_max-(x2max-x2min)/2.0;y (i,0) = sign (x (i,1) *x (i,1) + x (i,2) *x (i,2)-0.6); }}void getnoise (matrix<double,n,3> &X,Matrix<double,N,1> &y) {//Add noise for (int i =0; i < N; i++) if ( RAND ()/rand_max < 0.1) y (i,0) =-Y (i,0);} void transform (matrix<double,n,3> &X,Matrix<double,N,6> &z) {//convert X space to Z space for (int i = 0; i < N; i++ ) {Z (i,0) = 1; Z (i,1) = X (i,1); Z (i,2) = X (i,2); Z (i,3) = X (i,1) * X (i,2); Z (i,4) = x (i,1) * x (i,1); Z (i,5) = x (i,2) * x (i,2);}} void Linearregression (matrix<double,n,6> &Z,Matrix<double,N,1> &y,Matrix<double,6,1> &weight) {//Logistic regression calculation, get parameter Weightweight = (Z.transpose () *z). Inverse () * z.transpose () * y;} Double Calcue (matrix<double,n,6> &Z,Matrix<double,N,1> &y,Matrix<double,6,1> &weight ) {//calculate e_indouble e_in = 0.0;  matrix<double,n,1> temp = Z * weight;for (int i = 0; i < N; i++) if (double) sign (temp (i,0))! = Y (i,0)) E_in++;return Double (e_in/n);} void Main () {int seed[1000];//seed double Total_ein = 0.0;double total_eout = 0.0; matrix<double,n,3> x;//x composition Matrix matrix<double,n,3> xtest;//test sample matrix<double,n,6> z;//z composition matrix <double,N,6> ztest;//test Sample matrix<double,n,1> y;//y composition vector matrix<double,n,1> ytest;//test sample matrix< double,6,1> weight;//parameter weightmatrix<double,6,1> totalweight;totalweight<<0,0,0,0,0,0;for (int i = 0; i < N; i++)//1000 times, 1 seeds are required each time, so use RA firstnd initialize seed seed[i] = rand (); for (int k =0; k < N; k++) {Srand (seed[k]);//one seed per test getranddata (x, y);//Get Random sample getnoise (x, y );//Add Noise getranddata (xtest,ytest); getnoise (xtest,ytest); transform (x,z); transform (xtest,ztest); Linearregression ( Z,y,weight);//linear regression calculation parameters Weighttotal_ein + = Calcue (z,y,weight);//compute each E_IN error and Total_eout + = Calcue (ztest,ytest,weight); Totalweight + = weight;cout<< "k=" <<k<< ", Ein =" <<calcue (z,y,weight) << ", eout =" << Calcue (Ztest,ytest,weight) <<endl;} cout<< "Average e_in:" <<total_Ein/1000.0<<endl;cout<< "Average e_out:" <<total_eout/ 1000.0<<endl;cout<<totalweight/1000;}

(3) Answer: the last option. In fact, a moment's thought shows it must be the last one: f(x1,x2) = sign(x1^2 + x2^2 - 0.6) is a circle, so the learned hypothesis is bound to be approximately a circle as well. The noise can push it slightly away from the original circle, but not by much.


3. Question 15


(1) Test instructions: Using the best w obtained in question 14, generate 1000 test samples by the same procedure that produced the training samples (including the 10% label noise) and compute the error on them. Repeat 1000 times and average.

(2) Implementation: already included in the code for question 14.

Average E_in: the average error on the training samples.

Average E_out: the average error on the test samples.

The last six numbers printed are the six components of the question-14 weight vector w, averaged over the 1000 runs.

(3) Answer: 0.1

