Algorithm: an AdaBoost stock price rise/fall prediction model based on MACD


Disclaimer: the stock market is risky and investment requires caution. If you trade real money with this model, you do so at your own risk.



MACD is a technical indicator, and the textbook usage is simple: MACD > 0 is bullish, MACD < 0 is bearish. Is it really that simple? Since every technical indicator is a statistic computed over historical data, lag is unavoidable: sometimes MACD is clearly above 0 and the price still falls, and sometimes MACD is below 0 and the price still rises. This article builds a model on the AdaBoost algorithm, using a linear threshold classifier over the MACD as the weak classifier. This avoids the complexity of training a more elaborate classifier: the weak learner simply searches a bounded solution space for the threshold and bias with the lowest error rate, and we verify empirically that the approach works.
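In code, the textbook rule being questioned here is just this (a toy illustration of my own, not part of the model):

public class NaiveMacd {
    // Toy version of the textbook rule: bullish when MACD > 0, bearish otherwise.
    static int signal(double macd) {
        return macd > 0 ? 1 : -1; // 1 = expect a rise, -1 = expect a fall
    }
}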


First, a few words about what the AdaBoost algorithm is. Its core idea is to make decisions with a large number of weak classifiers, weighting each classifier's result in a particular way so as to obtain a single classifier with strong classification ability. One thing to note: when computing a weak classifier's error rate E, compute it per class. Here there are two classes, up and down, labeled 1 and -1 respectively; a weak classifier qualifies only if its error rate on each class is below 0.5 at the same time. Enough chatter: the program is written entirely in Java, with no optimization at all, so let's just run it and look at the results.
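Before the real code, here is a minimal sketch of that combination rule (the class and method names are illustrative, not from the program below): each weak classifier casts a vote in {-1, +1}, the vote is scaled by the classifier's weight alpha = 0.5 * ln((1 - error) / error), and the sign of the weighted sum is the final prediction.

public class VoteSketch {
    // alphas[t] is weak classifier t's weight, votes[t] its prediction in {-1, +1}
    static double strongOutput(double[] alphas, double[] votes) {
        double sum = 0;
        for (int t = 0; t < alphas.length; t++) {
            sum += alphas[t] * votes[t];
        }
        return Math.signum(sum); // +1 = predict up, -1 = predict down
    }
}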


First we need a weak classifier, which is defined as follows:

package Implementation;

import java.util.List;

/**
 * Inherit this class to implement your own weak classifier;
 * only the two abstract methods, train and predict, need to be implemented.
 * @author zhangshiming
 */
public abstract class WeakClassifier {

    public static final int RIGHT = 1;
    public static final int WRONG = 0;

    public double weight; // alpha, the classifier's vote weight

    // Error rate on the positive class (label == 1)
    public final double calculateErrorPositive(double[][] inputX, double[] inputY, int[] rightOrWrong) {
        double errorTimes = 0; // number of prediction errors
        double pNum = 0;       // number of positive samples
        for (int i = 0; i < inputX.length; i++) {
            if (inputY[i] == 1) {
                pNum++;
                int res = predict(inputX[i], inputY[i]);
                if (res == WRONG) {
                    errorTimes++;
                }
                rightOrWrong[i] = res;
            }
        }
        return errorTimes / pNum; // error rate
    }

    // Error rate on the negative class (label == -1)
    public final double calculateErrorNegative(double[][] inputX, double[] inputY, int[] rightOrWrong) {
        double errorTimes = 0; // number of prediction errors
        double nNum = 0;       // number of negative samples
        for (int i = 0; i < inputX.length; i++) {
            if (inputY[i] == -1) {
                nNum++;
                int res = predict(inputX[i], inputY[i]);
                if (res == WRONG) {
                    errorTimes++;
                }
                rightOrWrong[i] = res;
            }
        }
        return errorTimes / nNum; // error rate
    }

    // Overall error rate
    public final double calculateError(double[][] inputX, double[] inputY, int[] rightOrWrong) {
        double errorTimes = 0; // number of prediction errors
        for (int i = 0; i < inputX.length; i++) {
            int res = predict(inputX[i], inputY[i]);
            if (res == WRONG) {
                errorTimes++;
            }
            rightOrWrong[i] = res;
        }
        return errorTimes / inputY.length; // error rate
    }

    // Sum of the returns earned whenever this classifier predicts "up":
    // correct calls on up days plus wrong calls on down days.
    public final double calculateIR(double[][] inputX, double[] inputY, int[] rightOrWrong, List<double[]> irList) {
        double sumIR = 0;
        for (int i = 0; i < inputX.length; i++) {
            if (inputY[i] > 0 && rightOrWrong[i] == RIGHT) {
                sumIR += irList.get(i)[0];
            }
            if (inputY[i] < 0 && rightOrWrong[i] == WRONG) {
                sumIR += irList.get(i)[0];
            }
        }
        return sumIR;
    }

    // Returns RIGHT if the prediction matches the label, WRONG otherwise
    public final int predict(double[] x, double y) {
        double res = predict(x);
        if (res == y) {
            return RIGHT;
        } else {
            return WRONG;
        }
    }

    public abstract double predict(double[] x);

    public abstract void train(double[][] inputX, double[] inputY, double[] weights);
}


This weak classifier is an abstract class; you are responsible for implementing the two abstract methods, train and predict. If you understand how the training works, the parameters are easy to understand.


Next we inherit this class to implement our threshold classifier:


package Implementation;

/**
 * Threshold weak classifier: predicts "up" when mThreadhold * x[0] + mBias >= 0.
 * Each call to train() advances to the next (threshold, bias) pair in an
 * exhaustive grid search; the search state is static, shared across instances.
 */
class ThreadholdWeakClassifier extends WeakClassifier {

    public double mThreadhold;
    public double mBias;

    private static final int MAX_BIAS = 20;
    private static final int MIN_BIAS = -MAX_BIAS;
    private static double lastThreadhold = -10000; // -10000 means "not started yet"
    private static double lastBias = -10000;

    @Override
    public double predict(double[] x) {
        if (mThreadhold * x[0] + mBias >= 0) {
            return 1;
        } else {
            return -1;
        }
    }

    @Override
    public String toString() {
        return "bias=" + mBias + " threadhold=" + mThreadhold;
    }

    @Override
    public void train(double[][] inputX, double[] inputY, double[] weights) {
        double max = 0, min = 0;
        double step = 0.1;
        // find the max and min of the feature
        for (int i = 0; i < inputX.length; i++) {
            double val = inputX[i][0];
            if (val > max) {
                max = val;
            }
            if (val < min) {
                min = val;
            }
        }
        // advance to the next (threshold, bias) candidate
        if (lastThreadhold == -10000) {
            lastThreadhold = min + step;
            lastBias = MIN_BIAS;
        } else {
            if (lastBias == -10000) {
                lastBias = MIN_BIAS;
            } else {
                lastBias += 0.1;
                if (lastBias > MAX_BIAS) {
                    lastBias = MIN_BIAS;
                    lastThreadhold += step;
                }
            }
        }
        mThreadhold = lastThreadhold;
        mBias = lastBias;
    }
}

The implementation is very simple: each call to train produces the next threshold and bias, enumerating the given range exhaustively with the chosen step size; a usage sketch follows. Of course, you could instead use linear regression or some other method to build your own weak classifier.
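A minimal usage sketch (the data is made up and ThresholdDemo is my own, not part of the original program; note that the static search state persists across instances):

package Implementation;

public class ThresholdDemo {
    public static void main(String[] args) {
        WeakClassifier wc = new ThreadholdWeakClassifier();
        double[][] x = { { -16.51 }, { 2.576 } }; // MACD differences
        double[] y = { -1, 1 };                   // labels: fell, rose
        double[] w = { 0.5, 0.5 };                // sample weights (ignored by this learner)
        wc.train(x, y, w);                        // advances to the next (threshold, bias) pair
        System.out.println(wc + " -> predict = " + wc.predict(new double[]{ -16.51 }));
    }
}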




Next we implement the AdaBoost algorithm itself. The code is longer and needs some explanation; see the comments:


package Implementation;

import java.util.ArrayList;
import java.util.List;

public class AdaBoost {

    private double[][] mInputX = null; // samples
    private double[] mInputY = null;   // sample labels, 1 or -1
    private double[] mWeights = null;  // sample weights
    private int mSampleNum = -1;
    private List<WeakClassifier> mWeakClassifierSet = new ArrayList<WeakClassifier>();

    public AdaBoost() {
    }

    // Initialize the training samples and their labels (1, -1)
    public AdaBoost(double[][] X, double[] Y) {
        setInput(X, Y);
    }

    // The last column of input is taken as the label
    public AdaBoost(double[][] input) {
        if (input == null || input.length == 0) {
            throw new RuntimeException("No input data, please check!");
        }
        final int cols = input[0].length - 1;
        double[][] X = new double[input.length][cols];
        double[] Y = new double[input.length];
        for (int i = 0; i < input.length; i++) {
            for (int j = 0; j < input[i].length; j++) {
                if (j < input[i].length - 1) {
                    X[i][j] = input[i][j];
                } else {
                    Y[i] = input[i][j];
                }
            }
        }
        setInput(X, Y);
    }

    public void setInput(double[][] X, double[] Y) {
        if (X == null || Y == null) {
            throw new RuntimeException("Input X or input Y can not be null, please check!");
        }
        if (X.length != Y.length) {
            throw new RuntimeException("Input X or input Y belongs to different dimension, please check!");
        }
        mInputX = X;
        mInputY = Y;
        mSampleNum = mInputX.length;
        mWeights = new double[mSampleNum];
    }

    private void initWeights() {
        for (int i = 0; i < mSampleNum; i++) {
            mWeights[i] = 1.0 / mSampleNum;
        }
    }

    // Combine the weak classifiers the AdaBoost way: a weighted vote
    public double predict(double[] x) {
        double res = 0;
        if (mWeakClassifierSet.size() == 0) {
            throw new RuntimeException("No weak classifiers!");
        }
        for (int i = 0; i < mWeakClassifierSet.size(); i++) {
            res += mWeakClassifierSet.get(i).weight * mWeakClassifierSet.get(i).predict(x);
        }
        return res;
    }

    // Update the sample weights: misclassified samples end up with larger
    // weights, so a weak learner that uses them can focus on the hard samples.
    private void updateWeights(int[] rightOrWrong, double alpha) {
        double z = 0;
        for (int i = 0; i < rightOrWrong.length; i++) {
            if (rightOrWrong[i] == WeakClassifier.RIGHT) {
                mWeights[i] *= Math.exp(-alpha);
            } else if (rightOrWrong[i] == WeakClassifier.WRONG) {
                mWeights[i] *= Math.exp(alpha);
            } else {
                throw new RuntimeException("Unknown right or wrong flag, check!");
            }
            z += mWeights[i];
        }
        // normalize the weights
        for (int i = 0; i < rightOrWrong.length; i++) {
            mWeights[i] /= z;
        }
    }

    // The core method: search for qualified weak classifiers and keep them in a list
    public void trainWeakClassifiers(int epoch, List<double[]> irList) {
        if (epoch <= 1) {
            throw new RuntimeException("Training epoch must be greater than 1, please check!");
        }
        System.out.println("Start training...");
        initWeights(); // initialize the sample weights
        for (int i = 0; i < epoch; i++) {
            WeakClassifier weakClassifier = new ThreadholdWeakClassifier();
            weakClassifier.train(mInputX, mInputY, mWeights);
            int[] rightOrWrong = new int[mSampleNum]; // 1 right, 0 wrong
            double errorP = weakClassifier.calculateErrorPositive(mInputX, mInputY, rightOrWrong); // positive-class error rate
            double errorN = weakClassifier.calculateErrorNegative(mInputX, mInputY, rightOrWrong); // negative-class error rate
            double error = weakClassifier.calculateError(mInputX, mInputY, rightOrWrong);          // overall error rate
            double sumIR = weakClassifier.calculateIR(mInputX, mInputY, rightOrWrong, irList);     // return if we follow this classifier
            if (errorP > 0.5 || errorN > 0.5) {
                continue; // discard classifiers that fail the per-class error criterion
            }
            // optionally, also discard classifiers with non-positive return:
            // if (sumIR <= 0) continue;
            double alpha = Math.log((1 - error) / error) / 2;
            weakClassifier.weight = alpha; // save the weak classifier's vote weight
            updateWeights(rightOrWrong, alpha);
            mWeakClassifierSet.add(weakClassifier);
            System.out.println("Epoch " + i + " got one weak classifier: " + weakClassifier
                    + " error=" + error + " ir=" + sumIR);
        }
        System.out.println("Train finish!! " + mWeakClassifierSet.size() + " weak classifier(s) trained!");
    }
}


After training, each retained classifier has its own parameters: threshold, bias, and vote weight. They can then be combined.
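For a feel for those vote weights, here is a small standalone demo of my own (worked numbers, not program output) using the same alpha formula as trainWeakClassifiers: an error rate just under 0.5 earns a weight near zero (0.45 gives about 0.10), while lower error rates earn sharply larger weights (0.30 gives about 0.42).

public class AlphaDemo {
    public static void main(String[] args) {
        // same formula as alpha in AdaBoost.trainWeakClassifiers
        for (double error : new double[]{ 0.45, 0.40, 0.30 }) {
            double alpha = Math.log((1 - error) / error) / 2;
            System.out.println("error=" + error + " -> alpha=" + alpha);
        }
    }
}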

Then look at the main code:

package Implementation;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;

/**
 * An AdaBoost implementation framework; you can train your own weak
 * classifiers according to your own needs.
 * @author zhangshiming
 * @email [email protected]
 */
public class Main {

    public static void main(String[] args) throws Exception {
        int epoch = 110000; // number of training iterations
        // Load the samples: the "TestData" marker separates training rows
        // from test rows.
        FileReader fr = new FileReader(new File("C:\\input.txt"));
        BufferedReader bfr = new BufferedReader(fr);
        String lineStr = null;
        boolean flag = true;
        List<double[]> trainData = new ArrayList<double[]>();
        List<double[]> trainRate = new ArrayList<double[]>();
        List<double[]> testData = new ArrayList<double[]>();
        List<double[]> testRate = new ArrayList<double[]>();
        while ((lineStr = bfr.readLine()) != null) {
            if (lineStr.startsWith("Test")) {
                flag = false;
                continue;
            }
            String[] s = lineStr.split("\t");
            double[] dArr = new double[2];
            dArr[0] = Double.valueOf(s[1]); // MACD difference
            dArr[1] = Double.valueOf(s[2]); // label: 1 up, -1 down
            double[] rArr = new double[1];
            rArr[0] = Double.valueOf(s[3]); // that day's percentage return
            if (flag) { // training data
                trainData.add(dArr);
                trainRate.add(rArr);
            } else {    // test data
                testData.add(dArr);
                testRate.add(rArr);
            }
        }
        bfr.close();
        final int trainRows = trainData.size();
        final int testRows = testData.size();
        double[][] X = new double[trainRows][trainData.get(0).length];
        double[][] testInput = new double[testRows][testData.get(0).length];
        for (int i = 0; i < trainRows; i++) {
            X[i][0] = trainData.get(i)[0];
            X[i][1] = trainData.get(i)[1];
        }
        for (int i = 0; i < testRows; i++) {
            testInput[i][0] = testData.get(i)[0];
            testInput[i][1] = testData.get(i)[1];
        }
        trainData.clear();
        testData.clear();
        if (testInput.length == 0) {
            throw new RuntimeException("No input data, please check!");
        }
        // split the test matrix into features and labels
        final int cols = testInput[0].length - 1;
        double[][] testX = new double[testInput.length][cols];
        double[] testY = new double[testInput.length];
        for (int i = 0; i < testInput.length; i++) {
            for (int j = 0; j < testInput[i].length; j++) {
                if (j < testInput[i].length - 1) {
                    testX[i][j] = testInput[i][j];
                } else {
                    testY[i] = testInput[i][j];
                }
            }
        }
        AdaBoost adaBoost = new AdaBoost(X);
        adaBoost.trainWeakClassifiers(epoch, trainRate);
        double testErrorTimes = 0, total = 0;   // forward strategy counters
        double testErrorTimes1 = 0, total1 = 0; // reverse strategy counters
        double ir = 0;  // forward return: buy when the model predicts a rise
        double ir1 = 0; // reverse return: buy when the model predicts a fall
        for (int i = 0; i < testX.length; i++) {
            double res = adaBoost.predict(testX[i]);
            // forward trades: model says rise (res >= 0), so buy
            if (testY[i] > 0 && res >= 0) {
                ir += testRate.get(i)[0];
                System.out.println("Forward: " + testRate.get(i)[0]);
                total++;
            }
            if (testY[i] < 0 && res >= 0) {
                ir += testRate.get(i)[0];
                System.out.println("Forward: " + testRate.get(i)[0]);
                testErrorTimes++;
                total++;
            }
            // reverse trades: model says fall (res < 0), but buy anyway
            if (testY[i] > 0 && res < 0) {
                ir1 += testRate.get(i)[0];
                System.out.println("Reverse: " + testRate.get(i)[0]);
                testErrorTimes1++;
                total1++;
            }
            if (testY[i] < 0 && res < 0) {
                ir1 += testRate.get(i)[0];
                System.out.println("Reverse: " + testRate.get(i)[0]);
                total1++;
            }
            System.out.println("On test data ir=" + ir + " ir1=" + ir1);
            System.out.println();
        }
        System.out.println();
        System.out.println("On test data ir=" + ir + " error=" + (testErrorTimes / total * 100));
        System.out.println("On test data ir1=" + ir1 + " error=" + (testErrorTimes1 / total1 * 100));
    }
}

The main program is also fairly easy to understand: it reads the data from a text file and then calls AdaBoost to start training.

Here is the format of the text file data:

2015/06/11    -16.51     -1    -3.088
2015/06/12    -22.534     1     6.735
2015/06/15      2.576    -1    -7.579
2015/06/16    -21.514    -1    -5.374
2015/06/17    -28.798     1     2.545
2015/06/18    -11.445    -1    -0.939
2015/06/19     -8.888    -1    -2.741
2015/06/23    -11.124     1     0.404
2015/06/24     -3.842     1     6.566
TestData
2015/06/25     17.84     -1    -8.508
2015/06/26     -8.278    -1   -11.105
2015/06/29    -27.741    -1   -11.139




Lines after the TestData marker are read as test samples; the lines before it are used for training. The columns are tab-separated: the first is the date, the second is the difference between yesterday's and the day before yesterday's MACD, the third is that day's label (1 for a rise, -1 for a fall), and the fourth is that day's percentage change.
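Concretely, the first sample line parses like this (a standalone demo of my own, matching the split in Main):

public class ParseDemo {
    public static void main(String[] args) {
        String line = "2015/06/11\t-16.51\t-1\t-3.088"; // first sample row
        String[] s = line.split("\t");
        // s[0] ("2015/06/11") is the date and is not used by the model
        double macdDiff = Double.valueOf(s[1]); // -16.51
        double label = Double.valueOf(s[2]);    // -1: the price fell that day
        double ret = Double.valueOf(s[3]);      // -3.088: that day's percentage change
        System.out.println(macdDiff + " " + label + " " + ret);
    }
}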



Well, I picked one stock to train on; the output looks roughly like this:

Below is the running return after each trade, contrasting the forward strategy with the reverse strategy:

Reverse: -0.17
On test data ir=-90.90899999999996 ir1=5.994999999999985

Reverse: 1.917
On test data ir=-90.90899999999996 ir1=7.911999999999985

Reverse: 4.077
On test data ir=-90.90899999999996 ir1=11.988999999999985

Reverse: -1.956
On test data ir=-90.90899999999996 ir1=10.032999999999985

Reverse: -2.08
On test data ir=-90.90899999999996 ir1=7.952999999999985

Reverse: -1.093
On test data ir=-90.90899999999996 ir1=6.859999999999985

Reverse: 4.113
On test data ir=-90.90899999999996 ir1=10.972999999999985

Reverse: 1.038
On test data ir=-90.90899999999996 ir1=12.010999999999985

Reverse: -2.789
On test data ir=-90.90899999999996 ir1=9.221999999999985

And the totals on the test data:
On test data ir=-90.90899999999996 error=54.0
On test data ir1=9.221999999999985 error=49.24242424242424


ir is the forward strategy's return and ir1 is the reverse strategy's. That is, when the model outputs res >= 0 the forward strategy buys; when res < 0 the model says the price will fall and we should sell, but the reverse strategy buys anyway. That is what "reverse" means.

As you can see, on this stock, from last year's bear market until now, trading in reverse would have kept your returns fairly stable. Trading forward, on the other hand, buying whenever the model gives a buy signal, would have cost you dearly: roughly a 90% loss.


About forward versus reverse: recall our threshold classifier, where threshold * MACD difference + bias >= 0 predicts a rise the next day and < 0 predicts a fall. If we invert the decision, we still get a model with the same error structure. In short, between forward and reverse, use whichever works during a given period.


To spell out forward and reverse one more time (a code sketch follows the definitions):

Forward: if the model outputs 1, it expects the price to rise tomorrow; we buy at today's close and sell at tomorrow's close, realizing the profit or loss.

Reverse: if the model outputs -1, it expects the price to fall tomorrow; we buy at today's close anyway and sell at tomorrow's close, realizing the profit or loss.

In any other case the strategy stays out of the market and no profit or loss occurs.
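A minimal sketch of the two strategies as code (a hypothetical helper of my own, not part of the original program, mirroring the accounting in Main):

package Implementation;

import java.util.List;

public class Backtest {
    // forward buys whenever the model predicts a rise (res >= 0);
    // reverse buys whenever it predicts a fall (res < 0).
    static double run(AdaBoost model, double[][] testX, List<double[]> testRate, boolean reverse) {
        double sum = 0;
        for (int i = 0; i < testX.length; i++) {
            double res = model.predict(testX[i]);
            boolean buy = reverse ? (res < 0) : (res >= 0);
            if (buy) {
                sum += testRate.get(i)[0]; // realize that day's percentage return
            }
        }
        return sum;
    }
}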
