WEKA can have trouble with a very large training set. In that case, there are a few options:
1. Increase the memory size. WEKA can use not only physical memory but also virtual memory: if the memory available to Java is set to 2 GB while the machine has only 1 GB of physical RAM, the operating system will, as needed, set aside a block of the hard disk as virtual memory. However, once swapping starts everything becomes very slow, so this approach is not recommended. (A command-line sketch for raising the Java heap follows below.)
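As a rough illustration, the Java heap limit is set with the JVM's -Xmx option when launching WEKA from the command line; the jar name and location here are assumptions that depend on your installation (Windows installs typically set the equivalent maxheap value in RunWeka.ini instead):

```
# Launch the WEKA GUI with a 1 GB heap; keep -Xmx within physical RAM to avoid swapping.
java -Xmx1024m -jar weka.jar
```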
2. Sampling. Randomly extract part of the data from the training set and train on that. For binary classification, once the number of samples reaches a few thousand, predictions are usually accurate. If several thousand samples still do not yield accurate predictions, then either the classification algorithm being used is unsuitable, or the input variables in the data simply cannot predict the target variable.
I tried the "kddcup.data_10_percent" dataset from KDD Cup 99, which contains nearly 500,000 records and is more than 70 MB once converted to an ARFF file. In the Explorer, loading the data takes only a few seconds, and extracting a 1% sample also takes only a few seconds (a code sketch of the same sampling follows below).
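For readers who prefer to script this, here is a minimal sketch using WEKA's unsupervised Resample filter; it assumes a recent WEKA 3.x API, a placeholder file name, and that the class attribute is the last column:

```java
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.instance.Resample;

public class SampleTrainingSet {
    public static void main(String[] args) throws Exception {
        // "kddcup.arff" is a placeholder for the converted KDD 99 file.
        Instances data = DataSource.read("kddcup.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Keep a 1% random sample, drawn without replacement.
        Resample sampler = new Resample();
        sampler.setSampleSizePercent(1.0);
        sampler.setNoReplacement(true);
        sampler.setInputFormat(data);

        Instances sample = Filter.useFilter(data, sampler);
        System.out.println("Original: " + data.numInstances()
                + " instances, sample: " + sample.numInstances());
    }
}
```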
3. Incremental learning. So-called incremental learning simply means reading one piece of training data at a time and updating the model with it, instead of reading all of the training data before building the model. Incremental learning is supported in the WEKA KnowledgeFlow. Several of WEKA's classifiers can work this way, including NaiveBayesUpdateable, IB1, IBk, and LWR (locally weighted regression). In addition, RacedIncrementalLogitBoost lets any regression-based learner incrementally learn classification tasks. (A code sketch follows below.)
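A minimal sketch of incremental learning outside of KnowledgeFlow, again assuming a recent WEKA 3.x API and a placeholder ARFF file name: NaiveBayesUpdateable is built on the dataset header only and then updated one instance at a time while the file is streamed from disk.

```java
import java.io.File;
import weka.classifiers.bayes.NaiveBayesUpdateable;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.converters.ArffLoader;

public class IncrementalTraining {
    public static void main(String[] args) throws Exception {
        // Stream the ARFF file instead of loading it all into memory at once.
        ArffLoader loader = new ArffLoader();
        loader.setFile(new File("kddcup.arff"));   // placeholder file name
        Instances structure = loader.getStructure();
        structure.setClassIndex(structure.numAttributes() - 1);

        // Build the classifier on the header only, then feed instances one by one.
        NaiveBayesUpdateable nb = new NaiveBayesUpdateable();
        nb.buildClassifier(structure);

        Instance current;
        while ((current = loader.getNextInstance(structure)) != null) {
            nb.updateClassifier(current);
        }
        System.out.println(nb);
    }
}
```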
Note that sometimes the data is not in ARFF format but in C4.5, CSV, or other formats. In that case, converting the data to ARFF in advance saves a lot of memory and makes it easier to spot errors in the dataset (see the conversion sketch below).
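A hedged sketch of such a conversion using WEKA's CSVLoader and ArffSaver (file names are placeholders; C4.5-format files can be handled the same way with C45Loader):

```java
import java.io.File;
import weka.core.Instances;
import weka.core.converters.ArffSaver;
import weka.core.converters.CSVLoader;

public class CsvToArff {
    public static void main(String[] args) throws Exception {
        // Load the CSV file (placeholder name) into memory.
        CSVLoader loader = new CSVLoader();
        loader.setSource(new File("data.csv"));
        Instances data = loader.getDataSet();

        // Write the same instances back out in ARFF format.
        ArffSaver saver = new ArffSaver();
        saver.setInstances(data);
        saver.setFile(new File("data.arff"));
        saver.writeBatch();
    }
}
```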
From: http://john2007.javaeye.com/blog/267181