Normally, to perform supervised learning, you need two types of data sets:
In one dataset (your 'gold standard'), you have the input data together with the correct/expected output. This dataset is usually duly prepared either by humans or by collecting the data in a semi-automated way. The important thing is that you have the expected output for every data row here, because you need it for supervised learning.
The other dataset is the data you are going to apply your model to. In many cases, this is the data where you are interested in the output of your model, and thus you don't have any 'expected' output here yet.
While performing machine learning, you do the following:

- Training phase: you present your data from your 'gold standard' and train your model by pairing the input with the expected output.
- Validation/test phase: you estimate how well your model has been trained (that is dependent upon the size of your data, the value you would like to predict, the input, etc.) and estimate model properties (mean error for numeric predictors, classification errors for classifiers, recall and precision for IR models, etc.).
- Application phase: now you apply your freshly-developed model to the real-world data and get the results. Since you normally don't have any reference value in this type of data (otherwise, why would you need your model?), you can only speculate about the quality of your model's output using the results of your validation phase.
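The three phases above can be sketched in code. This is a minimal illustration assuming scikit-learn; the synthetic dataset, the logistic-regression model, and all variable names are illustrative choices, not prescribed by the text:

```python
# Sketch of the three phases (training, validation/test, application),
# assuming scikit-learn; the dataset and model here are illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real data: 500 labelled rows.
X, y = make_classification(n_samples=500, random_state=0)

# Pretend 20% of the rows are "real-world" data whose labels we never see.
X_gold, X_new, y_gold, _ = train_test_split(X, y, test_size=0.2, random_state=0)

# Training phase: fit the model on input / expected-output pairs
# from the 'gold standard', holding some rows out for validation.
X_train, X_val, y_train, y_val = train_test_split(
    X_gold, y_gold, test_size=0.25, random_state=0
)
model = LogisticRegression().fit(X_train, y_train)

# Validation/test phase: estimate model properties on held-out labelled data,
# e.g. recall and precision for a classifier.
y_pred = model.predict(X_val)
print("precision:", precision_score(y_val, y_pred))
print("recall:", recall_score(y_val, y_pred))

# Application phase: predict on data with no reference values.
predictions = model.predict(X_new)
```

The key point the code makes concrete: labels exist only for the 'gold standard' rows, so quality can be measured only in the validation/test phase, never in the application phase.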
The validation phase is often split into two parts: in the first part, you just look at your models and select the best-performing approach using the validation data (=validation); then you estimate the accuracy of the selected approach (=test).
Hence the separation into 50/25/25.
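A 50/25/25 partition can be produced with two consecutive splits. This is a sketch assuming scikit-learn's `train_test_split`; any splitting utility would do:

```python
# Sketch of a 50/25/25 train/validation/test partition, assuming
# scikit-learn's train_test_split as the splitting utility.
import numpy as np
from sklearn.model_selection import train_test_split

data = np.arange(1000)  # stand-in for 1000 labelled rows

# First carve off 50% for training...
train, rest = train_test_split(data, test_size=0.5, random_state=0)

# ...then split the remaining half evenly: 25% validation, 25% test overall.
validation, test = train_test_split(rest, test_size=0.5, random_state=0)

print(len(train), len(validation), len(test))
```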
If you don't need to choose an appropriate model from several rivaling approaches, you can just re-partition your set so that you basically have only a training set and a test set, without performing the validation of your trained model. I personally partition them 70/30 then.
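The simpler 70/30 case needs only a single split. Again a sketch assuming scikit-learn's `train_test_split`:

```python
# Sketch of a plain 70/30 train/test partition (no validation set),
# assuming scikit-learn's train_test_split.
import numpy as np
from sklearn.model_selection import train_test_split

data = np.arange(1000)  # stand-in for 1000 labelled rows
train, test = train_test_split(data, test_size=0.3, random_state=0)

print(len(train), len(test))
```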