Problem:

With each iteration of the model training process, the model may ultimately learn the train and test data themselves. In such a case the model will fit the train and test data almost perfectly, yet when evaluated on real, unseen data it will perform poorly.

Hence it is very important that we hold out a separate test set, while using the train set and validation set to build and tune the model.
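
To make this concrete, here is a minimal sketch of such a split using scikit-learn's train_test_split; the toy dataset and the 60/20/20 ratios are illustrative assumptions, not part of the original example.

```python
# A minimal sketch of the hold-out scheme, assuming scikit-learn is available;
# the toy data and the 60/20/20 split ratios are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split

# Toy data standing in for a real dataset.
X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, size=1000)

# 1) Hold out a test set first; it is never touched while developing the model.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# 2) Split the remainder into train and validation sets for fitting and tuning.
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=42  # 0.25 * 0.8 = 0.2
)

# Fit and iterate using (X_train, y_train) and (X_val, y_val);
# evaluate only once, at the very end, on (X_test, y_test).
```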

Real Case Example: Kaggle

Kaggle often employs this technique. It holds out a test set and provides participants with the train and validation data. As is often seen, models that performed well on the validation set, when finally tested against the held-out test set at the end of the competition, turn out to perform poorly and drop down the leaderboard. This is most probably because participants are misled by an over-fitted accuracy, which can happen when their numerous iterations cause the model to effectively learn the data they repeatedly evaluate against.

Hence, to be extra sure and for a proper experimental design, it is important to hold out the test set while training the model on the train and validation sets.

As is often seen in Kaggle competitions, models that score well during development frequently perform poorly when tested against Kaggle's held-out data and fall down the rankings.
