Decision Tree Trading Anatomy: Video 4: Cross-validation and parameter tuning.
Video 1: Anatomy of a decision tree.
How decision tree works.
Code a binary classification binary tree.
Video 2: Trading strategy using regression tree.
Video 3: Ensemble methods.
Video 4: Cross-validation and parameter tuning.
Section 2: AI techniques.
Video 5: Bagging.
Video 6: Random subspaces.
Video 7: Boosting.
Video 8: Randomized search and grid search.
Cross-validation gives a measure of out-of-sample accuracy by averaging over several random partitions of the data into training and test samples. It is often used for parameter tuning: run cross-validation for several (or many) candidate values of a parameter and choose the value that gives the lowest average cross-validation error.
So the process itself doesn’t give you a model or parameter estimates, but you can use it to help choose between alternatives.
However, if you use cross-validation for parameter tuning, the held-out samples in fact become part of your model selection, so you need another independent sample to correctly measure the final model’s performance.
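The tuning-plus-independent-holdout workflow above can be sketched in pure Python. This is a minimal illustration, not the course’s code: the data is a synthetic noisy sine curve standing in for real market data, the “model” is a simple k-nearest-neighbour regressor, and the helpers `knn_predict` and `cv_error` are hypothetical names introduced here for the example.

```python
import math
import random

def knn_predict(train, x, k):
    # Predict y at x as the mean y of the k nearest training points.
    neighbors = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in neighbors) / k

def cv_error(data, k, n_folds=5):
    # Average squared test error over n_folds train/test partitions.
    folds = [data[i::n_folds] for i in range(n_folds)]
    errors = []
    for i in range(n_folds):
        test = folds[i]
        train = [p for j, f in enumerate(folds) if j != i for p in f]
        errors.append(sum((knn_predict(train, x, k) - y) ** 2
                          for x, y in test) / len(test))
    return sum(errors) / n_folds

random.seed(0)
# Synthetic noisy sine data (a stand-in for real returns data).
data = [(x, math.sin(x) + random.gauss(0, 0.2))
        for x in (random.uniform(0, 6) for _ in range(120))]
random.shuffle(data)

# Set aside an independent holdout FIRST; it is never touched during tuning.
holdout, rest = data[:30], data[30:]

# Tune k by cross-validation on the remaining data only.
best_k = min(range(1, 16), key=lambda k: cv_error(rest, k))

# Only now measure final performance, on the untouched holdout.
final_err = sum((knn_predict(rest, x, best_k) - y) ** 2
                for x, y in holdout) / len(holdout)
print(best_k, final_err)
```

Because `best_k` was chosen by looking at the cross-validation test folds, the error on those folds is optimistically biased; `final_err` on the holdout is the honest estimate.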
When used for measuring model performance, cross-validation can measure more than just the average accuracy:
A second thing you can measure with cross-validation is the model’s stability with respect to changes in the training data: cross-validation builds many “surrogate” models, each trained on a slightly different training set. If training is stable, all these surrogate models are essentially equivalent; if it is unstable, the surrogate models vary a lot.
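This stability check can also be sketched in a few lines. The example below is an illustration under stated assumptions, not the course’s code: the “model” is a one-split decision stump whose only fitted parameter is its split threshold (`fit_threshold` is a hypothetical helper), so stability reduces to how much that threshold varies across the surrogate models trained on the cross-validation folds.

```python
import random
import statistics

def fit_threshold(train):
    # Toy model: a one-split stump. Pick the split point t minimizing
    # the squared error of predicting each side's mean.
    best_err, best_t = None, None
    for t, _ in train:
        left = [y for x, y in train if x <= t]
        right = [y for x, y in train if x > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best_err is None or err < best_err:
            best_err, best_t = err, t
    return best_t

random.seed(1)
# Synthetic step data: y jumps from 0 to 1 at x = 0.5, plus noise.
data = [(x, (1.0 if x > 0.5 else 0.0) + random.gauss(0, 0.1))
        for x in (random.uniform(0, 1) for _ in range(60))]

# Train one surrogate model per fold-left-out training set.
n_folds = 5
folds = [data[i::n_folds] for i in range(n_folds)]
thresholds = []
for i in range(n_folds):
    train = [p for j, f in enumerate(folds) if j != i for p in f]
    thresholds.append(fit_threshold(train))

# A small spread across surrogates indicates stable training.
print(thresholds, statistics.stdev(thresholds))
```

Here all five surrogate thresholds should land near the true jump at 0.5; a large standard deviation would flag an unstable fit.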