How to Evaluate Machine Learning Models, Part 4: Hyperparameter Tuning


In the realm of machine learning, hyperparameter tuning is a "meta" learning task. It happens to be one of my favorite subjects because it can appear like black magic, yet its secrets are not impenetrable. In this post, I'll walk through what hyperparameter tuning is, why it's hard, and what kinds of smart tuning methods are being developed to do something about it.

First, let's clarify some basic concepts. Machine learning models are basically mathematical functions that represent the relationship between different aspects of data. For instance, a linear regression model uses a line to represent the relationship between "features" and "target." The formula looks like this:

$w^T x = y$,

where x is a vector that represents the features of the data and y is a scalar variable that represents the target (some numeric quantity we wish to learn to predict).

This model assumes that the relationship between x and y is linear. The variable w is a weight vector that represents the normal vector of the line; it specifies the slope of the line. This is what's known as a model parameter, and it is learned during the training phase. "Training a model" involves using an optimization procedure to determine the best model parameters that "fit" the data. (Krishna's blog post on parallel SGD gives a great introduction to optimization methods for model training.)
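
To make the distinction concrete, here is a minimal sketch (assuming NumPy; the toy data and weight values are invented for illustration) of learning the model parameter w from data via least squares. Note that no hyperparameters are involved yet.

import numpy as np

# Toy data: 100 examples with 3 features, generated from a known weight vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

# "Training" vanilla linear regression: solve for the model parameter w
# that best fits w^T x = y in the least-squares sense.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)  # close to true_w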

What is a hyperparameter? Why is it important?

There is another set of parameters known as hyperparameters, sometimes also called "nuisance parameters." These are values that must be specified outside of the training procedure. Vanilla linear regression doesn't have any hyperparameters. But variants of linear regression do. Ridge regression and lasso both add a regularization term to linear regression; the weight of the regularization term is called the regularization parameter. Decision trees have hyperparameters such as the desired depth and number of leaves in the tree. Support vector machines require setting a misclassification penalty term. Kernelized SVMs require setting kernel parameters like the width of the RBF kernel. The list goes on.

This type of hyperparameter controls the capacity of the model, i.e., how flexible the model is, how many degrees of freedom it has in fitting the data. Proper control of model capacity can prevent overfitting, which happens when the model is too flexible and the training process adapts too much to the training data, thereby losing predictive accuracy on new test data. So a proper setting of the hyperparameters is important.

Another type of hyperparameter comes from the training process itself. For instance, stochastic gradient descent optimization requires a learning rate or a learning schedule. Some optimization methods require a convergence threshold. Random forests and boosted decision trees require knowing the total number of trees. (Though this could also be classified as a type of regularization hyperparameter.) These also need to be set to reasonable values in order for the training process to find a good model.
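
As a rough illustration (a sketch assuming scikit-learn; the specific values are arbitrary), both kinds of hyperparameters are passed to an estimator's constructor, outside of the call that actually learns the model parameters:

from sklearn.linear_model import Ridge, SGDRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVC

# Model-capacity hyperparameters are set outside of training:
ridge = Ridge(alpha=1.0)                     # regularization parameter
tree = DecisionTreeRegressor(max_depth=5)    # desired tree depth
svm = SVC(C=10.0, kernel="rbf", gamma=0.1)   # misclassification penalty, RBF width

# Training-process hyperparameters, e.g. the learning rate for SGD:
sgd = SGDRegressor(learning_rate="constant", eta0=0.01)

# The model parameters themselves (e.g. ridge.coef_) are only learned
# when .fit(X, y) is called on training data.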

Hyperparameter Tuning

Hyperparameter settings can have a big impact on the prediction accuracy of the trained model. Optimal hyperparameter settings often differ for different datasets. Therefore they should be tuned for each dataset. Since the training process doesn't set the hyperparameters, there needs to be a meta process that tunes the hyperparameters. This is what we mean by hyperparameter tuning.

Hyperparameter tuning is a meta-optimization task. As the figure below shows, each trial of a particular hyperparameter setting involves training a model--an inner optimization process. The outcome of hyperparameter tuning is the best hyperparameter setting, and the outcome of model training is the best model parameter setting.

Illustration of the hyperparameter tuning "machine."

For each proposed hyperparameter setting, the inner model training process comes up with a model for the dataset and outputs evaluation results on hold-out or cross-validation datasets. After evaluating a number of hyperparameter settings, the hyperparameter tuner outputs the setting that yields the best-performing model. The last step is to train a new model on the entire dataset (training and validation) under the best hyperparameter setting. Here is the pseudocode. (The training and validation step can be conceptually replaced with a cross-validation step.)

hyperparameter_tuning(training_data, validation_data, hp_list):
    hp_perf = []
    # Evaluate each candidate hyperparameter setting on the validation data.
    foreach hp_setting in hp_list:
        m = train_model(training_data, hp_setting)
        validation_results = eval_model(m, validation_data)
        hp_perf.append(validation_results)
    # Keep the best setting and retrain on training + validation data.
    best_hp_setting = hp_list[max_index(hp_perf)]
    best_m = train_model(training_data.append(validation_data), best_hp_setting)
    return (best_hp_setting, best_m)
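
For the curious, here is one way the pseudocode might look in practice (a sketch assuming scikit-learn; ridge regression's alpha and mean squared error are illustrative choices, not the only option):

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

def hyperparameter_tuning(X_train, y_train, X_val, y_val, alphas):
    # Evaluate each candidate setting on the held-out validation set.
    val_errors = []
    for alpha in alphas:
        model = Ridge(alpha=alpha).fit(X_train, y_train)
        val_errors.append(mean_squared_error(y_val, model.predict(X_val)))
    # Keep the best setting, then retrain on training + validation data.
    best_alpha = alphas[int(np.argmin(val_errors))]
    X_all = np.vstack([X_train, X_val])
    y_all = np.concatenate([y_train, y_val])
    best_model = Ridge(alpha=best_alpha).fit(X_all, y_all)
    return best_alpha, best_model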

Hyperparameter Tuning Algorithms

Conceptually, hyperparameter tuning is an optimization task, just like model training.

However, the two tasks are quite different in practice. When training a model, the quality of a proposed set of model parameters can be written down as a mathematical formula (usually called the loss function). When tuning hyperparameters, however, the quality of those hyperparameters cannot be written down in a closed-form formula, because it depends on the outcome of a black box (the model training process).

This is why hyperparameter tuning is much harder. Up until a few years ago, the only available methods were grid search and random search. In the last few years, there has been increased interest in auto-tuning. Several groups have worked on the problem, published papers, and released new tools.

Grid Search

Grid search, true to its name, picks out a grid of hyperparameter values, evaluates every one of them, and returns the winner. For example, if the hyperparameter is the number of leaves in a decision tree, then the grid could be 10, 20, 30, ..., 100. For regularization parameters, it's common to use an exponential scale: 1e-5, 1e-4, 1e-3, ..., 1. Some guesswork is necessary to specify the minimum and maximum values. So sometimes people run a small grid, see if the optimum lies at either endpoint, and then expand the grid in that direction. This is called manual grid search.

Grid search is dead simple to set up and trivial to parallelize. It is, however, the most expensive method in terms of total computation time. If run in parallel, though, it is fast in terms of wall clock time.
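
A minimal grid search sketch (assuming scikit-learn; the estimator and grid values are only examples) looks like this, with n_jobs=-1 exploiting the easy parallelism:

from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Exhaustively evaluate every combination on the grid with cross-validation.
param_grid = {
    "max_leaf_nodes": [10, 20, 30, 50, 100],
    "min_samples_leaf": [1, 5, 10],
}
search = GridSearchCV(DecisionTreeClassifier(), param_grid, cv=5, n_jobs=-1)
# search.fit(X, y)              # X, y: your training data
# print(search.best_params_)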

Random Search

I love movies where the underdog wins, and I love machine learning papers where simple solutions are shown to be surprisingly effective. This is the storyline of "Random Search for Hyper-Parameter Optimization" by Bergstra and Bengio. Random search is a slight variation on grid search. Instead of searching over the entire grid, random search only evaluates a random sample of points on the grid. This makes random search a lot cheaper than grid search. Random search wasn't taken very seriously before. This is because it doesn't search through all of the grid points, so it cannot possibly beat the optimum found by grid search. But then along came Bergstra and Bengio. They showed that, in surprisingly many instances, random search performs about as well as grid search. All in all, trying a few dozen random points sampled from the grid seems to be good enough.

In hindsight, there is a simple probabilistic explanation for this result: for any distribution over a sample space with a finite maximum, the maximum of a modest number of random observations lies within the top 5% of the true maximum, with 95% probability. That may sound complicated, but it's not. Imagine the 5% interval around the true maximum. Now imagine that we sample points from this space and see if any of them land within that interval. Each random draw has a 5% chance of landing in that interval; if we draw $n$ points independently, then the probability that all of them miss the desired interval is $(1 - 0.05)^n$. So the probability that at least one of them succeeds in hitting the interval is 1 minus that quantity. We want at least a 0.95 probability of success. To figure out the number of draws we need, just solve for $n$ in the equation:

$1 - (1 - 0.05)^n > 0.95.$

We get $n > \log(0.05)/\log(0.95) \approx 58.4$, i.e., $n \geq 59$, or roughly 60 random draws. Ta-da!
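
If you'd like to double-check that arithmetic, a two-line computation in Python confirms it:

import math

# Smallest n satisfying 1 - (1 - 0.05)**n > 0.95
n = math.ceil(math.log(0.05) / math.log(0.95))
print(n, 1 - 0.95 ** n)   # 59, ~0.952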

The moral of the story is: if the close-to-optimal region of hyperparameters occupies at least 5% of the grid surface, then random search with about 60 trials will find that region with high probability.

With its utter simplicity and surprisingly reasonable performance, random search is my go-to method for hyperparameter tuning. It's trivially parallelizable, just like grid search, but it takes far fewer trials and performs almost as well most of the time.
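
In scikit-learn terms, the random search version looks nearly identical to grid search (a sketch; the distributions shown are illustrative), and the 60-trial rule of thumb maps directly onto n_iter:

from scipy.stats import randint, loguniform
from sklearn.model_selection import RandomizedSearchCV
from sklearn.tree import DecisionTreeClassifier

# Sample 60 candidate settings at random instead of sweeping the whole grid.
param_distributions = {
    "max_leaf_nodes": randint(10, 101),
    "min_samples_leaf": randint(1, 11),
    "ccp_alpha": loguniform(1e-5, 1e-1),
}
search = RandomizedSearchCV(
    DecisionTreeClassifier(),
    param_distributions,
    n_iter=60,
    cv=5,
    n_jobs=-1,
    random_state=0,
)
# search.fit(X, y)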

Smart Hyperparameter Tuning

Smarter tuning methods are available. Unlike the "dumb" alternatives of grid search and random search, smart hyperparameter tuning is much less parallelizable. Instead of generating all the candidate points up front and evaluating the batch in parallel, smart tuning techniques pick a few hyperparameter settings, evaluate their quality, then decide where to sample next. This is an inherently iterative and sequential process, and it is not very parallelizable. The goal is to make fewer evaluations overall and save on the total computation time. If wall clock time is your goal and you can afford multiple machines, then I suggest sticking to random search.

Buyer beware: smart search algorithms require computation time to figure out where to place the next set of samples. Some algorithms require much more time than others. Hence it only makes sense to use them if the evaluation procedure--the inner optimization box--takes much longer than the process of deciding where to sample next. Smart search algorithms also contain parameters of their own that need to be tuned. (Hyper-hyperparameters?) Sometimes tuning the hyper-hyperparameters is crucial to make them faster than random search.

Recall that hyperparameter tuning is difficult because we cannot write down the actual mathematical formula for the function we're optimizing. (The technical term for the function being optimized is the response surface.) Consequently, we don't have the derivative of this function, and therefore most of the mathematical optimization tools that we know and love, such as Newton's method or SGD, cannot be applied.

I'll highlight three smart tuning methods proposed in recent years: derivative-free optimization, Bayesian optimization, and random forest smart tuning. Derivative-free methods employ heuristics to determine where to sample next. Bayesian optimization and random forest smart tuning both model the response surface with another function, then sample more points based on what that model says.

Jasper Snoek, Hugo Larochelle, and Ryan P. Adams used Gaussian processes to model the response function and something called expected improvement to determine the next proposals. Gaussian processes are trippy; they specify distributions over *functions*. When one samples from a Gaussian process, one generates an entire function. Training a Gaussian process adapts this distribution to the data at hand, so that it generates functions that are more likely to model all of the data at once. Given the current estimate of the Gaussian process, one can compute the expected amount of improvement of any candidate point over the current optimum--the expected improvement. They showed that this procedure of modeling the hyperparameter response surface and generating the next set of proposed hyperparameter settings can beat the evaluation cost of manual tuning.
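
To give a flavor of the expected-improvement idea (a generic sketch, not their implementation; it assumes a fitted surrogate that returns a predictive mean mu and standard deviation sigma at each candidate point, and that higher scores are better):

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_so_far):
    # mu, sigma: the surrogate model's predictive mean / std at candidate points.
    # Returns the expected amount by which each candidate improves on the
    # current best observed value.
    sigma = np.maximum(sigma, 1e-12)          # avoid division by zero
    z = (mu - best_so_far) / sigma
    return (mu - best_so_far) * norm.cdf(z) + sigma * norm.pdf(z)

# The tuner evaluates the candidate with the highest expected improvement next.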

Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown suggested training a random forest of regression trees to approximate the response surface. New points are sampled based on where the random forest considers the optimal regions to be. They call this SMAC (Sequential Model-based Algorithm Configuration). Word on the street is that this method works better than Gaussian processes for categorical hyperparameters.
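
A very rough sketch of the surrogate-model idea behind SMAC (not the actual SMAC algorithm, which adds considerably more machinery; hyperparameter settings are assumed here to be encoded as numeric vectors):

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def propose_next(evaluated_settings, observed_scores, candidate_settings):
    # Fit a forest to the (hyperparameter setting -> validation score) history...
    surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
    surrogate.fit(np.asarray(evaluated_settings), np.asarray(observed_scores))
    # ...and propose the candidate the forest predicts to score best.
    predicted = surrogate.predict(np.asarray(candidate_settings))
    return candidate_settings[int(np.argmax(predicted))]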

Derivative-free optimization, as the name suggests, is a branch of mathematical optimization for situations where there is no derivative information. Notable derivative-free methods include genetic algorithms and Nelder-Mead. Essentially, these algorithms boil down to: try a bunch of random points, approximate the gradient, find the most likely search direction, and go there. A few years ago, Misha Bilenko and I tried Nelder-Mead for hyperparameter tuning. We found the algorithm delightfully easy to implement and no less efficient than Bayesian optimization.
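
With SciPy, using Nelder-Mead for tuning takes only a few lines (a sketch; cross-validated ridge regression over log(alpha) is an illustrative setup, not the one from that experiment):

import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def negative_cv_score(log_alpha, X, y):
    # The "response surface": validation performance as a function of the
    # hyperparameter. We search over log(alpha) and minimize the negative score.
    model = Ridge(alpha=float(np.exp(log_alpha[0])))
    return -cross_val_score(model, X, y, cv=5).mean()

# result = minimize(negative_cv_score, x0=[0.0], args=(X, y), method="Nelder-Mead")
# best_alpha = np.exp(result.x[0])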

Other posts in this series

Part 1: Orientation

Part 2a: Classification Metrics

Part 2b: Ranking and Regression Metrics

Part 3: Validation and Offline Testing

Software Packages

Grid search and random search: GraphLab Create, scikit-learn.

Bayesian optimization using Gaussian processes: Spearmint (from Snoek et al.)

Random forest tuning: SMAC (from Hutter et al.)

Hypergradient: hypergrad (from Maclaurin et al.)

Further Reading

Random Search for Hyper-Parameter Optimization, by James Bergstra and Yoshua Bengio. Journal of Machine Learning Research, 2012.

Algorithms for Hyper-Parameter Optimization, by James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Neural Information Processing Systems, 2011.

Practical Bayesian Optimization of Machine Learning Algorithms, by Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Neural Information Processing Systems, 2012.

Sequential Model-Based Optimization for General Algorithm Configuration, by Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown. Learning and Intelligent Optimization, 2011.

Lazy Paired Hyper-Parameter Tuning, by Alice Zheng and Misha Bilenko. International Joint Conference on Artificial Intelligence, 2013.

Introduction to Derivative-Free Optimization, by Andrew R. Conn, Katya Scheinberg, and Luis N. Vicente. MPS-SIAM Series on Optimization, 2009.

Gradient-Based Hyperparameter Optimization through Reversible Learning, by Dougal Maclaurin, David Duvenaud, and Ryan P. Adams. arXiv, 2015.
