Introduction to Gradient descent algorithm (along with variants) in machine learning

Source: Internet
Author: User
Tags: theano

Introduction

Optimization is always the ultimate goal, whether you are dealing with a real-life problem or building a software product. I, as a computer science student, always fiddled with optimizing my code to the extent that I could brag about its fast execution.

Optimization basically means getting the optimal output for your problem. If you read the recent article on optimization, you would be acquainted with how optimization plays an important role in our real life.

Optimization in machine learning has a slight difference. Generally, while optimizing, we know exactly how our data looks and what areas we want to improve. But in machine learning we have no clue how our "new data" looks, let alone try to optimize on it.

So in machine learning, we perform optimization on the training data and check its performance on new validation data.

Broad Applications of optimization

There are various kinds of optimization techniques which are applied across various domains, such as

    • Mechanics – for e.g. in deciding the surface of an aerospace design
    • Economics – for e.g. cost minimization
    • Physics – for e.g. optimizing time in quantum computing

Optimization has many more advanced applications, like deciding the optimal route for transportation, shelf-space optimization, etc.

Many popular machine learning algorithms depend upon optimization techniques, such as linear regression, k-nearest neighbors, neural networks, etc. The applications of optimization are limitless and it is a widely researched topic in both academia and industry.

In this article, we'll look at a particular optimization technique called Gradient descent. It's the most commonly used optimization technique when dealing with machine learning.

Table of Content
    1. What is Gradient descent?
    2. Challenges in executing Gradient descent
    3. Variants of Gradient descent algorithm
    4. Implementation of Gradient descent
    5. Practical tips on applying gradient descent
    6. Additional Resources

1. What is Gradient descent?

To explain Gradient descent I'll use the classic mountaineering example.

Suppose you are at the top of a mountain, and you have to reach a lake which is at the lowest point of the mountain (a.k.a. the valley). The twist is that you are blindfolded and have zero visibility to see where you are headed. So, what approach would you take to reach the lake?


The best way is to check the ground near you and observe where the land tends to descend. This gives an idea of the direction in which you should take your first step. If you follow the descending path, it is very likely you would reach the lake.

To represent this graphically, notice the below graph.

[Figure: the cost space – cost J(θ) plotted against parameters θ1 and θ2, with red hills (high cost) and blue valleys (low cost)]

Let us now map this scenario in mathematical terms.

Suppose we want to find out the best parameters (θ1) and (θ2) for our learning algorithm. Similar to the analogy above, we find similar mountains and valleys when we plot our "cost space". The cost space is nothing but how our algorithm would perform when we choose a particular value for a parameter.

On the y-axis, we have the cost J(θ) against our parameters θ1 and θ2 on the x-axis and z-axis respectively. Here, hills are represented by the red region, which has high cost, and valleys are represented by the blue region, which has low cost.
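
For readers who want to reproduce such a picture, here is a minimal sketch, assuming a toy convex cost J(θ1, θ2) = θ1² + θ2² (not the article's actual model), that plots the cost surface with matplotlib; the cost sits on the vertical axis here.

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401, registers the 3d projection on older matplotlib

# toy convex cost: a single valley centred at (0, 0)
theta1, theta2 = np.meshgrid(np.linspace(-3, 3, 100), np.linspace(-3, 3, 100))
cost = theta1 ** 2 + theta2 ** 2

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(theta1, theta2, cost, cmap='coolwarm')  # red regions = high cost, blue = low cost
ax.set_xlabel('theta1')
ax.set_ylabel('theta2')
ax.set_zlabel('J(theta)')
plt.show()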

Now, there are many types of gradient descent algorithms. They can be classified mainly by two methods:

    • On the basis of data ingestion
      1. Full Batch Gradient descent algorithm
      2. Stochastic Gradient descent algorithm

In full batch gradient descent algorithms, you use the whole data at once to compute the gradient, whereas in stochastic gradient descent you take a sample while computing the gradient.
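
To make the distinction concrete, here is a minimal sketch, assuming a toy linear regression problem with a squared-error cost (not from the original article), that contrasts a full batch gradient with a stochastic (single-sample) gradient.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                          # 100 samples, 3 features
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

def full_batch_gradient(theta):
    # gradient of the mean squared error computed over the whole dataset at once
    return 2 * X.T @ (X @ theta - y) / len(y)

def stochastic_gradient(theta):
    # gradient estimated from one randomly drawn sample
    i = rng.integers(len(y))
    return 2 * X[i] * (X[i] @ theta - y[i])

Both point, on average, in the same direction; the stochastic version is noisier but far cheaper per step.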

    • On the basis of differentiation techniques
      1. First Order differentiation
      2. Second Order Differentiation

Gradient descent requires calculation of the gradient by differentiation of the cost function. We can either use first order differentiation or second order differentiation.
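
As a rough illustration of the difference, here is a minimal sketch on an assumed toy 1-D cost (x - 3)²: a first order step uses only the gradient, while a second order (Newton) step also uses the curvature.

def gradient(x):
    return 2.0 * (x - 3.0)        # first derivative of (x - 3)^2

def curvature(x):
    return 2.0                    # second derivative of (x - 3)^2

x = 0.0
first_order_step = x - 0.1 * gradient(x)            # plain gradient descent step
second_order_step = x - gradient(x) / curvature(x)  # Newton step, lands exactly on the minimum at 3.0
print(first_order_step, second_order_step)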

2. Challenges in executing Gradient descent

Gradient descent is a sound technique which works in most cases. But there are many cases where gradient descent does not work properly or fails to work altogether. There are three main reasons why this would happen:

    1. Data challenges
    2. Gradient challenges
    3. Implementation challenges

2.1 Data Challenges
    • If the data is arranged in a way that it poses a non-convex optimization problem, it is very difficult to perform optimization using gradient descent. Gradient descent only works for problems which have a well-defined convex optimization problem.
    • Even when optimizing a convex optimization problem, there may be numerous minimal points. The lowest point is called the global minimum, whereas the rest of the points are called local minima. Our aim is to go to the global minimum while avoiding the local minima.
    • There is also the saddle point problem. This is a point in the data where the gradient is zero but it is not an optimal point. We don't have a specific way to avoid this point, and it is still an active area of research.

2.2 Gradient Challenges
    • If the execution is not done properly while using gradient descent, it may lead to problems like the vanishing gradient or exploding gradient problems. These problems occur when the gradient is too small or too large, and because of this the algorithm does not converge.

2.3 Implementation Challenges
    • Most neural network practitioners don't generally pay attention to implementation, but it's very important to look at the resource utilization of networks. For e.g. when implementing gradient descent, it's very important to note how many resources you would require. If the memory is too small for your application, then the network would fail.
    • Also, it's important to keep track of things like floating point considerations and hardware/software prerequisites.

3. Variants of Gradient descent algorithms

Let us look at the most commonly used gradient descent algorithms and their implementations.

3.1 Vanilla Gradient descent

This is the simplest form of gradient descent technique. Here, vanilla means pure/without any adulteration. Its main feature is that we take small steps in the direction of the minima by taking the gradient of the cost function.

Let's look at its pseudocode.

update = learning_rate * gradient_of_parameters
parameters = parameters - update

Here, we see that we make an update to the parameters by taking the gradient of the parameters and multiplying it by a learning rate, which is essentially a constant number suggesting how fast we want to go to the minimum. The learning rate is a hyper-parameter and should be treated with care when choosing its value.
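
Here is a minimal runnable version of this pseudocode, assuming a toy 1-D cost (parameters - 3)² and a learning rate of 0.1 purely for illustration.

learning_rate = 0.1
parameters = 0.0                                  # start away from the minimum at 3.0

for _ in range(100):
    gradient_of_parameters = 2.0 * (parameters - 3.0)
    update = learning_rate * gradient_of_parameters
    parameters = parameters - update

print(parameters)                                 # very close to 3.0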


3.2 Gradient descent with Momentum

Here, we tweak the above algorithm in such a way that we pay heed to the prior step before taking the next step.

Here's the pseudocode.

update = learning_rate * gradient
velocity = previous_update * momentum
parameter = parameter + velocity - update

Here, the update is the same as that of vanilla gradient descent. But we introduce a new term called velocity, which considers the previous update and a constant which is called momentum.
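
A minimal runnable sketch of this update on the same assumed toy cost (parameter - 3)² is below; momentum = 0.9 is a common default, and previous_update is taken here to store the full step applied in the last iteration (the pseudocode above leaves this detail open).

learning_rate, momentum = 0.1, 0.9
parameter, previous_update = 0.0, 0.0

for _ in range(200):
    gradient = 2.0 * (parameter - 3.0)
    update = learning_rate * gradient
    velocity = previous_update * momentum
    parameter = parameter + velocity - update
    previous_update = velocity - update           # the step just applied, remembered for the next iteration

print(parameter)                                  # close to the minimum at 3.0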


3.3 Adagrad

Adagrad uses an adaptive technique for learning rate updates. In this algorithm, we try to change the learning rate on the basis of how the gradient has been changing over all the previous iterations.

Here's the pseudocode.

grad_component = previous_grad_component + (gradient * gradient)
rate_change = square_root(grad_component) + epsilon
adapted_learning_rate = learning_rate / rate_change
update = adapted_learning_rate * gradient
parameter = parameter - update

In the above code, epsilon is a constant which is used to keep the rate of change of the learning rate in check.
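
A minimal runnable sketch of this update on the same assumed toy cost (parameter - 3)² is below; the base learning rate of 1.0 and epsilon = 1e-8 are illustrative choices. Note that the accumulated squared gradients divide the learning rate, so the effective step shrinks over time.

learning_rate, epsilon = 1.0, 1e-8
parameter, grad_component = 0.0, 0.0

for _ in range(1000):
    gradient = 2.0 * (parameter - 3.0)
    grad_component = grad_component + gradient * gradient         # accumulate squared gradients
    adapted_learning_rate = learning_rate / (grad_component ** 0.5 + epsilon)
    update = adapted_learning_rate * gradient
    parameter = parameter - update

print(parameter)                                                  # approaches the minimum at 3.0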

3.4 ADAM

ADAM is one more adaptive technique which builds on Adagrad and further reduces its downside. In other words, you can consider this as momentum + Adagrad.

Here's the pseudocode.

adapted_gradient = previous_gradient + ((gradient - previous_gradient) * (1 - beta1))
gradient_component = (gradient_change - previous_learning_rate)
adapted_learning_rate = previous_learning_rate + (gradient_component * (1 - beta2))
update = adapted_learning_rate * adapted_gradient
parameter = parameter - update

Here, beta1 and beta2 are constants to keep changes in the gradient and learning rate in check.
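
The pseudocode above is a loose description. For reference, here is a minimal sketch of the standard ADAM update (Kingma and Ba, 2015) on the same assumed toy cost (parameter - 3)², using the commonly quoted defaults beta1 = 0.9, beta2 = 0.999 and epsilon = 1e-8.

learning_rate, beta1, beta2, epsilon = 0.1, 0.9, 0.999, 1e-8
parameter, m, v = 0.0, 0.0, 0.0

for t in range(1, 1001):
    gradient = 2.0 * (parameter - 3.0)
    m = beta1 * m + (1 - beta1) * gradient          # momentum-like running average of gradients
    v = beta2 * v + (1 - beta2) * gradient ** 2     # Adagrad-like running average of squared gradients
    m_hat = m / (1 - beta1 ** t)                    # bias correction for the early iterations
    v_hat = v / (1 - beta2 ** t)
    parameter = parameter - learning_rate * m_hat / (v_hat ** 0.5 + epsilon)

print(parameter)                                    # approaches the minimum at 3.0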

There are also second order differentiation methods like L-BFGS. You can see an implementation of this algorithm in the SciPy library.
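
As a quick pointer, here is a minimal sketch of using SciPy's L-BFGS-B solver (a quasi-Newton method that approximates second order information) on an assumed toy quadratic cost; the cost and its gradient here are illustrative, not the article's model.

import numpy as np
from scipy.optimize import minimize

target = np.array([1.0, -2.0])

def cost(theta):
    return np.sum((theta - target) ** 2)

def gradient(theta):
    return 2.0 * (theta - target)

result = minimize(cost, x0=np.zeros(2), jac=gradient, method='L-BFGS-B')
print(result.x)        # very close to [1.0, -2.0]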

4. Implementation of Gradient descent

We'll now look at a basic implementation of gradient descent using Python.

Here we'll use gradient descent optimization to find the best parameters for our deep learning model on an image recognition problem. Our problem is image recognition: to identify digits from a given image. We have a subset of images for training and the rest for testing our model. In this article we'll take a look at how we define gradient descent and see how our algorithm performs. Refer to this article for an end-to-end implementation using Python.

Here's the main code for defining vanilla gradient descent,

params = [weights_hidden, weights_output, bias_hidden, bias_output]

def sgd(cost, params, lr=0.05):
    grads = T.grad(cost=cost, wrt=params)
    updates = []
    for p, g in zip(params, grads):
        updates.append([p, p - g * lr])
    return updates

updates = sgd(cost, params)

Now let us break it down to understand it better.

We define a function sgd with arguments cost, params and lr. These represent J(θ) as seen previously, θ (i.e. the parameters of our deep learning algorithm) and our learning rate. We set the default learning rate as 0.05, but this can be changed easily as per our preference.

def sgd(cost, params, lr=0.05):

We then define the gradients of our parameters with respect to the cost function. Here we use the Theano library to find the gradients, and we import theano.tensor as T.

grads = T.grad(cost=cost, wrt=params)

And finally, we iterate through all the parameters to find the updates for all possible parameters. You can see that we use the vanilla gradient descent update here.

for p, g in zip(params, grads):
    updates.append([p, p - g * lr])

We can then use this function to find the optimal parameters for our neural network. On using this function, we find that our neural network does a good enough job in identifying the digits in our image, as seen below.

Prediction is:  8

In this implementation, we see that using gradient descent we can get the optimal parameters for our deep learning algorithm.
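
To show how the updates list returned by sgd is typically consumed, here is a hedged sketch of wiring it into a Theano training function; the symbolic inputs, the toy softmax model and the cost expression are illustrative assumptions, not the exact code from the referenced end-to-end article.

import numpy as np
import theano
import theano.tensor as T

x = T.matrix('x')                                     # batch of flattened images
y = T.ivector('y')                                    # integer digit labels

w = theano.shared(np.zeros((784, 10), dtype=theano.config.floatX))
b = theano.shared(np.zeros(10, dtype=theano.config.floatX))

p_y = T.nnet.softmax(T.dot(x, w) + b)
cost = -T.mean(T.log(p_y)[T.arange(y.shape[0]), y])   # negative log-likelihood

updates = sgd(cost, [w, b])                           # the function defined above
train = theano.function(inputs=[x, y], outputs=cost, updates=updates)
# each call to train(batch_x, batch_y) performs one vanilla gradient descent step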

5. Practical tips on applying gradient descent

Each of the above mentioned gradient descent algorithms has its strengths and weaknesses. I'll just mention some quick tips which might help you choose the right algorithm.

    • For rapid prototyping, use adaptive techniques like Adam/Adagrad. These help in getting quicker results with much less effort, as here you don't require much hyper-parameter tuning.
    • To get the best results, you should use vanilla gradient descent or momentum. Gradient descent is slow to get the desired results, but these results are mostly better than those of adaptive techniques.
    • If your data is small and can fit in a single iteration, you can use 2nd order techniques like L-BFGS. This is because 2nd order techniques are extremely fast and accurate, but are only feasible if the data is small enough.
    • There is also an emerging method (which I haven't tried but looks promising) to use learned features to predict learning rates of gradient descent. Go through this paper for more details.

Now, there are many reasons why a neural network fails to learn. But it helps immensely if you can monitor where your algorithm is going wrong.

When applying gradient descent, you can look at these points which might be helpful in circumventing the problem:

    • Error rates – you should check the training and testing error after specific iterations and make sure both of them decrease. If that's not the case, there might be a problem! A small monitoring sketch is given after this list.
    • Gradient flow in hidden layers – check that the network doesn't show a vanishing gradient or exploding gradient problem.
    • Learning rate – which you should check when using adaptive techniques.
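
As mentioned in the first point, a minimal monitoring sketch is shown below; train_step, train_error and test_error are hypothetical helpers standing in for your own model code.

history = []
for iteration in range(1, 1001):
    train_step()                                                  # hypothetical: one gradient descent update
    if iteration % 100 == 0:
        history.append((iteration, train_error(), test_error()))  # hypothetical error helpers
        if len(history) > 1 and history[-1][1] >= history[-2][1]:
            print("training error stopped decreasing at iteration", iteration)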

6. Additional Resources
    • Refer to this paper for an overview of gradient descent optimization algorithms.
    • CS231n course material on gradient descent.
    • Chapter 4 (Numerical Optimization) and Chapter 8 (Optimization for Deep Learning Models) of the Deep Learning book.

End Notes

I hope you enjoyed reading this article. After going through it, you will be adept with the basics of gradient descent and its variants. I have also given some practical tips for implementing them. Hope you found them helpful!

If you have any questions or doubts, feel free to post them in the comments below.
