Introduction:
Gradient Descent is one of the most widely used optimization algorithms in Machine Learning. In this article, you will learn how to implement the Gradient Descent algorithm in Python. Gradient Descent minimizes a cost function iteratively: we start from initial weights (theta) and repeatedly adjust them in the direction that reduces the cost. The learning rate controls how large each adjustment is, and we keep updating the weights until the cost function reaches its minimum.
What is a Gradient?
The gradient represents the slope of the tangent to the graph of the function. More precisely, the gradient points in the direction of the greatest rate of increase of the function, and its magnitude is the slope of the graph in that direction.
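As a quick illustration (not part of the tutorial's own code), the sketch below approximates the gradient of a simple function f(x, y) = x**2 + y**2 with finite differences; the function f, the point (1, 2), and the step size h are assumptions for this example. The result, roughly (2, 4), points away from the minimum at (0, 0), i.e. in the direction of greatest increase.

# A minimal sketch: estimate the gradient of f(x, y) = x^2 + y^2 numerically.
def f(x, y):
    return x**2 + y**2

def numerical_gradient(x, y, h=1e-6):
    # Central differences approximate the two partial derivatives.
    df_dx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    df_dy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return df_dx, df_dy

print(numerical_gradient(1.0, 2.0))    # roughly (2.0, 4.0), the direction of steepest ascent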
The formula of the Gradient Descent algorithm is:

theta_j := theta_j - alpha * ∂J(theta)/∂theta_j

where alpha is the learning rate and J(theta) is the cost function; each weight is moved a small step against its partial derivative.
Algorithm:
- Start
- Choose initial weights theta1 and theta2 and a learning rate alpha.
- Compute new weights theta1n and theta2n using the gradient and the learning rate.
- Update theta1 and theta2 with the newly computed values.
- Repeat Step 3 and Step 4 until the weights converge (a minimal sketch of this loop follows the list).
- Stop.
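Before the full linear-regression code below, here is a minimal sketch of these steps on a single-variable cost. The cost J(theta) = (theta - 3)**2, the starting weight, and the learning rate are assumptions chosen only to illustrate the loop; the gradient of this cost is 2 * (theta - 3).

# Minimal sketch: gradient descent on the assumed cost J(theta) = (theta - 3)^2,
# whose gradient is 2 * (theta - 3); the minimum is at theta = 3.
theta = 0.0    # Step 2: initial weight
alpha = 0.1    # Step 2: learning rate
while True:
    theta_new = theta - alpha * 2 * (theta - 3)    # Step 3: compute new weight
    if abs(theta - theta_new) <= 0.001:            # Step 5: stop once the change is tiny
        break
    theta = theta_new                              # Step 4: update the weight
print(theta)   # prints a value close to 3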
Code:
x = [1, 2, 3, 4, 5]
y = [2, 3, 4, 5, 6]

# Sum of prediction errors: the gradient component for theta1 (the intercept).
def s1(x, y, theta1, theta2):
    m = len(x)
    err = 0
    for i in range(m):
        err = err + (theta1 + theta2 * x[i] - y[i])
    return err

# Sum of errors weighted by x: the gradient component for theta2 (the slope).
def s2(x, y, theta1, theta2):
    m = len(x)
    err = 0
    for i in range(m):
        err = err + (theta1 + theta2 * x[i] - y[i]) * x[i]
    return err

alpha = 0.01    # learning rate
theta1 = 50     # initial weights
theta2 = 50

# First update step.
theta1n = theta1 - alpha * s1(x, y, theta1, theta2)
theta2n = theta2 - alpha * s2(x, y, theta1, theta2)

# Keep updating until both weights change by less than 0.001.
while (abs(theta1 - theta1n) > 0.001) or (abs(theta2 - theta2n) > 0.001):
    theta1 = theta1n
    theta2 = theta2n
    theta1n = theta1 - alpha * s1(x, y, theta1, theta2)
    theta2n = theta2 - alpha * s2(x, y, theta1, theta2)

xn = int(input("Training complete... \n Please Enter test value --> "))
yn = theta1 + theta2 * xn
print("The value of Y predicted is ", yn)
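With the values above, the first iteration gives s1 = 980 and s2 = 3430, so theta1n = 50 - 0.01 * 980 = 40.2 and theta2n = 50 - 0.01 * 3430 = 15.7. Here s1 and s2 are the partial derivatives of the squared-error cost (1/2) * sum((theta1 + theta2*x[i] - y[i])**2) with respect to theta1 and theta2; the usual 1/m averaging factor is omitted and effectively absorbed into the choice of alpha. Because the training data satisfy y = x + 1 exactly, the loop drives theta1 and theta2 toward 1, and the prediction for an integer test value xn should come out close to xn + 1 (the loose stopping tolerance leaves a small residual error).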