Gradient Descent
Gradient descent is an optimization algorithm used to minimize a cost function in machine learning. It works by iteratively adjusting the parameters of the model in the direction of the steepest decrease in the cost function.
Mathematically, gradient descent can be explained as follows:
Let’s say we have a cost function J(θ) where θ represents the parameters of the model. The goal of gradient descent is to find the values of θ that minimize J(θ).
At each iteration, the gradient descent algorithm updates the values of θ as follows:
θ = θ − η * ∇J(θ)
where η is the learning rate (a small positive number) and ∇J(θ) is the gradient of the cost function J(θ), which points in the direction of the steepest increase in J(θ); subtracting it (the minus sign in the update) moves θ in the direction of the steepest decrease.
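To make the update rule concrete, here is a minimal Python sketch that applies it to a toy one-parameter function, J(θ) = (θ − 3)², whose derivative is 2(θ − 3); the function, starting point, and learning rate are arbitrary illustrative choices.

def grad_J(theta):
    # Derivative of the toy cost J(theta) = (theta - 3)^2
    return 2 * (theta - 3)

theta = 0.0   # arbitrary starting point
eta = 0.1     # learning rate
for _ in range(100):
    theta = theta - eta * grad_J(theta)   # theta = theta - eta * dJ/dtheta

print(theta)  # converges toward 3, the minimizer of J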
For example, suppose we have a simple linear regression model with a single input variable x and a single output variable y, where the prediction is θ0 + θ1 * x with intercept θ0 and slope θ1. The cost function J(θ0, θ1) for this model can be defined as the mean squared error between the predicted and actual values of y.
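For concreteness, averaging over m training examples (xᵢ, yᵢ) and including an optional factor of 1/2 that simplifies the derivatives (an assumed but common convention), the cost can be written as:
J(θ0, θ1) = (1/(2m)) * Σᵢ (θ0 + θ1 * xᵢ − yᵢ)²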
The gradient of J(θ) with respect to the parameters θ0 and θ1 can be calculated as follows:
∇J(θ0, θ1) = [∂J(θ0, θ1)/∂θ0, ∂J(θ0, θ1)/∂θ1]
where ∂J(θ0, θ1)/∂θ0 and ∂J(θ0, θ1)/∂θ1 represent the partial derivatives of J(θ0, θ1) with respect to θ0 and θ1, respectively.
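With the cost written in the 1/(2m) form above (an assumed convention), these partial derivatives work out to:
∂J(θ0, θ1)/∂θ0 = (1/m) * Σᵢ (θ0 + θ1 * xᵢ − yᵢ)
∂J(θ0, θ1)/∂θ1 = (1/m) * Σᵢ (θ0 + θ1 * xᵢ − yᵢ) * xᵢ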
The gradient descent algorithm then updates the values of θ0 and θ1 at each iteration as follows:
θ0 = θ0 − η * ∂J(θ0, θ1)/∂θ0
θ1 = θ1 − η * ∂J(θ0, θ1)/∂θ1
with both parameters updated simultaneously, using the values from the previous iteration to evaluate the partial derivatives.
This process continues until the change in J(θ) between iterations falls below a chosen tolerance or a maximum number of iterations is reached. The final values of θ0 and θ1 are the learned parameters of the model, which (approximately) minimize J(θ).
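Putting the pieces together, here is a self-contained Python sketch of gradient descent for this simple linear regression example. The data, learning rate, tolerance, and iteration cap are arbitrary illustrative choices, and the gradient expressions assume the 1/(2m) cost written above.

import numpy as np

# Illustrative data roughly following y = 2x + 1 (arbitrary choice)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])
m = len(x)

theta0, theta1 = 0.0, 0.0   # initial parameters
eta = 0.05                  # learning rate
tol = 1e-9                  # stop when the decrease in J is this small
max_iters = 10000

def cost(theta0, theta1):
    # J(theta0, theta1) = (1/(2m)) * sum((theta0 + theta1*x_i - y_i)^2)
    residuals = theta0 + theta1 * x - y
    return (residuals ** 2).sum() / (2 * m)

prev_J = cost(theta0, theta1)
for _ in range(max_iters):
    residuals = theta0 + theta1 * x - y
    grad0 = residuals.sum() / m          # dJ/dtheta0
    grad1 = (residuals * x).sum() / m    # dJ/dtheta1
    # Simultaneous update of both parameters
    theta0, theta1 = theta0 - eta * grad0, theta1 - eta * grad1
    J = cost(theta0, theta1)
    if abs(prev_J - J) < tol:            # convergence check
        break
    prev_J = J

print(theta0, theta1)  # approximately the intercept and slope that minimize J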