llcorrea/gradient_descent

Gradient Descent algorithm

This code implements the Gradient Descent (GD) algorithm from scratch to minimize a loss function, the mean squared error (MSE), in a linear regression problem. Gradient Descent can, however, also be employed in many other Machine Learning (ML) algorithms and scenarios.
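As a minimal sketch of the idea (not the repository's actual code), gradient descent for univariate linear regression with an MSE loss can be written in a few lines of NumPy; the synthetic data, learning rate, and iteration count here are illustrative assumptions:

```python
import numpy as np

# Synthetic univariate data (illustrative; the repository uses a Kaggle dataset)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 3.0 * x + 2.0 + rng.normal(0, 1, 100)

w, b = 0.0, 0.0   # weight and bias, initialized at zero
lr = 0.01         # learning rate
for _ in range(2000):
    y_hat = w * x + b
    error = y_hat - y
    # Gradients of MSE = mean((y_hat - y)^2) with respect to w and b
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # estimates should approach the true values 3.0 and 2.0
```

Each iteration computes the exact gradient of the MSE over the whole dataset (batch gradient descent) and takes a small step against it.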

In essence, the optimizer is used in the ML training step to find, through an iterative optimization process, a set of weights and biases that yields a low loss, on average, across the entire training dataset.
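The core of that iterative process is a single update rule, theta ← theta − lr · ∇L(theta), applied repeatedly to all weights and biases at once. A tiny, hypothetical helper makes the rule explicit:

```python
import numpy as np

def gd_step(params, grads, lr=0.01):
    """One generic gradient-descent update: params <- params - lr * grads.
    `params` and `grads` are same-shaped arrays (hypothetical helper,
    not part of the repository's API)."""
    return np.asarray(params) - lr * np.asarray(grads)

theta = np.array([0.5, -1.0])
grad = np.array([0.2, -0.4])
print(gd_step(theta, grad))  # each parameter moves slightly against its gradient
```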

To demonstrate GD and its applicability to univariate and multivariate linear regression problems, we explore the Marketing and Sales dataset from Kaggle, which relates TV, influencer, radio, and social media advertising budgets to sales.

Dataset from Kaggle: https://www.kaggle.com/harrimansaragih/dummy-advertising-and-sales-data
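The same update rule extends to the multivariate case by vectorizing over a feature matrix. The sketch below uses synthetic features standing in for the ad-budget columns (actual column names and values come from the Kaggle CSV, which is not loaded here); standardizing the features is an assumption that keeps a single learning rate stable across columns:

```python
import numpy as np

# Synthetic stand-in for features such as TV, radio, and social media budgets
rng = np.random.default_rng(1)
n, d = 200, 3
X = rng.uniform(0, 100, (n, d))
true_w = np.array([0.05, 0.2, 0.1])
y = X @ true_w + 5.0 + rng.normal(0, 1, n)

# Standardize features so one learning rate works for every column
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

w = np.zeros(d)  # one weight per feature
b = 0.0
lr = 0.1
for _ in range(500):
    error = X_std @ w + b - y
    w -= lr * (2 / n) * (X_std.T @ error)  # gradient of MSE w.r.t. w
    b -= lr * 2 * error.mean()             # gradient of MSE w.r.t. b

print("final MSE:", np.mean((X_std @ w + b - y) ** 2))
```

With mean-centered features, the bias converges to the mean of the targets while the weights fit the standardized coefficients.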

About

Implementation of the Gradient Descent optimizer
