Bragadeesh’s Substack

L1 & L2 Regularization: The Spice and Sugar in the Machine Learning Recipe

Bragadeesh
Jan 03, 2024

Greetings, curious minds!

Imagine you’re crafting the most exquisite dish. You’ve got your main ingredients: data (the veggies), a model (the cooking method), and predictions (the final taste). But something’s amiss. It’s palatable but lacks that chef’s-kiss perfection. What you need are spice (L1 regularization) and sugar (L2 regularization) to balance and enhance the flavors, ensuring your dish is neither too bland (underfitting) nor too overpowering (overfitting). Let’s cook up some knowledge, shall we?

L1 Regularization: The Spice of Sparsity

In the culinary world of machine learning, L1 regularization is like adding a dash of spice to your dish. It introduces a penalty based on the magnitude of the coefficients: mathematically, the sum of their absolute values, scaled by a tuning parameter.

L1 = λ ∑ᵢ |wᵢ|

Here, λ is our chef’s secret: the regularization parameter, determining how much we want to penalize our coefficients, and wᵢ represents the coefficients themselves.
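To taste the spice for yourself, here’s a minimal sketch using scikit-learn’s Lasso, which fits a linear model with exactly this L1 penalty (its alpha parameter plays the role of λ). The synthetic data and coefficient values are made up purely for illustration:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

# Synthetic data: 10 features, but only the first 3 actually matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
true_w = np.array([3.0, -2.0, 1.5, 0, 0, 0, 0, 0, 0, 0])
y = X @ true_w + rng.normal(scale=0.5, size=200)

# Plain linear regression: every coefficient ends up nonzero.
plain = LinearRegression().fit(X, y)

# Lasso = linear regression + L1 penalty; alpha plays the role of λ.
lasso = Lasso(alpha=0.1).fit(X, y)

print("OLS coefficients:  ", np.round(plain.coef_, 3))
print("Lasso coefficients:", np.round(lasso.coef_, 3))
# The L1 penalty drives the irrelevant coefficients to exactly 0.
```

Run it and you’ll see why L1 is the spice of sparsity: the penalty pushes the coefficients of the seven useless features all the way to zero, effectively removing them from the recipe.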
