Model Regularization
A technique that prevents overfitting by adding a penalty on large coefficients to a model's loss function.
Expanded definition
Regularization techniques such as L1 (Lasso) and L2 (Ridge) control model complexity by adding a penalty term to the loss function. L1 penalizes the sum of absolute coefficient values (λ Σ|βⱼ|), which can drive some coefficients exactly to zero and so performs feature selection; L2 penalizes the sum of squared coefficient values (λ Σ βⱼ²), shrinking all coefficients toward zero without eliminating them. Either penalty discourages overly complex models that fit noise in the training data, promoting simpler models that generalize better to unseen data. The strength of the penalty (λ) is a hyperparameter, typically tuned via cross-validation.
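The effect of the two penalties can be illustrated with a small sketch using scikit-learn (an assumption of this example; the data, features, and `alpha` values are chosen purely for illustration). An ordinary least-squares fit is compared with Ridge (L2) and Lasso (L1) on data where only a few features are informative:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso

# Synthetic data: 5 informative features plus 15 pure-noise features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
true_coefs = np.array([3.0, -2.0, 1.5, 0.5, -1.0])
y = X[:, :5] @ true_coefs + rng.normal(scale=0.5, size=100)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)  # L2 penalty: shrinks all coefficients
lasso = Lasso(alpha=0.1).fit(X, y)   # L1 penalty: zeroes out some coefficients

print("OLS   |coef| sum:", np.abs(ols.coef_).sum())
print("Ridge |coef| sum:", np.abs(ridge.coef_).sum())  # smaller than OLS
print("Lasso zeroed coefficients:", (lasso.coef_ == 0).sum())
```

Comparing the outputs shows the two behaviors described above: the Ridge coefficients have a smaller total magnitude than the unpenalized fit, while Lasso sets several of the noise-feature coefficients exactly to zero.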