There are a couple of techniques in machine learning that are used to reduce over-fitting of a model. Regularization is one of them, and we will discuss it further in detail.

**1. Early Termination:** This is the basic way to prevent over-fitting: we keep training the model only as long as the accuracy keeps improving. As soon as there is no more increase in accuracy, we stop training.
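The loop above can be sketched in a few lines. This is a minimal illustration, not a library API: `train_step`, `evaluate`, and the `patience` parameter (how many non-improving epochs we tolerate before stopping) are assumed names for the purpose of the sketch.

```python
def train_with_early_stopping(train_step, evaluate, max_epochs=100, patience=3):
    """train_step() runs one epoch of training; evaluate() returns the
    current accuracy. Stop once accuracy fails to improve for `patience`
    consecutive epochs, and return the best accuracy seen."""
    best_acc = float("-inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_step()
        acc = evaluate()
        if acc > best_acc:
            best_acc = acc
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # no more increase in accuracy: stop training
    return best_acc
```

In practice the accuracy monitored here would be measured on a held-out validation set, so the stopping decision is not made on the training data itself.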

**2. Regularization:** The process of converting a highly irregular curve into a regular one, i.e. a highly over-fitting model into a regular model, is called regularization.

Such poor, highly over-fitting curves are characterized by weights with very large or very small values. Therefore, one way to reduce over-fitting is to prevent the weights from getting too large or too small. This is the fundamental motivation behind regularization.

We can think of regularization as stretch pants: flexible enough to fit any shape and size.

The idea is to add another term to the loss that penalizes large weights. In a deep learning network, we achieve this by adding the L2 norm of the weights, multiplied by some constant, to the loss function.

Fig. Showing Addition of a term to the loss function to penalize large weights
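The penalized loss described above can be sketched as follows. The function names and the constant `lam` (the regularization strength, often written λ) are illustrative assumptions, not part of any particular framework:

```python
import numpy as np

def l2_regularized_loss(data_loss, weights, lam=0.01):
    """Total loss = original loss + lam * (squared L2 norm of the weights).
    The penalty term grows with the magnitude of the weights, so minimizing
    the total loss discourages very large weight values."""
    penalty = lam * np.sum(weights ** 2)
    return data_loss + penalty

def l2_penalty_grad(weights, lam=0.01):
    """Gradient of the penalty term with respect to the weights: 2 * lam * w.
    During gradient descent this extra term shrinks every weight toward zero,
    which is why L2 regularization is also called 'weight decay'."""
    return 2 * lam * weights
```

Choosing `lam` trades off fitting the data against keeping the weights small: `lam = 0` recovers the unregularized loss, while a very large `lam` pushes all weights toward zero regardless of the data.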


