
# Types of padding in convolution layer

Let’s discuss padding and its types in convolution layers. A convolution layer slides a kernel over an image matrix (or any input array), and padding the input lets us control the size and information content of the resulting feature map. There are three types of padding, as follows.

1. Full padding: Think of the kernel as a sliding window. In full padding, we add zeros around the input array so that the kernel is applied at every position where it overlaps the input by at least one element. This is easiest to show with a simple example; consider x as a filter and h as an input array:

   x[i] = [6, 2]
   h[i] = [1, 2, 5, 4]

   Using zero padding, we can calculate the convolution. Note that the filter x must be flipped first, otherwise the operation would be cross-correlation rather than convolution. The steps (now with zero padding) are:

   - First step: 2 * 0 + 6 * 1 = 6
   - Second step: 2 * 1 + 6 * 2 = 14
   - Third step: 2 * 2 + 6 * 5 = 34
   - Fourth step: 2 * 5 + 6 * 4 = 34
   - Fifth step: 2 * 4 + 6 * 0 = 8

   Listing all the steps above, the result of the full convolution is:

   Y = [6 14 34 34 8]

## Python3

```python
# importing numpy
import numpy as np

x = [6, 2]
h = [1, 2, 5, 4]

y = np.convolve(x, h, "full")
print(y)
```

Output:

`[ 6 14 34 34  8]`
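The five steps above can also be reproduced by hand with a short sketch (the helper `full_convolve` is a name introduced here just for illustration): zero-pad the input on both sides, flip the filter, and slide it across:

```python
import numpy as np

def full_convolve(x, h):
    """Full convolution: zero-pad h and slide the flipped filter x across it."""
    k, n = len(x), len(h)
    hp = np.pad(h, k - 1)          # k - 1 zeros on each side, so every partial overlap is computed
    x_flipped = x[::-1]            # flip the filter (convolution, not cross-correlation)
    out_len = n + k - 1
    return np.array([np.dot(x_flipped, hp[i:i + k]) for i in range(out_len)])

y = full_convolve(np.array([6, 2]), np.array([1, 2, 5, 4]))
print(y)   # [ 6 14 34 34  8]
```

Each dot product corresponds to one of the five steps listed above, and the result matches `np.convolve(x, h, "full")`.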
2. Same padding: In this type of padding, zeros are appended only to the left of the 1D array (or to the top of a 2D input matrix), just enough so that the output has the same size as the input.

## Python3

```python
# importing numpy
import numpy as np

x = [6, 2]
h = [1, 2, 5, 4]

y = np.convolve(x, h, "same")
print(y)
```

Output:

`[ 6 14 34 34]`
3. Valid padding: In this type of padding, no zeros are added, so the output array is smaller than the input. The kernel is applied only at positions where it fully overlaps the h array, which is useful when dimensionality reduction is desired.

## Python3

```python
# importing numpy
import numpy as np

x = [6, 2]
h = [1, 2, 5, 4]

y = np.convolve(x, h, "valid")
print(y)
```

Output:

`[14 34 34]`

### In practice, convolution layers in deep learning frameworks expose two padding modes:

Valid Padding: In valid padding, no padding is added to the input feature map, and the output feature map is smaller than the input feature map. The convolution operation is performed only on the valid pixels of the input feature map; for an n x n input and a k x k kernel with stride 1, the output is (n - k + 1) x (n - k + 1).

Same Padding: In same padding, padding is added to the input feature map such that the size of the output feature map equals that of the input. The number of pixels to add is determined by the kernel size: for an odd k x k kernel with stride 1, p = (k - 1) / 2 pixels are added on each side. The convolution operation is then performed on the padded input feature map.
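These size relationships are easy to verify with NumPy's convolution modes (the lengths `n` and `k` below are arbitrary choices for the demonstration):

```python
import numpy as np

n, k = 7, 3                       # arbitrary input length and (odd) kernel size
h = np.arange(n)
x = np.ones(k)

valid = np.convolve(x, h, "valid")
same = np.convolve(x, h, "same")

assert len(valid) == n - k + 1    # valid: output shrinks by k - 1
assert len(same) == n             # same: output matches the input length

# "same" with an odd kernel corresponds to p = (k - 1) // 2 zeros on each side,
# followed by a valid convolution over the padded input
p = (k - 1) // 2
padded = np.pad(h, p)
assert len(np.convolve(x, padded, "valid")) == n
```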

The most common form of padding is zero-padding, which adds zeros to the borders of the input feature map. This reduces the loss of information at the borders of the input feature map and can improve the performance of the model.

In addition to zero-padding, other types of padding, such as reflection padding and replication padding, can also be used. Reflection padding involves reflecting the input feature map along its borders, while replication padding involves replicating the pixels of the input feature map along its borders.
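As a quick illustration of these three schemes on a single row, NumPy's `np.pad` supports all of them: `constant` (zero-padding), `reflect` (reflection padding), and `edge` (replication padding):

```python
import numpy as np

row = np.array([1, 2, 3, 4])

print(np.pad(row, 2, mode="constant"))  # zero-padding:        [0 0 1 2 3 4 0 0]
print(np.pad(row, 2, mode="reflect"))   # reflection padding:  [3 2 1 2 3 4 3 2]
print(np.pad(row, 2, mode="edge"))      # replication padding: [1 1 1 2 3 4 4 4]
```

In a CNN, the chosen padding would be applied to every border of the 2D feature map before the kernel is convolved over it.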

The choice of padding type depends on the specific requirements of the model and the task at hand. In general, same padding is preferred when we want to preserve the spatial dimensions of the feature maps, while valid padding is preferred when we want to reduce the spatial dimensions of the feature maps.

Overall, padding is an important technique in convolutional neural networks that helps in preserving the spatial dimensions of the feature maps and can improve the performance of the model.