
Keras.Conv2D Class

Keras Conv2D is a 2D convolution layer. It creates a convolution kernel that is convolved with the layer's input to produce a tensor of outputs.

Kernel: In image processing, a kernel is a convolution matrix or mask that can be used for blurring, sharpening, embossing, edge detection, and more, by performing a convolution between the kernel and an image.
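To make the idea of a kernel concrete, here is a minimal sketch (independent of Keras) that convolves an example 3x3 sharpening kernel with a random grayscale image using SciPy; the kernel values and the image are only illustrative:

    import numpy as np
    from scipy.ndimage import convolve

    # a classic 3x3 sharpening kernel (example values)
    kernel = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]])

    image = np.random.rand(64, 64)       # stand-in for a grayscale image
    sharpened = convolve(image, kernel)  # 2D convolution of kernel and image
    print(sharpened.shape)               # spatial size is preserved: (64, 64)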



The Keras Conv2D class constructor has the following arguments:

keras.layers.Conv2D(filters, kernel_size, strides=(1, 1),
  padding='valid', data_format=None, dilation_rate=(1, 1),
  activation=None, use_bias=True, kernel_initializer='glorot_uniform',
  bias_initializer='zeros', kernel_regularizer=None,
  bias_regularizer=None, activity_regularizer=None,
  kernel_constraint=None, bias_constraint=None)

Now let us examine each of these parameters individually:
filters

The filters argument determines the number of convolution filters the layer will learn, i.e. the dimensionality of the output space. It is the first required argument of Conv2D.



model.add(Conv2D(32, (3, 3), padding="same", activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
  • Here we are learning a total of 32 filters and then we use Max Pooling to reduce the spatial dimensions of the output volume.
  • When choosing the number of filters, it is common practice to use powers of 2 (for example 32, 64 or 128).
kernel_size

The kernel_size argument specifies the height and width of the 2D convolution window, as an integer or a tuple of two integers. It is typically an odd number, for example (1, 1), (3, 3), (5, 5) or (7, 7).

    model.add(Conv2D(32, (7, 7), activation="relu"))

strides

The strides argument is an integer or a tuple of two integers specifying the step of the convolution along the height and width of the input. It defaults to (1, 1); larger strides reduce the spatial dimensions of the output volume.

    model.add(Conv2D(128, (3, 3), strides=(1, 1), activation="relu"))
    model.add(Conv2D(128, (3, 3), strides=(2, 2), activation="relu"))

padding

The padding argument accepts one of two values, "valid" or "same". With "valid" (the default) no padding is applied, so the spatial dimensions of the output shrink naturally:

    model.add(Conv2D(32, (3, 3), padding="valid"))

You can instead preserve the spatial dimensions of the volume, so that the output size matches the input size, by setting the value to "same":

    model.add(Conv2D(32, (3, 3), padding="same"))

data_format

The data_format argument specifies the ordering of the dimensions in the input: "channels_last" (the default, with inputs of shape (batch, height, width, channels)) or "channels_first" (inputs of shape (batch, channels, height, width)).
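As a minimal sketch (assuming a 3-channel 64x64 input; the filter count is arbitrary), the same layer can be written for either ordering:

    model.add(Conv2D(32, (3, 3), data_format="channels_last",
                     input_shape=(64, 64, 3)))
    # or, for channels-first inputs of shape (3, 64, 64):
    # model.add(Conv2D(32, (3, 3), data_format="channels_first",
    #                  input_shape=(3, 64, 64)))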

dilation_rate

The dilation_rate argument is an integer or a tuple of two integers controlling the dilation rate of the convolution, i.e. how far apart the kernel taps are spread over the input. It defaults to (1, 1) and is used for dilated (atrous) convolutions.
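A brief example of a dilated convolution (the filter count and kernel size here are arbitrary choices):

    model.add(Conv2D(32, (3, 3), dilation_rate=(2, 2), activation="relu"))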

activation

The activation argument is the name of the non-linearity applied after the convolution, for example "relu". If nothing is specified, no activation is applied (a linear activation). You can pass it directly to Conv2D or add it as a separate Activation layer:

    model.add(Conv2D(32, (3, 3), activation="relu"))

    OR

    model.add(Conv2D(32, (3, 3)))
    model.add(Activation("relu"))

use_bias

The use_bias argument is a boolean controlling whether the layer adds a bias vector to its outputs. It defaults to True.
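A quick sketch of turning the bias off, which is common, for instance, when the convolution is immediately followed by batch normalization:

    model.add(Conv2D(32, (3, 3), use_bias=False, activation="relu"))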

kernel_initializer

The kernel_initializer argument controls how the convolution weights are initialized before training; the default is "glorot_uniform" (Xavier uniform initialization).

bias_initializer

The bias_initializer argument controls how the bias vector is initialized; the default is "zeros", which works well in most cases.
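A hedged example of overriding both initializers (the choice of "he_normal" here is purely illustrative):

    model.add(Conv2D(64, (3, 3), activation="relu",
                     kernel_initializer="he_normal",
                     bias_initializer="zeros"))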

kernel_regularizer, bias_regularizer and activity_regularizer

These arguments apply penalties (such as L1 or L2 regularization) to the layer's weights, bias and output, respectively. The penalties are added to the loss during training and help reduce overfitting:

    from keras.regularizers import l2
    ...
    model.add(Conv2D(128, (3, 3), activation="relu",
        kernel_regularizer=l2(0.0002)))

kernel_constraint and bias_constraint

The kernel_constraint and bias_constraint arguments let you impose constraints, such as non-negativity or a maximum norm, on the kernel weights and the bias vector after each update.
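A minimal sketch of a max-norm constraint on the kernel weights (the limit of 3 is an arbitrary example):

    from keras.constraints import max_norm
    ...
    model.add(Conv2D(64, (3, 3), activation="relu",
                     kernel_constraint=max_norm(3.)))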

Here is a simple code example to show how the different parameters of the Conv2D class work together:




    # build the model
    import keras
    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

    model = Sequential()
    model.add(Conv2D(32, kernel_size=(5, 5), strides=(1, 1),
                     activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
    model.add(Conv2D(64, (5, 5), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dense(1000, activation='relu'))
    model.add(Dense(num_classes, activation='softmax'))

    # training the model
    model.compile(loss=keras.losses.categorical_crossentropy,
                  optimizer=keras.optimizers.SGD(lr=0.01),
                  metrics=['accuracy'])

    # fitting the model (x_train, y_train, x_test, y_test, num_classes,
    # batch_size, epochs and history are assumed to be defined beforehand)
    model.fit(x_train, y_train,
              batch_size=batch_size,
              epochs=epochs,
              verbose=1,
              validation_data=(x_test, y_test),
              callbacks=[history])

    # evaluating and printing results
    score = model.evaluate(x_test, y_test, verbose=0)
    print('Test loss:', score[0])
    print('Test accuracy:', score[1])
    
    

Understanding the Code:

The model stacks two Conv2D layers, each followed by a MaxPooling2D layer that halves the spatial dimensions of its input. The first convolution learns 32 filters with a 5 x 5 kernel and a stride of 1, and the second learns 64 filters. The resulting feature maps are flattened and passed through a fully connected layer of 1000 units before the final softmax layer, which produces one probability per class. The model is then compiled with categorical cross-entropy loss and the SGD optimizer, trained with model.fit(), and evaluated with model.evaluate().

Similar Code using Functional API




    # build the model
    import keras
    from keras.models import Model
    from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense

    # example input shape (e.g. 28 x 28 grayscale images); adjust to your data
    inputs = Input(shape=(28, 28, 1))
    conv1 = Conv2D(32, kernel_size=(5, 5), strides=(1, 1), activation='relu')(inputs)
    max1 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(conv1)
    conv2 = Conv2D(64, (5, 5), activation='relu')(max1)
    max2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    flat = Flatten()(max2)
    den1 = Dense(1000, activation='relu')(flat)
    out1 = Dense(num_classes, activation='softmax')(den1)

    model = Model(inputs=inputs, outputs=out1)

    # training the model
    model.compile(loss=keras.losses.categorical_crossentropy,
                  optimizer=keras.optimizers.SGD(lr=0.01),
                  metrics=['accuracy'])

    # fitting the model
    model.fit(x_train, y_train,
              batch_size=batch_size,
              epochs=epochs,
              verbose=1,
              validation_data=(x_test, y_test),
              callbacks=[history])

    # evaluating and printing results
    score = model.evaluate(x_test, y_test, verbose=0)
    print('Test loss:', score[0])
    print('Test accuracy:', score[1])
    
    

Summary:

The Keras Conv2D class builds a 2D convolution layer whose behaviour is controlled by its arguments: filters and kernel_size decide how many features are learned and over what window, strides and padding control the spatial size of the output, and the remaining parameters (data_format, dilation_rate, activation, use_bias, the initializers, regularizers and constraints) fine-tune how the layer is initialized, regularized and constrained. In practice, only filters, kernel_size, activation and padding are usually set explicitly; the defaults are a good starting point for everything else.

