
Few-shot learning in Machine Learning

What is Few-shot learning?

Few-shot learning is a type of meta-learning. It is a process in which a model learns to recognize things or perform tasks from only a handful of examples, instead of being trained on a large dataset. Rather than overwhelming the model with data, few-shot learning focuses on enhancing the model's capability to learn quickly and efficiently from new and unseen data.

Suppose you want a computer to recognize a new type of car, and you show it only a few pictures instead of hundreds. The computer uses this small amount of information to recognize similar cars on its own. This process is known as few-shot learning.



Few-shot learning

Terminologies related to Few-shot learning

  1. k-ways: The number of classes a model needs to distinguish or recognize. (2-way means the model needs to classify or generate examples for 2 classes.)
     
  2. k-shots: The number of samples per class available during training or evaluation. (1-shot means 1 sample for each class is provided to the model.) The size of the support set S is therefore:
                         S = number of ways × number of shots
  3. Query Set: Additional samples of data, also known as a target set. The model uses the query set to evaluate its performance and generalize to new examples. It consists of examples from the same categories present in the support set but distinct from it.
     
  4. Task: Support set + Query set.
    A recognition or classification problem given to the model, which it is trained to solve.
     
  5. Training: The process of exposing the model to different tasks and support sets so that it generalizes and can classify new tasks efficiently. The goal is to enable the model to perform well on new and unseen tasks from the same categories given in the support set.
     
  6. Test: The evaluation phase, in which the model is given new, unseen tasks via query sets that were not seen during training. The model is expected to recognize and classify them accurately based on the prior knowledge gained from the support set.
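The terminology above can be made concrete with a small sketch that samples one k-way, k-shot task from a labelled dataset. All names here (`make_task`, the toy `data` dictionary) are illustrative, not from any specific library:

```python
import random

def make_task(dataset, n_ways=2, n_shots=1, n_queries=1):
    """Sample one few-shot task: a support set plus a query set.

    `dataset` maps a class label to its list of examples. The query
    examples come from the same classes as the support set but are
    distinct samples, matching the definitions above.
    """
    classes = random.sample(sorted(dataset), n_ways)
    support, query = [], []
    for label in classes:
        samples = random.sample(dataset[label], n_shots + n_queries)
        support += [(x, label) for x in samples[:n_shots]]   # k-shots per class
        query   += [(x, label) for x in samples[n_shots:]]   # held-out queries
    return support, query

# A toy 2-way, 1-shot task: support size S = 2 ways * 1 shot = 2.
data = {"cat": ["c1", "c2"], "dog": ["d1", "d2"], "car": ["r1", "r2"]}
support, query = make_task(data, n_ways=2, n_shots=1, n_queries=1)
```

Note that the support set size follows the formula S = ways × shots, and every query label also appears in the support set.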
     

In few-shot learning, the model is a pair of identical networks whose outputs feed into a node called a similarity function. This terminates in a sigmoid function that outputs whether the query is similar to or different from the support example.

Since we are working with a pair of identical networks, this architecture is called a “Siamese network”.
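The defining property of a Siamese network is that both twins share the same weights, so the same embedding function is applied to each input. A minimal sketch, assuming a toy one-layer embedding (the matrix `W` and function `f` below are hypothetical placeholders for a real network):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 128)) * 0.05   # ONE shared weight matrix for both twins

def f(x):
    """Toy shared embedding: maps a 128-d input to a 64-d vector.
    Both branches of the Siamese pair call this same function."""
    return np.tanh(W @ x)

i, j = rng.normal(size=128), rng.normal(size=128)   # two input samples
f_i, f_j = f(i), f(j)   # embeddings f(i) and f(j), as in the steps below
```

Because the weights are shared, feeding the same input to either branch always yields the same embedding, which is what makes the learned similarity symmetric.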



Working of Few-shot learning


  1. Let us take a sample support dataset of two images, ‘i’ and ‘j’.
  2. The pair of identical neural networks is represented as ‘f ‘.
  3. The embedding produced by the neural networks is represented by ‘f(i)’ and ‘f(j)’ respectively. These are 64-dimensional vectors.
  4. The similarity function takes these two vectors and computes their element-wise difference.
  5. The similarity function then takes the sum of these values to give us a single number.
  6. After that, we pass this number through a sigmoid unit. It has a weight ‘w’ and a bias ‘b’, giving Y = sigmoid(w · d + b), where d is the number from step 5.

    Y here represents the probability that the two images are similar.
  7. If the probability is less than the threshold value ‘t’, the output is 0, i.e., the images are of different classes.
    If the probability is equal to or greater than the threshold value ‘t’, the output is 1, i.e., the images belong to the same class.
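Steps 4–7 above can be sketched end-to-end in a few lines. This is a minimal illustration with random embeddings and weights, not a trained model; the function name `similarity` and the threshold default `t=0.5` are assumptions for the example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def similarity(f_i, f_j, w, b, t=0.5):
    """Steps 4-7: difference, sum, sigmoid with weight/bias, threshold."""
    diff = np.abs(f_i - f_j)        # step 4: element-wise difference
    d = np.dot(w, diff)             # step 5: (weighted) sum -> one number
    y = sigmoid(d + b)              # step 6: sigmoid with weight w, bias b
    return y, int(y >= t)           # step 7: 1 = same class, 0 = different

rng = np.random.default_rng(0)
f_i = rng.normal(size=64)           # 64-dimensional embeddings, as in step 3
f_j = rng.normal(size=64)
w, b = rng.normal(size=64) * 0.1, 0.0

y, same = similarity(f_i, f_j, w, b)   # y is the probability of being similar
```

Comparing an image's embedding with itself gives a zero difference vector, so with b = 0 the probability is exactly sigmoid(0) = 0.5, which the default threshold classifies as "same class".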

Variations In Few-shot learning

Different Algorithms for implementation

Real-World Applications of Few-shot learning

Medical Imaging using few-shot learning

Advantages of Few-shot learning

Disadvantages of few-shot learning

