Multilingual Google Meet Summarizer – Python Project

At the start of 2020, we faced one of the largest crises of the 21st century: the COVID-19 pandemic. Amidst the chaos, people eventually found a way to get the job done by introducing automation into almost every aspect of life. Since the pandemic hit, the use of video conferencing tools for daily communication has risen by 87%. Online meetups, college lectures, business meetings – almost everything moved to the internet, and being virtual, these interactions raised the chances of unfruitful communication. In fact, data collected from employees across domains shows that people often miss important points because they find taking the minutes of a meeting time-consuming, distracting, and frankly boring, and over 37 billion dollars is wasted on such unproductive meetings. Hence arises the need for automatic text summarization.


The purpose of this project is to transcribe online meetings and summarize them by applying machine learning techniques, producing the minutes of the meeting along with multilingual summarized audio so that users gain a better understanding of the topic.

The features offered by the Chrome extension are –

  • Meet transcription
  • Summarization using extractive and abstractive models
  • Multilingual audio generation

Additional Features Offered –

  • Summarization of NPTEL/MOOC lectures
  • Use as a Twitter tweet shortener

Tools and Technologies

  • Frontend: Reactjs, Material-UI, Bootstrap, HTML, CSS, JavaScript
  • Backend: Django, Django REST Framework
  • Database: SQLite
  • ML Libraries: NLTK, Torch

Prerequisites

Knowledge of Python, NLP libraries, and the use of REST APIs, along with good working experience in web development using Reactjs.

Control Flow 

  • The user logs on to the website and enables the Chrome extension.
  • The Chrome extension extracts each speaker's captions from the Google Meet and assembles the transcript.
  • The extracted transcript is sent to the backend, where machine learning techniques are applied for text summarization.
  • The summarized text is then passed to the translator, which converts it into the user's desired language.
  • The transcript can then be downloaded or played back on the dashboard, as the user wishes.

So this is how the Multilingual Google Meet Summarizer contributes to pandemic-driven automation.

Step-by-Step Implementation

1. Chrome Extension

The main task of the Chrome extension is to extract the Google Meet captions from the DOM elements of the Meet page. It makes use of the caption container generated by Google Meet's built-in "Turn on captions" feature. The Meet is transcribed as follows –

  1. With the help of Selenium, the XPath of the "Turn on captions" button is traced out.
  2. The code activates Google Meet captions automatically by clicking that button.
  3. The XPath of the caption container is then traced out, and the auto-generated text rolling inside the container is extracted.
  4. Finally, the text is appended to a string along with the speaker's name and a timestamp.

The complete text is then sent to the backend for processing.
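
Below is a minimal sketch of this scraping loop, assuming Selenium 4 and hypothetical XPaths (Google Meet's DOM changes frequently, so the real selectors must be re-inspected in the browser's DevTools; the ones here are placeholders, not the project's actual selectors). Speaker names are wrapped in ** so that the backend's clean() function, shown later, can strip them out before summarization.

Python3

'''
Caption scraping sketch (illustrative, hypothetical XPaths)
'''

import time
from selenium import webdriver
from selenium.webdriver.common.by import By

# Placeholder XPaths -- inspect the live Meet DOM for the real ones
CAPTIONS_BUTTON_XPATH = '//button[@aria-label="Turn on captions"]'
SPEAKER_NAME_XPATH = '//div[@class="caption-speaker"]'
CAPTION_TEXT_XPATH = '//div[@class="caption-text"]'

driver = webdriver.Chrome()
driver.get('https://meet.google.com/abc-defg-hij')  # placeholder meet URL

# Steps 1-2: trace the captions button and auto-click it
driver.find_element(By.XPATH, CAPTIONS_BUTTON_XPATH).click()

# Steps 3-4: poll the caption container, appending speaker and timestamp;
# a real extension would also de-duplicate the rolling caption text
transcript = ''
for _ in range(30):
    speaker = driver.find_element(By.XPATH, SPEAKER_NAME_XPATH).text
    caption = driver.find_element(By.XPATH, CAPTION_TEXT_XPATH).text
    if caption:
        stamp = time.strftime('%H:%M:%S')
        transcript += f'**{speaker} [{stamp}]** {caption} '
    time.sleep(2)

# The accumulated transcript is finally POSTed to the backend REST API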

2. Frontend and Backend

  • The first step is to create an authentication system so that users can log in and save their meeting transcripts in the database.
    • Create a user model in Django.
    • Implement JWT (JSON Web Token) authentication in Django using the djangorestframework_simplejwt app.
    • Create the respective REST views for token generation and authentication.
    • Social authentication via JWT can also be used instead of email registration, depending on preference.
  • The next step is to create a database for storing users' meeting transcripts. Create a standard SQL relation along the lines given below; a minimal Django sketch of this model and its APIs follows the list.
    • Transcript(transcript_id, owner_name, transcript_date, hostname, transcript_title, meet_duration, content)
    • Additional fields and schemas can be added depending on the CRUD functionality to be provided to the user.
  • Create a REST API for saving a new transcript in the database. This API is used by the Chrome extension, which sends its transcript along with meta information such as the timestamp and hostname; the API processes the received information and stores it in the database.
  • Create a separate API for each NLP model used for summarization. The frontend sends the transcript to such an API, which runs the NLP script described below and returns the summarized text. Also create an API for translating the text into multiple languages; the translation utility it can call is shown after the sketch below.
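
The following is a minimal sketch of the Transcript model, its serializer, and the save/token endpoints described above. The field names mirror the relation given in the list, but the class names, URL paths, and wiring are illustrative assumptions rather than the project's exact code.

Python3

'''
Transcript model and save API (illustrative sketch)
'''

from django.db import models
from django.urls import path
from rest_framework import generics, permissions, serializers
from rest_framework_simplejwt.views import (TokenObtainPairView,
                                            TokenRefreshView)


class Transcript(models.Model):
    # transcript_id is Django's implicit auto-generated primary key
    owner_name = models.CharField(max_length=100)
    transcript_date = models.DateTimeField(auto_now_add=True)
    hostname = models.CharField(max_length=100)
    transcript_title = models.CharField(max_length=200)
    meet_duration = models.DurationField(null=True, blank=True)
    content = models.TextField()


class TranscriptSerializer(serializers.ModelSerializer):
    class Meta:
        model = Transcript
        fields = '__all__'


# POST endpoint called by the Chrome extension to save a transcript
class TranscriptCreateView(generics.CreateAPIView):
    queryset = Transcript.objects.all()
    serializer_class = TranscriptSerializer
    permission_classes = [permissions.IsAuthenticated]


# JWT token generation/refresh views come straight from
# djangorestframework_simplejwt; only the transcript route is custom
urlpatterns = [
    path('api/token/', TokenObtainPairView.as_view()),
    path('api/token/refresh/', TokenRefreshView.as_view()),
    path('api/transcripts/', TranscriptCreateView.as_view()),
]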

Python3

'''
Translation Code
'''
 
from googletrans import Translator
 
LANGUAGE_CODES = {
    'ENGLISH': 'en',
    'HINDI': 'hi',
    'MARATHI': 'mr',
    'ARABIC': 'ar',
    'BENGALI': 'bn',
    'CHINESE': 'zh-CN',
    'FRENCH': 'fr',
    'GUJARATI': 'gu',
    'JAPANESE': 'ja',
    'KANNADA': 'kn',
    'MALAYALAM': 'ml',
    'NEPALI': 'ne',
    'ORIYA': 'or',
    'PORTUGUESE': 'pt',
    'PUNJABI': 'pa',
    'RUSSIAN': 'ru',
    'SPANISH': 'es',
    'TAMIL': 'ta',
    'TELUGU': 'te',
    'URDU': 'ur'
}
 
 
def translate_utility(inp_text, inp_lang, op_lang):
    '''Translate inp_text from inp_lang to op_lang using googletrans.'''
    inp_lang, op_lang = inp_lang.upper(), op_lang.upper()
    translator = Translator()
    translated = translator.translate(
        inp_text, src=LANGUAGE_CODES[inp_lang], dest=LANGUAGE_CODES[op_lang])
    return translated.text
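
A quick, illustrative call to the utility (googletrans sends the request over the network, so an internet connection is required, and the language names must be keys of LANGUAGE_CODES):

Python3

# Translate an English sentence into Hindi
print(translate_utility('The meeting is postponed to Monday.',
                        'english', 'hindi'))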


3. ML Algorithm

Python3

'''
NLTK MODEL CODE
'''
 
# Tokenizing sentences and words
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import sent_tokenize, word_tokenize
 
nltk.download('stopwords')
nltk.download('punkt')
 
# Cleaning the text that comes from the meet transcript:
# drop the **speaker** tags and keep only the spoken text
def clean(text):
    sample = text.split('**')
    sample.pop(0)
    clean_text = ""
    i = 0
    for t in sample:
        # after the split, even indices hold speaker names,
        # odd indices hold the spoken text
        if i % 2 != 0:
            clean_text += str(t)
        i += 1
    return clean_text
 
 
# Building the set of stopwords (stopwords are words
# that do not add meaning to a sentence)
stop_words = set(stopwords.words("english"))
 
# Tokenize
def Wtokenize(text):
    words = word_tokenize(text)
    return words
 
 
# The frequency table stores the frequency of each word
# appearing in the input text after removing stop words.
# Need: it is used to find the most relevant sentences, as
# the dictionary is applied to every sentence to score its
# importance relative to the others
def gen_freq_table(text):
    freqTable = dict()
    words = Wtokenize(text)
     
    for word in words:
        word = word.lower()
        if word in stop_words:
            continue
        if word in freqTable:
            freqTable[word] += 1
        else:
            freqTable[word] = 1
    return freqTable
 
# Sentence Tokenize
def Stokenize(text):
    sentences = sent_tokenize(text)
    return sentences
 
# Storing Sentence Scores
def gen_rank_sentences_table(text):
   
    # dictionary storing value for each sentence
    sentenceValue = dict()
     
    # Calling function gen_freq_table to get frequency
    # of words
    freqTable = gen_freq_table(text)
     
    # Getting the list of sentences after tokenization
    sentences = Stokenize(text)
 
    for sentence in sentences:
        for word, freq in freqTable.items():
            if word in sentence.lower():
                if sentence in sentenceValue:
                    sentenceValue[sentence] += freq
                else:
                    sentenceValue[sentence] = freq
    return sentenceValue
 
 
def summary(text):
    sentenceValue = gen_rank_sentences_table(text)
    if not sentenceValue:
        return ""
 
    # Average sentence score; sentences scoring well
    # above this (more than 1.2x) make it into the summary
    total = 0
    for sentence in sentenceValue:
        total += sentenceValue[sentence]
    avg = int(total / len(sentenceValue))
 
    summary_text = ""
    sentences = Stokenize(text)
    for sentence in sentences:
        if (sentence in sentenceValue) and (sentenceValue[sentence] > (1.2 * avg)):
            summary_text += " " + sentence
    return summary_text
 
 
def mainFunc(inp_text):
   
    # Clean the text if it carries **speaker** tags
    if "**" not in inp_text:
        text = inp_text
    else:
        text = clean(inp_text)
    summary_text = summary(text)
    print("\nModel Summary: ", summary_text)
 
    return summary_text
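
A quick illustrative run with a made-up transcript string in the **speaker** format the extension produces (note that for very short inputs the 1.2 × average threshold may return an empty summary, so real meeting transcripts work best):

Python3

# Demo call with a transcript-style input string
demo = ("**Alice** Our release slipped because the API tests were flaky. "
        "**Bob** The flaky API tests come from a shared database fixture. "
        "**Alice** We should isolate the database fixture and rerun the API tests.")
print(mainFunc(demo))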


Output

Running mainFunc on a transcript prints the extractive summary to the console.

Project Applications in Real Life

  • Used in virtual business meetings
  • Used by students to get concise notes from a lecture
  • An aid to the visually impaired
  • A Twitter tweet shortener

Team Members

  1. Tejas Sudhir Tapas
  2. Yash Agrawal
  3. Atul Thakre
  4. Ayush Kedia
  5. Yash Telkhade

