In this article, we are going to implement offline speech-to-text functionality in our project. It works both online and offline. When there is no internet connection, it uses the language model pre-stored on the mobile device, so recognition is less accurate but still gives usable results. When online, it recognizes words much more accurately. Note that we are going to implement this project using the Kotlin language.
Note: The offline method will not work on devices whose API level is less than 23 (Android 6.0 Marshmallow).
Step by Step Implementation
Step 1: Create a New Project
To create a new project in Android Studio, please refer to How to Create/Start a New Project in Android Studio. Make sure to select Kotlin as the programming language.
Step 2: Adding Permission
To access the device microphone, we have to add the RECORD_AUDIO permission to our AndroidManifest.xml file like below:
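The original code listing is not included here; the permission declaration it describes is a single line inside the `<manifest>` element:

```xml
<!-- Allows the app to record audio from the microphone -->
<uses-permission android:name="android.permission.RECORD_AUDIO" />
```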
Step 3: Modify the colors.xml file
Add the below lines to the colors.xml file.
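The exact color values are not specified in the text; the following is an illustrative colors.xml with placeholder values you can adjust to your theme:

```xml
<?xml version="1.0" encoding="utf-8"?>
<resources>
    <!-- Placeholder colors; change these to match your app's theme -->
    <color name="colorPrimary">#0F9D58</color>
    <color name="colorPrimaryDark">#16E37F</color>
    <color name="colorAccent">#03DAC5</color>
</resources>
```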
Step 4: Working with the activity_main.xml file
Go to the activity_main.xml file and refer to the following code. Below is the code for the activity_main.xml file.
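The layout code itself is not shown in the text; a minimal sketch of an activity_main.xml that fits this tutorial is given below. The view IDs `tvOutput` and `btnStart` are assumptions, not taken from the original:

```xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:gravity="center"
    android:orientation="vertical"
    android:padding="16dp">

    <!-- Shows the recognized speech -->
    <TextView
        android:id="@+id/tvOutput"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Tap the button and speak"
        android:textSize="20sp" />

    <!-- Starts listening when pressed -->
    <Button
        android:id="@+id/btnStart"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginTop="24dp"
        android:text="Start Listening" />
</LinearLayout>
```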
Step 5: Working with the MainActivity.kt file
Go to the MainActivity.kt file and refer to the following code.
Checking Audio Permission:
To get started, the app first needs permission to access the microphone. This function checks whether the microphone permission has been granted. If it has not, it opens the app's settings screen directly, from where the user can allow the microphone permission manually. Offline speech to text is not supported on lower API levels, i.e., below 23, so we first check the device's API level using Build.VERSION.SDK_INT; Build.VERSION_CODES.M is the constant for API level 23. Replace the package name in the code with your own package name (you can find your package name in the AndroidManifest.xml file).
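The original function body is missing; a minimal sketch following the description above is shown here. The package name `com.example.speechtotext` is a placeholder you must replace with your own:

```kotlin
private fun checkPermission() {
    // Offline speech to text needs API level 23 (M) or above
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
            != PackageManager.PERMISSION_GRANTED
        ) {
            // Permission not granted: open this app's settings page so the
            // user can enable the microphone permission manually
            val intent = Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS)
            intent.data = Uri.parse("package:com.example.speechtotext")
            startActivity(intent)
        }
    }
}
```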
The Function which Handles Speech to Text:
This is the main function of our project, which handles speech. First, we create a SpeechRecognizer object for the current Context, i.e., this (inside a Fragment, AlertDialog, etc., replace this with the appropriate context). Then we create an intent and attach the EXTRA_LANGUAGE_MODEL extra with the value LANGUAGE_MODEL_FREE_FORM. In the setRecognitionListener() method, we have to override all the necessary callbacks as below. To get the speech result, we use the onResults() method and read the ArrayList of matches from the Bundle; the element at index 0 is the best transcription. Other useful callbacks include onBeginningOfSpeech(), which runs just before the recognizer starts listening, and onEndOfSpeech(), which runs once the user stops speaking.
Below is the final code for the MainActivity.kt file. Comments are added inside the code to explain it in more detail.
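The final listing is missing from the text; the following is a self-contained sketch assembling the pieces described in this article. The package name, view IDs, and function names are assumptions:

```kotlin
package com.example.speechtotext  // replace with your own package name

import android.Manifest
import android.content.Intent
import android.content.pm.PackageManager
import android.net.Uri
import android.os.Build
import android.os.Bundle
import android.provider.Settings
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer
import android.widget.Button
import android.widget.TextView
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        // Make sure the RECORD_AUDIO permission is granted
        checkPermission()

        // Start listening when the button is pressed
        findViewById<Button>(R.id.btnStart).setOnClickListener {
            startSpeechToText()
        }
    }

    private fun checkPermission() {
        // Offline speech to text needs API level 23 (M) or above
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
            if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
                != PackageManager.PERMISSION_GRANTED
            ) {
                // Open this app's settings page so the user can grant it manually
                val intent = Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS)
                intent.data = Uri.parse("package:com.example.speechtotext")
                startActivity(intent)
            }
        }
    }

    private fun startSpeechToText() {
        val speechRecognizer = SpeechRecognizer.createSpeechRecognizer(this)
        val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH)
        intent.putExtra(
            RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
        )
        speechRecognizer.setRecognitionListener(object : RecognitionListener {
            override fun onReadyForSpeech(params: Bundle?) {}
            override fun onBeginningOfSpeech() { /* listening is about to start */ }
            override fun onRmsChanged(rmsdB: Float) {}
            override fun onBufferReceived(buffer: ByteArray?) {}
            override fun onEndOfSpeech() { /* user stopped speaking */ }
            override fun onError(error: Int) {}
            override fun onResults(results: Bundle?) {
                // Index 0 of the returned list is the best transcription
                val matches =
                    results?.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                if (!matches.isNullOrEmpty()) {
                    findViewById<TextView>(R.id.tvOutput).text = matches[0]
                }
            }
            override fun onPartialResults(partialResults: Bundle?) {}
            override fun onEvent(eventType: Int, params: Bundle?) {}
        })
        speechRecognizer.startListening(intent)
    }
}
```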