Flutter – On-Device Machine Learning using ML Kit

Last Updated : 08 Feb, 2022

Machine Learning is widely used in projects today, but creating a model requires Machine Learning expertise that many application developers who simply want to add ML features do not have. For Flutter, Google has built tooling that helps such developers create the applications they want without training their own models. One option is the google_ml_kit package which, at the time of writing, is available only for Android. In this article, we will use ML Kit.

ML Kit:

ML Kit is created by Google so that developers can easily build mobile applications that involve machine learning. It can be used for text recognition, face and pose detection, barcode scanning, etc. We are going to create an application that detects the items in an image and labels them.

First, let us look at the application we are going to create in this tutorial – 

Features:

  1. Capture the image
  2. Preprocess the image
  3. Identify the items in it with their label, index, and confidence

Install the dependencies:

Include the google_ml_kit package in the pubspec.yaml file of the app in order to use the ML Kit features provided by Google.

Dart




google_ml_kit: ^0.3.0


To capture an image we first need a screen that shows the camera preview and takes the picture. For that we use Flutter's camera package.

Dart




camera: ^0.8.1


Fetch both packages by running flutter pub get.

Create camera_screen.dart:

Import the camera package in camera_screen.dart and initialize the CameraController. The list of available cameras is fetched in main.dart (shown later), and we select the first camera from that list to take pictures.

Dart




late final CameraController _controller;
  
void _initializeCamera() async {
    final CameraController cameraController = CameraController(
      cameras[0],
      ResolutionPreset.high,
    );
    _controller = cameraController;
  
    _controller.initialize().then((_) {
      if (!mounted) {
        return;
      }
      setState(() {});
    });
  }


Call this method inside initState() so that the camera starts when the screen is created:

Dart




@override
void initState() {
  _initializeCamera();
  super.initState();
}


When this screen is removed from the widget tree, we need to dispose of the _controller to avoid memory leaks:

Dart




@override
void dispose() {
  _controller.dispose();
  super.dispose();
}


Now, to take pictures, we create another function, _takePicture(), which is invoked when the capture button is pressed. It first checks whether the CameraController is initialized and whether a capture is already in progress, then takes the picture and returns the path where the image is stored on the device. If a CameraException occurs, it is caught and null is returned. After capturing the image we navigate to another screen, detail_screen.dart, so create that file to show the results.

Dart




Future<String?> _takePicture() async {
    if (!_controller.value.isInitialized) {
      print("Controller is not initialized");
      return null;
    }
  
    String? imagePath;
  
    if (_controller.value.isTakingPicture) {
      print("Processing is progress ...");
      return null;
    }
  
    try {
        
      // Turning off the camera flash
      await _controller.setFlashMode(FlashMode.off);
        
      // Returns the image in cross-platform file abstraction
      final XFile file = await _controller.takePicture();
        
      // Retrieving the path
      imagePath = file.path;
    } on CameraException catch (e) {
      print("Camera Exception: $e");
      return null;
    }
  
    return imagePath;
  }


Full Code for camera_screen.dart:

Dart




import 'package:camera/camera.dart';
import 'package:flutter/material.dart';
import 'package:flutter_mlkit_vision/main.dart';
  
import 'detail_screen.dart';
  
class CameraScreen extends StatefulWidget {
  @override
  _CameraScreenState createState() => _CameraScreenState();
}
  
class _CameraScreenState extends State<CameraScreen> {
  late final CameraController _controller;
  
  // Initializes camera controller to preview on screen
  void _initializeCamera() async {
    final CameraController cameraController = CameraController(
      cameras[0],
      ResolutionPreset.high,
    );
    _controller = cameraController;
  
    _controller.initialize().then((_) {
      if (!mounted) {
        return;
      }
      setState(() {});
    });
  }
  
  // Takes picture with the selected device camera, and
  // returns the image path
  Future<String?> _takePicture() async {
    if (!_controller.value.isInitialized) {
      print("Controller is not initialized");
      return null;
    }
  
    String? imagePath;
  
    if (_controller.value.isTakingPicture) {
      print("Processing is progress ...");
      return null;
    }
  
    try {
        
      // Turning off the camera flash
      await _controller.setFlashMode(FlashMode.off);
        
      // Returns the image in cross-platform file abstraction
      final XFile file = await _controller.takePicture();
        
      // Retrieving the path
      imagePath = file.path;
    } on CameraException catch (e) {
      print("Camera Exception: $e");
      return null;
    }
  
    return imagePath;
  }
  
  @override
  void initState() {
    _initializeCamera();
    super.initState();
  }
  
  @override
  void dispose() {
      
    // dispose the camera controller when navigated
    // to a different page
    _controller.dispose();
    super.dispose();
  }
  
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('GeeksForGeeks'),
      ),
      body: _controller.value.isInitialized
          ? Stack(
              children: <Widget>[
                CameraPreview(_controller),
                Padding(
                  padding: const EdgeInsets.all(20.0),
                  child: Container(
                    alignment: Alignment.bottomCenter,
                    child: ElevatedButton.icon(
                      icon: Icon(Icons.camera),
                      label: Text("Click"),
                      onPressed: () async {
                          
                        // If the returned path is not null navigate
                        // to the DetailScreen
                        await _takePicture().then((String? path) {
                          if (path != null) {
                            Navigator.push(
                              context,
                              MaterialPageRoute(
                                builder: (context) => DetailScreen(
                                  imagePath: path,
                                ),
                              ),
                            );
                          } else {
                            print('Image path not found!');
                          }
                        });
                      },
                    ),
                  ),
                )
              ],
            )
          : Container(
              color: Colors.black,
              child: Center(
                child: CircularProgressIndicator(),
              ),
            ),
    );
  }
}


Work on detail_screen.dart:

First, we need to process the captured image and get its size (imageSize) so that it can be displayed on the screen with the correct aspect ratio.

Dart




Future<void> _getImageSize(File imageFile) async {
    final Completer<Size> completer = Completer<Size>();
  
    final Image image = Image.file(imageFile);
    image.image.resolve(const ImageConfiguration()).addListener(
      ImageStreamListener((ImageInfo info, bool _) {
        completer.complete(Size(
          info.image.width.toDouble(),
          info.image.height.toDouble(),
        ));
      }),
    );
  
    final Size imageSize = await completer.future;
    setState(() {
      _imageSize = imageSize;
    });
  }
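
If you prefer not to go through an ImageStreamListener, the image size can also be read by decoding the file bytes directly. The sketch below is an alternative we add for illustration (the helper name _getImageSizeDirect is ours, not part of the original code); decodeImageFromList is available through package:flutter/material.dart.

Dart

Future<Size> _getImageSizeDirect(File imageFile) async {
  // Decode the raw bytes into an image just to read its dimensions
  final image = await decodeImageFromList(await imageFile.readAsBytes());
  return Size(image.width.toDouble(), image.height.toDouble());
}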


Initialize the imageLabeler from GoogleMlKit in the class.

Dart




final imageLabeler = GoogleMlKit.vision.imageLabeler();


Now, we will create another function, _recognizeImage(). It first fetches the image size and then builds an InputImage from the file path. All the detected labels are stored in a list of type List<ImageLabel>. Then, through a for loop, we retrieve the label, index, and confidence of each ImageLabel and store them in their respective lists – imagesData, indexData, and confidenceData. Finally, setState() is called so that the UI is rebuilt once the results are available.

Dart




void _recognizeImage() async {
    _getImageSize(File(_imagePath));
    final inputImage = InputImage.fromFilePath(_imagePath);
    final List<ImageLabel> labels = await imageLabeler.processImage(inputImage);
  
    for (ImageLabel label in labels) {
        
      // Retrieve label, index, and confidence from each ImageLabel
      final String item = label.label;
      final int index = label.index;
      final double confidence = label.confidence;
      imagesData.add(item);
      indexData.add(index.toString());
      confidenceData.add(confidence.toString());
    }
  
    // Rebuild the UI once the labels are available
    setState(() {});
  }
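
Note that label.confidence is a double between 0 and 1. If you would rather display it as a percentage, you can format it before adding it to confidenceData – this is purely a presentation choice on our part, not something the original code does:

Dart

// Store the confidence as a percentage string, e.g. "87.3 %"
confidenceData.add('${(confidence * 100).toStringAsFixed(1)} %');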


Now, it’s time to display results on the screen.

Full Code for detail_screen.dart:

Dart




import 'dart:async';
import 'dart:io';
import 'package:flutter/material.dart';
import 'package:google_ml_kit/google_ml_kit.dart';
  
class DetailScreen extends StatefulWidget {
  final String imagePath;
  
  const DetailScreen({required this.imagePath});
  
  @override
  _DetailScreenState createState() => _DetailScreenState();
}
  
class _DetailScreenState extends State<DetailScreen> {
  late final String _imagePath;
  final imageLabeler = GoogleMlKit.vision.imageLabeler();
  Size? _imageSize;
  List<String> imagesData = [];
  List<String> indexData = [];
  List<String> confidenceData = [];
  
  // Fetching the image size from the image file
  Future<void> _getImageSize(File imageFile) async {
    final Completer<Size> completer = Completer<Size>();
  
    final Image image = Image.file(imageFile);
    image.image.resolve(const ImageConfiguration()).addListener(
      ImageStreamListener((ImageInfo info, bool _) {
        completer.complete(Size(
          info.image.width.toDouble(),
          info.image.height.toDouble(),
        ));
      }),
    );
  
    final Size imageSize = await completer.future;
    setState(() {
      _imageSize = imageSize;
    });
  }
  
  void _recognizeImage() async {
    _getImageSize(File(_imagePath));
    final inputImage = InputImage.fromFilePath(_imagePath);
    final List<ImageLabel> labels = await imageLabeler.processImage(inputImage);
  
    for (ImageLabel label in labels) {
        
      // Retrieve label, index, and confidence from each ImageLabel
      final String item = label.label;
      final int index = label.index;
      final double confidence = label.confidence;
      imagesData.add(item);
      indexData.add(index.toString());
      confidenceData.add(confidence.toString());
    }
  
    // Rebuild the UI once the labels are available
    setState(() {});
  }
  
  @override
  void initState() {
    _imagePath = widget.imagePath;
  
    // The imageLabeler field is already initialized above,
    // so we can directly start recognizing the captured image
    _recognizeImage();
    super.initState();
  }
  
  @override
  void dispose() {
      
    // Disposing the imageLabeler when not used anymore
    imageLabeler.close();
    super.dispose();
  }
  
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text("Image Details"),
      ),
      body: _imageSize != null
          ? Stack(
              children: [
                Container(
                  width: double.maxFinite,
                  color: Colors.black,
                  child: AspectRatio(
                    aspectRatio: _imageSize!.aspectRatio,
                    child: Image.file(
                      File(_imagePath),
                    ),
                  ),
                ),
                Align(
                  alignment: Alignment.bottomCenter,
                  child: Card(
                    elevation: 8,
                    color: Colors.white,
                    child: Padding(
                      padding: const EdgeInsets.all(16.0),
                      child: Column(
                        mainAxisSize: MainAxisSize.min,
                        crossAxisAlignment: CrossAxisAlignment.start,
                        children: <Widget>[
                          Padding(
                            padding: const EdgeInsets.only(bottom: 8.0),
                            child: Text(
                              "IdentifiedItems  Index   Confidence",
                              style: TextStyle(
                                fontSize: 20,
                                fontWeight: FontWeight.bold,
                              ),
                            ),
                          ),
                          Container(
                            height: 120,
                            child: SingleChildScrollView(
                              child: imagesData.isNotEmpty
                                  ? ListView.builder(
                                      shrinkWrap: true,
                                      physics: BouncingScrollPhysics(),
                                      itemCount: imagesData.length,
                                      itemBuilder: (context, index) {
                                        return Row(
                                          mainAxisAlignment:
                                              MainAxisAlignment.spaceBetween,
                                          children: [
                                            Text(imagesData[index]),
                                            Text(indexData[index]),
                                            Text(confidenceData[index])
                                          ],
                                        );
                                      })
                                  : Container(),
                            ),
                          ),
                        ],
                      ),
                    ),
                  ),
                ),
              ],
            )
          : Container(
              color: Colors.blue,
              child: Center(
                child: CircularProgressIndicator(),
              ),
            ),
    );
  }
}


Now, call CameraScreen in main.dart. Before runApp() is executed, the list of available cameras is fetched with availableCameras() and stored in the global cameras variable that camera_screen.dart uses.

Dart




import 'package:camera/camera.dart';
import 'package:flutter/material.dart';
  
import 'camera_screen.dart';
  
// Global variable for storing the list of cameras available
List<CameraDescription> cameras = [];
  
Future<void> main() async {
    
  // Fetch the available cameras before initializing the app.
  try {
    WidgetsFlutterBinding.ensureInitialized();
    cameras = await availableCameras();
  } on CameraException catch (e) {
    debugPrint('CameraError: ${e.description}');
  }
  runApp(MyApp());
}
  
class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      debugShowCheckedModeBanner: false,
      title: 'Flutter MLKit Vision',
      theme: ThemeData(
        primarySwatch: Colors.green,
      ),
      home: CameraScreen(),
    );
  }
}


Output:


