Augmented Faces with ARCore in Android

Augmented Faces lets an application automatically identify different regions of a detected face and use those regions to overlay assets such as textures and 3D models in a way that properly matches the contours of the individual face. ARCore is a platform for building augmented reality applications on Android. Augmented Faces is a subsystem of ARCore that permits your application to:

  • Automatically identify different regions of a detected face, and use those regions to overlay assets such as textures and 3D models in a way that properly matches the contours of the face.
  • Use the 468-point face mesh provided by ARCore to apply a custom texture over a detected face.

For example, we can create effects such as animated masks, glasses, and virtual hats, perform skin retouching, or even build the next Snapchat-style app.

How Does it All Work?

Augmented Faces doesn't require special hardware such as a depth sensor. Instead, it uses the phone's standard camera and machine learning to provide three pieces of information:

  1. Generates a face mesh: a dense 3D mesh of 468 points, which allows you to paint detailed textures that accurately follow facial movements.
  2. Recognizes the pose: points on a person's face, anchored relative to the generated face mesh, which are useful for placing effects on or close to the forehead and nose.
  3. Overlays and positions textures and 3D models based on the generated face mesh and the recognized regions.

How Is ARCore Able to Provide a 3D Face Mesh from a 2D Image Without Any Depth Hardware?

It uses machine learning models built on top of the TensorFlow Lite platform to achieve this, and the entire pipeline is optimized to run on the device in real time. It uses a technique called transfer learning, wherein the neural network is trained for two objectives: predicting 3D vertices and predicting 2D contours. To predict 3D vertices, the network is trained on a synthetic 3D dataset, and this network is then used as the starting point for the next stage of training.

In the next stage, an annotated real-world dataset is used to train the model for 2D contour prediction. The resulting network not only predicts 3D vertices from synthetic data but also performs well on real 2D images. To make sure the solution works for everyone, the ARCore developers train the network with geographically diverse datasets so that it works for all types of faces, wider faces, taller faces, and all skin tones.

To enable these complex algorithms on mobile devices, ARCore has multiple adaptive algorithms built in. These algorithms dynamically sense how long previous images took to process and adjust various parameters of the pipeline accordingly. ARCore also uses multiple ML models, one optimized for higher quality and one optimized for higher performance when computing resources are constrained. It additionally adjusts pipeline parameters such as the inference rate, so that it skips a few images and replaces them with interpolated data instead. With all these techniques, the user gets a full frame-rate experience: ARCore provides the face mesh and region poses at the full camera frame rate while handling all of this internally.

Identifying an Augmented Face Mesh

To properly overlay textures and 3D models on a detected face, ARCore provides detected regions and an augmented face mesh. This mesh is a virtual representation of the face and consists of the vertices, the facial regions, and the center of the user's head. When a user's face is detected by the camera, ARCore performs the following steps to generate the augmented face mesh, as well as the center and region poses:

  • It detects the center pose and a face mesh.
    • The center pose, located behind the nose, is the physical center point of the user's head (in other words, inside the skull).
    • The face mesh consists of hundreds of vertices that make up the face and is defined relative to the center pose.

  • The AugmentedFace class uses the face mesh and center pose to identify the face regions present on the user's face. These regions are:
    • Right forehead (FOREHEAD_RIGHT)
    • Left forehead (FOREHEAD_LEFT)
    • Tip of the nose (NOSE_TIP)

The face mesh, center pose, and face region poses are used by the AugmentedFace APIs as positioning points and regions to place the assets in your app, as the sketch below illustrates.

468 point face texture mesh
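
Here is a minimal sketch, not part of this article's project code, of how an app can read these positioning points from an AugmentedFace on each frame. The wrapper class and method are hypothetical, but getCenterPose(), getMeshVertices(), and getRegionPose() are the AugmentedFace accessors referred to above.

Java

import com.google.ar.core.AugmentedFace;
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;
import com.google.ar.core.TrackingState;
import java.nio.FloatBuffer;
import java.util.Collection;

public class FaceMeshReader {

    // Reads the center pose, mesh vertices, and region poses of every tracked face.
    void readFaces(Frame frame) {
        Collection<AugmentedFace> faces = frame.getUpdatedTrackables(AugmentedFace.class);
        for (AugmentedFace face : faces) {
            if (face.getTrackingState() != TrackingState.TRACKING) {
                continue;
            }

            // Center pose: located behind the nose, inside the user's head.
            Pose centerPose = face.getCenterPose();

            // 468-point mesh: x, y, z vertex coordinates relative to the center pose.
            FloatBuffer vertices = face.getMeshVertices();

            // Region poses used to attach content near facial landmarks.
            Pose noseTip = face.getRegionPose(AugmentedFace.RegionType.NOSE_TIP);
            Pose leftForehead = face.getRegionPose(AugmentedFace.RegionType.FOREHEAD_LEFT);
            Pose rightForehead = face.getRegionPose(AugmentedFace.RegionType.FOREHEAD_RIGHT);

            // Example: the world-space position of the nose tip.
            float[] noseTipPosition = noseTip.getTranslation();
        }
    }
}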

Reference Terminologies

  • Trackable: An interface for something that ARCore can track and to which Anchors can be attached.
  • Anchor: Describes a fixed location and orientation in the real world. To stay at a fixed location in physical space, the numerical description of this position is updated as ARCore's understanding of the space improves. Anchors are hashable and may, for example, be used as keys in HashMaps.
  • Pose: When you want to state where in the scene to place an object, you specify that location in terms of the scene's coordinates. The Pose is how you express this position and orientation.
  • Session: Manages the AR system state and handles the session lifecycle. This class is the main entry point to the ARCore API. It lets you create a session, configure it, start or stop it and, above all, receive frames that give access to the camera image and device pose (see the sketch after this list).
  • Textures: Textures are especially helpful for Augmented Faces. They let you create a light overlay that lines up with the regions of the detected face(s) to add to your experience.
  • ArFragment: Sceneform provides an ArFragment that handles a lot of setup for you, for example plane finding, permission handling, and camera setup. You can use the fragment directly in your activity, but whenever you need custom features, such as Augmented Faces, you should extend ArFragment and apply the appropriate settings. This fragment is the layer that hides all the complex details (such as OpenGL and rendering models) and provides high-level APIs to load and render 3D models.
  • ModelRenderable: ModelRenderable renders a 3D model by attaching it to a Node.
  • Sceneform SDK: Sceneform is a library for Android that enables the quick creation and integration of AR experiences in your app. It combines ARCore and a powerful physically based 3D renderer.
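
To tie some of these terms together, here is a small illustrative sketch, assuming ARCore is installed and a valid Activity is available, that creates a Session on the front camera and configures it for Augmented Faces. The helper class is hypothetical; Session, Config, and AugmentedFaceMode are the ARCore types described above, and the project below does the same configuration inside a custom ArFragment.

Java

import android.app.Activity;
import com.google.ar.core.Config;
import com.google.ar.core.Session;
import com.google.ar.core.exceptions.UnavailableException;
import java.util.EnumSet;

public class ArSessionHelper {

    // Creates a Session on the front (selfie) camera, which Augmented Faces requires,
    // and enables the 3D face mesh subsystem on it.
    Session createFaceSession(Activity activity) throws UnavailableException {
        Session session = new Session(activity, EnumSet.of(Session.Feature.FRONT_CAMERA));

        Config config = new Config(session);
        config.setAugmentedFaceMode(Config.AugmentedFaceMode.MESH3D);
        session.configure(config);
        return session;
    }
}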

Example Project

We are going to create Snapchat-, Instagram-, and TikTok-style face filters. A sample GIF is given below to give an idea of what we are going to build in this article. Note that we are going to implement this project using the Java language.

Augmented Faces with ARCore in Android Sample GIF

Step 1: Create a New Project

To create a new project in Android Studio, please refer to How to Create/Start a New Project in Android Studio. Note that you should select Java as the programming language.

Step 2: Adding the asset files used in this example

Add any 3D model to the sampledata/models folder. We can do this by creating a new folder in the project directory or directly from Android Studio. The allowed 3D model extensions are .fbx, .obj, and .gltf. There are many free models available on the internet. You can download the assets used in this example from here. Please refer to this article to create a raw folder in Android Studio. Then copy and paste the fox_face.sfb file into the raw folder. Similarly, copy and paste the fox_face_mesh_texture.png file into the drawable folder.

Step 3: Adding dependencies to the build.gradle(:app) file

Add the following dependencies to the build.gradle(:app) file. 

// Provides ARCore Session and related resources.
implementation 'com.google.ar:core:1.16.0'

// Provides ArFragment, and other UX resources.
implementation 'com.google.ar.sceneform.ux:sceneform-ux:1.15.0'

// Alternatively, use ArSceneView without the UX dependency.
implementation 'com.google.ar.sceneform:core:1.15.0'

Add the following code snippet to the same build.gradle(:app) file. This is required (only once) to convert the .fbx asset into .sfb and save it in the raw folder. Alternatively, you can add the converted files yourself, as done in Step 2. Note that sceneform.asset() is provided by the Sceneform Gradle plugin, so the plugin must also be applied in this file (apply plugin: 'com.google.ar.sceneform.plugin').

// Required (only once) to convert the .fbx asset into .sfb
// and save it in the raw folder
sceneform.asset('sampledata/models/fox_face.fbx',
        'default',
        'sampledata/models/fox_face.sfa',
        'src/main/res/raw/fox_face')

Step 4: Adding dependencies to the build.gradle(:project) file

Add the following dependencies to the build.gradle(:project) file. 

// Add the Sceneform plugin classpath to the
// project-level build.gradle file
classpath 'com.google.ar.sceneform:plugin:1.15.0'

Step 5: Working with the AndroidManifest.xml file

Add the following lines to the AndroidManifest.xml file.

<!-- Both "AR Optional" and "AR Required" apps require the CAMERA permission. -->
<uses-permission android:name="android.permission.CAMERA" />

<!-- Indicates that the app requires ARCore ("AR Required"). Ensures the app is only
     visible in the Google Play Store on devices that support ARCore.
     For "AR Optional" apps, remove this line. -->
<uses-feature android:name="android.hardware.camera.ar" android:required="true" />

<application>

   ...

   <!-- Indicates that the app requires ARCore ("AR Required"). Causes the Google
        Play Store to download and install ARCore along with the app.
        For an "AR Optional" app, specify "optional" instead of "required". -->
   <meta-data android:name="com.google.ar.core" android:value="required" />

   ...

</application>

Below is the complete code for the AndroidManifest.xml file.

XML




<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.arsnapchat">
 
    <uses-feature
        android:name="android.hardware.camera"
        android:required="true" />
 
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.INTERNET" />
 
    <uses-feature
        android:name="android.hardware.camera.ar"
        android:required="true" />
    <uses-feature android:name="android.hardware.camera.autofocus" />
     
    <uses-feature
        android:glEsVersion="0x00020000"
        android:required="true" />
 
    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">
        <meta-data
            android:name="com.google.ar.core"
            android:value="required" />
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
 
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
 
</manifest>


 
Step 6: Modify the activity_main.xml file

We have added a fragment to the activity_main.xml file. Below is the code for the activity_main.xml file. 

XML




<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">
 
    <fragment
        android:id="@+id/arFragment"
        android:name="com.example.arsnapchat.CustomArFragment"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />
 
</androidx.constraintlayout.widget.ConstraintLayout>


Note: 

If your package name is different, replace com.example.arsnapchat with your own package name in this attribute:

android:name="com.example.arsnapchat.CustomArFragment"

Step 7: Create a new Java class

Create a new Java class named CustomArFragment that extends ArFragment. Below is the code for the CustomArFragment.java file.

Java




package com.example.arsnapchat;

import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.FrameLayout;
import androidx.annotation.Nullable;
import com.google.ar.core.Config;
import com.google.ar.core.Session;
import com.google.ar.sceneform.ux.ArFragment;
import java.util.EnumSet;
import java.util.Set;
 
public class CustomArFragment extends ArFragment {
    @Override
    protected Config getSessionConfiguration(Session session) {
        Config config = new Config(session);
 
        // Configure 3D Face Mesh
        config.setAugmentedFaceMode(Config.AugmentedFaceMode.MESH3D);
        this.getArSceneView().setupSession(session);
        return config;
    }
 
    @Override
    protected Set<Session.Feature> getSessionFeatures() {
        // Configure Front Camera
        return EnumSet.of(Session.Feature.FRONT_CAMERA);
    }
 
    // Override to turn off the planeDiscoveryController.
    // Plane trackables are not supported with the front camera.
    @Override
    public View onCreateView(LayoutInflater inflater, @Nullable ViewGroup container, @Nullable Bundle savedInstanceState) {
        FrameLayout frameLayout = (FrameLayout) super.onCreateView(inflater, container, savedInstanceState);
        getPlaneDiscoveryController().hide();
        getPlaneDiscoveryController().setInstructionView(null);
        return frameLayout;
    }
}


Step 8: Modify the MainActivity.java file

Below is the code for the  MainActivity.java file. Comments are added inside the code to understand the code in more detail.

Java




package com.example.arsnapchat;

import android.os.Bundle;
import android.widget.Toast;
import androidx.appcompat.app.AppCompatActivity;
import com.google.ar.core.AugmentedFace;
import com.google.ar.core.Frame;
import com.google.ar.core.TrackingState;
import com.google.ar.sceneform.rendering.ModelRenderable;
import com.google.ar.sceneform.rendering.Renderable;
import com.google.ar.sceneform.rendering.Texture;
import com.google.ar.sceneform.ux.AugmentedFaceNode;
import java.util.Collection;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
 
public class MainActivity extends AppCompatActivity {
    private ModelRenderable modelRenderable;
    private Texture texture;
    private boolean isAdded = false;
    private final HashMap<AugmentedFace, AugmentedFaceNode> faceNodeMap = new HashMap<>();
 
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
 
        CustomArFragment customArFragment = (CustomArFragment) getSupportFragmentManager().findFragmentById(R.id.arFragment);
 
        // Use ModelRenderable.Builder to load the *.sfb
        // models at runtime.
        // Load the face regions renderable.
        // To ensure that the asset doesn't cast or receive
        // shadows in the scene, ensure that setShadowCaster
        // and setShadowReceiver are both set to false.
        ModelRenderable.builder()
                .setSource(this, R.raw.fox_face)
                .build()
                .thenAccept(renderable -> {
                    this.modelRenderable = renderable;
                    this.modelRenderable.setShadowCaster(false);
                    this.modelRenderable.setShadowReceiver(false);
 
                })
                .exceptionally(throwable -> {
                    Toast.makeText(this, "error loading model", Toast.LENGTH_SHORT).show();
                    return null;
                });
 
        // Load the face mesh texture.(2D texture on face)
        // Save the texture(.png file) in drawable folder.
        Texture.builder()
                .setSource(this, R.drawable.fox_face_mesh_texture)
                .build()
                .thenAccept(textureModel -> this.texture = textureModel)
                .exceptionally(throwable -> {
                    Toast.makeText(this, "cannot load texture", Toast.LENGTH_SHORT).show();
                    return null;
                });
 
        assert customArFragment != null;
 
        // This is important to make sure that the camera
        // stream renders first so that the face mesh
        // occlusion works correctly.
        customArFragment.getArSceneView().setCameraStreamRenderPriority(Renderable.RENDER_PRIORITY_FIRST);
        customArFragment.getArSceneView().getScene().addOnUpdateListener(frameTime -> {
            if (modelRenderable == null || texture == null) {
                return;
            }
            Frame frame = customArFragment.getArSceneView().getArFrame();
            assert frame != null;
 
            // Render the effect for the face Rendering the effect involves these steps:
            // 1.Create the Sceneform face node.
            // 2.Add the face node to the Sceneform scene.
            // 3.Set the face region Renderable. Extracting the face mesh and
            // rendering the face effect is added to a listener on
            // the scene that gets called on every processed camera frame.
            Collection<AugmentedFace> augmentedFaces = frame.getUpdatedTrackables(AugmentedFace.class);
 
            // Make a new AugmentedFaceNode for any newly detected face.
            for (AugmentedFace augmentedFace : augmentedFaces) {
                if (isAdded) break;

                AugmentedFaceNode augmentedFaceNode = new AugmentedFaceNode(augmentedFace);
                augmentedFaceNode.setParent(customArFragment.getArSceneView().getScene());
                augmentedFaceNode.setFaceRegionsRenderable(modelRenderable);
                augmentedFaceNode.setFaceMeshTexture(texture);
                faceNodeMap.put(augmentedFace, augmentedFaceNode);
                isAdded = true;
            }

            // Remove any AugmentedFaceNode associated with an
            // AugmentedFace that has stopped tracking, so the
            // effect can be attached to the next detected face.
            Iterator<Map.Entry<AugmentedFace, AugmentedFaceNode>> iterator = faceNodeMap.entrySet().iterator();
            while (iterator.hasNext()) {
                Map.Entry<AugmentedFace, AugmentedFaceNode> entry = iterator.next();
                AugmentedFace face = entry.getKey();
                if (face.getTrackingState() == TrackingState.STOPPED) {
                    AugmentedFaceNode node = entry.getValue();
                    node.setParent(null);
                    iterator.remove();
                    isAdded = false;
                }
            }
        });
    }
}


Output: Run on a Physical Device

Github Project Link: https://github.com/raghavtilak/AugmentedFaces

Limitations of ARCore

  1. Augmented Faces only works with the front camera.
  2. Not all devices support ARCore. There is still a small fraction of devices that don't come with ARCore support. You can check the list of ARCore-supported devices at https://developers.google.com/ar/discover/supported-devices (a runtime check is sketched after this list).
  3. For an AR Optional app, minSdkVersion should be at least 14, and for an AR Required app, minSdkVersion should be at least 24.
  4. If your app falls into the AR Required category, the device running it must have ARCore installed.
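
As a companion to point 2, here is an illustrative sketch, with a made-up helper class, of checking at runtime whether the current device supports ARCore using the ArCoreApk API.

Java

import android.content.Context;
import com.google.ar.core.ArCoreApk;

public class ArCoreSupportChecker {

    // Returns true if ARCore is supported on this device
    // (already installed, or installable from the Play Store).
    boolean isArCoreSupported(Context context) {
        ArCoreApk.Availability availability = ArCoreApk.getInstance().checkAvailability(context);
        if (availability.isTransient()) {
            // The check is still running (for example, querying the Play Store);
            // a real app should re-check after a short delay.
            return false;
        }
        return availability.isSupported();
    }
}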

Notes:

  1. Before creating a Session, it must be verified that ARCore is installed and up to date. If ARCore isn't installed, session creation fails, and any later installation or upgrade of ARCore requires an app restart and might cause the app to be killed (a minimal installation check is sketched after these notes).
  2. The orientation of the face mesh is different for Unreal, Android, and Unity.
  3. Calling Trackable.createAnchor(Pose) would result in an IllegalStateException because Augmented Faces supports only the front-facing (selfie) camera and does not support attaching anchors.
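
The sketch below shows one way to perform the check from note 1 with ARCore's ArCoreApk API; the helper class and method names are made up for this example. It asks ARCore to install or update itself if needed and only creates the Session once ARCore reports that it is installed. A typical place to call it is onResume(), before the session is used.

Java

import android.app.Activity;
import android.widget.Toast;
import com.google.ar.core.ArCoreApk;
import com.google.ar.core.Session;
import com.google.ar.core.exceptions.UnavailableUserDeclinedInstallationException;

public class ArCoreInstallHelper {

    // Set to false after prompting once so the user is not asked repeatedly.
    private boolean userRequestedInstall = true;

    // Returns a ready Session, or null if ARCore is not yet installed/updated.
    Session maybeCreateSession(Activity activity) {
        try {
            switch (ArCoreApk.getInstance().requestInstall(activity, userRequestedInstall)) {
                case INSTALL_REQUESTED:
                    // The install/update flow was started; onResume() runs again afterwards.
                    userRequestedInstall = false;
                    return null;
                case INSTALLED:
                    return new Session(activity);
            }
        } catch (UnavailableUserDeclinedInstallationException e) {
            Toast.makeText(activity, "ARCore is required for this app", Toast.LENGTH_LONG).show();
        } catch (Exception e) {
            Toast.makeText(activity, "ARCore session could not be created", Toast.LENGTH_LONG).show();
        }
        return null;
    }
}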

