Realtime Selfie Segmentation In Android With MLKit
source link: https://proandroiddev.com/realtime-selfie-segmentation-in-android-with-mlkit-38637c8502ba
1. 🔨 Adding the dependencies ( CameraX & MLKit ) to build.gradle
In order to add the MLKit Selfie Segmentation feature to our Android app, we need to add a dependency to our app-level build.gradle file.
Note, make sure to use the latest release of the Selfie Segmentation package.
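The original dependency snippet did not survive extraction; a minimal sketch of what it would contain, assuming the bundled (fully on-device) selfie segmentation artifact — the version number here is a placeholder, so check the MLKit release notes for the latest one:

```groovy
dependencies {
    // MLKit Selfie Segmentation (bundled model, runs fully on-device)
    implementation 'com.google.mlkit:segmentation-selfie:16.0.0-beta4'
}
```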
As we’ll perform image segmentation on the live camera feed, we’ll also require a camera library. So, in order to use CameraX, we add the following dependencies in the same file.
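The CameraX snippet is also missing from this copy; a sketch of the usual set of CameraX artifacts, with a placeholder version:

```groovy
dependencies {
    def camerax_version = "1.1.0"
    implementation "androidx.camera:camera-core:$camerax_version"
    implementation "androidx.camera:camera-camera2:$camerax_version"
    implementation "androidx.camera:camera-lifecycle:$camerax_version"
    // camera-view provides the PreviewView used in the next section
    implementation "androidx.camera:camera-view:$camerax_version"
}
```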
Code Snippet 2: Adding CameraX dependencies
Note, make sure to use the latest release of the CameraX package.
Build and sync the project to make sure we’re good to go!
2. 🎥 Adding the PreviewView and initializing the camera feed
In order to display the live camera feed to the user, we’ll use PreviewView from the CameraX package. Thanks to PreviewView, we’ll require minimal setup to get a camera live feed running.
Now, head on to activity_main.xml and delete the TextView which is present there ( the default TextView showing ‘Hello World’ ). Next, add a PreviewView in activity_main.xml.
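The layout snippet is missing here; a minimal sketch of what it could look like, assuming a FrameLayout root ( so the overlay added later can sit on top ) and a hypothetical id of camera_preview_view:

```xml
<FrameLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <androidx.camera.view.PreviewView
        android:id="@+id/camera_preview_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</FrameLayout>
```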
We need to initialize the PreviewView in MainActivity.kt, but first we need to add the CAMERA permission to AndroidManifest.xml like below,
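The manifest declaration itself is a one-liner, placed inside the manifest element:

```xml
<uses-permission android:name="android.permission.CAMERA" />
```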
Now, open up MainActivity.kt and, in the onCreate method, check if the camera permission is granted; if not, request it from the user. To provide a full-screen experience to the user, remove the status bar as well. Also, initialize the PreviewView we created in activity_main.xml.
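The onCreate snippet is missing from this copy; a minimal sketch of the steps just described, assuming the hypothetical view id camera_preview_view and helper names setupCameraProvider / requestCameraPermission:

```kotlin
class MainActivity : AppCompatActivity() {

    private lateinit var previewView: PreviewView

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Remove the status bar for a full-screen experience
        window.setFlags(
            WindowManager.LayoutParams.FLAG_FULLSCREEN,
            WindowManager.LayoutParams.FLAG_FULLSCREEN
        )
        setContentView(R.layout.activity_main)
        previewView = findViewById(R.id.camera_preview_view)

        // Start the camera only if the permission is already granted
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            == PackageManager.PERMISSION_GRANTED
        ) {
            setupCameraProvider()
        } else {
            requestCameraPermission()
        }
    }
}
```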
To request the camera permission, we’ll use ActivityResultContracts.RequestPermission so that the request code is automatically handled by the system. If the permission is denied, we’ll display an AlertDialog to the user,
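The permission-request snippet didn’t survive extraction; a sketch using the Activity Result API as described — the dialog text and button labels are assumptions:

```kotlin
private val cameraPermissionLauncher =
    registerForActivityResult(ActivityResultContracts.RequestPermission()) { isGranted ->
        if (isGranted) {
            setupCameraProvider()
        } else {
            // Permission denied -> explain and let the user retry or leave
            AlertDialog.Builder(this)
                .setTitle("Camera Permission Denied")
                .setMessage("The app requires the camera permission to show the live feed.")
                .setPositiveButton("Grant") { dialog, _ ->
                    dialog.dismiss()
                    requestCameraPermission()
                }
                .setNegativeButton("Close") { dialog, _ ->
                    dialog.dismiss()
                    finish()
                }
                .create()
                .show()
        }
    }

private fun requestCameraPermission() {
    cameraPermissionLauncher.launch(Manifest.permission.CAMERA)
}
```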
Wondering what the setupCameraProvider method would do? It simply starts the live camera feed using the PreviewView we initialized earlier and a CameraSelector object,
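The method body is missing here; a minimal sketch, binding a Preview use-case to the lifecycle and selecting the front camera ( a natural choice for selfie segmentation ):

```kotlin
private fun setupCameraProvider() {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
    cameraProviderFuture.addListener({
        val cameraProvider = cameraProviderFuture.get()
        // Route the camera output to the PreviewView's surface
        val preview = Preview.Builder().build().also {
            it.setSurfaceProvider(previewView.surfaceProvider)
        }
        val cameraSelector = CameraSelector.DEFAULT_FRONT_CAMERA
        cameraProvider.unbindAll()
        cameraProvider.bindToLifecycle(this, cameraSelector, preview)
    }, ContextCompat.getMainExecutor(this))
}
```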
Now, run the app on a device/emulator and grant the camera permission to the app. The camera feed should run as expected. This completes half of our journey, as we still have to display a segmentation map to the user.
3. 📱 Creating an overlay to display the segmentation
In order to display the segmentation over the live camera feed, we’ll need a SurfaceView which will be placed over the PreviewView in activity_main.xml. The camera frames ( as Bitmap ) will be supplied to the overlay so that they can be drawn over the live camera feed. To start, we create a custom View called DrawingOverlay which inherits SurfaceView.
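The class definition is missing from this copy; a minimal sketch of such an overlay — the transparent z-ordered surface setup in init is an assumption about how the overlay is made to sit above the camera feed:

```kotlin
class DrawingOverlay(context: Context, attrs: AttributeSet) :
    SurfaceView(context, attrs), SurfaceHolder.Callback {

    // Set by FrameAnalyser with the segmentation mask of the latest frame
    var maskBitmap: Bitmap? = null

    init {
        // Draw on top of the PreviewView with a transparent background,
        // and allow onDraw to be called on this SurfaceView
        setZOrderOnTop(true)
        holder.setFormat(PixelFormat.TRANSPARENT)
        setWillNotDraw(false)
    }

    override fun surfaceCreated(holder: SurfaceHolder) {}
    override fun surfaceChanged(holder: SurfaceHolder, format: Int, width: Int, height: Int) {}
    override fun surfaceDestroyed(holder: SurfaceHolder) {}
}
```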
We’ll add the above View element in activity_main.xml.
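The layout element would look something like this — the package name and id are assumptions; it goes below the PreviewView so it’s drawn on top:

```xml
<com.example.selfiesegmentation.DrawingOverlay
    android:id="@+id/drawing_overlay"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
```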
Also, we need to initialize the DrawingOverlay in MainActivity.kt, which will help us connect it with the live camera feed:
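A sketch of that initialization in onCreate, assuming the hypothetical id drawing_overlay:

```kotlin
// Look up the overlay so FrameAnalyser can push segmentation masks to it
drawingOverlay = findViewById(R.id.drawing_overlay)
```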
4. 🎦 Getting live camera frames using ImageAnalysis.Analyzer
In order to perform segmentation and display the output to the user, we first need a way to get the camera frames from the live feed. Going through the CameraX documentation, you’ll notice that we have to use ImageAnalysis.Analyzer in order to get the camera frames as android.media.Image, which can be converted to our favorite Bitmaps.
We then create a new class, FrameAnalyser.kt, which implements ImageAnalysis.Analyzer and takes the DrawingOverlay as an argument in its constructor. We’ll discuss this in the next section, as this will help us connect the DrawingOverlay with the live camera feed.
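The class skeleton is missing here; a minimal sketch of what it would look like before the segmentation logic is added:

```kotlin
class FrameAnalyser(private val drawingOverlay: DrawingOverlay) : ImageAnalysis.Analyzer {

    override fun analyze(image: ImageProxy) {
        // Segmentation logic goes here (next section).
        // Note: image.close() must be called once the frame is processed,
        // otherwise CameraX stops delivering new frames.
    }
}
```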
5. 💻 Setting up MLKit’s Segmenter on the live camera feed
We’ll finally initialize the Segmenter, which will segment the images for us. For every image-based service in MLKit, you need to convert the input image ( which can be a Bitmap, InputStream or Image ) to an InputImage, which comes from the MLKit package. All the above-mentioned logic will be executed in FrameAnalyser’s analyze() method. We’ll use the InputImage.fromMediaImage method to directly use the Image object provided by the analyze method.
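The analyze() snippet is missing from this copy; a sketch of the flow just described, using MLKit’s streaming selfie segmenter — maskToBitmap is a hypothetical helper that converts the result’s ByteBuffer into a Bitmap:

```kotlin
// STREAM_MODE is intended for live camera feeds
private val options = SelfieSegmenterOptions.Builder()
    .setDetectorMode(SelfieSegmenterOptions.STREAM_MODE)
    .build()
private val segmenter = Segmentation.getClient(options)

@androidx.camera.core.ExperimentalGetImage
override fun analyze(imageProxy: ImageProxy) {
    val mediaImage = imageProxy.image ?: run { imageProxy.close(); return }
    // Wrap the camera frame, preserving its rotation
    val inputImage = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
    segmenter.process(inputImage)
        .addOnSuccessListener { segmentationMask ->
            // maskToBitmap: hypothetical helper converting the mask buffer to a Bitmap
            drawingOverlay.maskBitmap = maskToBitmap(segmentationMask)
            drawingOverlay.invalidate()
        }
        .addOnCompleteListener {
            // Always release the frame so CameraX can deliver the next one
            imageProxy.close()
        }
}
```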
In the above code snippet, we convert segmentationMask, which is a ByteBuffer, to a Bitmap. Finally, we assign the value of this Bitmap to the maskBitmap variable present in DrawingOverlay. We also call drawingOverlay.invalidate() to refresh the overlay. This calls the onDraw method of the DrawingOverlay class, where we will display the segmentation Bitmap to the user in a later section.
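The conversion itself is missing from this copy; a sketch of one way to do it, assuming the mask buffer holds one Float confidence value per pixel and that we tint foreground pixels with a translucent color — the threshold and color are assumptions:

```kotlin
// Hypothetical helper: turns the SegmentationMask's ByteBuffer into an ARGB Bitmap
private fun maskToBitmap(mask: SegmentationMask): Bitmap {
    val width = mask.width
    val height = mask.height
    val buffer = mask.buffer
    val colors = IntArray(width * height)
    for (i in colors.indices) {
        // Each float is the foreground ("person") confidence in [0, 1]
        val confidence = buffer.float
        colors[i] = if (confidence > 0.5f) {
            Color.argb(128, 255, 0, 0)   // translucent tint over the person
        } else {
            Color.TRANSPARENT            // background stays see-through
        }
    }
    return Bitmap.createBitmap(colors, width, height, Bitmap.Config.ARGB_8888)
}
```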
This connects the live camera feed to the DrawingOverlay with the help of FrameAnalyser. One last thing: we need to attach FrameAnalyser to the camera in MainActivity.kt,
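A sketch of that attachment inside setupCameraProvider, adding an ImageAnalysis use-case alongside the Preview — the backpressure strategy and executor choice are assumptions:

```kotlin
val frameAnalyser = FrameAnalyser(drawingOverlay)
val imageAnalysis = ImageAnalysis.Builder()
    // Drop stale frames so segmentation always works on the latest one
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()
    .also { it.setAnalyzer(Executors.newSingleThreadExecutor(), frameAnalyser) }

// Bind the analysis use-case together with the preview
cameraProvider.bindToLifecycle(this, cameraSelector, preview, imageAnalysis)
```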
6. 📝 Drawing the Segmentation Bitmap on the DrawingOverlay
As we saw in the implementation of the DrawingOverlay class, there’s a variable maskBitmap which holds the segmentation bitmap for the current frame. Our goal is to draw this Bitmap onto the screen. So, we call canvas.drawBitmap in the onDraw method of our DrawingOverlay,
Also, note that we need to use the flipBitmap method, as we’d otherwise obtain a mirror image of the segmentation ( the front camera feed is mirrored ).
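Both methods are missing from this copy; a sketch of what they could look like — scaling the mask to the overlay’s size before drawing is an assumption:

```kotlin
override fun onDraw(canvas: Canvas) {
    super.onDraw(canvas)
    maskBitmap?.let { mask ->
        // Undo the front camera's mirroring before drawing
        val flipped = flipBitmap(mask)
        // Stretch the mask to cover the whole overlay
        canvas.drawBitmap(
            Bitmap.createScaledBitmap(flipped, width, height, true),
            0f, 0f, null
        )
    }
}

private fun flipBitmap(source: Bitmap): Bitmap {
    // postScale(-1, 1) mirrors the bitmap horizontally
    val matrix = Matrix().apply { postScale(-1f, 1f) }
    return Bitmap.createBitmap(source, 0, 0, source.width, source.height, matrix, false)
}
```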
That’s all, we’re done! Run the app on a physical device and see the magic happen right in front of your eyes!
We’re done
Hope you loved MLKit’s Segmentation API. For any suggestions & queries, feel free to write a message on [email protected] ( including the story’s link/title ). Keep reading, keep learning and have a nice day ahead!