Measuring Render Performance with Jetpack Compose

By Will Shelor, Android Engineer

Photo by Tsvetoslav Hristov on Unsplash

Update: Leland Richardson discussed our blog post with CodeWithTheItalians on Twitch. He had some great insight into why we saw some of the things we saw, and why subsequent launches will be even better on our end users’ devices. Check it out here!

Update 2: Thanks to Leland Richardson for chatting with us about how ART improves this when installed from the play store! We’ve added some notes from our conversation below.

Our team recently embarked on a journey to use Jetpack Compose in production. Compose gives us the flexibility to create a simple, reusable design language with composable elements that can be shared across multiple screens, projects, and teams.

While our team was implementing Compose, we evaluated different considerations that would be important to any team looking to adopt this tool, such as performance, testing, architecture decisions, and build times.

This article intends to explore our learnings related to our analysis of Jetpack Compose performance.

Performance Matters

As the 1.0.0 release of Jetpack Compose approached, our team was ready to get started using it as soon as we found a good product fit.

Fortunately, coinciding with the 1.0.0 launch of Compose, the opportunity arose to implement a new design language in our application.

Compose matched our needs perfectly. It offers consistent design language, puzzle-piece UI components combined in similar yet distinct ways, and a theme that is applied universally across the application.

One question we wanted to answer before we replaced the foundation of our app with Compose, though, was: what impact does using Composables instead of XML have on load time and performance?

Research on this topic was somewhat limited. While some articles existed, most of the information we found either focused on the developer experience, came from statements by Compose's creators at Google, or covered very early versions of Compose. Therefore, we set out to answer this question for ourselves.

What did we measure?

We want to do whatever we can to ensure a good experience for users of our application when working with a new framework. In this article, we’re looking specifically at the time from when they launch a screen to the point where the content is visible to the user.

There are a lot of options for building with Jetpack Compose:

  • What if we build screens entirely in Compose?
  • What if we use Compose views for complex views (such as replacing RecyclerViews with LazyLists) but leave the root in XML?
  • What if we replace many single elements with composables instead of replacing the whole screen?
  • How significantly do debuggability and R8 impact performance?
  • Does Play Services matter?

To begin answering these questions, we built a test application that did very little other than render our experience in a variety of ways. For this initial round, we focused exclusively on rendering a list of 50 elements (of which around a dozen are actually drawn). Each list item includes a radio button and some random text (a sketch follows the screenshot below).

[Figure: The sample app as rendered on a Galaxy S10]
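For reference, a list like this can be written in a few lines of Compose. The following is a minimal sketch of such a 50-item list; the names, styling, and random-text generation here are our own illustration, not the exact code from the test app:

import androidx.compose.foundation.layout.Row
import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.material.MaterialTheme
import androidx.compose.material.RadioButton
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import kotlin.random.Random

// A hypothetical stand-in for the benchmarked list: 50 rows, each with a radio
// button and some random text. Only a handful of rows are composed and drawn at
// once, since LazyColumn composes items on demand.
@Composable
fun ItemList(items: List<String> = List(50) { "Item ${Random.nextInt(1_000)}" }) {
    LazyColumn {
        items(items.size) { index ->
            Row {
                RadioButton(selected = false, onClick = { /* no-op for the benchmark */ })
                Text(text = items[index], style = MaterialTheme.typography.body1)
            }
        }
    }
}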

In order to get a clear picture of the impact of all options before us, we measured this experience with a variety of configurations. The four configurations listed below have R8 enabled and debugging disabled (a rough sketch of these setups follows the list):

  1. Pure Compose, set via the setContent { } helper method (no XML)
  2. An XML file with a base ComposeView
  3. An XML layout with a RecyclerView that binds to a ComposeView
  4. Pure XML (No Compose)
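To make these configurations concrete, here is a rough sketch of how the first three might look, reusing the hypothetical ItemList composable from earlier. Class names, layout files, and view IDs are our own illustration, not the test app's; configuration 4 is just a standard XML layout inflated with setContentView, so it is omitted.

import android.os.Bundle
import android.view.ViewGroup
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.compose.foundation.layout.Row
import androidx.compose.material.RadioButton
import androidx.compose.material.Text
import androidx.compose.ui.platform.ComposeView
import androidx.compose.ui.platform.ViewCompositionStrategy
import androidx.recyclerview.widget.RecyclerView

// Configuration 1: pure Compose, set via the setContent { } helper (no XML).
class PureComposeActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent { ItemList() }
    }
}

// Configuration 2: an XML layout whose root is a ComposeView, e.g.
// <androidx.compose.ui.platform.ComposeView android:id="@+id/compose_root" ... />
class ComposeViewActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_compose_view) // hypothetical layout file
        findViewById<ComposeView>(R.id.compose_root).setContent { ItemList() }
    }
}

// Configuration 3: a classic RecyclerView whose ViewHolders host ComposeViews.
class ComposeItemAdapter(private val items: List<String>) :
    RecyclerView.Adapter<ComposeItemAdapter.ComposeViewHolder>() {

    class ComposeViewHolder(val composeView: ComposeView) : RecyclerView.ViewHolder(composeView)

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): ComposeViewHolder {
        val view = ComposeView(parent.context).apply {
            // Dispose each row's composition along with the view tree's lifecycle.
            setViewCompositionStrategy(ViewCompositionStrategy.DisposeOnViewTreeLifecycleDestroyed)
        }
        return ComposeViewHolder(view)
    }

    override fun onBindViewHolder(holder: ComposeViewHolder, position: Int) {
        holder.composeView.setContent {
            Row {
                RadioButton(selected = false, onClick = { })
                Text(items[position])
            }
        }
    }

    override fun getItemCount() = items.size
}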

We also tested the following configurations to measure the impact of the various build types:

  1. Pure Compose with R8 disabled and debugging enabled
  2. Pure Compose with R8 disabled and debugging disabled
  3. Pure XML with R8 disabled and debugging enabled

Each test was run across a few devices, including recent flagships, a device without Google Play Services, and a few budget phones.

How do we define performance?

The start point is relatively easy here: we want to start at the point where the experience begins to render. For this test application, this will be at the start of onCreate(), just before our super.onCreate() call. After that, there are two points that we want to use as anchors for our tests: when onResume has completed, and after the view has fully rendered onto the screen. (We also have additional data points in our full dataset, included below.)

Activity lifecycle methods can be overridden directly, but to measure view rendering we'll need to add a listener to the ViewTreeObserver of our Fragment's base view and track the time until the last onDraw call. Using the profiler, here's what that looks like (a sketch of the instrumentation follows the screenshots):

[Figure: Profiler trace of the render using XML]

[Figure: Profiler trace of the render using Compose]
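For clarity, here is a minimal sketch of that instrumentation. The real tests lived in a Fragment; this Activity-based version, with our own names and a hypothetical layout, is just an illustration:

import android.os.Bundle
import android.os.SystemClock
import android.util.Log
import android.view.View
import androidx.appcompat.app.AppCompatActivity

class TimedActivity : AppCompatActivity() {

    private var startMs = 0L

    override fun onCreate(savedInstanceState: Bundle?) {
        // Start the clock just before super.onCreate(), as described above.
        startMs = SystemClock.elapsedRealtime()
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_timed) // hypothetical layout

        // Track draw passes on the content view; the time of the last onDraw before
        // the screen settles approximates "content visible to the user".
        val root: View = findViewById(android.R.id.content)
        root.viewTreeObserver.addOnDrawListener {
            Log.d("RenderTiming", "onDraw at ${SystemClock.elapsedRealtime() - startMs} ms")
        }
    }

    override fun onResume() {
        super.onResume()
        Log.d("RenderTiming", "onResume completed at ${SystemClock.elapsedRealtime() - startMs} ms")
    }
}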

The points at which the activity launched, activity creation completed, and activity render completed are outlined above. Each of these tests is run 10 times to give us a picture not only of initial render time, but also of subsequent render times. You can view the full data of these runs here, including additional information such as performance by device and the values of every run.

Test setup specifics:

A few things were important to make sure the results we captured were accurate.

  1. Tests should be as similar as possible between configurations. The only difference between tests happens in onCreate and dictates the rendering method used for the test.
  2. Tests are not debuggable unless noted. Having the debuggable flag enabled significantly impacts performance. Note that the debuggable builds have the debuggable flag set to true, but no debugger is attached during the test.
  3. All tests are built with R8 unless noted. R8 is the Android minifier that replaced ProGuard. The claim has been made that it impacts performance, and we wanted to identify how significant that impact is. (A sketch of this build-type matrix follows the list.)
  4. Tests should run in an isolated environment. For these tests, we will be running a very simple application that does little other than render these screens.
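As a rough illustration of how these variants can be expressed (the build-type names here are ours, not the test project's), the relevant flags live inside the android { } block of the app module's build.gradle.kts:

android {
    buildTypes {
        getByName("release") {
            isMinifyEnabled = true   // R8 enabled
            isDebuggable = false
            proguardFiles(getDefaultProguardFile("proguard-android-optimize.txt"), "proguard-rules.pro")
        }
        create("noR8") {
            isMinifyEnabled = false  // R8 disabled, debuggable flag still off
            isDebuggable = false
        }
        create("noR8Debuggable") {
            isMinifyEnabled = false  // R8 disabled
            isDebuggable = true      // debuggable flag set; no debugger attached during tests
        }
    }
}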

Versions used:

Jetpack Compose: 1.0.0

Experiment 1: Repeatedly rendering a screen

The full data set can be found here; to make the differences easier to see, it includes the results averaged across each of the test devices.

Experiment 1: Conclusions

There are a few things that really stand out from this experiment.

  1. R8 and debuggability make a significant difference in Jetpack Compose render time. In every experiment, builds with R8 disabled and debuggability enabled took more than twice as long to render as builds with R8 enabled and debuggability disabled. On our slowest device, R8 sped the render up by more than half a second, and disabling debuggability sped it up by another half second.
  2. ComposeView and setContent { } took almost exactly the same amount of time. The difference between the two was negligible, which suggests that a ComposeView inside a root layout performs no worse than a ComposeView used as the root object.
  3. ComposeView within RecyclerViews was the slowest. This wasn't much of a surprise: jumping to a ComposeView within XML has a cost, so the fewer ComposeViews used in a screen, the better.
  4. XML was faster at rendering than Compose. No way around this one: in every scenario, Compose took around 33% longer to render than XML. Update: According to Leland Richardson, Compose rendering on launch will be even faster thanks to bundled AOT compilation when the app is installed from Google Play, narrowing this gap even further. We're excited to do more benchmarking here and look forward to sharing our results in future content.
  5. The first launch always took longer to render than subsequent launches. If you take a look at the full results, the first screen always took almost twice as long to render as subsequent screens.

The last point here was a really interesting one. Rendering a ComposeView on a cold launch was much slower than rendering a ComposeView on a hot launch. (Note: to validate that this observation was correct, one additional cold launch was performed on some test devices. That data can be seen in the results spreadsheet.)

To figure out what is happening there, let’s go back to our trusty profiler. Here is the view of launching this experience four times in the profiler:

[Figure: Profiler trace of four consecutive launches of the experience]

Clearly the first render is doing much more than subsequent renders. On one hand, that tells us that internally, Jetpack Compose is pretty good at re-rendering and updating content. However, it also tells us that there is some additional work that has to happen before initial rendering and some of that work takes a significant amount of time.

Update: Leland Richardson discussed our blog post with CodeWithTheItalians on Twitch. He chatted about how the ART compiler plays into this, and how it handles both compilation and precompilation, which also plays into these observations. Check it out here!

Experiment 2: Pre-load Compose?

So, that brings us to the next question: can we bring our initial render time down? To answer it, we ran another experiment, this time adding an intermediary screen and navigating to it first. We ran the experiment both with an intermediary screen that used Compose and with one that did not, to make sure the improvement wasn't caused by the intermediary screen itself.

Two things stand out here.

  1. Both intermediary screens had a net positive impact on our Compose screen. It seems the first activity launched in the application carries with it some overhead, in this case resulting in around 250ms of additional delay.
  2. Having the intermediary screen render Compose views decreases the launch time of subsequent Compose-based screens significantly (in our case, by around another ~150ms compared to the intermediary activity without Compose).

Let’s see if we can figure out why. Here’s the profiler session for the render of our Compose screen with the intermediate XML screen:

[Figure: Profiler trace of the Compose screen rendered after the XML intermediary screen]

Notice the block circled above? This method is EnsureCompositionCreated, which internally creates the Coroutines recomposer and initializes the Compose wrapper. With a profiler and debugger attached, it took ~1 second to complete. Let’s have a look at the same render, but with our Hello Compose activity run before our main screen:

[Figure: Profiler trace of the Compose screen rendered after the Hello Compose intermediary screen]

Notice how the area circled above matches the red circle on the first activity render above? When we look at the second activity launch (the area circled in yellow), that method is not present, and rendering is faster as a result. In this case, the EnsureCompositionCreated method walked the view tree and found the existing composer that the previous activity had created.

This indicates that adding Composables to our application incurs a cost in render time, but that this cost is paid once. Subsequent activities do not need to pay it again. They reuse the recomposer and wrapper created before.

Experiment 2: Conclusions

Here are our key observations from this experiment:

  1. Loading Compose for the first time has a one-time cost, even when navigating to different screens. As mentioned above, Compose has a global scope that initializes on the first render, contributing significantly to its launch time.
  2. Re-rendering the same components is much faster than rendering new components. Even when changing the contents of our list, Compose was much faster at re-rendering than at the initial render. Compose uses the same context globally and is smart both about reusing components and about re-initializing its views.

Final Thoughts

If you're looking to convert a single UI element or another small piece, it might have more of a performance cost than you expect. It may be a better idea to convert a significant body of work, or to see whether the initialization can be pre-loaded, such as on a splash or introductory screen, as sketched below.
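For example, a warm-up could be as simple as a splash screen that renders any composable before the main screen launches, paying the one-time initialization cost early. The sketch below is hypothetical; SplashActivity, MainActivity, and the delay are our own illustration, not code from our app:

import android.content.Intent
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.compose.material.Text
import androidx.lifecycle.lifecycleScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch

class SplashActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Rendering even trivial content initializes the recomposer and Compose wrapper,
        // so later Compose screens can skip that work.
        setContent { Text("Loading…") }

        lifecycleScope.launch {
            delay(300) // hypothetical splash duration
            startActivity(Intent(this@SplashActivity, MainActivity::class.java)) // MainActivity is hypothetical
            finish()
        }
    }
}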

Even with this small downside, Compose is an easy choice for most Android teams. In terms of developer productivity, code reuse, and the power of declarative UI, our experience with Compose has been great, and we can readily recommend it to Android developers. This deep dive made us more aware of some of its minor performance costs and helped us adjust our application accordingly. Even with the small bump in render time (on most devices, little more than a frame or two), Compose remains easy to recommend, and we're ready to start building scalable, performant features with this new declarative UI framework.

Additional Resources

Android Developers Backstage: Art History — Understanding how Android Runtime plays into performance

Premise is constantly looking to hire top-tier engineering talent to help us improve our front-end, mobile offerings and cloud infrastructure. Come find out why we’re consistently named among the top places to work in Silicon Valley by visiting our Careers page.
