Hi, welcome to “What's New in Android.” I'm Chet Haase from the Android Developer Relations team.
So the question is, what is new in Android? Well, for one thing, the venue is new.
Normally, I give this talk once a year at Google I/O.
Normally, I have a couple of co-speakers, and I miss my co-speakers here.
I also miss the venue, that beautiful stage.
But the talk is essentially the same.
I'm going to talk about three things here, essentially, today.
One is what's new in the release.
We're going to talk about features in Android 11 that are worth knowing about for developers.
We're also going to talk about things we're doing outside the platform, like the tools and the unbundled libraries.
And, perhaps most importantly, I'm going to talk about some of the resources that you should check out to get more information on all the stuff that I'm talking about and much, much more.
So, let's get going.
Let's start with UI because, I don't know, I kind of like this area a lot.
I've worked in this area a lot.
It's an interesting area; it's the visual stuff, right? There are a couple of things going on in UI that are interesting.
One is window insets.
Insets have been a capability in the Android platform for years, giving you information about other things on the screen that affect where your content needs to be located in order to live in the same space.
For example, when the keyboard animates in, the screen space is smaller, and your app needs to react to it.
But now we're providing more information. Traditionally, you just got some geometry information about insets without really knowing what, how, who, and why.
Now we've deprecated most of the methods in the WindowInsets class, and instead, we added some that are type-specific, in particular, window-type specific.
Now you can say, “Where is the keyboard? Where is the navigation or status bar? What are the insets for this particular window?” Very interesting, and it makes it possible to create richer experiences this way.
I'll get back to that one.
This is the way it works.
You basically implement a listener, and then inside that listener, you get the insets information, and with those insets, you can query, “Hey, is the keyboard visible?” Stunning new capability that you never had before.
You can find out if the keyboard is visible.
Or you can actually get the specific insets for the keyboard or any of the other window types that are handled in the insets API.
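A minimal Kotlin sketch of that listener, assuming a `View` in your layout and API level 30; the padding logic is just illustrative, not part of the talk:

```kotlin
import android.graphics.Insets
import android.view.View
import android.view.WindowInsets

// Hypothetical helper: react to inset changes on a view (API 30+).
fun observeKeyboard(view: View) {
    view.setOnApplyWindowInsetsListener { v, insets ->
        // New in Android 11: type-specific visibility and inset queries.
        val imeVisible: Boolean = insets.isVisible(WindowInsets.Type.ime())
        val ime: Insets = insets.getInsets(WindowInsets.Type.ime())
        val bars: Insets = insets.getInsets(WindowInsets.Type.systemBars())
        // Keep content clear of whichever is taller: keyboard or nav bar.
        v.setPadding(v.paddingLeft, v.paddingTop, v.paddingRight,
            maxOf(ime.bottom, bars.bottom))
        insets
    }
}
```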
So why did we enable that? One of the reasons, besides the fact that we needed to, was IME animations, by far my favorite feature in the Android 11 release.
The ability to actually synchronize the content in your application with the animation of the keyboard.
So as the keyboard comes on, wouldn't it be nice if you knew when it was coming and how fast it was coming, and your application could smoothly transition with it? Now you can, in a couple of different ways.
You can listen for changes in the keyboard, so you can find out as it's animating where it is, how big it is, all of that insets information frame by frame by frame, and you can synchronize with it, or you can even drive the animation directly.
So if you look at the following example, on the left-hand side, you can see we're clicking in a field there, and that makes the keyboard animate, sort of pop into place. Normally, it would snap into place, and then the application would snap to suit.
But you can see it's actually smoothly animating with the keyboard because it's getting those animation events along the way.
On the right-hand side, we're actually physically scrolling the content in the application.
We're swiping that thing up, and as we do, the keyboard swipes up with it because we're driving the animation manually in the application code.
So here's the way you do that.
If you want to listen for animation changes and react to the automatic animation in the keyboard, you can set a callback listener, and then get that information as each step of the animation progresses and do the appropriate thing with your content.
If, instead, you want to drive the animation directly, then you can call the controlWindowInsetsAnimation() method and drive it.
So you drive it for a particular window.
Here, we are driving it for the IME, the keyboard.
We're giving it a duration of -1, which means run forever, because basically, we're going to drive it step by step by step based on where that gesture is on the screen.
You can give it motion interpolation.
You can handle cancellation if you need to.
And you can also get lifecycle events for the animation.
So: complete control over the animation experience, and a completely synchronous and seamless transition experience for the user. Much better.
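Both approaches might look something like this in Kotlin (a sketch, assuming API 30 and a `View`; the translation logic and gesture value are illustrative):

```kotlin
import android.graphics.Insets
import android.view.View
import android.view.WindowInsets
import android.view.WindowInsetsAnimation
import android.view.WindowInsetsAnimationControlListener
import android.view.WindowInsetsAnimationController

// Listen: track the IME's own animation frame by frame.
fun trackIme(view: View) {
    view.setWindowInsetsAnimationCallback(
        object : WindowInsetsAnimation.Callback(
            WindowInsetsAnimation.Callback.DISPATCH_MODE_STOP) {
            override fun onProgress(
                insets: WindowInsets,
                running: MutableList<WindowInsetsAnimation>
            ): WindowInsets {
                val ime = insets.getInsets(WindowInsets.Type.ime())
                view.translationY = -ime.bottom.toFloat() // follow keyboard
                return insets
            }
        })
}

// Drive: move the keyboard manually, e.g. from a gesture.
fun dragIme(view: View, gestureBottomPx: Int) {
    view.windowInsetsController?.controlWindowInsetsAnimation(
        WindowInsets.Type.ime(),
        -1,    // duration -1: we drive it step by step ourselves
        null,  // no interpolator
        null,  // no cancellation signal
        object : WindowInsetsAnimationControlListener {
            override fun onReady(
                controller: WindowInsetsAnimationController, types: Int
            ) {
                // Position the IME partway in, based on the gesture.
                controller.setInsetsAndAlpha(
                    Insets.of(0, 0, 0, gestureBottomPx), 1f, 0.5f)
            }
            override fun onFinished(c: WindowInsetsAnimationController) {}
            override fun onCancelled(c: WindowInsetsAnimationController?) {}
        })
}
```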
So there are a couple of different places to go for more information on this.
One is that the sample we saw screenshots of is posted there.
So go to that URL to check that out.
The other one is a recent ADB podcast, episode 138, where we talked to engineers on the Window Manager team who told us both how to use those APIs to get effective animations, as well as how stuff works under the hood, which is always interesting.
Conversations are about people, or people are about conversations.
A lot of the changes in the system UI area in Android 11 are around the concept of people, the people in our life, and ways that we can stay in touch with those people more easily.
And one of those ways is through conversations.
In your application, you can post information through the already existing notification system, such that we can bucket this information in a separate place in the UI dedicated to conversations.
So this is a top priority thing.
Chances are you probably care more deeply about people than you do about, let's say, an update to an app, right? So why not put those in their own place there and then allow people to actually change information about each of those conversations.
Let's say, for that last conversation we see there with the school boards, well, maybe I care deeply about the information there, so I want to give you more information about how the system should notify me, and I also want to change the priority.
This is really important, so let's pop that to the top priority queue, which changes the ordering in the conversations category for the notification.
So to create conversations, we're going to use the existing notification mechanism with a little bit more information: we create a Person object, and we create a ShortcutInfo.
It needs to be long-lived because the user should be able to get back to that conversation, even if the application doesn't happen to be running anymore.
And then, we're going to call pushDynamicShortcut() with all of that information.
Then we are going to use MessagingStyle and create and push a notification, just like we normally would, except with a little bit more information.
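Putting those steps together in Kotlin might look like this (a sketch using AndroidX compatibility classes; the channel id, shortcut id, names, and `chatIntent` are hypothetical):

```kotlin
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat
import androidx.core.app.Person
import androidx.core.content.pm.ShortcutInfoCompat
import androidx.core.content.pm.ShortcutManagerCompat

val person = Person.Builder().setName("Alice").build()

// A long-lived shortcut so the conversation outlives the app process.
val shortcut = ShortcutInfoCompat.Builder(context, "chat_alice")
    .setLongLived(true)
    .setShortLabel("Alice")
    .setPerson(person)
    .setIntent(chatIntent)   // deep link back into this conversation
    .build()
ShortcutManagerCompat.pushDynamicShortcut(context, shortcut)

// A normal MessagingStyle notification, tied to the shortcut.
val notification = NotificationCompat.Builder(context, "messages")
    .setSmallIcon(R.drawable.ic_message)
    .setShortcutId("chat_alice")
    .setStyle(NotificationCompat.MessagingStyle(person)
        .addMessage("See you at 6?", System.currentTimeMillis(), person))
    .build()
NotificationManagerCompat.from(context).notify(1, notification)
```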
Another thing that's enabled, very much related, is bubbles.
So you can see over there on the top left of the screenshot, you've got this bubble of information.
The user taps on that bubble, and up pops this mini activity.
So you get directly into the conversation that was happening earlier in the application, but it's actually happening in the UI on top of everything else.
So it's a really easy way for users to access this information about people from wherever they happen to be on the phone, instead of having to navigate, click, click, click, to get back to that conversation in some deep link in your application.
So bubbles were actually enabled already.
They were in Android 10, but they were enabled as a developer option.
We were working on it in Android 10, and it didn't quite get to the finish line, so we put it inside developer options so developers could start playing with it. But in Android 11, it's enabled as a fully released feature.
So please use it.
In particular, please use it instead of the System Alert Window.
That was the preferred way for applications to do this kind of UI before.
Basically, you'd pop a transparent window up on top of whatever content is on the screen, and then inside of that, you'd populate it with UI, like these bubble objects.
But don't do System Alert Window; instead, use the new Bubbles API.
And you're already posting notifications with this stuff anyway, especially if you want to start integrating with the conversations capability in system UI.
So now, you just add a little bit more metadata to give it the bubble information that it needs.
Users opt in to the bubble experience if they would like to, and then you provide an activity, like this mini activity to go into from the bubble.
So the way the code looks is you create this activity marked resizeableActivity because, again, it needs to be this mini activity on top of the UI.
And then you create the intent for the activity you have for your bubble.
You populate that with a shortcutInfo, which maybe you already created because you're doing this for conversations.
You populate that with bubble metadata, and then you create and push the notification with all of that metadata, and you're good to go.
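Attaching the bubble metadata might look like this in Kotlin (a sketch; `bubbleIntent` is a hypothetical PendingIntent launching the resizeable mini activity, and the channel and shortcut ids are hypothetical):

```kotlin
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat
import androidx.core.graphics.drawable.IconCompat

val bubbleData = NotificationCompat.BubbleMetadata.Builder(
        bubbleIntent,
        IconCompat.createWithResource(context, R.drawable.ic_chat))
    .setDesiredHeight(600)  // height of the expanded bubble, in dp
    .build()

val notification = NotificationCompat.Builder(context, "messages")
    .setSmallIcon(R.drawable.ic_message)
    .setShortcutId("chat_alice")     // ties the bubble to a conversation
    .setBubbleMetadata(bubbleData)
    .build()
NotificationManagerCompat.from(context).notify(2, notification)
```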
Go check out the video, the talk called “What's New in System UI” for more details on all of this stuff.
Go check out this sample called BubblesKotlin in the Android samples.
And also go check out the ADB Podcast episode that actually isn't posted yet as I'm recording this.
We had a conversation with the Bubbles team before I recorded this, but it hasn't been posted yet.
I think it will be episode 140.
I think it will be called “Bubbles.” I could be wrong, but it'll be something like that.
So check out all of those.
And now, let's talk about privacy.
Now, I'm not even sure what I should say about this.
But let's talk about it anyway.
So we've made a lot of changes to privacy in the last few releases because it is increasingly important for users' data to be protected and for them to understand what's going on with that data.
So some of the things that we've done in this release are both about protecting user data as well as making it easier for developers to deal with these changes over time.
One of those areas is data access auditing.
So if you're an individual developer, and you're working on your own application, you've written all the code, you know it back and forth, maybe you don't have this issue where you are not sure how personal data is being accessed.
That's not a problem you have.
But now imagine being on a really large team, tons of developers, where the code's been worked on for years and years, and you see that the user is being asked for some permission for a feature, and you don't even understand why you need that permission. Or maybe the application is pulling in an external library that's asking for extra permissions that you don't think should be there.
And it can be hard in a really large source base, especially using external libraries, to track that down.
So we created an API to handle these situations specifically, going from the complexity of this huge application, what is going on where, to callbacks that give you the information you need.
In particular, you register a callback like this, and then you get a callback when this information is requested, and then you can track down where that is happening in the code and do the right thing about it.
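That registration might look like this in Kotlin (a sketch, API 30; the log tag is hypothetical):

```kotlin
import android.app.AppOpsManager
import android.app.AsyncNotedAppOp
import android.app.SyncNotedAppOp
import android.content.Context
import android.util.Log

fun auditDataAccess(context: Context) {
    val appOps = context.getSystemService(AppOpsManager::class.java)
    appOps.setOnOpNotedCallback(context.mainExecutor,
        object : AppOpsManager.OnOpNotedCallback() {
            override fun onNoted(op: SyncNotedAppOp) {
                // Synchronous access on this thread: the stack trace
                // shows exactly which code asked for the data.
                Log.d("DataAudit", "noted: ${op.op}", Throwable())
            }
            override fun onSelfNoted(op: SyncNotedAppOp) {
                Log.d("DataAudit", "self-noted: ${op.op}")
            }
            override fun onAsyncNoted(op: AsyncNotedAppOp) {
                Log.d("DataAudit", "async: ${op.op}, ${op.message}")
            }
        })
}
```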
So, in Android 10, we had a new concept of permissions that could be granted either only when the application was in the foreground, or for all time, even when it was in the background.
We've extended that capability to include the idea of one-time permissions in Android 11.
This is the concept that maybe the user is okay with that application having access to your location right now, but only right now: “I understand why you need it now, but this is enough.”
So we have this one-time permission.
So basically, when the application is being used in the current session, that permission is fine.
But then, the permission is denied after that, and the app needs to request it again next time.
So the following question from all developers is, “Okay, now what do I need to do to handle this?” And the answer is, hopefully, nothing.
So if you're following best practices for permissions anyway, you have already come into this new world where the user may deny you permissions at any time: they may go into a settings dialog and actually turn off permissions that they granted you previously, and your application has to handle that reality.
If it can handle that reality, it can handle this one.
It just means that the user is going to grant permission, and then the system is going to take the permission away when you're no longer in the foreground.
So, hopefully, nothing to do here, but you should be aware of the situation.
Background location is getting a bit more restrictive.
This has been an ongoing trend in the last few releases.
In particular, you used to be able to request foreground and background location at the same time.
Now, those are separate operations.
Instead, you can request the foreground permission, and if that's granted, then you can take a separate step and ask for the background permission.
And that takes the user to the settings dialog.
This is not a permission that's granted inline in the application experience.
Instead, they're taken to settings.
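In Kotlin, the two-step flow might be sketched like this (the request codes are hypothetical, and real code would route results through onRequestPermissionsResult):

```kotlin
import android.Manifest
import android.app.Activity
import android.content.pm.PackageManager

const val REQ_FOREGROUND = 1
const val REQ_BACKGROUND = 2

// Step 1: ask for foreground location as usual.
fun requestForegroundLocation(activity: Activity) {
    activity.requestPermissions(
        arrayOf(Manifest.permission.ACCESS_FINE_LOCATION), REQ_FOREGROUND)
}

// Step 2: only after the foreground grant, ask separately for
// background access; this sends the user to Settings, not a dialog.
fun requestBackgroundLocation(activity: Activity) {
    if (activity.checkSelfPermission(Manifest.permission.ACCESS_FINE_LOCATION)
            == PackageManager.PERMISSION_GRANTED) {
        activity.requestPermissions(
            arrayOf(Manifest.permission.ACCESS_BACKGROUND_LOCATION),
            REQ_BACKGROUND)
    }
}
```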
Again, it's all about transparency: letting the user know what data they are sharing and how they're sharing it, and giving them an option to not share it, if that's what they would prefer to do.
Also, for location: in the previous release, if you needed location in a foreground service, you needed to declare that in the manifest.
We have extended that capability to include camera, as well as microphone, in Android 11.
So now you need to use that same attribute, foregroundServiceType, except there are a couple of new flags for camera and microphone if you need to have that access in a foreground service as well.
There are many more privacy changes that you should learn about.
Package visibility: it is no longer possible to query all of the packages installed on a device.
Instead, you need to declare in the manifest what you want access to.
Scoped storage: there have been some more changes there, as we continue to migrate to that new world, including things that make it easier for developers, like access to raw path names, as well as the ability to do batch editing with MediaStore APIs.
Auto-reset permissions: if the user installs an application, runs it once, grants some permissions, and then doesn't run it for a couple of years, why should it still have access to the same permissions? At some point in time, there is an auto-reset, and the application needs to request those permissions again.
You can learn about all of these and more in the “Android 11 privacy changes” talk, as well as the “Modern storage on Android” talk.
So check those out.
Something else that we added in Android 11 was various features to make Android developers' lives easier, to make it a little easier to write great Android apps.
So, for example, Wi-Fi debugging, because if there is one universal truth, it is that there are never enough USB ports available.
Wouldn't it be nice if you could connect to your device through Wi-Fi instead? And now, you can.
It's a bit manual for now.
You basically enable Wi-Fi debugging, and then you need to manually pair and connect with the device.
But it is enabled in the Android Studio 4.2 Canary builds to be accessible through the tool instead, which makes it a little bit easier.
Also, we have been adding nullability annotations to platform APIs over the last couple of releases, and we did more of that this time.
There's two categories.
There's the Recently annotations, @RecentlyNullable and @RecentlyNonNull.
These indicate that we have just added them in this release, and they will trigger warnings in your build.
So if you're a Kotlin developer, this makes your life so much easier.
If you call this code with a parameter that disobeys the contract, you will get a warning in your build, and you should fix that.
On the other hand, we have the full-on annotations, not Recently: @Nullable and @NonNull.
If you call these with code that does the wrong thing, say you're passing a potentially null parameter into a non-null API call, then that will actually trigger a build failure, and you really need to fix that problem immediately.
So if they were Recently last time, chances are they are actually full-on @Nullable and @NonNull annotations this time.
Otherwise, maybe we added them as new annotations this time.
We didn't want to break your build outright, so we added them as Recently.
So check those out.
Hopefully, it makes life better for Kotlin developers overall.
Another thing that we did, to make it easier to figure out what is actually going on in the field, is crash reasons reporting.
It is really difficult to find out why your app crashed on someone's device that you've never seen.
Chances are the development and testing environment that you have in your office or your home does not match whatever your users are running out in the real world.
Wouldn't it be nice if you could find out what is actually going on in the real world? And now you can.
So there's an API to query why your app exited, even if it's on a device that you've never seen before.
You can call getHistoricalProcessExitReasons(), which gives you a collection of information. You iterate through each one of those and get answers to “Did this one crash?”, “Was there an ANR?”, “Did I run out of memory?” And then you can upload and log these things and take a look at that information.
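In Kotlin, that query might look like this (a sketch, API 30; the log tag is hypothetical):

```kotlin
import android.app.ActivityManager
import android.app.ApplicationExitInfo
import android.content.Context
import android.util.Log

fun logExitReasons(context: Context) {
    val am = context.getSystemService(ActivityManager::class.java)
    // null package = our own package; pid 0 and max 0 = all records.
    val exits = am.getHistoricalProcessExitReasons(null, 0, 0)
    for (info in exits) {
        when (info.reason) {
            ApplicationExitInfo.REASON_CRASH ->
                Log.d("ExitInfo", "crash: ${info.description}")
            ApplicationExitInfo.REASON_ANR ->
                Log.d("ExitInfo", "ANR at ${info.timestamp}")
            ApplicationExitInfo.REASON_LOW_MEMORY ->
                Log.d("ExitInfo", "killed under memory pressure")
            else ->
                Log.d("ExitInfo", "reason code ${info.reason}")
        }
    }
}
```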
You can either do this directly as a developer, or maybe you use a crash reporting service, and those crash reporting services may use this, as well as other facilities internally, to get this kind of information at a very detailed level from the platform.
GWP-ASan builds on something that we enabled earlier, in Android 10, called HWASan.
HWASan was about giving you an environment that you could use in your build, test, and debug setup, such that you basically install an alternate memory allocator.
And then, if you ever access memory as a native developer (this is all for native development) in a way that you shouldn't, let's say you're dereferencing something that has been cleared, it will trigger a crash and log a report.
That's great, but there's so much overhead, both in terms of memory and run time, in having this extra allocation mechanism, that it was too much to use in the real world.
GWP-ASan basically does a small subset of that.
It's kind of a sampling approach to solving that same problem, where it doesn't use that alternate allocator everywhere; instead, it's used just in certain locations.
And then again, if one of those problems occurs, it will trigger a crash and a report, and then you can see that report.
It's automatically uploaded to your Google Play dashboard.
And since it's only a sample of the allocations, it's low enough overhead that you can ship this on everything.
You enable it very easily in your manifest.
You just say gwpAsanMode="always". You can do this for the entire application, or you can just do it for subprocesses or activities if you choose.
And then, if one of these problems occurs, it automatically triggers a crash as well as a crash report, and that gets uploaded automatically.
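A minimal manifest sketch (the package name is hypothetical; the attribute is available from API 30):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.app">
    <!-- Sample-based native memory-error detection for this app. -->
    <application android:gwpAsanMode="always" />
</manifest>
```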
ADB Incremental is about making it faster to install really huge applications on your testing device.
Let's say you have a game with 2GB worth of data, and you just want to change one line of code.
Then, every time you change that line of code, you have to push 2GB worth of application onto the device, which takes a while, and probably gets really tedious and tiring.
But what you could do instead is use ADB Incremental and make it up to 10 times faster to do this.
Here's how it works.
You use an alternate signature mechanism, and then you use adb install --incremental to take advantage of this.
We have introduced some behavior changes, such as some of the privacy changes that we talked about earlier.
But we're simplifying how things work.
Most of those changes only take effect if you target this API version.
So if you're not targeting Android 11 yet, then there shouldn't be as much to actually react to.
But when you do want to target this release, then you can easily toggle these behavior changes either on the command line, which you can do on earlier releases, or through the new UI that we've exposed in the developer options settings.
The UI looks a lot like this.
You can basically toggle each of the behavior changes along the way, or, alternatively, on the left, you can see the command-line code there.
You can execute this command, which will toggle it, and you can see the change show up in the UI on the right.
On the graphics and media side, there were various changes.
So, if you're an NDK developer, you probably want access to our image decoders, because we have a bunch of them for all the standard file formats you would expect. Previously, you had to up-call through JNI and then down-call through the Android SDK into what was actually native code on our side to decode these images.
Kind of tedious to do that.
So a lot of developers would instead just bundle another native library that had its own decoders, which meant you now essentially had two.
We have the platform decoders, as well as the other decoders you're bundling, which then bulk up your APK size on your behalf.
Wouldn't it be nice if you didn't have to do that? So now we expose NDK APIs directly, so that you can use our decoders instead.
We also have the ability to decode animated HEIF files.
This is a lot like the animated GIF (“JIF,” however you would like to pronounce it), and it comes in as an AnimatedImageDrawable.
And so it's a lot like the previous functionality, except that HEIF files tend to be significantly smaller than GIF files overall.
So worth checking out.
The code to do that is something like this.
You get access to the file, you create the source object for the ImageDecoder, and then, off the main thread (don't do this on the main thread; this is the expensive part), you decode the drawable.
If it came in as an AnimatedImageDrawable, that means it has several frames, and you go ahead and start the animation.
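In Kotlin, that might look like this (a sketch; `file` and `imageView` are hypothetical, and animated HEIF decoding needs API 30):

```kotlin
import android.graphics.ImageDecoder
import android.graphics.drawable.AnimatedImageDrawable
import android.widget.ImageView
import java.io.File
import kotlin.concurrent.thread

fun showAnimatedHeif(file: File, imageView: ImageView) {
    val source = ImageDecoder.createSource(file)
    thread {
        // The expensive part: decode off the main thread.
        val drawable = ImageDecoder.decodeDrawable(source)
        imageView.post {
            imageView.setImageDrawable(drawable)
            // If it decoded as an animation, it has frames: start it.
            (drawable as? AnimatedImageDrawable)?.start()
        }
    }
}
```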
If you're an audio developer and an NDK developer, chances are you may be using something called OpenSL ES.
This is deprecated; this is not the thing that we recommend anymore.
Instead, we recommend that you check out an open-source library called Oboe.
It's unbundled, it's for C++ developers.
It's open-source, and it's for doing high-performance audio, as well as low-latency audio.
Works all the way back to API 16.
And you can get it on GitHub at the URL posted there.
Also, importantly, we talked to some of the audio engineers that work on Oboe a few weeks ago.
So check out episode 135 of Android Developers Backstage to learn more about how Oboe works and how you can use it.
So variable refresh rate.
Since the dawn of time, devices have essentially run at 60 frames a second, at least since the dawn of my time in the mobile universe.
This means that you, as a developer, had about 16.67 milliseconds to do everything you needed to do to post a frame.
So if you are writing an application that has its own rendering loop (let's say you are writing a game), and maybe you find that on some devices you have so much work to do that you cannot keep up with that 60 frames a second, you can't do everything you need to in that 16 milliseconds, then you need to drop down to the next available frame rate.
Well, the way refresh rate works, that means that basically, now you're only getting 30 frames a second.
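The arithmetic behind those numbers is just the per-frame budget, 1000 ms divided by the refresh rate; a quick sketch:

```kotlin
// Per-frame budget in milliseconds for a given refresh rate (Hz).
fun frameBudgetMs(refreshRateHz: Int): Double = 1000.0 / refreshRateHz

fun main() {
    println(frameBudgetMs(60))   // ≈ 16.67 ms
    println(frameBudgetMs(90))   // ≈ 11.11 ms
    println(frameBudgetMs(120))  // ≈ 8.33 ms
    // On a fixed 60Hz display, missing the budget drops you to 30fps;
    // a 120Hz display can fall back to 60, 40, or 30 instead.
}
```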
So along come the new devices that are coming out in the market that allow much higher refresh rates: 90 Hertz, even up to 120 frames a second.
That not only allows you to run faster, or for the user to see the screen updated faster, but it also allows more variation in backoff rates.
So now you have the capability to access that, and you do that through APIs like Surface.setFrameRate().
This is a hint to the system saying, “I would like this frame rate,” and the system collates all of these requests from across the multiple windows that it needs to display at the same time, and makes the right decision for the capabilities of the device and for the requests that it's gotten.
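The hint itself is a one-liner in Kotlin (a sketch, API 30; `surface` might come from a SurfaceView in a game's render loop):

```kotlin
import android.view.Surface

fun preferThirtyHz(surface: Surface) {
    // A hint, not a command: the system weighs all windows' requests.
    surface.setFrameRate(30f, Surface.FRAME_RATE_COMPATIBILITY_DEFAULT)
}
```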
There is more going on that I don't have time to go into in a lot of detail.
The Neural Networks API, for machine learning: it's an API for C developers for on-device machine learning.
The 1.3 release is coming out with the Android 11 platform.
Some of the changes there: it's all about performance and functionality.
We have control flow, so conditionals and loops and branches, in a way that software developers expect and benefit from.
We also have asynchronous command-queue APIs, producing lower overhead for chained models.
And we also have the hard-swish op, which I mention not just because it allows faster training and more accuracy, but also because I really like that name.
Hard-swish op. Just wanted to say it again.
5G, enabled through some new devices as well as carrier capabilities, brings better bandwidth as well as lower latency.
And we have APIs for taking advantage of that.
Let's say you want to provide a higher level of experience, higher resolution, but only if the user is on an unmetered network or has the bandwidth capability to support that.
So you can use APIs, like the ones you see here for hasCapability, to determine whether the situation is one where we can take advantage of 5G capabilities.
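A Kotlin sketch of that check (the bandwidth threshold is hypothetical; NET_CAPABILITY_TEMPORARILY_NOT_METERED is the flag added in API 30 for cases like unmetered 5G plans):

```kotlin
import android.content.Context
import android.net.ConnectivityManager
import android.net.NetworkCapabilities

fun canServeHighRes(context: Context): Boolean {
    val cm = context.getSystemService(ConnectivityManager::class.java)
    val caps = cm.getNetworkCapabilities(cm.activeNetwork) ?: return false
    val unmetered =
        caps.hasCapability(NetworkCapabilities.NET_CAPABILITY_NOT_METERED) ||
        caps.hasCapability(
            NetworkCapabilities.NET_CAPABILITY_TEMPORARILY_NOT_METERED)
    // Or fall back to a raw bandwidth check (threshold hypothetical).
    return unmetered || caps.linkDownstreamBandwidthKbps > 10_000
}
```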
And Autofill. So we're used to the UI experience where you tap on a field, and if Autofill has information for you, it will pull up a little drop-down.
So now, let's say, in a typical situation, we have a keyboard, and the keyboard has some suggestions on the top of it, and then Autofill has some suggestions up here. Wouldn't it be nice if we could combine all this in one UI? And that's what we have with the new Autofill capability.
There's two sides to this equation.
You have keyboards that can actually present the information in a secure way.
Let's be clear: they don't have access to the content; they have access to UI that internally presents the information.
And then you have password applications, these Autofill services, that can provide that information in a secure way to the keyboards.
So, with this, instead of that drop-down, we get the information populated in the keyboard app.
So, we can see this in a screenshot of the Google keyboard here, where the user has tapped in this field, and it happens to be an email address.
So we see that populating the keyboard down below.
Or maybe it's a credit card field.
Again, we see that populating the keyboard down below.
Or it's an actual mailing address.
So the way that works on the keyboard side is that you need to implement InputMethodService methods. You've got this onCreateInlineSuggestionsRequest().
So you handle that request from the IME.
And then, you get back a response that you also handle, to actually present this information.
On the password side, to provide this information, you handle onFillRequest().
And then you create a data set just as you normally do, but you populate it with inlinePresentation information.
And then you create this FillResponse with that information, and you're good to go.
We've also done a bunch of stuff outside the platform, more and more as we go.
For example, the Jetpack suite of libraries: over 70 of these things, shipping releases every couple of weeks, all kinds of stuff from architecture components to compatibility APIs to core functionality.
Some of the new and exciting developments there include Hilt, which is a dependency injection library built on top of Dagger.
This is the new recommended approach for doing DI on Android.
Paging has been completely rewritten for version 3, in Kotlin, for Kotlin developers, taking advantage of language features like coroutines, to make writing paging applications much easier.
And CameraX recently had a beta release, so be sure to check that out as it gets close to stable.
There is more and more going on there, and all of it is talked about in the “What's new in Android Jetpack” talk.
So check that out for all of the information.
Speaking of Jetpack, Jetpack Compose is still pre-alpha.
This is the new UI toolkit for Android.
It's currently undergoing active development.
It is, as I said, pre-alpha development.
It's being developed in the open.
Check out information about that.
Run the tutorial and the sample to learn more about it.
Play around with the code.
And also go to the “Jetpack Compose” talk to learn more specifics about it.
Android Studio has some interesting things going on in recent releases.
The 4.0 release recently went stable.
Motion Editor, this is a visual tool for creating MotionLayout files.
So you've been able to do MotionLayout for several months now, as that was being actively developed.
But you had to write a lot of tedious code, especially for animation.
There was a lot of stuff that you needed to do, but it was really meant from the beginning to be a visual tool experience.
Motion Editor provides that for creating rich animations in your UIs.
Also, Layout Inspector added a lot of really new fundamental capabilities, like being able to visualize your containment hierarchy in 3D to get a better handle on what's going on where, or being able to click through on property values to find out where those things are being set and why.
The beta release for 4.1 enables the Database Inspector, a very cool tool.
If you're using Room or SQLite under the hood, then you can see what's going on in your database.
So you've got your device running an application using Room, and you've got a view into that database in the tool itself.
And you can change the data on either side and see it reflected live on either side as well.
The Canary build is interesting for a couple of reasons.
One is that the wireless debugging capability that I mentioned earlier is enabled through the tool, making it a little bit easier to access on the Canary.
Also, if you want to play around with Jetpack Compose, which I mentioned earlier, you'll need to use a Canary build of Studio, so this is where you should get it.
A couple of tools talks to check out: “What's New in Android Development Tools” and “What's New in Design Tools.”
On the distribution side, the big news for Google Play is that they completely rewrote the Play Console.
Most of it is about better design and making stuff easier to understand and access, but they're also adding new capabilities along the way, like a policy status section, reports on user acquisition, and management of the team of people that gets access to the dashboard information.
It is currently in beta, so go to that URL, check it out, download it, play with it, send us feedback, see how it's working for you.
Also, go to the “What's new in Google Play” talk.
That is it.
But I have some extra links to share with you here.
First of all, all the talks that I mentioned here and more (because I didn't get to everything) are in the Android 11 Beta Launch Show.
So go to the Android Developers YouTube channel to see all of those talks that have been posted there.
There's the Android 11 preview site: download the beta, play with it, see how it runs with your app, and send us feedback if things are not working as expected.
11 Weeks of Android is a new campaign that we're launching next week, because there wasn't enough time to cover everythingin just a dozen videos, so let's tell you more.
There is new content coming out every week in specific topic areas, like UI and Jetpack.
So tune in every week to see what that topic area is and learn all of that content along the way.
We're also working with our online community around the world to host Android 11 meetups.
These are online meetups happening now through August, all over the world.
So go to that site to check out if there's anything happening near you.
And finally, Now in Android is a series of articles, videos, and a podcast that I'm doing to make it a little bit easier to understand what is going on in the Android universe.
We do a lot of stuff, both on the platform side and in developer relations: articles, videos, all this stuff happening all the time.
Wouldn't it be nice if we had an easy way to understand what happened every couple of weeks? So that's what Now in Android is about.
So please check those out for more information on what's happening in Android.
In the meantime, go check out the videos.
Learn all about Android 11, and what's going onin the Android universe.