Android Jetpack: Understand the CameraX Camera-Support Library (Google I/O'19)

[Music] Welcome, everyone, to this session on CameraX. My name is Vinit and I'm the product manager on the Android camera platform team. Today we'll show you how to minimize camera development time with a consistent and easy-to-use camera API. We'll have some code walkthroughs, a lot of fun demos, and we'll end by showing how you can elevate your in-app experiences.

Now, many of you know that camera development is hard. Why is that? First, within your apps you need to account for various OS versions. Second, you have to provide a consistent experience across a range of devices, from entry level to flagships. And finally, the camera APIs are very powerful but difficult to master. So we're excited to give you this first glimpse into CameraX. This new support library is launching as part of Jetpack in alpha. We develop completely in the open, as part of the Android Open Source Project, and we update multiple times a year. So let's learn more.

CameraX is backwards compatible to Android L, which covers 90% of devices in the market today, so when your minimum SDK is API 21 or higher you'll be able to use CameraX. Under the hood, CameraX uses the public Camera2 APIs. Now, we know what you're thinking: what about the old, deprecated Camera1 API? We provide the same consistency as Camera1 via the Camera2 legacy layer. We've tested, identified, and fixed a number of issues

across these older devices. So how do we achieve this consistent behavior? Last summer I was in China at the Android GDD event and was shocked to learn that several developers are running manual testing across hundreds of devices. We understand your pain, and we felt that the best way forward is to invest in an automated camera test lab. This test lab has devices from Android L all the way through Q, from various manufacturers, and it runs 24 hours a day, seven days a week. By providing an automated testing experience, we're able to guarantee high-quality camera performance across all your apps. We've tested hundreds of devices across all OS levels to date, and we do basic testing like orientation, rotation, and taking a picture. One part of this test suite is the latency test suite, and we're open-sourcing it. With it, you'll be able to start benchmarking your camera start/stop latencies and your latency for a photo capture. This is a very powerful tool that we've used very recently with a lot of the top Android applications.

Here's just a list of example issues that we've tackled with CameraX and solved on your behalf: for example, front and back camera crashes, optimizing camera closures, or just making sure that the flash works. Now, this is our season premiere of CameraX. There are lots and lots of devices out there, so we really hope that you try out this alpha version and give us your feedback; as we improve, it'll just keep getting better.

Finally, I want to talk about how CameraX is incredibly easy to use. The current Camera2 API gives you very fine-grained control over all the sensors.

What we've done is abstract away the hard bits and provide a use-case-based API that simply lets you get a display up on screen, gives you high-quality frames so you can do amazing analysis, and lets you simply take a picture or a video. Finally, we bind all these use cases to the app's activity lifecycle, making CameraX lifecycle-aware, so you don't have to start and stop the camera separately anymore.

Here's the typical camera app architecture today: your camera app talks to the public Camera2 API, and under the hood that talks to the device HAL, the hardware abstraction layer. In this case, all the device-specific quirks have to be accounted for within your application. Now imagine using CameraX: CameraX hides all of that from you and just gives you consistent, reliable performance. There are also a lot of advanced use cases, and for those we have an experimental version where you can interoperate between Camera2 and CameraX.

So let me welcome Trevor up here on stage; we're going to do a demo of the Camera360 app. Camera360 is based in China, and they have hundreds of millions of installs across Android. On this stage you'll see a demo on a 2016 Samsung J7. This device is running Android 6 Marshmallow, and under the hood CameraX is talking to the Camera2 legacy layer. Trevor is able to change all the effects, and all of this is done using CameraX. Camera360 is using all three use cases: preview, image analysis, and image capture. They're using image analysis to track Trevor's face and draw the effects around it, and whenever Trevor is happy with the picture, he clicks the button and takes it. [Music] It's so easy and fun, I want to get in on the next picture. Thank you, Trevor.

Camera360 saw a number of benefits with CameraX: they were able to reduce their device-specific testing, and they saw a 75% reduction in lines of code compared to Camera2. This is in addition to making the code easier to read and a smaller APK size. So let's summarize: CameraX is backwards compatible to Android L, working on 90% of devices today; we provide a consistent experience across devices, tested in the automated CameraX test lab; and the API is incredibly easy to use, with a use-case-based approach. Up next is James Fung, who will deep dive into the API. Thank you.

Thanks, Vinit. My

name is James, and I'm a software engineer on the CameraX team. We talked briefly about use cases: preview, image analysis, and image capture. When we sat down to design CameraX, we realized we had to balance a few design constraints. We want to make an API that's simple to use. We want to create an abstraction level that lets us hide device-specific issues. And we also want to make sure that the API is flexible enough that you can create fully featured applications.

So how did we go about this? We set out to create a programming model that captures the intent of the developer while providing the right amount of flexibility. Really, the API asks three things of the application. First, what's the intended usage: which of the use cases, one or multiple, do you want to use — preview, image analysis, image capture? Second, where should the output go: when you get the frames, how do they hook up to your application so that you can write the code you need? And finally, when should the camera start, and when should it stop?

I'm going to walk through some code now; these examples are in Kotlin. Say you have a device at your workstation and you're starting to use the camera for the first time. One of the things you might want to do is start up the camera and see live frames directly on screen. We call that preview. Let's take a look at what the code looks like.

We start by configuring a preview. Here I build a PreviewConfig using a builder, shown in its default configuration. The default configuration is actually fairly powerful: on that particular device, it will understand the available resolutions and the resolution of the display, and try to make a reasonable decision for you based on that. It will also default to the back camera. Making sure that the default configuration runs well is an example of something we're able to test in our lab. At this point you could set additional options if you needed them; we'll show some examples coming up.

Now that we have configured the preview, we can create the preview use case simply by handing it the config. These use cases have API-specific methods, and the preview provides a method to set a listener. When the preview becomes active, it emits a preview output; in the current version, that output contains a SurfaceTexture that's configured and ready. You can attach that SurfaceTexture to a TextureView or use it with a GL renderer. Future versions may interact with SurfaceView or accept a SurfaceTexture from the application.

With this code, CameraX is configured; we just need to be able to turn it on and off. So, the third step: when should the camera start and stop? We do this by binding the preview use case to an activity lifecycle. When the activity starts, the preview starts and the camera begins to stream; when the activity stops, the preview stops and the camera shuts down. Other lifecycles can also be used; for example, a fragment lifecycle could be used here instead. In this way we're able to hide the complexity of start and stop; in particular, shutting things down in the right order can be kind of tricky.

So here you see all three steps being done together. A real camera application will have additional code: code for setting up permissions, attaching to views, and managing the SurfaceTexture. But what we show here is actually sufficient to get the camera system set up. We've hidden many of the details of Camera2: opening the camera, creating the session, preparing the correct surfaces, selecting the resolutions, and the careful shutdown you sometimes have to do to make sure that everything works just right.
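The three steps described above can be sketched roughly as follows. This is a minimal sketch based on the alpha-era API discussed in the talk, not the exact code from the slides: the `textureView` property and the `startCamera` function name are assumptions, and permission handling is omitted. Being Android-framework code, it only compiles inside an Android project with the CameraX alpha dependency.

```kotlin
import android.os.Bundle
import android.view.TextureView
import androidx.appcompat.app.AppCompatActivity
import androidx.camera.core.CameraX
import androidx.camera.core.Preview
import androidx.camera.core.PreviewConfig

class MainActivity : AppCompatActivity() {

    private lateinit var textureView: TextureView  // assumed to exist in the layout

    // Call once camera permission has been granted.
    private fun startCamera() {
        // Step 1: configure. The default config picks a reasonable resolution
        // for this device's display and defaults to the back camera.
        val previewConfig = PreviewConfig.Builder().build()
        val preview = Preview(previewConfig)

        // Step 2: attach the output. The listener fires when the preview
        // becomes active, delivering a configured, ready SurfaceTexture.
        preview.setOnPreviewOutputUpdateListener { output ->
            textureView.surfaceTexture = output.surfaceTexture
        }

        // Step 3: bind to the lifecycle. The camera starts streaming when the
        // activity starts and shuts down cleanly when it stops.
        CameraX.bindToLifecycle(this, preview)
    }
}
```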

I'll pause here for a second so you can look at the slide. So let's say you have that first preview streaming. Maybe the next thing you want to do is access the camera data so you can do some analysis or image processing on it.

For this we have something called image analysis, which provides easy access to the buffers from the camera so that you can perform your own analysis. It follows the same steps we showed with the preview. First we create a config; here I'll show what it looks like to set an option on a config, in this case requesting a resolution. Say your processing requires some minimum resolution to succeed: this is where you specify that. CameraX then balances the request from your application against the device's capability. It looks at the target resolution you requested for image analysis and, if you have a preview running, the requirements there as well, and weighs all of that against what the device can do. If your target resolution is available, you'll get it; if not, we'll try the next higher resolution so that you get a guaranteed minimum; failing that, we fall back to 640x480, which is guaranteed across all devices. The key thing is that CameraX does everything it can to make sure that, on that particular device, it sets up a session that will run for you. Once we have the config, we again create the use case object itself.

Step two: what do we do with the output? Image analysis lets your application set an analyzer. The analyzer is provided an ImageProxy; this is a CameraX abstraction, but it basically wraps the Android media Image class, so you have access to the same data. You'll notice we also have a rotation parameter set here; we understand rotation can be really important for image analysis. CameraX is not going to rotate the data internally, because if you're doing an analysis pass — maybe a per-pixel pass — it might be more efficient for you to do the rotation in place yourself, if you want it at all. Rather, what we mean by rotation here is that we make it easy for your application to understand which way is up in the image buffer you've received. Knowing which way is up can be important: for example, a face detection model may require knowledge of which way is up to work correctly. Inside the analyzer, then, you have all this information and the image available to you, and you can insert the code you need to get your job done.

Finally, step three: when to start and stop. Again we bind to a lifecycle, this time binding both image analysis and the preview at the same time. When the activity becomes active, both image analysis and preview will be running and ready. So here we have all three of the steps together for image analysis. When the camera system sets up, the preview comes up, but your analyzer function also starts receiving images at the camera frame rate, and your analysis starts running.
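The image-analysis steps might look like the sketch below, again modeled on the alpha-era API. The `preview` object is assumed to have been created as in the earlier preview example, the 1280x720 target is an illustrative value, and `this` is assumed to be a LifecycleOwner such as an Activity; this fragment only compiles inside an Android project.

```kotlin
import android.util.Size
import androidx.camera.core.CameraX
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageAnalysisConfig

// Step 1: configure, this time setting an option: a target resolution.
// CameraX balances this request against the device's capabilities and
// other running use cases, falling back toward 640x480 if needed,
// which is guaranteed on all devices.
val analysisConfig = ImageAnalysisConfig.Builder()
    .setTargetResolution(Size(1280, 720))  // illustrative minimum for your processing
    .build()
val imageAnalysis = ImageAnalysis(analysisConfig)

// Step 2: attach an analyzer. The ImageProxy wraps an Android media
// Image; rotationDegrees tells you which way is "up" in the buffer,
// which a face-detection model, for example, may need.
imageAnalysis.setAnalyzer { image, rotationDegrees ->
    // insert your per-frame analysis here, e.g. run a detector
    // on image.planes, passing rotationDegrees along
}

// Step 3: bind both use cases; preview and analysis run together
// once the activity becomes active.
CameraX.bindToLifecycle(this, preview, imageAnalysis)
```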

I'm really proud to say that we partnered with Lens, and CameraX is being used inside Lens in Google Go. Lens in Google Go is a version of Lens intended for low-end devices that helps users understand text. Some of the benefits they expressed to us were that it allowed them to focus on their core competency — understanding the text and creating a really great user experience — and spend less time worrying about the camera stack. They were happy to find that CameraX just worked on a diversity of low-end devices. Lens in Google Go also has a tight APK size budget: with one of our early builds, they were able to integrate the camera stack into their application for less than 30 kilobytes.

Next, image capture. Image capture allows you to take a high-quality picture with the camera. Inside image capture, we do things like implement focus control and handle auto-exposure and auto-white-balance, sometimes called 3A. Additionally, within image capture we're able to optimize capture requests on a per-device basis if necessary. Let's take a look at how that looks; it follows the same pattern as our previous use cases. First we create a config. Now, we

know that getting rotation right on devices can be tricky: getting portrait mode and landscape mode just right on a variety of devices is hard. What we've done here is reduce the problem to having the application specify its current display rotation. Internally, CameraX then looks at the rest of the transforms on the device and makes sure the output does the right thing for your application. For example, in the case of image capture, we understand the set of transforms and make sure the EXIF metadata is set correctly. Then we create the actual image capture use case itself, and we bind to the lifecycle as before. Here's an example of binding all three use cases together: preview, image analysis, and now image capture. So now, when the activity starts, the preview comes on screen, your analysis, if any, is running, and your application is ready to take a picture.

But what about step two, attaching the output? This is another example of a use-case-specific method. For taking a picture, it might be pretty common for the user to tap a button on screen; that button click can invoke a function, and within that function you simply call the image capture's takePicture method. Here I show first preparing a file, which will be the destination of the image, and then calling takePicture. With the takePicture function you specify the target destination, the file itself, and you can also specify what to do after the image is captured by providing a listener: what to do on error, and what to do after the image has been saved.
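A sketch of the image-capture steps, with all three use cases bound together, might look like the following. This assumes the `preview` and `imageAnalysis` objects from the earlier sketches, an Activity context, and listener callback signatures as they appeared in the alpha release; the `onCaptureClick` function name and `photo.jpg` filename are illustrative, and the code only compiles inside an Android project.

```kotlin
import java.io.File
import androidx.camera.core.CameraX
import androidx.camera.core.ImageCapture
import androidx.camera.core.ImageCaptureConfig

// Step 1: configure, passing the current display rotation so CameraX
// can account for device transforms and set the EXIF metadata correctly.
val captureConfig = ImageCaptureConfig.Builder()
    .setTargetRotation(windowManager.defaultDisplay.rotation)
    .build()
val imageCapture = ImageCapture(captureConfig)

// Step 3: bind all three use cases to the lifecycle at once. When the
// activity starts, the preview comes on screen, analysis runs, and the
// app is ready to take a picture.
CameraX.bindToLifecycle(this, preview, imageAnalysis, imageCapture)

// Step 2: a use-case-specific method, invoked e.g. from a button click.
fun onCaptureClick() {
    // Prepare the destination file, then take the picture; the listener
    // says what to do on error or after the image has been saved.
    val file = File(externalMediaDirs.first(), "photo.jpg")
    imageCapture.takePicture(file, object : ImageCapture.OnImageSavedListener {
        override fun onImageSaved(file: File) {
            // picture saved; update the UI
        }
        override fun onError(
            error: ImageCapture.UseCaseError, message: String, cause: Throwable?
        ) {
            // handle the failure
        }
    })
}
```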

I'm really happy that we've been able to partner with a company called SNOW, and we'll demo that today. SNOW is an application based in Korea with a really large, multi-million-user install base, and Trevor is going to help us demo this again. SNOW is a really fun app for taking selfies. This demo is running on a Samsung S10 Plus, and it's showing all the different use cases we talked about: preview to get the image on screen, and image analysis to detect the face and then render an overlay. And now the whole gang is going to come up; when we're ready, let's see if we can get a picture that we like with all the foxes — and we can go ahead and use image capture, and that takes a photo of these guys. So this is a great example, with what we have today, of using all three use cases together to create a fully featured application for taking fun selfies. Working with SNOW, they expressed the benefits of CameraX to us: they found it easy to use, they appreciated that we managed the camera lifecycle, and they also appreciated that we handled camera threading internally to CameraX, without the need for that in their application.

So in summary, we've talked about the use cases: preview, image analysis, and image capture. We talked about a programming model built around three steps, and how, with just preview and image analysis, that can make things easier — for example, for hooking up to ML, as in the Lens in Google Go example. And finally, we've shown that what we have today is flexible enough to create a fully featured camera-based application. What I've shown today is a starting point, a glimpse into the direction we're taking with the CameraX API and what is available today, and I'm really happy to have you try it out. Thanks. [Applause]

Thanks, James. We have one more thing to talk about. There are many new camera capabilities that are typically part of the native camera app — for example portrait, night, HDR, and beauty modes. We've heard from many of you wanting access to these device-specific capabilities, and we're excited to share that the CameraX extensions enable just that. We've partnered with some of the top manufacturers across the world — Samsung, Huawei, LG, and Motorola — and you'll start seeing devices supporting these extensions this summer. With Huawei, you'll be able to see the HDR and portrait effects on four existing device models that will be upgraded this summer. And best of all, it's two lines of code: if a device doesn't support a specific capability, the boolean returns false and enabling it is a no-op. There's no manufacturer-specific coding required on your part; CameraX handles all of that for you.
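The "two lines of code" pattern might look roughly like this sketch, using the HDR extender from the alpha extensions artifact as an example. The exact class and method names here are my reading of the alpha API and should be checked against the version you're using; the fragment only compiles inside an Android project with the extensions dependency.

```kotlin
import androidx.camera.core.ImageCapture
import androidx.camera.core.ImageCaptureConfig
import androidx.camera.extensions.HdrImageCaptureExtender

// Build the HDR extender on top of an ordinary image-capture config.
val builder = ImageCaptureConfig.Builder()
val hdr = HdrImageCaptureExtender.create(builder)

// The advertised two lines: enable HDR only if this device supports it.
// On unsupported devices the check returns false and nothing happens —
// no manufacturer-specific code needed.
if (hdr.isExtensionAvailable) {
    hdr.enableExtension()
}

val imageCapture = ImageCapture(builder.build())
```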

So we're excited to show you a demo on the Samsung S10 Plus, and I'll welcome Franklin back up on stage. The app Franklin is running is the CameraX sample app. Franklin is going to take two images, a picture in the normal mode and a picture in the HDR mode, and we'll compare the two. In the HDR image you'll see that the light coming out of the glass door is handled better, and the podium here, quite dark in the normal picture, is better lit in the HDR picture. All of this functionality can be added very simply within your application. Thank you, Franklin.

Here's a comparison of images taken using CameraX: on the left you'll see HDR off, and on the right HDR on. And we have one more: Samsung will be the first manufacturer to bring night mode images to developers. On the left you'll see an image in low-light conditions, and it's grainy; on the right, with night mode enabled, the image is sharp and the colors are more vivid. Imagine the experiences you can bring to your users when you enable these extensions.

We have one more demo. This is YouCam Perfect, a camera-first application based in Taiwan with hundreds of millions of installs. This demo is on the Huawei Mate 20 Pro, and it's running the CameraX extensions both in the viewfinder and in image capture. So let's jump to the phone mode — it's showing the desktop... there you go, perfect. In the app, all it takes is two lines of code for CameraX to enable the extensions, and you'll see that behind me the background is actually blurred out. Let's try a glam shot. Cool. Underneath, it's using all the device-specific capabilities; there's no computational photography that you have to do. This type of behavior can be enabled in all your applications. Thank you.

Here are some of the benefits they expressed to us when using CameraX: they get access to a lot of new features that are the same as in the native camera app, and best of all, it still reduces the lines of code — seventy percent fewer lines compared to Camera2. All of this on top of getting a consistent experience across devices and an easy-to-use API.

So let's recap what we've talked about in this session today. CameraX is backwards compatible to Android L, working with 90% of devices on the market. It provides consistent behavior across a number of devices, and we do a lot of the testing for you with an automated CameraX test lab. The API is easy to use and lifecycle-aware, and finally, with the new extensions capabilities, you'll be able to try a lot of new effects on newer devices. We sincerely hope that this is the API and the changes that you're looking for, and we really value and look forward to your feedback. I want to thank some of the early developers that partnered with us on CameraX; they've provided us a lot of guidance and constructive feedback.

You can download CameraX today and add it to your application just like other Jetpack libraries. Please share your feedback with us, either by joining the Google Group or by tagging us on Stack Overflow. You can meet us in the sandbox, where you'll be able to see some of the live demos we did, or you can reach us in office hours tomorrow. Thank you very much for attending. [Music]

