Ask a Techspert: What is on-device processing?

Nov 9, 2024
Learn how on-device processing actually works, and how it powers features across Google products like Pixel, Nest and more.


Every time a new Pixel phone comes out, you might hear that “on-device processing” makes its cool new features possible. Just take a look at the new Pixel 9 phones — things like Pixel Studio and Call Notes run “on device.” And it’s not just phones: Nest cameras, Pixel smartwatches and Fitbit devices also use this whole “on-device processing” thing. Given the devices that use it and the features it’s powering, it sounds pretty important.

It’s safe to assume that the, er, processing, is happening on the, uh…well, the device. But to get a better understanding of what that means, we talked to Trystan Upstill, who has been at Google for nearly 20 years working on engineering teams across Android, Google News and Search.

You were on a team that helped develop some of the exciting features that shipped with our new Pixel devices — can you tell me a little about what you worked on?

Most recently, I worked within Android, where I led a team that focused on melding Google's various technology stacks into an amazing experience that's meaningful to the user, then figuring out how to build it and ship it.

Since we’re improving technologies and introducing new ones quite often, it seems like that would be a never-ending job.

Exactly! In recent years, there’s been this explosion in generative AI capabilities. At first when we started thinking about running large language models on devices, we thought it was kind of a joke — like, “Sure we can do that, but maybe by 2026.” But then we began scoping it out, and the technology performance evolved so quickly that we were able to launch features using Gemini Nano, our on-device model, on Pixel 8 Pro in December 2023.

That’s what I want to know more about: “on-device processing.” Let’s break it down and start with what exactly “processing” means.

The main processor, or system-on-a-chip (SoC), in your device has a number of what are called Processing Units, designed specifically to handle the tasks you want to do with that device. That's why you'll see the chip (like the Tensor chip found in Pixels) referred to as a "system-on-a-chip": there's not just one processor, but several processing units, memory, interfaces and much more, all together on one piece of silicon.

Let’s use Pixel smartphones as an example: The processing units include a Central Processing Unit, or CPU, as the main “engine” of sorts; a Graphics Processing Unit, or GPU, which renders visuals; and today we also have a Tensor Processing Unit, or TPU, specially designed by Google to run AI/ML workloads on a device. These all work together to help your phone get things done — aka, processing.

For example, when you take photos, you’re often using all elements of your phone’s processing power to good effect. The CPU will be busy running core tasks that control what the phone is doing, the GPU will be helping render what the lens is seeing and, on a premium Android device like a Pixel, there's also a lot of work happening on the TPU to process what the optical lens sees to make your photos look awesome.
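To make that division of labor concrete, here is a minimal sketch of how an Android app might hand the same machine-learning workload to different processing units using TensorFlow Lite delegates. The model file name, tensor shapes and the idea that the NNAPI path lands on an accelerator like the TPU are illustrative assumptions, not details from the interview.

```kotlin
// A sketch, not Google's production pipeline: the same model can run on the CPU,
// the GPU or (via NNAPI) an accelerator the system picks, such as a TPU.
import android.content.Context
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.GpuDelegate
import org.tensorflow.lite.nnapi.NnApiDelegate
import org.tensorflow.lite.support.common.FileUtil

fun buildInterpreter(context: Context, target: String): Interpreter {
    val model = FileUtil.loadMappedFile(context, "model.tflite") // placeholder asset name
    val options = Interpreter.Options()
    when (target) {
        "gpu" -> options.addDelegate(GpuDelegate())   // parallel, graphics-style work on the GPU
        "npu" -> options.addDelegate(NnApiDelegate()) // let Android route to an available accelerator
        else -> options.setNumThreads(4)              // plain CPU threads as the universal fallback
    }
    return Interpreter(model, options)
}

fun classify(interpreter: Interpreter, pixels: FloatArray): FloatArray {
    val output = Array(1) { FloatArray(1000) }        // placeholder output shape
    interpreter.run(arrayOf(pixels), output)          // same model, different silicon underneath
    return output[0]
}
```

The app describes the work once; which processing unit actually runs it depends on the delegate and on what the device's chip offers.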

Got it. “On-device” processing implies there’s off-device. Where is “off-device processing” happening, exactly?

Off-device processing happens in the cloud. Your device connects to the internet and sends your request to servers elsewhere, which perform the task, and then send the output back to your phone. So if we wanted to take that process and make it happen on device, we’d take the large machine learning model that powered that task in the cloud and make it smaller and more efficient so it can run on your device’s operating system and hardware.
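The interview doesn’t say which shrinking techniques Google uses, but one widely used approach is quantization: storing weights as 8-bit integers instead of 32-bit floats, which cuts the model’s memory footprint roughly four times. Here’s a toy sketch of just the core arithmetic.

```kotlin
// Toy 8-bit quantization: each float weight becomes a small integer plus a shared scale.
// Real converters do this per-tensor or per-channel with calibration data; this only
// shows why the stored model gets smaller while staying approximately accurate.
import kotlin.math.roundToInt

fun quantize(weights: FloatArray): Pair<ByteArray, Float> {
    val maxAbs = weights.maxOf { kotlin.math.abs(it) }.coerceAtLeast(1e-8f)
    val scale = maxAbs / 127f                              // map [-maxAbs, maxAbs] onto [-127, 127]
    val q = ByteArray(weights.size) { i ->
        (weights[i] / scale).roundToInt().coerceIn(-127, 127).toByte()
    }
    return q to scale
}

fun dequantize(q: ByteArray, scale: Float): FloatArray =
    FloatArray(q.size) { i -> q[i] * scale }               // approximate weights used at inference time

fun main() {
    val original = floatArrayOf(0.42f, -1.3f, 0.0051f, 2.7f)
    val (q, scale) = quantize(original)
    println(dequantize(q, scale).joinToString())           // close to, but not exactly, the originals
}
```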

What hardware makes that possible?

New, more powerful chipsets. For example, with the Pixel 9 Pro, that’s happening thanks to our SoC called Tensor G4. Tensor G4 enables these phones to run models like Gemini Nano — it’s able to handle these high-performance computations.

So basically, Tensor is designed specifically to run Google AI, which is also what powers a lot of Pixel’s new gen AI capabilities.

Right! And the generative AI features are definitely part of it, but there are lots of other things on-device processing makes possible, too. Rendering video, playing games, HDR photo editing, language translation — most everything you do with your phone. These are all happening on your phone, not being sent up to a server for processing.

TalkBack with Gemini, which analyzes images and reads descriptions out loud to blind or low-vision users, is an example of on-device processing that makes use of Tensor, Pixel’s system on a chip.

The computation your phone can do today is pretty incredible. Today's smartphones are thousands of times faster than early high-performance computers, even those that were the size of rooms. Back in the day, those high-performance computers were the state of the art in terms of data analysis, image processing, anomaly detection and early AI research. Now we can do this all on device, and it opens up all sorts of neat opportunities to build helpful features that use this processing capability.
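As a concrete example of the translation case mentioned above, ML Kit offers an on-device translation API: a compact model is downloaded once, and every translation after that runs locally with no round trip to a server. The language pair and callback wiring below are illustrative.

```kotlin
// A minimal sketch of on-device translation with ML Kit.
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

fun translateOnDevice(text: String, onResult: (String) -> Unit) {
    val options = TranslatorOptions.Builder()
        .setSourceLanguage(TranslateLanguage.ENGLISH)
        .setTargetLanguage(TranslateLanguage.SPANISH)
        .build()
    val translator = Translation.getClient(options)

    translator.downloadModelIfNeeded()                      // fetches the compact model the first time only
        .addOnSuccessListener {
            translator.translate(text)                      // inference happens on the phone
                .addOnSuccessListener { translated -> onResult(translated) }
                .addOnFailureListener { e -> onResult("translation failed: ${e.message}") }
        }
        .addOnFailureListener { e -> onResult("model download failed: ${e.message}") }
}
```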

Is on-device processing better than off-device?

Not necessarily. If you were to use Search entirely on-device, that would be really slow or really limited or both, because when you’re searching the web, you’re sort of looking for a needle in a haystack. To fit the entire web index on your phone would be too much! Instead, when you use Search, you’re tapping into the cloud and our data centers to access trillions of web pages to find what you’re looking for.

But if you want to perform a more specific task, then on-device processing is really useful. For starters, there’s latency — if something’s being processed directly on the device, you may get the result faster. Then there’s also the fact that features that are fully on device work without an internet connection, meaning better availability and reliability.

Finally, since the AI chip is in your pocket rather than the model being served from a cloud backend, apps can leverage those LLM capabilities for free.

All this said, there are distinct advantages to both: The cloud has more powerful models and can house lots of important data. Lots of your data, like photos, videos and more, sits in the cloud today. It also supports actions like searching massive repositories such as Drive, Gmail and Google Photos.
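Putting those trade-offs together, one common pattern, sketched below with hypothetical summarize functions rather than a real Google API, is to prefer the cloud model when the network is available and fall back to an on-device model when it isn't.

```kotlin
// A hedged sketch of hybrid routing between cloud and on-device models.
// Requires the ACCESS_NETWORK_STATE permission; the summarize* functions are placeholders.
import android.content.Context
import android.net.ConnectivityManager
import android.net.NetworkCapabilities

fun hasInternet(context: Context): Boolean {
    val cm = context.getSystemService(Context.CONNECTIVITY_SERVICE) as ConnectivityManager
    val caps = cm.getNetworkCapabilities(cm.activeNetwork) ?: return false
    return caps.hasCapability(NetworkCapabilities.NET_CAPABILITY_INTERNET)
}

fun summarize(context: Context, note: String): String =
    if (hasInternet(context)) {
        summarizeWithCloudModel(note)   // bigger model, needs a round trip to a server
    } else {
        summarizeOnDevice(note)         // smaller model, works offline with low latency
    }

// Hypothetical stand-ins for the two execution paths.
fun summarizeWithCloudModel(note: String): String = TODO("call a hosted model endpoint")
fun summarizeOnDevice(note: String): String = TODO("run a compact on-device model")
```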

I’m already pretty impressed with what my Pixel can do today, but from what you’re saying, I’d imagine it’s only going to get better.

Yes, the models we’re using to do these complex tasks on Android devices are getting more capable. And of course it’s not just about better models and better technology: We also put a lot of work and research into thinking about what’s actually going to benefit people. We don’t want to just introduce products because the on-device processing can handle it; we want to make sure it’s something that people want to use on their phones in their everyday lives.

The content of this article is sourced from Google's blog.