
Tensor is the name of Google’s own mobile chip.


Google said on Monday that its upcoming flagship phones, the Pixel 6 and Pixel 6 Pro, will be powered by a custom-built Tensor processor. The chip is designed for all the Android-based sorcery in future Pixel phones, and it was co-designed with machine learning and artificial intelligence researchers over the last four years.

The decision comes at a time when IT behemoths are increasingly favoring proprietary chip designs for their phones, laptops, and other devices.

According to Google’s hardware executive Rick Osterloh, the company’s ability to innovate with other companies’ chips has reached its limit. “The problem always boils down to hardware capability,” Osterloh says. “Are you able to actually do the processing necessary to really run sophisticated advanced AI models? Unquestionably, you run into different constraints with off-the-shelf technology. So several years ago, we decided if we’re going to really innovate for the future, we’re going to need to build our own system.”

When Osterloh says “several years ago,” he’s referring to Google’s first computer chip, which was unveiled in 2016. That chip is known as the Tensor Processing Unit, or TPU, named after Google’s open source machine learning framework, TensorFlow. TPUs are application-specific chips designed to accelerate machine learning, but they were built for AI servers housed in massive data centers. The new Tensor processor will be built directly into the Pixel device you’re holding or carrying in your pocket, at least once the Pixel 6 phones arrive in October.

What does this signify for the next generation of Pixels?

According to Osterloh, nearly every current Pixel feature powered by AI and machine learning, such as Night Sight or Portrait mode in the camera app, will improve while using fewer CPU resources. The chip will also make possible things that were previously impossible.

His first example is a snapshot of a Google engineer’s child waving at the camera indoors. It was taken with the Pixel 6 Pro, but with all AI and machine learning features turned off. The lighting is poor, the child is moving, and so is the person holding the camera, so the child’s face and waving hand come out blurry. Osterloh then compared it to a photo taken with the AI and machine learning models enabled; the child’s features became much more distinct. “What we want to do is give the user the intent they have and capture this really fun moment,” he explains.

To achieve this, the Pixel takes a photo with a regular exposure from the main sensor, followed by a snap with a rapid exposure from the ultrawide camera. It blends the two, keeping the ultrawide’s sharpness along with the main sensor’s truer colors and lower noise. The Tensor chip then compensates for motion, such as the subject’s hand wave and the photographer’s hand-induced camera shake. It also uses a face detection model to recognize the subject’s face and make sure it’s in focus.
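To make that dual-capture idea concrete, here is a minimal Python sketch of that kind of fusion. It is not Google’s actual pipeline: it roughly aligns a short-exposure frame to a normal-exposure frame with optical flow, then takes luminance (detail) from the sharp frame and color from the normal one. The function name `fuse_frames` and all parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def fuse_frames(normal_bgr: np.ndarray, short_bgr: np.ndarray) -> np.ndarray:
    """Blend a normal-exposure frame (good color) with a short-exposure frame
    (less motion blur). Both are uint8 BGR images of the same size."""
    # Estimate dense motion from the normal frame to the short frame.
    g_normal = cv2.cvtColor(normal_bgr, cv2.COLOR_BGR2GRAY)
    g_short = cv2.cvtColor(short_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g_normal, g_short, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Warp the short-exposure frame onto the normal frame's geometry,
    # compensating for subject and hand motion between the two captures.
    h, w = g_normal.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    short_aligned = cv2.remap(short_bgr, map_x, map_y, cv2.INTER_LINEAR)

    # Keep luminance (detail) from the sharp short exposure and
    # chrominance (color) from the normal exposure, then convert back.
    normal_ycc = cv2.cvtColor(normal_bgr, cv2.COLOR_BGR2YCrCb)
    short_ycc = cv2.cvtColor(short_aligned, cv2.COLOR_BGR2YCrCb)
    fused = normal_ycc.copy()
    fused[..., 0] = short_ycc[..., 0]
    return cv2.cvtColor(fused, cv2.COLOR_YCrCb2BGR)
```

A real pipeline would go further, for example weighting the merge per pixel and, as described above, running face detection so sharpness is prioritized on the subject’s face.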

Beyond improving camera performance, the Tensor chip touches nearly every other part of the Pixel. With an improved version of Google’s Titan M security coprocessor running alongside the Tensor chip, there will be security improvements as well, according to Google. (The company says it plans to publish white papers with more security details around the time of the Pixel’s launch.)

Osterloh also showed off two other Tensor-related updates: Live Transcribe, which automatically transcribes videos on the go, and voice dictation in Gboard, Google’s keyboard software.
