Until now, this kind of software prowess was almost unattainable for nearly every phone maker, as they lack the sort of training data Google's services benefit from. But not anymore. In a surprising move, Google has announced that it's making the technology behind the Pixel 2's Portrait Mode feature open source. This essentially means anyone can build applications by implementing the underlying framework Google has employed on its phones. The model — called DeepLab-v3+ — is now included in Google's open-source machine learning library, TensorFlow. "We hope that publicly sharing our system with the community will make it easier for other groups in academia and industry to reproduce and further improve upon state-of-art systems, train models on new datasets, and envision new applications for this technology," wrote Liang-Chieh Chen and Yukun Zhu, software engineers at Google Research, in a blog post.
Google researchers also outlined a few more details about how DeepLab-v3+ functions. It's a semantic image segmentation model, which, in layman's terms, means it assigns a particular label such as "road" or "person" to every pixel in an image. Because it associates these labels with each individual pixel, the resulting outlines turn out much more accurate than those of similar solutions. This doesn't mean OEMs will simply be able to add Google's Portrait Mode to their phones through an update, however. They will still be required to tune the model and parse everything the algorithms produce into something more meaningful. That itself can be a strenuous process, especially for companies that don't primarily deal with advanced software services. Big guns like Samsung and Huawei will also probably continue polishing their own implementations instead of adopting Google's. Both of them have been working toward cutting as many dependencies on the Android maker as possible over the past year or two.
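To illustrate what "a label for every pixel" means in practice, here is a minimal sketch of the final step of semantic segmentation: a model like DeepLab-v3+ emits one score per class for every pixel, and the predicted label map is the per-pixel argmax over those scores. The class names and the tiny score array below are fabricated for illustration, not real DeepLab output; only the argmax step reflects how such models are typically decoded.

```python
import numpy as np

# Hypothetical label set for illustration. Real DeepLab-v3+ checkpoints are
# trained on datasets such as PASCAL VOC or Cityscapes, with their own classes.
CLASS_NAMES = ["background", "road", "person"]

def segment(logits: np.ndarray) -> np.ndarray:
    """Turn per-pixel class scores of shape (H, W, C) into a label map (H, W).

    The semantic segmentation prediction is simply the highest-scoring
    class at each pixel.
    """
    return np.argmax(logits, axis=-1)

# Toy 2x2 "image" with made-up scores for the three classes above.
logits = np.array([
    [[0.1, 0.8, 0.1], [0.1, 0.8, 0.1]],    # top row: "road" scores highest
    [[0.2, 0.1, 0.7], [0.9, 0.05, 0.05]],  # "person", then "background"
])
labels = segment(logits)
print([[CLASS_NAMES[i] for i in row] for row in labels.tolist()])
# → [['road', 'road'], ['person', 'background']]
```

Producing a Portrait Mode effect would then be a matter of using the "person" mask to keep the subject sharp while blurring everything else — which is exactly the tuning work OEMs would still have to do themselves.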