Last year, Google revealed its intention to develop Augmented Reality services at its I/O 2017 keynote. Now Google has rolled out a special video segmentation feature to three YouTube creators in a beta version. The technology is integrated into YouTube's Stories feature.
Creators can replace or modify the background, effortlessly increasing a video's production value without time-consuming manual editing or a green-screen studio environment. They can also use it to convey a particular emotion or transport themselves to a different location to increase the impact of their message. The feature is even suitable for mobile phones.
Google is reported to have used the much-hyped machine-learning capabilities of its software to develop this feature.
The developers annotated thousands of images that captured a wide spectrum of foreground poses and background settings. Annotations consisted of pixel-accurate locations of foreground elements such as hair, glasses, neck, skin, and lips, plus a general background label, achieving a cross-validation result of 98 percent Intersection-Over-Union (IOU) against human-annotator quality, Google's research blog said.
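To make the IOU figure concrete: it measures how much a predicted segmentation mask overlaps a human-drawn ground-truth mask, as the ratio of their intersection to their union. A minimal sketch in Python (the function name and masks are illustrative, not Google's code):

```python
import numpy as np

def iou(pred, truth):
    """Intersection-Over-Union of two boolean foreground masks."""
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return intersection / union

# Toy 2x2 masks: they agree on one foreground pixel,
# and the prediction adds one extra pixel -> IOU = 1/2.
pred = np.array([[True, True], [False, False]])
truth = np.array([[True, False], [False, False]])
print(iou(pred, truth))  # 0.5
```

A score of 98 percent thus means the model's masks almost perfectly coincide with what human annotators would draw.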
“Video segmentation is a widely used technique that enables movie directors and video content creators to separate the foreground of a scene from the background and treat them as two different visual layers. By modifying or replacing the background, creators can convey a particular mood, transport themselves to a fun location or enhance the impact of the message. However, this operation has traditionally been performed as a time-consuming manual process or requires a studio environment with a green screen for real-time background removal. In order to enable users to create this effect live in the viewfinder, we designed a new technique that is suitable for mobile phones,” the blog added.
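The background replacement the blog describes amounts to compositing: once the segmentation produces a per-pixel foreground mask, each output pixel is blended from the original frame and the new background. A hedged sketch of that final step (names and array shapes are my own assumptions, not Google's pipeline):

```python
import numpy as np

def replace_background(frame, mask, background):
    """Composite a frame over a new background using a foreground mask.

    frame, background: HxWx3 uint8 images of the same size.
    mask: HxW array of foreground probabilities in [0, 1].
    """
    alpha = mask[..., None]  # add a channel axis to broadcast over RGB
    blended = alpha * frame + (1 - alpha) * background
    return blended.astype(frame.dtype)

# Toy example: a white 2x2 frame over a black background,
# with the left column marked as foreground.
frame = np.full((2, 2, 3), 255, dtype=np.uint8)
background = np.zeros((2, 2, 3), dtype=np.uint8)
mask = np.array([[1.0, 0.0], [1.0, 0.0]])
out = replace_background(frame, mask, background)
```

Because the mask can hold fractional values, edges such as hair blend smoothly rather than cutting out harshly, which is what distinguishes this from a simple green-screen key.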
The feature is still limited to beta users but will be rolled out to all users very soon.
Google also plans to integrate this feature into its Augmented Reality services.