The technique lets users "drag" points of an image so they precisely reach target points, all in an interactive manner.
A group of researchers recently published a paper presenting DragGAN, an innovative technique for manipulating generated images. The method allows users to interactively drag points on an image and have them land precisely at chosen target points.
The method comprises two key components: feature-based motion supervision, which drives each handle point toward its target position, and a novel point tracking approach that leverages discriminative GAN features to keep localizing the handle points as the image changes.
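To make those two components more concrete, here is a minimal PyTorch sketch of how they might look in code. This is an illustration under assumptions, not the authors' implementation: `feat` stands in for an intermediate feature map of the generator, `handle` and `target` are pixel coordinates, and `sample_features`, `motion_supervision_loss`, and `track_handle` are hypothetical helper names.

```python
import torch
import torch.nn.functional as F

def sample_features(feat, ys, xs):
    """Bilinearly sample a (1, C, H, W) feature map at pixel coordinates (ys, xs)."""
    _, _, H, W = feat.shape
    grid = torch.stack(
        [xs / (W - 1) * 2 - 1, ys / (H - 1) * 2 - 1], dim=-1
    ).unsqueeze(0)                                   # (1, h, w, 2), normalized to [-1, 1]
    return F.grid_sample(feat, grid, align_corners=True)

def motion_supervision_loss(feat, handle, target, radius=3):
    """Pull features in a small patch around the handle one step toward the target.

    Minimizing this loss with respect to the latent code nudges the image
    content at the handle point in the direction of the target point.
    """
    (hy, hx), (ty, tx) = handle, target
    d = torch.tensor([ty - hy, tx - hx], dtype=torch.float32)
    d = d / (d.norm() + 1e-8)                        # unit step toward the target

    ys, xs = torch.meshgrid(
        torch.arange(hy - radius, hy + radius + 1, dtype=torch.float32),
        torch.arange(hx - radius, hx + radius + 1, dtype=torch.float32),
        indexing="ij",
    )
    src = sample_features(feat, ys, xs)              # patch at the handle
    dst = sample_features(feat, ys + d[0], xs + d[1])  # same patch, shifted toward target
    # The source patch is detached so gradients only push the shifted features.
    return F.l1_loss(dst, src.detach())

def track_handle(feat, init_feat, handle, radius=6):
    """Re-localize the handle by nearest-neighbor search in feature space.

    Finds the pixel inside a small window around the current handle whose
    feature vector is closest (L1) to the handle's feature at initialization.
    """
    hy, hx = handle
    _, C, H, W = feat.shape
    y0, y1 = max(hy - radius, 0), min(hy + radius + 1, H)
    x0, x1 = max(hx - radius, 0), min(hx + radius + 1, W)
    patch = feat[0, :, y0:y1, x0:x1]                      # (C, h, w)
    dist = (patch - init_feat.view(C, 1, 1)).abs().sum(0)  # (h, w)
    dy, dx = divmod(dist.argmin().item(), patch.shape[-1])
    return (y0 + dy, x0 + dx)
```

In an editing loop, `feat` would typically come from an intermediate generator layer, the motion supervision loss would be backpropagated into the latent code, and `track_handle` would re-localize each handle point after every optimization step so the drag keeps following the right content.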
With DragGAN, users can flexibly deform images while keeping precise control over where pixels end up. This makes it possible to adjust the pose, shape, expression, and layout of diverse categories such as animals, cars, humans, and landscapes.
"As these manipulations are performed on the learned generative image manifold of a GAN, they tend to produce realistic outputs even for challenging scenarios such as hallucinating occluded content and deforming shapes that consistently follow the object's rigidity," commented the team. "Both qualitative and quantitative comparisons demonstrate the advantage of DragGAN over prior approaches in the tasks of image manipulation and point tracking."
Learn more here.