SIGGRAPH brings some striking news: it appears that AI software can now produce convincing fakes of real people. Computer scientists from the University of Washington generated highly realistic fake videos of former president Barack Obama using existing audio and video clips of him. A model like this could also help create realistic digital avatars of a person for virtual or augmented reality.
The scientists suggest the model could mimic almost anyone by analyzing images of them collected from the Internet. The researchers state that one day it may be relatively easy to create such a model of anybody, given the untold numbers of digital photos of people already online.
The researchers chose Obama for their latest work because hours of high-definition video of him are available online. A neural net analyzed millions of frames of video to determine how elements of Obama's face moved as he talked, such as his lips, teeth, and the wrinkles around his mouth and chin.
In an artificial neural network, components known as artificial neurons are fed data and work together to solve a problem, such as identifying faces or recognizing speech. The system then alters the pattern of connections among those neurons to change the way they interact, and the network tries solving the problem again.
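The training loop described above can be sketched in miniature. This is only a toy illustration of the general idea, not the paper's actual network: a tiny two-layer net learns the XOR function by repeatedly adjusting its connection weights to reduce its error.

```python
import numpy as np

# Toy illustration of neural-net training (not the paper's model):
# neurons are fed data, and each pass "alters the pattern of
# connections" (the weights) so the network solves the task better.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Training data: the XOR truth table stands in for a real task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random initial connections: 2 inputs -> 8 hidden neurons -> 1 output.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

for _ in range(20000):
    # Forward pass: neurons combine inputs through weighted connections.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to shrink the error,
    # then try the problem again on the next iteration.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(out).ravel())  # predictions after training
```

The real system works the same way in spirit, just at vastly larger scale and on video frames rather than four toy examples.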
In the latest study, the neural net learned which mouth shapes are linked to which sounds. The team then took audio clips and dubbed them over a different video, synthesizing mouth movements to match the new speech. What sets this model apart is that it can learn from the millions of hours of video that already exist on the Internet and elsewhere.
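The sound-to-mouth-shape association can be illustrated with a deliberately simplified sketch. The vectors and the nearest-neighbor lookup below are stand-ins invented for this example; the actual paper uses a recurrent neural network and real audio features.

```python
import numpy as np

# Toy stand-in for the core idea: learn an association between audio
# features and mouth shapes, then reuse it to animate a face for a new
# audio track. All data here is synthetic; a nearest-neighbor lookup
# plays the role the neural network plays in the real system.
rng = np.random.default_rng(1)

# Pretend training data: per video frame, an audio feature vector and
# the mouth-shape parameters observed at the same instant.
train_audio = rng.normal(size=(500, 8))   # e.g. spectral features
train_mouth = rng.normal(size=(500, 3))   # e.g. lip opening/width/jaw

def mouth_for_audio(audio_frame):
    """Return the mouth shape linked to the closest known sound."""
    dists = np.linalg.norm(train_audio - audio_frame, axis=1)
    return train_mouth[np.argmin(dists)]

# "Dubbing": drive the mouth with new audio, frame by frame.
new_audio = rng.normal(size=(10, 8))
animated_mouth = np.array([mouth_for_audio(f) for f in new_audio])
print(animated_mouth.shape)  # one mouth shape per audio frame
```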
There are still limitations, though. When Obama tilted his face away from the camera in a target video, imperfect 3-D modeling of his face could cause parts of his mouth to be superimposed outside the face, onto the background.
Emotion is also a problem: the synthesized face can appear too serious for casual speech or too cheerful for serious speech.
Imagine the possibility of faking anyone's speech. Sounds terrible, right? However, the team suggested ways such fakes might be detected; for example, synthesized videos tend to blur mouths and teeth. But those telltale artifacts could disappear as the technique is perfected.
Make sure to read the full paper from the University of Washington team here.