In a paper titled "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models," researchers at the Samsung AI Center in Moscow and the Skolkovo Institute of Science and Technology have revealed how realistic fake videos can be created from just a few source images. While the capability to create convincing deepfakes is not entirely new, the paper demonstrates that it is easier and quicker than previously imagined.
Currently, generating a realistic deepfake requires a large dataset of pictures to train the model. The algorithm developed by the Samsung researchers, however, can create convincing animated portraits from a small dataset of just 1 to 32 images. It does this by training the model on "landmark" facial features, which include the eyes, the shape of the mouth, and the length and shape of the nose bridge.
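To give a rough sense of what "landmarks" mean in practice, the sketch below extracts 68 facial landmark points from a frame and draws them onto a blank canvas, producing the kind of stick-figure landmark image that can condition a generator network. This is only an illustrative sketch using the dlib and OpenCV libraries, not the researchers' actual pipeline, and it assumes the standard shape_predictor_68_face_landmarks.dat model file and a hypothetical driving_frame.jpg input are available.

```python
# Illustrative sketch: extract facial landmarks with dlib and rasterize them
# into an image that could serve as driving input for a talking-head generator.
# Assumes dlib, opencv-python, and the 68-point shape predictor model file.
import dlib
import numpy as np
import cv2

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_landmarks(image_bgr):
    """Return a (68, 2) array of landmark coordinates for the first detected face."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    return np.array([(p.x, p.y) for p in shape.parts()])

def rasterize_landmarks(landmarks, size):
    """Draw the landmark points as white dots on a blank canvas of shape (h, w)."""
    canvas = np.zeros((size[0], size[1], 3), dtype=np.uint8)
    for (x, y) in landmarks:
        cv2.circle(canvas, (int(x), int(y)), 2, (255, 255, 255), -1)
    return canvas

# Usage: the landmark image from each driving frame, combined with an embedding
# of the few source photos, is what a few-shot generator would be conditioned on.
frame = cv2.imread("driving_frame.jpg")  # hypothetical input frame
points = extract_landmarks(frame)
if points is not None:
    landmark_image = rasterize_landmarks(points, frame.shape[:2])
    cv2.imwrite("landmark_image.png", landmark_image)
```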
To demonstrate their work, the researchers created living portraits of the Mona Lisa, Albert Einstein, Fyodor Dostoyevsky, Marilyn Monroe, and others, using only one source image for each. The results already look remarkable and can be improved further with more source images.
The paper says the technology has "practical applications for telepresence, including videoconferencing and multi-player games, as well as special effects industry." While that sounds exciting, realistic fake videos also have the potential to do serious harm to individuals and communities. Even if the companies investing in this space have no nefarious goals, there is no guarantee their technology will be used only for its intended purposes.