Volume: 11
Issue: 6
Year: 2023
DOI: 10.55524/ijircst.2023.11.6.3
DOI URL: https://doi.org/10.55524/ijircst.2023.11.6.3
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0) (http://creativecommons.org/licenses/by/4.0)
Y. Prudhvi, T. Adinarayana, T. Chandu, S. Musthak, G. Sireesha
In computer graphics and animation, generating lifelike and expressive talking-face animations has historically required extensive 3D data and complex facial motion-capture systems. This project presents an alternative approach whose primary goal is to produce realistic 3D motion coefficients for stylized talking-face animations driven by a single reference image synchronized with an audio input. Leveraging state-of-the-art deep learning techniques, including generative models, image-to-image translation networks, and audio processing methods, the methodology bridges the gap between a static image and dynamic, emotionally rich facial animation. The ultimate aim is to synthesize talking-face animations with seamless lip synchronization and natural eye blinking, achieving a high degree of realism and expressiveness in computer-generated character interaction.
Student, Department of Information Technology, Vasireddy Venkatadri Institute of Technology, Nambur, Guntur, India
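The abstract describes a three-stage pipeline: audio processing extracts frame-aligned features from the speech signal, a generative model maps those features (together with the identity in the reference image) to 3D motion coefficients such as expression, head pose, and blink, and an image-to-image translation network renders the animated frames. The sketch below is a minimal, hypothetical illustration of that data flow only; every function, class, and parameter name is an assumption introduced here for clarity, not the authors' released code, and each stage is replaced by a toy stand-in so the script runs end to end.

```python
import numpy as np
from dataclasses import dataclass


@dataclass
class MotionCoefficients:
    """Per-frame motion coefficients in the spirit of a 3D morphable model."""
    expression: np.ndarray  # (num_frames, 1) toy "mouth openness" channel
    head_pose: np.ndarray   # (num_frames, 6) rotation + translation (zeros here)
    blink: np.ndarray       # (num_frames, 1) eyelid openness


def extract_audio_features(wav: np.ndarray, sample_rate: int, fps: int = 25) -> np.ndarray:
    """Toy audio processing: per-video-frame RMS energy of the waveform."""
    hop = sample_rate // fps
    num_frames = max(1, len(wav) // hop)
    frames = wav[: num_frames * hop].reshape(num_frames, hop)
    return np.sqrt((frames ** 2).mean(axis=1, keepdims=True))


def predict_motion(audio_features: np.ndarray) -> MotionCoefficients:
    """Toy stand-in for the generative motion model: louder audio opens the
    mouth wider, and a sparse periodic signal imitates natural eye blinks."""
    n = len(audio_features)
    expression = audio_features / (audio_features.max() + 1e-8)
    blink = (np.sin(np.arange(n) / 25.0 * 2 * np.pi) > 0.95).astype(float)[:, None]
    head_pose = np.zeros((n, 6))
    return MotionCoefficients(expression, head_pose, blink)


def render_frames(reference_image: np.ndarray, motion: MotionCoefficients) -> list:
    """Toy stand-in for the image-to-image translation renderer: it simply
    copies the reference image once per frame; a real renderer would warp the
    face according to the predicted coefficients."""
    return [reference_image.copy() for _ in range(len(motion.expression))]


if __name__ == "__main__":
    sample_rate = 16000
    wav = np.random.randn(sample_rate * 2)                # 2 s of placeholder audio
    reference_image = np.zeros((256, 256, 3), np.uint8)   # placeholder portrait
    features = extract_audio_features(wav, sample_rate)
    motion = predict_motion(features)
    frames = render_frames(reference_image, motion)
    print(f"Rendered {len(frames)} frames from audio-driven motion coefficients")
```

In the actual system described by the abstract, the placeholder stages would be replaced by learned components (for example, a spectrogram-based audio encoder, a generative coefficient predictor, and a neural renderer), but the interfaces between stages follow the same image-plus-audio-in, video-frames-out structure.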