International Journal of Innovative Research in Computer Science and Technology
Year: 2023, Volume: 11, Issue: 6, Pages: 13-17
Online ISSN: 2350-0557
DOI: 10.55524/ijircst.2023.11.6.3
DOI URL: https://doi.org/10.55524/ijircst.2023.11.6.3
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0) (http://creativecommons.org/licenses/by/4.0)
Y. Prudhvi, T. Adinarayana, T. Chandu, S. Musthak, G. Sireesha
In computer graphics and animation, generating lifelike, expressive talking face animations has historically required extensive 3D data and complex facial motion capture systems. This project presents an approach that instead produces realistic 3D motion coefficients for stylized talking face animation driven by a single reference image synchronized with audio input. Leveraging deep learning techniques, including generative models, image-to-image translation networks, and audio processing methods, the methodology bridges the gap between a static image and dynamic, emotionally rich facial animation. The aim is to synthesize talking face animations with accurate lip synchronization and natural eye blinking, achieving a high degree of realism and expressiveness in computer-generated character interaction.
Student, Department of Information Technology, Vasireddy Venkatadri Institute of Technology, Nambur, Guntur, India
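The abstract describes a pipeline in which audio features are mapped to 3D motion coefficients that then drive a renderer warping the single reference image. The sketch below is a minimal, hypothetical illustration of the audio-to-coefficient stage only, assuming a mel-spectrogram input and 3DMM-style expression/pose coefficients; the module names, layer sizes, and coefficient dimension are illustrative assumptions, not the authors' actual architecture.

```python
# Hypothetical sketch: audio features -> per-frame 3D motion coefficients.
# All names and dimensions are assumptions for illustration; the paper's
# actual network and coefficient parameterization may differ.
import torch
import torch.nn as nn


class AudioToMotion(nn.Module):
    """Maps a mel-spectrogram window to per-frame motion coefficients
    (expression + head pose), which a separate image-to-image renderer
    would use to animate the reference image."""

    def __init__(self, n_mels=80, coeff_dim=70, hidden=256):
        super().__init__()
        # Convolutional audio encoder over the time axis.
        self.audio_encoder = nn.Sequential(
            nn.Conv1d(n_mels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Recurrent head to capture temporal dynamics (lip motion, blinks).
        self.temporal = nn.GRU(hidden, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, coeff_dim)

    def forward(self, mel):                 # mel: (B, n_mels, T)
        feats = self.audio_encoder(mel)     # (B, hidden, T)
        feats = feats.transpose(1, 2)       # (B, T, hidden)
        out, _ = self.temporal(feats)       # per-frame temporal features
        return self.proj(out)               # (B, T, coeff_dim) coefficients


# Usage with a dummy clip of 100 audio frames.
model = AudioToMotion()
mel = torch.randn(1, 80, 100)
coeffs = model(mel)
print(coeffs.shape)                         # torch.Size([1, 100, 70])
```

In pipelines of this kind, the predicted coefficient sequence would then condition a generative renderer that warps the reference face frame by frame; that rendering stage is not sketched here.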