A Multi-Modal Deep Learning Framework for Identifying Deepfakes and Synthetic Media Using Visual Forensics
Abstract
As deepfakes grow more convincing and widespread, distinguishing authentic digital media from manipulated content is becoming increasingly difficult. This paper introduces a practical framework for detecting deepfakes and synthetic visuals in both images and videos. The approach combines complementary techniques: it analyzes subtle biometric signals, such as heartbeat rhythms recovered through photoplethysmography (PPG), and applies deep learning models, including convolutional neural networks (CNNs) and long short-term memory networks (LSTMs), to capture spatial and temporal patterns. Fusing these cues improves the system's ability to detect even well-crafted forgeries. Our results show higher accuracy and greater robustness to manipulation than traditional methods, making the framework useful for media verification, online safety, and digital forensics teams confronting the growing threat of synthetic content.
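To illustrate the kind of multi-modal fusion the abstract describes, the sketch below shows a hypothetical detector that encodes video frames with a small CNN, models the frame sequence with an LSTM, processes a PPG-derived heartbeat signal with a separate branch, and fuses both for a real/fake decision. This is a minimal illustration under assumed input shapes and layer sizes, not the authors' actual architecture; all class and parameter names (e.g. MultiModalDeepfakeDetector, ppg_len) are invented for this example.

```python
# Minimal sketch (hypothetical, not the paper's exact model): per-frame CNN
# features -> LSTM over time, plus an MLP over a PPG signal, fused for
# binary real/fake classification.
import torch
import torch.nn as nn


class MultiModalDeepfakeDetector(nn.Module):
    def __init__(self, ppg_len=128, hidden=128):
        super().__init__()
        # Per-frame visual encoder (any image backbone could stand in here).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (B*T, 32)
        )
        # Temporal model over the sequence of frame features.
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        # Branch for the PPG (heartbeat) signal extracted from the face region.
        self.ppg_mlp = nn.Sequential(nn.Linear(ppg_len, 64), nn.ReLU())
        # Fusion head: concatenate both modalities, predict real vs. fake.
        self.head = nn.Linear(hidden + 64, 2)

    def forward(self, frames, ppg):
        # frames: (B, T, 3, H, W); ppg: (B, ppg_len)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)  # (B, T, 32)
        _, (h_n, _) = self.lstm(feats)                          # h_n: (1, B, hidden)
        fused = torch.cat([h_n[-1], self.ppg_mlp(ppg)], dim=1)  # (B, hidden + 64)
        return self.head(fused)                                 # (B, 2) logits


if __name__ == "__main__":
    model = MultiModalDeepfakeDetector()
    logits = model(torch.randn(2, 8, 3, 64, 64), torch.randn(2, 128))
    print(logits.shape)  # torch.Size([2, 2])
```

In this sketch, late fusion (concatenating the LSTM's final hidden state with the PPG embedding) is just one plausible way to combine biometric and visual-temporal evidence; the framework described in the abstract may fuse the modalities differently.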