The rapid proliferation of social media platforms has enabled the widespread dissemination of information with minimal oversight, leading to a surge in the creation and spread of misinformation, including deep fake videos. Deep fakes, which leverage advanced artificial intelligence techniques to manipulate audio and video content, pose significant threats to digital security, media integrity, and public trust. To address this challenge, we propose a novel AI-powered deep fake detection framework that employs a hybrid deep learning model integrating Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). Our approach uses CNNs to extract spatial features from individual video frames, capturing subtle inconsistencies in facial expressions, lighting, and texture. Meanwhile, RNNs, particularly Long Short-Term Memory (LSTM) networks, analyze temporal dependencies across frames, detecting unnatural motion patterns and inconsistencies over time. By combining these architectures, our model enhances both spatial and temporal feature extraction, improving deep fake classification accuracy. We validated our model on benchmark deep fake detection datasets, including ISO and FA-KES, demonstrating superior performance compared to traditional single-architecture models. Experimental results highlight the robustness of our hybrid approach, showcasing improved generalization across datasets. This research contributes to the ongoing fight against misinformation by providing an effective and scalable AI-driven solution for video authentication, paving the way for more reliable digital content verification.
Keywords: Deep Fake Detection, Hybrid Deep Learning, CNNs, RNNs, Video Authentication, Misinformation Prevention
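The abstract does not specify exact layer configurations, so the following is a minimal sketch, in PyTorch, of one way the described CNN+LSTM hybrid could be wired: a small convolutional encoder extracts spatial features from each frame, an LSTM aggregates those features across time, and a linear head produces a real/fake prediction per clip. The layer sizes, clip length, and frame resolution are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class CNNLSTMDetector(nn.Module):
    """Hybrid detector: a CNN encodes per-frame spatial features;
    an LSTM models temporal dependencies across the frame sequence.
    All dimensions below are illustrative assumptions."""

    def __init__(self, feature_dim=128, hidden_dim=64, num_classes=2):
        super().__init__()
        # Per-frame spatial encoder (hypothetical configuration).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),   # -> (batch*time, 32, 1, 1)
            nn.Flatten(),              # -> (batch*time, 32)
            nn.Linear(32, feature_dim), nn.ReLU(),
        )
        # Temporal model over the sequence of per-frame feature vectors.
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, clips):
        # clips: (batch, time, channels, height, width)
        b, t, c, h, w = clips.shape
        # Flatten batch and time so one 2-D CNN processes every frame,
        # then reshape the features back into per-clip sequences.
        feats = self.cnn(clips.view(b * t, c, h, w)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)   # h_n: (num_layers, batch, hidden_dim)
        return self.classifier(h_n[-1])  # real/fake logits per clip

# Usage: score a batch of 2 clips, each 8 frames of 64x64 RGB.
model = CNNLSTMDetector()
logits = model(torch.randn(2, 8, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 2])
```

Flattening the batch and time axes lets a single 2-D CNN process every frame in parallel; the LSTM's final hidden state then summarizes the whole clip, so one design choice here is classifying at the clip level rather than per frame.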