DCDC PROJECT HUB
Deepfake Voice and Video Detection System
Problem statement
Advances in generative AI have made it easy to create deepfake audio and video in which a person appears to say or do things they never did. Such content can be used for misinformation, fraud, and identity attacks, so there is a pressing need for tools that can determine whether a given clip is genuine or manipulated.
Abstract
This project designs a deepfake detection pipeline for both voice and video. For video, it applies frame-level analysis of facial artifacts, eye-blinking inconsistencies, head motion, and texture patterns using CNN-based models or pretrained deepfake detection networks. For voice, it analyzes spectrograms and acoustic features with CNN/RNN models to distinguish real speech from synthesized or cloned audio. A web interface lets users upload a video or audio file and receive a probability score indicating how likely the clip is to be a deepfake.
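As a rough illustration of the video branch described above, the sketch below fine-tunes a pretrained CNN backbone for binary real/fake classification of face crops. It assumes PyTorch and a recent torchvision; the class name, backbone choice (ResNet-18), and input size are placeholders, not choices fixed by the project.

import torch
import torch.nn as nn
from torchvision import models

class DeepfakeFrameClassifier(nn.Module):
    """Frame-level real/fake classifier built on a pretrained CNN (illustrative)."""
    def __init__(self):
        super().__init__()
        # Reuse ImageNet features; swap the final layer for a 2-class head.
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 2)

    def forward(self, x):          # x: (batch, 3, 224, 224) cropped face frames
        return self.backbone(x)    # logits for [real, fake]

if __name__ == "__main__":
    model = DeepfakeFrameClassifier().eval()
    dummy = torch.randn(4, 3, 224, 224)          # four face crops
    probs = torch.softmax(model(dummy), dim=1)   # per-frame fake probability
    print(probs[:, 1])

An audio branch could follow the same pattern, treating log-mel spectrograms as single-channel images or feeding MFCC sequences to an RNN.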
Components required
- Dataset of real and deepfake videos (e.g., FaceForensics++, DFDC)
- Dataset of real and synthetic audio samples
- Python with TensorFlow/PyTorch
- Feature extraction tools (Librosa for audio, OpenCV for video)
- GPU-enabled machine for training (or cloud GPU)
- Web interface (Flask/Django backend + simple frontend)
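As a rough guide, the software components above could be installed with pip along these lines (the exact package set and any version pins are assumptions, not fixed by the project):

pip install torch torchvision librosa opencv-python flask numpy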
Block diagram
Working
When a user uploads a media file, the system first identifies whether it is audio-only, video-only, or both. For videos, it extracts frames, detects and crops faces, and optionally computes temporal features across frames. For audio, it converts the signal into spectrograms and extracts features like MFCCs. These inputs are passed into pretrained or fine-tuned deep learning models trained to differentiate between real and fake samples. The output contains a confidence score (e.g., 0–100%) that indicates how likely the content is to be manipulated. The interface displays this score along with a simple explanation or visualization (such as heatmaps highlighting suspicious areas in video).
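The initial routing step can be as simple as inspecting the uploaded file's type. The sketch below uses Python's mimetypes module; analyze_video and analyze_audio are hypothetical handler names for the two branches, not functions defined by the project.

import mimetypes

def route_upload(path):
    # Guess the media type from the filename; a production system might
    # probe the container instead (e.g., with ffprobe).
    mime, _ = mimetypes.guess_type(path)
    if mime is None:
        raise ValueError(f"Unrecognized file type: {path}")
    if mime.startswith("video"):
        return analyze_video(path)      # frame/face branch, optional audio track
    if mime.startswith("audio"):
        return analyze_audio(path)      # spectrogram / MFCC branch
    raise ValueError(f"Unsupported media type: {mime}")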
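For the video branch, frame extraction and face cropping might look like the following OpenCV sketch. The Haar-cascade detector and the sampling rate are stand-ins for whatever face detector and frame-sampling policy the project ultimately uses.

import cv2

def extract_face_crops(video_path, every_n_frames=10, size=(224, 224)):
    # Sample every Nth frame, detect the largest face, and return resized crops.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    crops, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) > 0:
                x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
                crops.append(cv2.resize(frame[y:y + h, x:x + w], size))
        index += 1
    cap.release()
    return crops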
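For the audio branch, the spectrogram and MFCC features mentioned above can be computed with Librosa; the sample rate and feature sizes here are illustrative defaults.

import librosa
import numpy as np

def audio_features(audio_path, sr=16000, n_mfcc=40):
    # Load the clip, then compute a log-mel spectrogram and MFCCs; either can
    # feed a CNN (spectrogram as an image) or an RNN (frame-wise MFCC sequence).
    y, sr = librosa.load(audio_path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return log_mel, mfcc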
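One simple way to turn per-frame model outputs into the clip-level confidence score described above is to average the per-frame fake probabilities; this averaging rule is just one possible aggregation, and the tensor preprocessing is assumed to have happened elsewhere.

import torch

def clip_score(model, face_crops):
    # Average per-frame fake probabilities into a single 0-100% clip score.
    # Assumes `face_crops` are already normalized float tensors of shape
    # (3, 224, 224) matching the classifier's expected input.
    batch = torch.stack(face_crops)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[:, 1]   # per-frame P(fake)
    return float(probs.mean()) * 100.0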
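Finally, the upload-and-score interface could be exposed through a small Flask endpoint like the one below. It assumes the hypothetical route_upload() pipeline from the earlier sketch returns the 0-100% score; field names and routes are placeholders.

from flask import Flask, request, jsonify
from werkzeug.utils import secure_filename
import os
import tempfile

app = Flask(__name__)

@app.route("/analyze", methods=["POST"])
def analyze():
    # Accept an uploaded media file, run it through the detection pipeline,
    # and return the manipulation likelihood as JSON.
    file = request.files["media"]
    path = os.path.join(tempfile.gettempdir(), secure_filename(file.filename))
    file.save(path)
    score = route_upload(path)              # 0-100% likelihood of manipulation
    return jsonify({"deepfake_probability": score})

if __name__ == "__main__":
    app.run(debug=True)

A fuller frontend would also render the explanation or heatmap visualization alongside the returned score.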
Applications
- Journalism and media verification tools
- Social media content moderation
- Cybersecurity and digital forensics
- Public awareness tools demonstrating deepfake risks
- Academic research on generative model detection