---
language: en
tags:
- image-classification
- computer-vision
- deep-learning
- face-detection
- resnet
datasets:
- custom
license: mit
---
# ResNet-based Face Classification Model
This model is trained to distinguish between real human faces and AI-generated faces using a ResNet-based architecture.
## Model Description

### Model Architecture
- Deep CNN with residual connections based on ResNet architecture
- Input shape: (224, 224, 3)
- Multiple residual blocks with increasing filter sizes [64, 128, 256, 512]
- Global average pooling
- Dense layers with dropout for classification
- Binary output with sigmoid activation
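The full model definition is not included on this card; the sketch below shows one plausible way to assemble the components listed above in TensorFlow/Keras. The helper names (`residual_block`, `build_model`) and the stem max-pooling layer are assumptions for illustration, not the authors' code.

```python
from tensorflow.keras import layers, models

def residual_block(x, filters, downsample=False):
    """Two 3x3 convolutions with a skip connection; 1x1 projection when shapes change."""
    stride = 2 if downsample else 1
    shortcut = x
    y = layers.Conv2D(filters, 3, strides=stride, padding='same')(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding='same')(y)
    y = layers.BatchNormalization()(y)
    if downsample or shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, strides=stride, padding='same')(shortcut)
        shortcut = layers.BatchNormalization()(shortcut)
    return layers.ReLU()(layers.Add()([y, shortcut]))

def build_model(input_shape=(224, 224, 3)):
    inputs = layers.Input(shape=input_shape)
    # Initial 7x7 convolution with 64 filters (standard ResNet-style stem assumed)
    x = layers.Conv2D(64, 7, strides=2, padding='same', activation='relu')(inputs)
    x = layers.MaxPooling2D(3, strides=2, padding='same')(x)
    # Residual stages with increasing filter sizes [64, 128, 256, 512]
    for i, filters in enumerate([64, 128, 256, 512]):
        x = residual_block(x, filters, downsample=(i > 0))
        x = residual_block(x, filters)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(256, activation='relu')(x)          # dense layer
    x = layers.Dropout(0.5)(x)                           # dropout for regularization
    outputs = layers.Dense(1, activation='sigmoid')(x)   # binary output
    return models.Model(inputs, outputs)
```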
### Task
Binary classification to determine if a face image is real (human) or AI-generated.
### Framework and Training
- Framework: TensorFlow
- Training Device: GPU
- Training Dataset: Custom dataset of real and AI-generated faces
- Validation Metrics:
  - Accuracy: 52.45%
  - Loss: 0.7246
## Intended Use

### Primary Intended Uses
- Research in deepfake detection
- Educational purposes in deep learning
- Prototyping of face authentication systems (research settings only)
### Out-of-Scope Uses
- Production-level face verification
- Legal or forensic applications
- Stand-alone security systems
## Training Procedure

### Training Details

```python
from tensorflow.keras.optimizers import Adam

optimizer = Adam(learning_rate=0.0001)
loss = 'binary_crossentropy'
metrics = ['accuracy']
```
### Training Hyperparameters
- Learning rate: 0.0001
- Batch size: 32
- Dropout rate: 0.5
- Architecture:
  - Initial conv: 64 filters, 7x7
  - Residual blocks: [64, 128, 256, 512] filters
  - Dense layer: 256 units
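Putting the training details and hyperparameters together, the compile/fit step presumably looked roughly like the sketch below. The random arrays merely stand in for the unpublished custom dataset, the label convention (1 = real, 0 = AI-generated) is assumed, and the epoch count is a placeholder since it is not stated on this card.

```python
import numpy as np
from tensorflow.keras.optimizers import Adam

# Placeholder data standing in for the custom dataset of real and AI-generated faces
x_train = np.random.rand(64, 224, 224, 3).astype('float32')
y_train = np.random.randint(0, 2, size=(64, 1)).astype('float32')
x_val = np.random.rand(16, 224, 224, 3).astype('float32')
y_val = np.random.randint(0, 2, size=(16, 1)).astype('float32')

model = build_model()  # architecture sketch from the Model Description section
model.compile(
    optimizer=Adam(learning_rate=0.0001),  # learning rate from this card
    loss='binary_crossentropy',
    metrics=['accuracy'],
)
model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    batch_size=32,  # batch size from this card
    epochs=5,       # epoch count not reported; placeholder value
)
```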
## Evaluation Results

### Performance Metrics
- Validation Accuracy: 52.45%
- Validation Loss: 0.7246
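These figures correspond to what a standard held-out evaluation would report, for example:

```python
# x_val / y_val stand for the (unpublished) validation split
val_loss, val_accuracy = model.evaluate(x_val, y_val, batch_size=32)
print(f'Validation accuracy: {val_accuracy:.2%}, validation loss: {val_loss:.4f}')
```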
### Limitations
- Performance is only slightly better than random chance
- May struggle with high-quality AI-generated images
- Limited testing on diverse face datasets
## Usage
```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Load the trained model
model = load_model('face_classification_model1')

# Preprocess an image to match the model's (224, 224, 3) input
def preprocess_image(image_path):
    img = cv2.imread(image_path)            # loads the image in BGR channel order
    img = cv2.resize(img, (224, 224))
    img = img / 255.0                       # scale pixel values to [0, 1]
    return np.expand_dims(img, axis=0)      # add a batch dimension

# Make a prediction
image = preprocess_image('face_image.jpg')
prediction = model.predict(image)
is_real = prediction[0][0] > 0.5            # sigmoid output above 0.5 => classified as real
```
## Ethical Considerations
This model is designed for research and educational purposes only. Users should:
- Obtain proper consent when processing personal face images
- Be aware of potential biases in face detection systems
- Consider privacy implications when using face analysis tools
- Not use this model for surveillance or harmful applications
## Technical Limitations
Current performance limitations:
- Accuracy only slightly above random chance
- May require ensemble methods for better results
- Limited testing on diverse datasets
Recommended improvements:
- Extended training with larger datasets
- Implementation of data augmentation
- Hyperparameter optimization
- Transfer learning from pre-trained models
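As one concrete illustration of the last two suggestions, a transfer-learning variant on top of a pre-trained ResNet50 backbone with simple augmentation layers could be sketched as follows. This is a sketch of possible future work under assumed settings, not part of the released model, and the augmentation parameters are illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input

# Lightweight on-the-fly augmentation
augmentation = tf.keras.Sequential([
    layers.RandomFlip('horizontal'),
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
])

base = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained backbone for the first training phase

inputs = layers.Input(shape=(224, 224, 3))   # expects raw pixel values in [0, 255]
x = augmentation(inputs)
x = preprocess_input(x)                      # ResNet50's own preprocessing
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation='sigmoid')(x)
transfer_model = models.Model(inputs, outputs)

transfer_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```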
## Citation

```bibtex
@software{face_classification_model1,
  author    = {Arsath S.M. and Faahith K.R.M. and Arafath M.S.M.},
  title     = {Face Classification Model using ResNet Architecture},
  year      = {2024},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/arsath-sm/face_classification_model1}
}
```
## Contributors
- Arsath S.M
- Faahith K.R.M
- Arafath M.S.M
University of Jaffna
## License
This model is licensed under the MIT License.