---
library_name: transformers
tags:
  - text-classification
  - contact-information-detection
  - privacy
---

Model Card for ContactShieldAI

ContactShieldAI is a text classification model that detects whether users are sharing contact information on freelancing websites. It helps maintain privacy and adherence to platform guidelines by identifying messages in which users may be attempting to circumvent communication policies.

Model Description

ContactShieldAI is based on an enhanced CNN-LSTM architecture, combining the strengths of both convolutional and recurrent neural networks for effective text classification.

  • Developed by: xxparthparekhxx
  • Model type: Text Classification
  • Language(s): English
  • License: Apache 2.0
  • Finetuned from model: None (trained from scratch, with the embedding layer initialized from GloVe vectors)

Model Sources

  • Repository: [More Information Needed]
  • Paper: [More Information Needed]
  • Demo: [More Information Needed]

Uses

ContactShieldAI is designed for:

  • Detecting contact information sharing in text on freelancing platforms
  • Enhancing privacy protection in online marketplaces
  • Assisting moderators in identifying policy violations

Downstream Use

ContactShieldAI can be fine-tuned or integrated into:

  • Content moderation systems for social media platforms
  • Customer support chatbots to protect user privacy
  • Email filtering systems to detect potential policy violations

Out-of-Scope Use

ContactShieldAI should not be used for:

  • Censoring legitimate communication that doesn't violate platform policies
  • Invading user privacy by scanning personal conversations without consent
  • Making decisions about user accounts without human review
  • Detecting contact information in languages other than English (current version)

Bias, Risks, and Limitations

  • The model is trained on synthetic data and may not capture all real-world variations
  • It's specifically tailored for English language text
  • Performance may vary on very short or highly obfuscated text

Recommendations

While this model is designed to enhance privacy and policy compliance, users should be aware of potential biases in the training data. It should be used as a tool to assist human moderators rather than as a sole decision-maker in content moderation.

How to Get Started with the Model

You can use the model directly with the Hugging Face Transformers library:

from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xxparthparekhxx/ContactShieldAI")
model = AutoModelForSequenceClassification.from_pretrained("xxparthparekhxx/ContactShieldAI")

text = "Please contact me at [email protected] or call 555-1234."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
prediction = outputs.logits.argmax(-1).item()

print("Contains contact info" if prediction == 1 else "No contact info")

Training Data

The model was trained on a synthetically generated dataset:

  • 200,000 examples created using LLaMA 3.1 70B
  • Balanced dataset of positive (containing contact info) and negative examples
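
As a purely hypothetical illustration (these examples are not taken from the actual training data), the dataset pairs each text with a binary label of this form:

```python
# Hypothetical examples only; label 1 = contains contact information, 0 = does not.
examples = [
    ("ping me on telegram, my handle is at design_pro_99", 1),
    ("I can deliver the first draft of the logo by Friday", 0),
]
```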

Training Procedure

The training procedure for ContactShieldAI follows these steps:

  1. Data Preparation:

    • Load the dataset using the load_data() function
    • Create a vocabulary from the dataset using build_vocab_from_iterator()
    • Initialize GloVe embeddings for the vocabulary
  2. Model Initialization:

    • Create an instance of EnhancedContactSharingModel
    • Initialize the embedding layer with pretrained GloVe embeddings
  3. Dataset and DataLoader Creation:

    • Create a ContactSharingDataset instance
    • Use DataLoader with custom collate_batch function for efficient batching
  4. Training Loop:

    • Implement k-fold cross-validation (k=5) using KFold from sklearn
    • For each fold:
      • Reset model parameters (except embeddings)
      • Create train and validation data loaders
      • Initialize Adam optimizer and ReduceLROnPlateau scheduler
      • Train for a specified number of epochs (default: 4)
      • In each epoch:
        • Iterate through batches, compute loss, and update model parameters
        • Evaluate on validation set and update learning rate if needed
      • Save the best model based on validation loss
  5. Evaluation:

    • Implement an evaluate() function to compute loss on a given dataset
  6. Prediction:

    • Implement a predict() function for making predictions on new text inputs

The training process uses learning rate scheduling, early stopping, and k-fold cross-validation to promote robust performance and generalization, as sketched below.
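
The following is a minimal sketch of that loop, under stated assumptions: the dataset, collate function, and model builder are stand-ins for the components named in this card (ContactSharingDataset, collate_batch, EnhancedContactSharingModel), and the sketch simply rebuilds the model for each fold rather than resetting all parameters except the embeddings as described in step 4.

```python
import copy
import torch
from torch import nn
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader, Subset
from sklearn.model_selection import KFold

def run_kfold(dataset, build_model, collate_fn, k=5, epochs=4, device="cpu"):
    """k-fold cross-validated training, keeping the best model by validation loss."""
    criterion = nn.CrossEntropyLoss()
    best_state, best_val_loss = None, float("inf")

    splits = KFold(n_splits=k, shuffle=True).split(list(range(len(dataset))))
    for train_idx, val_idx in splits:
        model = build_model().to(device)  # fresh model for each fold
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)
        scheduler = ReduceLROnPlateau(optimizer, mode="min")

        train_loader = DataLoader(Subset(dataset, train_idx.tolist()), batch_size=128,
                                  shuffle=True, collate_fn=collate_fn)
        val_loader = DataLoader(Subset(dataset, val_idx.tolist()), batch_size=128,
                                collate_fn=collate_fn)

        for _ in range(epochs):
            model.train()
            for texts, labels in train_loader:
                optimizer.zero_grad()
                loss = criterion(model(texts.to(device)), labels.to(device))
                loss.backward()
                optimizer.step()

            # Validation loss drives the LR scheduler and best-model selection.
            model.eval()
            val_loss, n_batches = 0.0, 0
            with torch.no_grad():
                for texts, labels in val_loader:
                    val_loss += criterion(model(texts.to(device)),
                                          labels.to(device)).item()
                    n_batches += 1
            val_loss /= max(n_batches, 1)
            scheduler.step(val_loss)

            if val_loss < best_val_loss:
                best_val_loss = val_loss
                best_state = copy.deepcopy(model.state_dict())

    return best_state, best_val_loss
```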

Preprocessing

  • Text tokenization using SpaCy
  • Vocabulary built from the training data
  • Texts padded to uniform length
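
A hedged sketch of this preprocessing, assuming the small English SpaCy model (en_core_web_sm), torchtext's build_vocab_from_iterator, and padding with a dedicated pad token; the special tokens and lowercasing are assumptions rather than details published with this card:

```python
import spacy
import torch
from torch.nn.utils.rnn import pad_sequence
from torchtext.vocab import build_vocab_from_iterator

nlp = spacy.load("en_core_web_sm")  # SpaCy tokenizer

def tokenize(text):
    return [tok.text.lower() for tok in nlp(text)]

texts = ["Reach me at five five five 1234", "Happy to discuss the project here"]

# Vocabulary built from the (training) texts, with assumed special tokens.
vocab = build_vocab_from_iterator((tokenize(t) for t in texts),
                                  specials=["<unk>", "<pad>"])
vocab.set_default_index(vocab["<unk>"])

def encode(text):
    return torch.tensor(vocab(tokenize(text)), dtype=torch.long)

# Pad a batch of encoded texts to the length of the longest sequence.
batch = pad_sequence([encode(t) for t in texts], batch_first=True,
                     padding_value=vocab["<pad>"])
print(batch.shape)
```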

Training Hyperparameters

  • Optimizer: Adam (lr=0.0001, weight_decay=1e-5)
  • Loss Function: Cross Entropy Loss
  • Learning Rate Scheduler: ReduceLROnPlateau
  • Batch Size: 128
  • Epochs: 15 (early stopping based on validation loss)

Results

The model achieved a best validation loss of 0.0211, indicating strong performance at detecting contact information sharing on the held-out validation data.

Summary

ContactShieldAI is a powerful model designed to detect contact information sharing in text. Key features include:

  • Trained on a large, balanced dataset of 200,000 examples
  • Utilizes a sophisticated architecture combining LSTM and CNN
  • Achieves a low best validation loss of 0.0211
  • Easy to use with Hugging Face Transformers library
  • Suitable for various applications requiring privacy protection and data security

The model's architecture and training procedure are optimized for efficient and accurate detection of contact information, making it a valuable tool for safeguarding user privacy in various text-based applications.

Technical Specifications

Model Architecture and Objective

ContactShieldAI utilizes a sophisticated architecture:

  1. Embedding Layer: Initialized with GloVe 6B 300d embeddings, expanded to 600d
  2. Bidirectional LSTM: Processes the embedded sequence
  3. Multi-scale CNN: Multiple convolutional layers with different filter sizes (3 to 10)
  4. Max Pooling: Applied after each convolutional layer
  5. Fully Connected Layers: Two FC layers with ReLU activation and dropout
  6. Output Layer: 2-dimensional output for binary classification

Key Parameters:

  • Vocabulary Size: 225,817
  • Embedding Dimension: 600
  • Number of Filters: 600
  • Filter Sizes: [3, 4, 5, 6, 7, 8, 9, 10]
  • LSTM Hidden Dimension: 768
  • Dropout: 0.5
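
For concreteness, here is a minimal PyTorch sketch consistent with the architecture and parameters listed above (embedding, bidirectional LSTM, multi-scale convolutions with max pooling, and two fully connected layers). The fully connected hidden size and other unstated details are assumptions, and the GloVe initialization of the embedding layer is omitted:

```python
import torch
from torch import nn

class ContactSharingSketch(nn.Module):
    def __init__(self, vocab_size=225_817, embed_dim=600, n_filters=600,
                 filter_sizes=(3, 4, 5, 6, 7, 8, 9, 10),
                 lstm_hidden=768, num_classes=2, dropout=0.5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, lstm_hidden, batch_first=True,
                            bidirectional=True)
        # One convolution per filter size over the BiLSTM outputs
        # (2 * lstm_hidden channels), followed by global max pooling.
        self.convs = nn.ModuleList(
            nn.Conv1d(2 * lstm_hidden, n_filters, kernel_size=k)
            for k in filter_sizes
        )
        self.dropout = nn.Dropout(dropout)
        self.fc1 = nn.Linear(n_filters * len(filter_sizes), 512)  # assumed hidden size
        self.fc2 = nn.Linear(512, num_classes)

    def forward(self, token_ids):                 # (batch, seq_len)
        embedded = self.embedding(token_ids)      # (batch, seq_len, embed_dim)
        lstm_out, _ = self.lstm(embedded)         # (batch, seq_len, 2 * lstm_hidden)
        features = lstm_out.transpose(1, 2)       # (batch, channels, seq_len)
        pooled = [torch.relu(conv(features)).max(dim=2).values for conv in self.convs]
        hidden = self.dropout(torch.relu(self.fc1(torch.cat(pooled, dim=1))))
        return self.fc2(hidden)                   # (batch, num_classes)

# Tiny demo with a reduced vocabulary so it runs quickly on CPU.
model = ContactSharingSketch(vocab_size=1_000)
logits = model(torch.randint(0, 1_000, (2, 32)))  # dummy batch of 2 sequences, length 32
print(logits.shape)                               # torch.Size([2, 2])
```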

Citation

@misc{ContactShieldAI,
  author       = {xxparthparekhxx},
  title        = {ContactShieldAI: A Model for Detecting Contact Information Sharing},
  year         = {2023},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://huggingface.co/xxparthparekhxx/ContactShieldAI}}
}