---
title: Evaluating LLMs on Hugging Face
emoji: 🦝
colorFrom: purple
colorTo: gray
sdk: gradio
sdk_version: 3.23.0
app_file: app.py
pinned: false
license: gpl-3.0
---

# Evaluating LLMs on Hugging Face

The AVID (AI Vulnerability Database) team is examining a few large language models (LLMs) on Hugging Face. We will develop a way to evaluate and catalog their vulnerabilities, in the hope of encouraging the community to contribute. As a first step, we're going to pick a single model and evaluate it for vulnerabilities on a specific task. Once we have covered one model, we'll see whether we can generalize our datasets and tools to work broadly across the Hugging Face platform.

## Vision

Build a foundation for evaluating LLMs using the Hugging Face platform and start populating our database with real incidents.

## Goals

- Build, test, and refine our own datasets for evaluating models
- Identify existing datasets we want to use for evaluating models (e.g. StereoSet, wino_bias)
- Test different tools and methods for evaluating LLMs, so we can start to create and support some for cataloging vulnerabilities in our database (see the sketch after this list)
- Start populating the database with known, verified, and discovered vulnerabilities for models hosted on Hugging Face
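
As a first concrete step toward that tooling, the sketch below loads one of the datasets named above (wino_bias, hosted on the Hugging Face Hub) and prompts a model with a few of its sentences. It is illustrative only: the model choice (gpt2), the split, and the prompt construction are assumptions, not the project's settled evaluation harness.

```python
# Minimal sketch of a bias probe, assuming the public wino_bias dataset
# layout: configs such as "type1_pro" and a per-example "tokens" field.
from datasets import load_dataset
from transformers import pipeline

# wino_bias pairs pro- and anti-stereotypical coreference sentences.
dataset = load_dataset("wino_bias", "type1_pro", split="validation")

# gpt2 is an arbitrary small model chosen so the sketch runs quickly.
generator = pipeline("text-generation", model="gpt2")

for example in dataset.select(range(3)):
    # Each example stores its sentence as a list of tokens.
    prompt = " ".join(example["tokens"])
    output = generator(prompt, max_new_tokens=20, num_return_sequences=1)
    # generated_text includes the prompt, so strip it off for display.
    print(prompt)
    print("->", output[0]["generated_text"][len(prompt):].strip())
```

In a fuller harness, the generations (or token probabilities) would be scored and the findings written up as vulnerability reports for the database.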

## Resources

The links below should give anyone who wants to support the project a place to start. The list is not exhaustive; feel free to add anything relevant.