---
title: LLM Transparency Tool
emoji: 🔬🔬🔬
colorFrom: red
colorTo: yellow
sdk: docker
app_file: app.py
pinned: false
---

LLM Transparency Tool

(Screenshot: the LLM Transparency Tool interface.)

Key functionality

  • Choose your model, choose or add your prompt, and run the inference.
  • Browse the contribution graph.
    • Select the token to build the graph from.
    • Tune the contribution threshold.
  • Select the representation of any token after any block.
  • For that representation, see its projection onto the output vocabulary and which tokens were promoted/suppressed by the previous block.
  • The following things are clickable:
    • Edges. Clicking an edge shows more info about the contributing attention head.
    • Heads when an edge is selected. You can see what this head is promoting/suppressing.
    • FFN blocks (little squares on the graph).
    • Neurons when an FFN block is selected.

Installation

Dockerized running

# From the repository root directory
docker build -t llm_transparency_tool .
docker run --rm -p 7860:7860 llm_transparency_tool
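
If the build and run succeed, the app is published on port 7860 of the host (the `-p 7860:7860` mapping), so it should be reachable at http://localhost:7860. A quick way to check from another terminal, assuming `curl` is available:

# a 200 status code means the Streamlit app inside the container is up
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:7860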

Local Installation

# download
git clone git@github.com:facebookresearch/llm-transparency-tool.git
cd llm-transparency-tool

# install the necessary packages
conda env create --name llmtt -f env.yaml
# activate the environment so the next steps install into it
conda activate llmtt
# install the `llm_transparency_tool` package
pip install -e .

# now, we need to build the frontend
# don't worry, even `yarn` is already installed by `env.yaml`
cd llm_transparency_tool/components/frontend
yarn install
yarn build
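
Optionally, you can sanity-check that the editable install worked before launching; this just imports the `llm_transparency_tool` package installed by `pip install -e .` above:

# run from the repository root with the `llmtt` environment activated
python -c "import llm_transparency_tool; print('ok')"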

Launch

streamlit run llm_transparency_tool/server/app.py -- config/local.json
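
Everything after the standalone `--` is forwarded to the app itself (here, the path of the config file), while Streamlit's own options go before it. For example, to serve on a different port with the same config:

# Streamlit options go before `--`, app arguments (the config path) after it
streamlit run llm_transparency_tool/server/app.py --server.port 7860 -- config/local.json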

Adding support for your LLM

Out of the box, the tool only lets you select from a handful of models. Here are the options for using your own model in the tool, ordered from least to most effort.

The model is already supported by TransformerLens

The full list of supported models is available in the TransformerLens documentation. In this case, the model only needs to be added to the configuration JSON file (e.g., config/local.json).

Tuned version of a model supported by TransformerLens

Add the official name of the model to the config along with the location to read the weights from.
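
For illustration only: the key names below are an assumption rather than the tool's actual schema, so check config/local.json in the repository for the real format. Such entries could boil down to a mapping from the official model name to the location the weights are read from, with an empty location meaning the default published weights:

{
  "models": {
    "gpt2": null,
    "meta-llama/Llama-2-7b-hf": "/path/to/your/finetuned/weights"
  }
}

Here `gpt2` would be loaded as usual, while the hypothetical Llama entry points the tool at locally fine-tuned weights.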

The model is not supported by TransformerLens

In this case the UI wouldn't know how to create the proper hooks for the model. You'd need to implement your own version of the TransparentLlm class and alter the Streamlit app to use your implementation.
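
A very rough sketch of the shape this could take. The import path and constructor below are assumptions about the package layout, not the actual interface; the real set of abstract methods is defined by the TransparentLlm class in the repository.

# Sketch only: this module path is an assumption about the package layout.
from llm_transparency_tool.models.transparent_llm import TransparentLlm


class MyTransparentLlm(TransparentLlm):
    """Exposes a custom model's internal states to the UI.

    TransparentLlm declares the abstract methods the app relies on, such as
    running the model and reading its per-layer states; all of them need to
    be implemented here for the custom model.
    """

    def __init__(self, model, tokenizer):
        self._model = model          # your custom model (assumed interface)
        self._tokenizer = tokenizer  # its tokenizer (assumed interface)

    # ... implement every abstract method declared by TransparentLlm ...

The Streamlit app would then need to be pointed at this class for your model instead of the default TransformerLens-based implementation.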

License

This code is made available under a CC BY-NC 4.0 license, as found in the LICENSE file. However, you may have other legal obligations that govern your use of other content, such as the terms of service for third-party models.