# topic_modelling / requirements_gpu.txt
# Allows the app running on AWS to use a smaller embedding model and to skip loading the representation LLM (due to size restrictions).
gradio # Version not pinned due to an interaction with spacy - reinstall the latest version after this requirements.txt is loaded
boto3
transformers==4.41.2
accelerate==0.26.1
bertopic==0.16.2
spacy==3.7.4
en_core_web_sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.7.1/en_core_web_sm-3.7.1.tar.gz
pyarrow==14.0.2
openpyxl==3.1.3
Faker==22.2.0
presidio_analyzer==2.2.354
presidio_anonymizer==2.2.354
scipy==1.11.4
polars==0.20.6
llama-cpp-python==0.2.87 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121
torch --index-url https://download.pytorch.org/whl/cu121
sentence-transformers==3.0.1
numpy==1.26.4