Inquiry on Minimum Configuration and Cost for Running C3TR-Adapter_gguf Model Efficiently

by ltkien2003 - opened

I am interested in running the C3TR-Adapter_gguf model and would like to ask about the minimum hardware configuration required to achieve fast, low-latency responses. Could you also provide an estimate of the costs of operating the model under those conditions?

webbigdata org

Hello.

That's a difficult question. Are you planning to buy hardware in the future?

C3TR-Adapter_gguf is designed to run on computers that are not high-end, so I expect it to work on recent hardware, but I have not yet investigated this comprehensively. A similar question has been posted in the llama.cpp discussions, but there is no answer yet.
https://github.com/ggerganov/llama.cpp/discussions/8728

The Gemma 2 announcement doesn't state any clear hardware requirements.

Blazing fast inference across hardware: Gemma 2 is optimized to run at incredible speed across a range of hardware, from powerful gaming laptops and high-end desktops, to cloud-based setups. Try Gemma 2 at full precision in Google AI Studio, unlock local performance with the quantized version with Gemma.cpp on your CPU, or try it on your home computer with an NVIDIA RTX or GeForce RTX via Hugging Face Transformers.

https://blog.google/technology/developers/google-gemma-2/

This may be because the technology is evolving so quickly that no best-practice configuration has been established; also, since "best performance" is subjective, the question is difficult to answer definitively.
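In the absence of official requirements, one rough way to reason about the minimum memory needed is to multiply the parameter count by the average bits per weight of the quantization, then add an allowance for the KV cache and runtime buffers. The sketch below assumes a 9B-parameter model and roughly 4.5 bits per weight for a Q4-class GGUF quantization; both figures are illustrative assumptions, not numbers from this thread.

```python
# Back-of-envelope memory estimate for a quantized GGUF model.
# Assumptions (not from the thread): ~9e9 parameters, ~4.5 bits/weight
# for a Q4-class quantization, and ~1 GiB of runtime overhead.

def estimate_model_memory_gib(n_params: float, bits_per_weight: float,
                              overhead_gib: float = 1.0) -> float:
    """Approximate RAM/VRAM to load the weights plus a fixed allowance
    for the KV cache and inference buffers (overhead_gib)."""
    weight_bytes = n_params * bits_per_weight / 8
    return weight_bytes / 2**30 + overhead_gib

if __name__ == "__main__":
    gib = estimate_model_memory_gib(9e9, 4.5)
    print(f"Rough requirement: ~{gib:.1f} GiB")
```

By this estimate a machine with 8 GB of free RAM (or VRAM, for GPU offload) would have headroom for a Q4-class 9B model, which is consistent with the claim that the model targets hardware that is not high-end. Actual usage varies with context length and the specific quantization scheme.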
