---
license: apache-2.0
base_model: mistralai/Mistral-Nemo-Base-2407
tags:
- general-purpose
- text-generation
---

# Astra-v1-12B

Astra-v1-12B is a fine-tuned version of [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407), developed for general-purpose natural language processing tasks. It was fine-tuned to replicate the quality and style of Claude 3's Sonnet and Opus models.

![Astra-v1-12B](https://i.imgur.com/rCXcyno.png)

### Model Description

Astra-v1-12B is a general-purpose transformer-based language model fine-tuned for instruction following. The fine-tuning was designed to match the high-quality generation of Claude 3's Sonnet and Opus models, and the model is optimized for tasks such as text generation, summarization, and question answering.

- **Developed by:** P0x0
- **Finetuned from:** [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407)
- **License:** Apache 2.0

### Model Sources

- **Repository:** [https://huggingface.co/P0x0/astra-v1-12b](https://huggingface.co/P0x0/astra-v1-12b)

## Uses

### Direct Use

Astra-v1-12B can be used directly for a wide range of NLP tasks, including:

- Text generation
- Summarization
- Question answering
- Dialogue systems

### Out-of-Scope Use

Astra-v1-12B is not intended for real-time decision-making in critical applications or for generating harmful or biased content.

## How to Get Started with the Quantized Model

To run a quantized version of the model, you can use [KoboldCPP](https://github.com/LostRuins/koboldcpp), which runs quantized GGUF models locally.
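
If you prefer a scripted setup over the KoboldCPP interface, the sketch below uses the `llama-cpp-python` library (not part of this card; shown only as one possible alternative) to load a GGUF quantization. The file name is a placeholder, so point it at whichever quantized file you have downloaded.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF file name below is hypothetical; replace it with the path to the
# Astra-v1-12B quantization you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./astra-v1-12b-Q4_K_M.gguf",  # hypothetical local GGUF file
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if available; set 0 for CPU-only
)

output = llm(
    "Summarize the following paragraph in one sentence:\n\n<your text here>",
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```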
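
To run the full-precision weights instead, a minimal sketch with the Hugging Face `transformers` library is shown below. It assumes the repository id `P0x0/astra-v1-12b` from the Model Sources section and roughly 24 GB of GPU memory for bfloat16 inference.

```python
# Minimal sketch: full-precision inference with transformers.
# Assumes the repository id P0x0/astra-v1-12b and a GPU with ~24 GB of memory
# for a 12B-parameter model in bfloat16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "P0x0/astra-v1-12b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # spread layers across available devices
)

prompt = "Explain the difference between summarization and paraphrasing."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```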