---
base_model: yentinglin/Llama-3-Taiwan-70B-Instruct
language:
- zh
- en
license: llama3
model_creator: yentinglin
model_name: Llama-3-Taiwan-70B-Instruct
model_type: llama
pipeline_tag: text-generation
quantized_by: minyichen
tags:
- llama-3
---

<img src="https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/vlfv5sHbt4hBxb3YwULlU.png" alt="Taiwan LLM Logo" width="600" style="margin-left:auto; margin-right:auto; display:block"/>

# Llama-3-Taiwan-70B-Instruct - GPTQ
- Model creator: [Yen-Ting Lin](https://huggingface.co/yentinglin)
- Original model: [Llama-3-Taiwan-70B-Instruct](https://huggingface.co/yentinglin/Llama-3-Taiwan-70B-Instruct)

<!-- description start -->
## Description

This repo contains GPTQ model files for [Llama-3-Taiwan-70B-Instruct](https://huggingface.co/yentinglin/Llama-3-Taiwan-70B-Instruct).

Multiple GPTQ parameter permutations are provided; see Provided Files below for details of each option, its parameters, and the software used to create it.

<!-- description end -->
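
As a quick orientation, the sketch below shows one way these GPTQ files can be loaded for GPU inference with Transformers. It is a minimal sketch, not official usage instructions: it assumes a recent `transformers` together with `optimum`, `auto-gptq`, and `accelerate`, and loads this GPTQ repo by its id.

```python
# Minimal loading sketch (assumes: pip install transformers optimum auto-gptq accelerate).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "minyichen/Llama-3-Taiwan-70B-Instruct-GPTQ"  # this GPTQ repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # spread the quantized 70B weights across available GPUs
)
```

If this repo publishes its quantisation options on separate branches, the matching branch can be selected with the `revision=` argument of `from_pretrained`.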

<!-- repositories-available start -->
## Repositories available

* [GPTQ models for GPU inference](https://huggingface.co/minyichen/Llama-3-Taiwan-70B-Instruct-GPTQ)
* [Yen-Ting Lin's original unquantized model](https://huggingface.co/yentinglin/Llama-3-Taiwan-70B-Instruct)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Vicuna

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
```
<!-- prompt-template end -->
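
To make the template concrete, here is a small illustrative helper (the `build_prompt` name and the sample question are hypothetical, not part of the original card) that substitutes a user message for `{prompt}` exactly as shown above:

```python
# Illustrative sketch: fill the Vicuna-style template with the user's message.
PROMPT_TEMPLATE = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: {prompt} ASSISTANT:"
)

def build_prompt(user_message: str) -> str:
    """Return the full prompt string the model expects."""
    return PROMPT_TEMPLATE.format(prompt=user_message)

print(build_prompt("請用繁體中文簡單介紹台灣"))
```

The resulting string can then be tokenized and passed to `model.generate` on the model loaded in the earlier sketch.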