---
base_model: yentinglin/Llama-3-Taiwan-70B-Instruct
language:
- zh
- en
license: llama3
model_creator: yentinglin
model_name: Llama-3-Taiwan-70B-Instruct
model_type: llama
pipeline_tag: text-generation
quantized_by: minyichen
tags:
- llama-3
---

<img src="https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/vlfv5sHbt4hBxb3YwULlU.png" alt="Taiwan LLM Logo" width="600" style="margin-left: auto; margin-right: auto; display: block;"/>

# Llama-3-Taiwan-70B-Instruct - GPTQ
- Model creator: [Yen-Ting Lin](https://huggingface.co/yentinglin)
- Original model: [Llama-3-Taiwan-70B-Instruct](https://huggingface.co/yentinglin/Llama-3-Taiwan-70B-Instruct)

<!-- description start -->
## Description

This repo contains GPTQ model files for [Llama-3-Taiwan-70B-Instruct](https://huggingface.co/yentinglin/Llama-3-Taiwan-70B-Instruct).

Multiple GPTQ parameter permutations are provided; see the Provided Files section below for the available options, their parameters, and the software used to create them.
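
As a quick sanity check, the quantized weights can be loaded directly with 🤗 Transformers (which hands GPTQ dequantization to the AutoGPTQ / optimum backend). The snippet below is a minimal sketch, not definitive usage: it assumes the `minyichen/Llama-3-Taiwan-70B-Instruct-GPTQ` repo ID linked below and enough GPU memory for the quantized 70B weights.

```python
# Minimal sketch: load the GPTQ checkpoint and run one generation.
# Assumes recent transformers + optimum + auto-gptq installs; adjust
# device_map and max_new_tokens for your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "minyichen/Llama-3-Taiwan-70B-Instruct-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # spread layers across available GPUs
)

# Format the input with the prompt template shown below.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: 請用繁體中文自我介紹。 ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```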

<!-- description end -->
<!-- repositories-available start -->
## Repositories available

* [GPTQ models for GPU inference](https://huggingface.co/minyichen/Llama-3-Taiwan-70B-Instruct-GPTQ)
* [Yen-Ting Lin's original unquantized model](https://huggingface.co/yentinglin/Llama-3-Taiwan-70B-Instruct)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Vicuna

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:

```
<!-- prompt-template end -->
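
For reference, a small helper (hypothetical, not part of this repo) that fills the template above with a user message:

```python
# Hypothetical helper that builds the Vicuna-style prompt shown above.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(user_message: str) -> str:
    """Return the full prompt string in the format the model expects."""
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

print(build_prompt("台灣最高的山是哪一座？"))
```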