Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


llama-3-meerkat-70b-v1.0 - GGUF
- Model creator: https://huggingface.co/dmis-lab/
- Original model: https://huggingface.co/dmis-lab/llama-3-meerkat-70b-v1.0/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-3-meerkat-70b-v1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/blob/main/llama-3-meerkat-70b-v1.0.Q2_K.gguf) | Q2_K | 24.56GB |
| [llama-3-meerkat-70b-v1.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/blob/main/llama-3-meerkat-70b-v1.0.IQ3_XS.gguf) | IQ3_XS | 27.29GB |
| [llama-3-meerkat-70b-v1.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/blob/main/llama-3-meerkat-70b-v1.0.IQ3_S.gguf) | IQ3_S | 28.79GB |
| [llama-3-meerkat-70b-v1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/blob/main/llama-3-meerkat-70b-v1.0.Q3_K_S.gguf) | Q3_K_S | 28.79GB |
| [llama-3-meerkat-70b-v1.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/blob/main/llama-3-meerkat-70b-v1.0.IQ3_M.gguf) | IQ3_M | 29.74GB |
| [llama-3-meerkat-70b-v1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/blob/main/llama-3-meerkat-70b-v1.0.Q3_K.gguf) | Q3_K | 31.91GB |
| [llama-3-meerkat-70b-v1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/blob/main/llama-3-meerkat-70b-v1.0.Q3_K_M.gguf) | Q3_K_M | 31.91GB |
| [llama-3-meerkat-70b-v1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/blob/main/llama-3-meerkat-70b-v1.0.Q3_K_L.gguf) | Q3_K_L | 34.59GB |
| [llama-3-meerkat-70b-v1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/blob/main/llama-3-meerkat-70b-v1.0.IQ4_XS.gguf) | IQ4_XS | 35.64GB |
| [llama-3-meerkat-70b-v1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/blob/main/llama-3-meerkat-70b-v1.0.Q4_0.gguf) | Q4_0 | 37.22GB |
| [llama-3-meerkat-70b-v1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/tree/main/) | IQ4_NL | 37.58GB |
| [llama-3-meerkat-70b-v1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/tree/main/) | Q4_K_S | 37.58GB |
| [llama-3-meerkat-70b-v1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/tree/main/) | Q4_K | 39.6GB |
| [llama-3-meerkat-70b-v1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/tree/main/) | Q4_K_M | 39.6GB |
| [llama-3-meerkat-70b-v1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/tree/main/) | Q4_1 | 41.27GB |
| [llama-3-meerkat-70b-v1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/tree/main/) | Q5_0 | 45.32GB |
| [llama-3-meerkat-70b-v1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/tree/main/) | Q5_K_S | 45.32GB |
| [llama-3-meerkat-70b-v1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/tree/main/) | Q5_K | 46.52GB |
| [llama-3-meerkat-70b-v1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/tree/main/) | Q5_K_M | 46.52GB |
| [llama-3-meerkat-70b-v1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/tree/main/) | Q5_1 | 49.36GB |
| [llama-3-meerkat-70b-v1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/tree/main/) | Q6_K | 53.91GB |
| [llama-3-meerkat-70b-v1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf/tree/main/) | Q8_0 | 69.83GB |
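
Any GGUF-compatible runtime can load these files. Below is a minimal sketch, assuming the `huggingface_hub` downloader and the `llama-cpp-python` runtime (neither is prescribed by this repo), using the single-file Q2_K quant:

```python
# Minimal sketch (assumes: pip install huggingface_hub llama-cpp-python).
# Q2_K is used here because its blob link above points to a single ~24.6GB
# file; the tree/main links suggest the larger quants are split into parts.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="RichardErkhov/dmis-lab_-_llama-3-meerkat-70b-v1.0-gguf",
    filename="llama-3-meerkat-70b-v1.0.Q2_K.gguf",
)

llm = Llama(model_path=path, n_ctx=2048, n_gpu_layers=-1)  # -1: offload all layers to GPU
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly, what is hypertension?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```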

Original model description:
---
license: cc-by-nc-4.0
pipeline_tag: text-generation
tags:
- medical
- small LM
- instruction-tuned
- usmle
- synthetic data
---

# Meerkat-70B (Version 1.0)

🚀 Meerkat-70B is a new instruction-tuned medical AI system in the Meerkat model family.
The model is based on Meta's Llama-3-70B-Instruct and was fine-tuned on our new synthetic dataset, which consists of high-quality chain-of-thought reasoning paths sourced from 18 medical textbooks, along with diverse instruction-following datasets.
This equips the model with the high-level medical reasoning capabilities required to solve complex medical problems.
For further insights into our model, please refer to our paper!

📄 **Paper**: [Small Language Models Learn Enhanced Reasoning Skills from Medical Textbooks](https://arxiv.org/abs/2404.00376)

## Quick Start

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "dmis-lab/llama-3-meerkat-70b-v1.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bfloat16 halves memory use relative to float32; helpful when GPU memory is limited.
    device_map="auto",
)

# Multi-turn dialogue example
messages = [
    {"role": "system", "content": "You are a helpful doctor or healthcare professional. Guide the conversation to provide useful, complete, and scientifically-grounded answers to user questions. You have the option to compose a concise, single-turn conversation if the user's input is comprehensive to provide accurate answers. However, if essential details are missing, you should engage in a multi-turn dialogue, asking follow-up questions to gather a thorough medical history and records.\n\n"},
    {"role": "user", "content": "Hello, doctor. I'm really concerned about my 10-year-old son. We recently discovered a painless mass in his left testicle, so we brought him to the pediatrician."},
    {"role": "assistant", "content": "I understand your concern. Let's gather some more information. Has your son experienced any other symptoms along with the mass?"},
    {"role": "user", "content": "Other than the mass, my son hasn't shown any symptoms. He's been his usual self, playing and eating normally."}
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=1000,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.7,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

## Prompt Details

To reproduce the results reported in our paper, use the same system messages that were used during model training. Please refer to the guidelines detailed below.

### USMLE

When solving USMLE-style questions such as [MedQA](https://arxiv.org/abs/2009.13081) and [MedBullets](https://arxiv.org/abs/2402.18060), use the following system message:
```python
messages = [
    {"role": "system", "content": "The following is a multiple-choice question about medical knowledge. Solve this in a step-by-step fashion, starting by summarizing the available information. Output a single option from the given options as the final answer. You are strongly required to follow the specified output format; conclude your response with the phrase \"the answer is ([option_id]) [answer_string]\".\n\n"},
    {"role": "user", "content": "Two weeks after undergoing an emergency cardiac catherization with stenting for unstable angina pectoris, a 61-year-old man has decreased urinary output and malaise. He has type 2 diabetes mellitus and osteoarthritis of the hips. Prior to admission, his medications were insulin and naproxen. He was also started on aspirin, clopidogrel, and metoprolol after the coronary intervention. His temperature is 38\u00b0C (100.4\u00b0F), pulse is 93/min, and blood pressure is 125/85 mm Hg. Examination shows mottled, reticulated purplish discoloration of the feet. Laboratory studies show:\nHemoglobin count 14 g/dL\nLeukocyte count 16,400/mm3\nSegmented neutrophils 56%\nEosinophils 11%\nLymphocytes 31%\nMonocytes 2%\nPlatelet count 260,000/mm3\nErythrocyte sedimentation rate 68 mm/h\nSerum\nUrea nitrogen 25 mg/dL\nCreatinine 4.2 mg/dL\nRenal biopsy shows intravascular spindle-shaped vacuoles. Which of the following is the most likely cause of this patient's symptoms?\" (A) Renal papillary necrosis (B) Cholesterol embolization (C) Eosinophilic granulomatosis with polyangiitis (D) Polyarteritis nodosa"},
]
```
The model generates a reasoning path to solve the problem and then provides its predicted answer.
Since the model ends its response with "the answer is," it is straightforward to extract the predicted answer for comparison with the actual answer, as in the short sketch below.
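
A minimal extraction sketch (the hypothetical `response` string is for illustration only; the regex mirrors the `extract_answer` helper in the vLLM section below):

```python
import re

# Hypothetical model output, for illustration.
response = "The findings suggest atheroemboli. Therefore, the answer is (B) Cholesterol embolization."

match = re.search(r"the answer is \(([A-D])\)", response)
print(match.group(1) if match else None)  # -> B
```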

### Multiple-choice Exams

For other types of multiple-choice exams such as [MedMCQA](https://arxiv.org/abs/2203.14371) or [MMLU](https://arxiv.org/abs/2009.03300), use the following simple system message:
```python
messages = [
    {"role": "system", "content": "Answer the multiple-choice question about medical knowledge.\n\n"},
    {"role": "user", "content": "In a Robertsonian translocation fusion occurs at the: (A) telomeres. (B) centromeres. (C) histones. (D) ends of the long arms."},
]
```

### Other Use Cases

Our model was trained on the [AlpaCare](https://github.com/xzhang97666/alpacare) instruction dataset, which comprises 52K examples, to enhance its generalization across diverse user prompts.
Feel free to design and test your own prompts, and share your thoughts with us, whether the model exceeds expectations or falls short!


## Reproducing MedQA Performance with vLLM

Here is example code for fast model evaluation on MedQA using vLLM. To adapt this code to other datasets such as MedMCQA or MMLU, simply modify the instructions and update the dataset paths as needed (a MedMCQA sketch follows the code block).
```python
# export CUDA_VISIBLE_DEVICES=0,1

import re
from datasets import load_dataset
from vllm import LLM, SamplingParams

USMLE_INSTRUCTION = (
    "The following is a multiple-choice question about medical knowledge. Solve this in"
    " a step-by-step fashion, starting by summarizing the available information. Output"
    " a single option from the given options as the final answer. You are strongly"
    " required to follow the specified output format; conclude your response with the"
    ' phrase "the answer is ([option_id]) [answer_string]".\n\n'
)

llm = LLM(
    model="dmis-lab/llama-3-meerkat-70b-v1.0",
    dtype="bfloat16",
    gpu_memory_utilization=0.9,
    max_model_len=2048,
    trust_remote_code=True,
    tensor_parallel_size=2,  # split the 70B model across two GPUs
)

tokenizer = llm.get_tokenizer()

# Build chat-formatted prompts and gold labels from the MedQA test split.
inputs, labels = [], []
for sample in load_dataset(
    "GBaker/MedQA-USMLE-4-options", split="test", trust_remote_code=True
):
    options = sorted(sample["options"].items())
    options = " ".join(map(lambda x: f"({x[0]}) {x[1]}", options))
    content = tokenizer.apply_chat_template(
        [{"role": "system", "content": USMLE_INSTRUCTION}, {"role": "user", "content": sample["question"] + " " + options}],
        add_generation_prompt=True,
        tokenize=False,
    )
    inputs.append(content)
    labels.append(sample["answer_idx"])

generated = llm.generate(
    inputs,
    SamplingParams(
        temperature=0.0,  # greedy decoding for a reproducible evaluation
        stop_token_ids=[tokenizer.vocab["<|eot_id|>"]],
        max_tokens=1024,
    ),
)

def extract_answer(text: str, options: str = "ABCD") -> str:
    # Match "the answer is (X)" / "The answer is (X)" and take the last
    # occurrence; fall back to the first option if nothing matches.
    return (re.findall(rf"he answer is \(([{options}])\)", text) or [options[0]])[-1]

correctness = []

for g, l in zip(generated, labels):
    correctness.append(extract_answer(g.outputs[0].text) == l)

print(sum(correctness) / len(correctness))
```

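As a concrete example of such an adaptation, the prompt-building loop above might be rewritten for MedMCQA as follows. This is a sketch: the `openlifescienceai/medmcqa` dataset id and its `opa`/`opb`/`opc`/`opd`/`cop` field names are assumptions about the public Hugging Face copy of MedMCQA, so verify them before running.

```python
# Sketch: build MedMCQA prompts instead of MedQA ones. Reuses `tokenizer`
# and `load_dataset` from the script above, and the simpler system message
# from the "Multiple-choice Exams" section.
SIMPLE_INSTRUCTION = "Answer the multiple-choice question about medical knowledge.\n\n"

inputs, labels = [], []
# The labeled split of MedMCQA is "validation" (the test split hides answers).
for sample in load_dataset("openlifescienceai/medmcqa", split="validation"):
    # Assumed fields: opa..opd hold the option texts, cop the 0-based correct index.
    option_texts = [sample["opa"], sample["opb"], sample["opc"], sample["opd"]]
    options = " ".join(f"({letter}) {text}" for letter, text in zip("ABCD", option_texts))
    content = tokenizer.apply_chat_template(
        [{"role": "system", "content": SIMPLE_INSTRUCTION},
         {"role": "user", "content": sample["question"] + " " + options}],
        add_generation_prompt=True,
        tokenize=False,
    )
    inputs.append(content)
    labels.append("ABCD"[sample["cop"]])
```
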
## Evaluation

We tested models on seven medical benchmarks: [MedQA](https://arxiv.org/abs/2009.13081), [USMLE sample test](https://www.usmle.org/prepare-your-exam), [Medbullets-4](https://arxiv.org/abs/2402.18060), [Medbullets-5](https://arxiv.org/abs/2402.18060), [MedMCQA](https://arxiv.org/abs/2203.14371), [MMLU-Medical](https://arxiv.org/abs/2009.03300), and [JAMA Clinical Challenge](https://arxiv.org/abs/2402.18060).

| **Model** | **Average** | **MedQA** | **USMLE** | **Medbullets-4** | **Medbullets-5** | **MedMCQA** | **MMLU-Medical** |
|:--------------------------------|:-----------:|:---------:|:---------:|:----------------:|:----------------:|:-----------:|:----------------:|
| GPT-4 | 76.6 | 81.4 | 86.6 | 68.8 | 63.3 | 72.4 | **87.1** |
| GPT-3.5 | 54.8 | 53.6 | 58.5 | 51.0 | 47.4 | 51.0 | 67.3 |
| MediTron-70B (Ensemble, 5 runs) | - | 70.2 | - | - | - | 66.0 | 78.0 |
| MediTron-7B | 51.0 | 50.2 | 44.6 | 51.1 | 45.5 | 57.9 | 56.7 |
| BioMistral-7B | 55.4 | 54.3 | 51.4 | 52.3 | 48.7 | 61.1 | 64.6 |
| Meerkat-7B | 62.6 | 70.6 | 70.3 | 58.7 | 52.9 | 60.6 | 70.5 |
| Meerkat-8B (**New**) | 67.3 | 74.0 | 74.2 | 62.3 | 55.5 | 62.7 | 75.2 |
| Meerkat-70B (**New**) | **77.9** | **82.6** | **87.4** | **71.4** | **65.3** | **73.9** | 86.9 |

Please note that the MMLU-Medical scores are the average accuracies across six medical-related subjects in the original MMLU benchmark; per-subject results are presented below.

| **Model** | **Average** | **Clinical Knowledge** | **Medical Genetics** | **Anatomy** | **Professional Medicine** | **College Biology** | **College Medicine** |
|:--------------------------------|:-----------:|:----------------------:|:--------------------:|:-----------:|:-------------------------:|:-------------------:|:--------------------:|
| GPT-4 | **87.1** | 86.4 | **92.0** | 80.0 | **93.8** | **93.8** | 76.3 |
| GPT-3.5 | 67.3 | 68.7 | 68.0 | 60.7 | 69.9 | 72.9 | 63.6 |
| MediTron-70B (Ensemble, 5 runs) | 78.0 | 75.5 | 85.9 | 69.4 | 82.3 | 86.7 | 68.0 |
| MediTron-7B | 56.7 | 57.7 | 63.8 | 56.9 | 56.0 | 57.1 | 48.9 |
| BioMistral-7B | 64.6 | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 |
| Meerkat-7B | 70.5 | 71.6 | 74.8 | 63.2 | 77.3 | 70.8 | 65.2 |
| Meerkat-8B (**New**) | 75.2 | 74.3 | 76.7 | 74.8 | 75.3 | 76.1 | 74.3 |
| Meerkat-70B (**New**) | 86.9 | **87.2** | 88.2 | **84.4** | 87.2 | 87.9 | **86.6** |

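As a quick sanity check on the tables above, the Meerkat-70B MMLU-Medical figure of 86.9 is the mean of its six per-subject accuracies:

```python
# Meerkat-70B per-subject MMLU-Medical accuracies from the table above.
subject_scores = [87.2, 88.2, 84.4, 87.2, 87.9, 86.6]
print(round(sum(subject_scores) / len(subject_scores), 1))  # 86.9
```
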
## Reference

Please see the information below to cite our paper.
```bibtex
@article{kim2024small,
  title={Small language models learn enhanced reasoning skills from medical textbooks},
  author={Kim, Hyunjae and Hwang, Hyeon and Lee, Jiwoo and Park, Sihyeon and Kim, Dain and Lee, Taewhoo and Yoon, Chanwoong and Sohn, Jiwoong and Choi, Donghee and Kang, Jaewoo},
  journal={arXiv preprint arXiv:2404.00376},
  year={2024}
}
```

## Acknowledgement

Research supported with Cloud TPUs from Google’s TPU Research Cloud (TRC).

## Contact

Feel free to email `[email protected]` if you have any questions.