PEFT
Portuguese
llama
LoRA
Llama
Stanford-Alpaca
dominguesm committed
Commit 8133f1e
1 Parent(s): 88ca543

Fix Colab link

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -27,7 +27,7 @@ inference: false
 
 **This model was trained and made available solely and exclusively for research purposes.**
 
-Try the pretrained model out on Colab [here](https://colab.research.google.com/github/DominguesM/alpaca-lora-ptbr-7b/blob/main/notebooks/03%20-%20Evaluate.ipynb)!
+Try the pretrained model out on Colab [here](https://colab.research.google.com/github/DominguesM/alpaca-lora-ptbr-7b/blob/main/notebooks/02%20-%20Evaluate.ipynb)!
 
 This repository contains a [low-ranked adapter (LoRa)](https://arxiv.org/pdf/2106.09685.pdf) for LLaMA-7b fit on the [**Stanford Alpaca dataset**](https://github.com/tatsu-lab/stanford_alpaca) translated into **Brazilian Portuguese** using the [**Helsinki-NLP/opus-mt-tc-big-en-pt**](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-pt) model.
 
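
For context, a LoRA adapter like the one described in this README is attached to a LLaMA-7b base model via the PEFT library. The sketch below is not part of the commit; the base-model and adapter repository ids are assumptions for illustration only.

```python
# Minimal sketch (assumed repo ids, not confirmed by this diff):
# load a LLaMA-7b base model and attach a LoRA adapter with PEFT.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE_MODEL = "decapoda-research/llama-7b-hf"   # assumed LLaMA-7b checkpoint
ADAPTER = "dominguesm/alpaca-lora-ptbr-7b"     # assumed adapter repo id

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)
# Attach the low-rank adapter weights on top of the frozen base model
model = PeftModel.from_pretrained(model, ADAPTER)

prompt = "### Instrução:\nExplique o que é um adaptador LoRA.\n### Resposta:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```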