---
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
language:
- zh
- en
---
- TODO: Upload pending; training is finished, still testing.
- Update: Running into a minor issue; still figuring things out.
Reproduce Vicuna, but based on Yi-6B.

Preliminary results suggest the conversations are natural and informative (unsurprisingly), and the unfiltering also appears to be working!
**Heads up:** some examples are unsafe and inappropriate; this is entirely for the purpose of testing how un-aligned SFT data affects an LLM's final output.
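This card does not state the exact prompt format used in training. Assuming it follows the standard Vicuna v1.1 conversation template (as in the original Vicuna/ShareGPT setup being reproduced), a minimal sketch of building a prompt for inference:

```python
# Sketch of the Vicuna v1.1 conversation template. This is an assumption:
# the card does not specify the template actually used for this checkpoint.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(turns):
    """turns: list of (user_msg, assistant_reply_or_None) pairs.

    A trailing None reply leaves the prompt open for the model to continue.
    """
    parts = [SYSTEM]
    for user_msg, assistant_reply in turns:
        parts.append(f"USER: {user_msg}")
        if assistant_reply is None:
            parts.append("ASSISTANT:")  # generation starts here
        else:
            parts.append(f"ASSISTANT: {assistant_reply}</s>")
    return " ".join(parts)

prompt = build_prompt([("What is Yi-6B?", None)])
```

The resulting string would be fed to the tokenizer as-is; multi-turn history is carried by appending earlier `(user, assistant)` pairs before the open turn.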
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6413d7be996b2e426f230fb7/pklSsljCRN34QuL2ZF2zU.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6413d7be996b2e426f230fb7/22pTSVkBCVlQ5N8A8JBkF.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6413d7be996b2e426f230fb7/WfQYyyLxtXA2KlePmIPQJ.png)