---
datasets:
  - anon8231489123/ShareGPT_Vicuna_unfiltered
language:
  - zh
  - en
---

*TODO: Upload pending; training is finished, still testing.* *Update: Running into some issues, still figuring things out.*

A reproduction of Vicuna, but based on yi-6B.
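Since the model is trained on the ShareGPT_Vicuna data, inference prompts presumably need to follow the standard Vicuna v1.1 conversation template. A minimal sketch of that template is below; the `build_prompt` helper is illustrative and not part of this repo, and the exact system prompt is assumed from upstream Vicuna rather than confirmed for this checkpoint.

```python
# Sketch of the Vicuna v1.1 conversation format this model is assumed to
# follow (inherited from the ShareGPT_Vicuna training data). Not an
# official API of this repo.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(turns):
    """turns: list of (user_msg, assistant_msg_or_None) pairs.

    A trailing None assistant message leaves the prompt open so the
    model continues from "ASSISTANT:".
    """
    parts = [SYSTEM]
    for user_msg, assistant_msg in turns:
        parts.append(f"USER: {user_msg}")
        if assistant_msg is None:
            parts.append("ASSISTANT:")  # model generates from here
        else:
            # Completed turns end with the EOS token used in training.
            parts.append(f"ASSISTANT: {assistant_msg}</s>")
    return " ".join(parts)

prompt = build_prompt([("你好，介绍一下你自己。", None)])
print(prompt)
```

The resulting string can be fed directly to the tokenizer; for multi-turn chats, append each completed `(user, assistant)` pair before the final open turn.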

Preliminary results show that the conversations are natural and informative (unsurprisingly), and the unfiltering also appears to be working.

Heads up: some examples are unsafe and inappropriate. This is entirely for the purpose of testing how un-aligned SFT data affects an LLM's final output.
