elinas

AI & ML interests

LLMs & Finetuning

Recent Activity

updated a model about 1 month ago
ZeusLabs/Chronos-Platinum-72B
New activity about 2 months ago
ZeusLabs/Chronos-Platinum-72B
updated a collection about 2 months ago
My Models

Posts (1)

We conducted an experiment to revive LLaMA 1 33B, as it had unique prose and a lack of "GPT-isms" and "slop" in its pretraining data, and it was one of the community favorites at the time. Over multiple finetune runs, we extended the model from its pretrained base of 2,048 tokens to ~12,000 tokens, adding approximately 500M tokens in the process. The effective length is 16,384, but it is better to stay in the lower range. It writes well and in multiple formats. In the future, we have some ideas, such as implementing GQA. Please take a look; we would love to hear your feedback!

ZeusLabs/Chronos-Divergence-33B
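The post does not state which technique was used to stretch the context window from the 2,048-token pretrained base to ~12,000 tokens. One common approach for extending RoPE-based models like LLaMA is linear position interpolation, sketched below purely for illustration; the scaling factor and function name here are assumptions, not details from the post.

```python
# Hedged sketch: linear RoPE position interpolation, one common way to
# extend a model's context beyond its pretrained window. This is NOT
# necessarily the method ZeusLabs used; it is shown only to illustrate
# the general idea of mapping longer sequences into the trained range.

def interpolate_positions(positions, pretrained_ctx=2048, target_ctx=12000):
    """Rescale token positions so the extended window maps back into the
    position range the model saw during pretraining."""
    scale = pretrained_ctx / target_ctx
    return [p * scale for p in positions]

# A position at the new target length maps to the original trained maximum,
# so rotary embeddings never see positions beyond what they were trained on.
scaled = interpolate_positions([0, 6000, 12000])
```

In practice, the interpolated positions feed into the rotary embedding angles, and a round of finetuning (as described in the post) teaches the model to use the compressed position space.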

Datasets

None public yet