TRAINING LORAS
Thanks, and this is a good route to choose, as LoRAs are a fundamental way to segment tasks and design an LLM from combinations of only the required tasks: thanks!
BUT !!
When training LoRA models to be used in this way, it also means that we will not be merging them into the model being trained, i.e. the base model!
SO:
To get the best from the LoRA it is suggested to:
** 1: Use a rank / alpha of 128 / 32
*** this is to target as many parameters in the model as possible
** 2: Target all gates and modules, and most important the LM HEAD! <<<<
*** this also targets the most important part of the model (you will see the trainable parameter count roughly double) (see the config sketch below)
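A minimal sketch of such a setup with the PEFT library (the model id and the LLaMA-style target_modules names are assumptions; adjust them to your base model's architecture):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# "your-base-model" is a placeholder; the module names below assume a LLaMA-style model.
base = AutoModelForCausalLM.from_pretrained("your-base-model")

config = LoraConfig(
    r=128,                                   # high rank to cover as many directions as possible
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=[                         # attention and MLP gates/projections
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    modules_to_save=["lm_head", "embed_tokens"],  # fully train and export the head (and embeddings)
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()           # the trainable count should jump sharply
```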
Then do a function filter on the data to make sure each record is not bigger than the context window, truncating it if necessary or rejecting it: this ensures stable training (GPU usage is constant, and CPU and memory are all constant), so you can now calculate the batch size (i.e. the biggest possible batch) and always stay under the max resources (a sketch follows below)...
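A rough sketch of that filter, assuming a Hugging Face dataset with a "text" field and a 2048-token context window (both are assumptions):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-base-model")   # placeholder
dataset = load_dataset("your-dataset", split="train")          # placeholder
MAX_LEN = 2048                                                 # assumed context window

def fits_context(example):
    # Reject any record that would overflow the context window.
    return len(tokenizer(example["text"])["input_ids"]) <= MAX_LEN

dataset = dataset.filter(fits_context)

# Alternative: truncate instead of rejecting.
def truncate(example):
    ids = tokenizer(example["text"], truncation=True, max_length=MAX_LEN)["input_ids"]
    example["text"] = tokenizer.decode(ids, skip_special_tokens=True)
    return example
```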
Then you can build the dataset with a custom prompt and a custom output shape (for example, as sketched below)!
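One way that shaping step might look (the "instruction" / "response" field names and the template are just placeholders for your own format):

```python
PROMPT_TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n{response}"
)

def format_record(example):
    # Map the raw fields into the single prompt/output shape the LoRA should learn.
    example["text"] = PROMPT_TEMPLATE.format(
        instruction=example["instruction"],
        response=example["response"],
    )
    return example

dataset = dataset.map(format_record)
```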
When you export this adapter it will be a very intensive LoRA file (it is like its own neural network), with a tuned head! (If you are also able to tune the embeddings at the same time, they will also be included in the LoRA.) <<<
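Exporting is just a save of the adapter; because the head (and optionally the embeddings) are in modules_to_save, full copies of those modules go into the adapter file, which is what makes it so heavy. A sketch, continuing the config above (the output directory is a placeholder):

```python
# Saving writes the LoRA weights plus full copies of every modules_to_save
# entry (lm_head, embed_tokens here), which is why the adapter file is large.
model.save_pretrained("my-task-adapter")      # placeholder output directory
tokenizer.save_pretrained("my-task-adapter")
```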
So we know now what we are putting in the LoRA: when you export the adapter it will be large (larger than the dataset, or at least growing with your training steps), and most important it will highly fit the data (it cannot be over-fit, as over-fitting is highly desirable for single tasks)!! <<
This is why this LoRA is so small!!
This method needs to be repeated for every LoRA you have!!
This is also why they are not STRONG!
Then you can even use them like layers, and stack them up to make a single model.... (there is a technique for this called mixture of adapters)...
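One simple way to stack adapters on a single base model with PEFT is to load several of them and combine them with a weighted merge (a learned router / true mixture-of-adapters setup would go further; the model id, adapter paths, and adapter names here are placeholders):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("your-base-model")   # placeholder

# Load several task adapters onto the same base model.
model = PeftModel.from_pretrained(base, "adapter-task-a", adapter_name="task_a")
model.load_adapter("adapter-task-b", adapter_name="task_b")

# Combine them into one adapter with a simple weighted merge.
model.add_weighted_adapter(
    adapters=["task_a", "task_b"],
    weights=[0.5, 0.5],
    adapter_name="combined",
    combination_type="linear",
)
model.set_adapter("combined")
```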