--- language: - en pretty_name: GPTDynamics --- # Dataset Card for GPTDynamics ## Dataset Summary GPTDynamics is a dataset for training and evaluating GPT simulators with training curriculums when training GPT models in fine-tuned and commanded fine-tuning scenarios as well as test metrics (including the loss, BLEU, or ROUGE score) for the test samples at each training steps. It is introduced by this [paper](https://arxiv.org/pdf/2404.07840). ## Dataset Structure - **id**: ID of test examples - **trajectory**: A list of training status items for GPT training. Each item includes the current training step, the corresponding training sample, and the test metrics for the test sample with ID *id*. ## Data Instances Here is an example of GPTDynamics: ``` { {"id": 200, "loss_trajectory": [{"step": 1, "loss": 2.1776978969573975}, {"step": 2, "loss": 1.9405722618103027}, {"step": 3, "loss": 1.7367844581604004}, {"step": 4, "loss": 1.7367844581604004}, {"step": 5, "loss": 1.5723190307617188}, {"step": 6, "loss": 1.4200186729431152}, {"step": 7, "loss": 1.3043007850646973}, {"step": 8, "loss": 1.2168896198272705}, {"step": 9, "loss": 1.1534075736999512}, {"step": 10, "loss": 1.1152348518371582}, {"step": 11, "loss": 1.1152348518371582}, {"step": 12, "loss": 1.0790379047393799}, {"step": 13, "loss": 1.0651202201843262}, {"step": 14, "loss": 1.0498850345611572}, {"step": 15, "loss": 1.031670331954956}, {"step": 16, "loss": 1.0191452503204346}, {"step": 17, "loss": 1.0147688388824463}, {"step": 18, "loss": 0.9998137950897217}, {"step": 19, "loss": 0.9855930209159851}, {"step": 20, "loss": 0.9674589037895203}, {"step": 21, "loss": 0.9639376401901245}, {"step": 22, "loss": 0.9576956033706665}, {"step": 23, "loss": 0.9551132917404175}, {"step": 24, "loss": 0.9648815393447876}, {"step": 25, "loss": 0.9736171364784241}, {"step": 26, "loss": 0.9851577281951904}, {"step": 27, "loss": 0.9893707036972046}, {"step": 28, "loss": 1.0030837059020996}, {"step": 29, "loss": 1.0139732360839844}, {"step": 30, "loss": 1.0163781642913818}, {"step": 31, "loss": 1.0070099830627441}, {"step": 32, "loss": 0.9952836036682129}, {"step": 33, "loss": 0.9842988848686218}, {"step": 34, "loss": 0.9693409204483032}, {"step": 35, "loss": 0.9520652890205383}, {"step": 36, "loss": 0.9321508407592773}, {"step": 37, "loss": 0.9054916501045227}, {"step": 38, "loss": 0.8903782367706299}, {"step": 39, "loss": 0.8781676292419434}, {"step": 40, "loss": 0.8616900444030762}, {"step": 41, "loss": 0.8523870706558228}, {"step": 42, "loss": 0.8555602431297302}, {"step": 43, "loss": 0.857650876045227}, {"step": 44, "loss": 0.8613216876983643}, {"step": 45, "loss": 0.862149715423584}, {"step": 46, "loss": 0.8526755571365356}, {"step": 47, "loss": 0.8449978828430176}, {"step": 48, "loss": 0.8408513069152832}, {"step": 49, "loss": 0.836147665977478}, {"step": 50, "loss": 0.8282585740089417}, {"step": 51, "loss": 0.8207768201828003}, {"step": 52, "loss": 0.8117837905883789}, {"step": 53, "loss": 0.8007177710533142}, {"step": 54, "loss": 0.7916263937950134}, {"step": 55, "loss": 0.7800232172012329}, {"step": 56, "loss": 0.7684219479560852}, {"step": 57, "loss": 0.7614703178405762}, {"step": 58, "loss": 0.7508001327514648}, {"step": 59, "loss": 0.741269588470459}, {"step": 60, "loss": 0.7335830330848694}, {"step": 61, "loss": 0.7259370684623718}, {"step": 62, "loss": 0.7219422459602356}, {"step": 63, "loss": 0.7146060466766357}, {"step": 64, "loss": 0.7088412642478943}, {"step": 65, "loss": 0.7015798091888428}, {"step": 66, "loss": 0.6983332633972168}, {"step": 67, "loss": 0.6971897482872009}, {"step": 68, "loss": 0.6952411532402039}, {"step": 69, "loss": 0.6954054236412048}, {"step": 70, "loss": 0.69813472032547}, {"step": 71, "loss": 0.7001814842224121}, {"step": 72, "loss": 0.6996586322784424}, {"step": 73, "loss": 0.6953248977661133}, {"step": 74, "loss": 0.6917775273323059}, {"step": 75, "loss": 0.6852239370346069}, {"step": 76, "loss": 0.6780304908752441}, {"step": 77, "loss": 0.6724499464035034}, {"step": 78, "loss": 0.6652747392654419}, {"step": 79, "loss": 0.6599810719490051}, {"step": 80, "loss": 0.6562151908874512}, {"step": 81, "loss": 0.6603475213050842}, {"step": 82, "loss": 0.6595749258995056}, {"step": 83, "loss": 0.6532007455825806}, {"step": 84, "loss": 0.6475570797920227}, {"step": 85, "loss": 0.6428706049919128}, {"step": 86, "loss": 0.638140082359314}, {"step": 87, "loss": 0.6333405375480652}, {"step": 88, "loss": 0.6285641193389893}, {"step": 89, "loss": 0.6250292658805847}, {"step": 90, "loss": 0.6216303110122681}, {"step": 91, "loss": 0.6190891861915588}, {"step": 92, "loss": 0.6162238717079163}, {"step": 93, "loss": 0.6136907935142517}, {"step": 94, "loss": 0.6119751930236816}, {"step": 95, "loss": 0.6114301085472107}, {"step": 96, "loss": 0.6112507581710815}]} {"id": 201, "loss_trajectory": [{"step": 1, "loss": 2.661651134490967}, {"step": 2, "loss": 2.3306431770324707}, {"step": 3, "loss": 2.03875732421875}, {"step": 4, "loss": 2.03875732421875}, {"step": 5, "loss": 1.743143916130066}, {"step": 6, "loss": 1.4888012409210205}, {"step": 7, "loss": 1.2995624542236328}, {"step": 8, "loss": 1.154435396194458}, {"step": 9, "loss": 1.0413002967834473}, {"step": 10, "loss": 0.944778323173523}, {"step": 11, "loss": 0.944778323173523}, {"step": 12, "loss": 0.8778289556503296}, {"step": 13, "loss": 0.8155273795127869}, {"step": 14, "loss": 0.7719510793685913}, {"step": 15, "loss": 0.743318498134613}, {"step": 16, "loss": 0.7230879068374634}, {"step": 17, "loss": 0.7014121413230896}, {"step": 18, "loss": 0.6848206520080566}, {"step": 19, "loss": 0.6771003007888794}, {"step": 20, "loss": 0.6715677976608276}, {"step": 21, "loss": 0.6617311239242554}, {"step": 22, "loss": 0.6589836478233337}, {"step": 23, "loss": 0.6560938358306885}, {"step": 24, "loss": 0.6462780833244324}, {"step": 25, "loss": 0.6388468146324158}, {"step": 26, "loss": 0.6293094754219055}, {"step": 27, "loss": 0.6265830993652344}, {"step": 28, "loss": 0.6162292957305908}, {"step": 29, "loss": 0.6083053946495056}, {"step": 30, "loss": 0.6056196093559265}, {"step": 31, "loss": 0.6099292039871216}, {"step": 32, "loss": 0.6157264709472656}, {"step": 33, "loss": 0.6204148530960083}, {"step": 34, "loss": 0.6296204924583435}, {"step": 35, "loss": 0.6403841376304626}, {"step": 36, "loss": 0.652870774269104}, {"step": 37, "loss": 0.6713826656341553}, {"step": 38, "loss": 0.6812401413917542}, {"step": 39, "loss": 0.6874089241027832}, {"step": 40, "loss": 0.6968488097190857}, {"step": 41, "loss": 0.7042997479438782}, {"step": 42, "loss": 0.7002748847007751}, {"step": 43, "loss": 0.6977438926696777}, {"step": 44, "loss": 0.6954635977745056}, {"step": 45, "loss": 0.6966844201087952}, {"step": 46, "loss": 0.695155143737793}, {"step": 47, "loss": 0.6946768760681152}, {"step": 48, "loss": 0.6923564076423645}, {"step": 49, "loss": 0.6908800601959229}, {"step": 50, "loss": 0.6927938461303711}, {"step": 51, "loss": 0.6945635676383972}, {"step": 52, "loss": 0.6978188157081604}, {"step": 53, "loss": 0.7048851251602173}, {"step": 54, "loss": 0.7114452123641968}, {"step": 55, "loss": 0.7197942137718201}, {"step": 56, "loss": 0.7273781299591064}, {"step": 57, "loss": 0.7309868931770325}, {"step": 58, "loss": 0.7392228245735168}, {"step": 59, "loss": 0.7478148341178894}, {"step": 60, "loss": 0.7554481029510498}, {"step": 61, "loss": 0.7621862292289734}, {"step": 62, "loss": 0.7660795450210571}, {"step": 63, "loss": 0.7729960083961487}, {"step": 64, "loss": 0.7787044644355774}, {"step": 65, "loss": 0.7865316271781921}, {"step": 66, "loss": 0.7893784046173096}, {"step": 67, "loss": 0.7897890210151672}, {"step": 68, "loss": 0.7911185622215271}, {"step": 69, "loss": 0.7901228666305542}, {"step": 70, "loss": 0.786424994468689}, {"step": 71, "loss": 0.7833899855613708}, {"step": 72, "loss": 0.7841241359710693}, {"step": 73, "loss": 0.7885948419570923}, {"step": 74, "loss": 0.7922827005386353}, {"step": 75, "loss": 0.7996699213981628}, {"step": 76, "loss": 0.8086601495742798}, {"step": 77, "loss": 0.8154159784317017}, {"step": 78, "loss": 0.8235976696014404}, {"step": 79, "loss": 0.8295583724975586}, {"step": 80, "loss": 0.8354929685592651}, {"step": 81, "loss": 0.8384872674942017}, {"step": 82, "loss": 0.8431093692779541}, {"step": 83, "loss": 0.8491389155387878}, {"step": 84, "loss": 0.85647052526474}, {"step": 85, "loss": 0.8622291684150696}, {"step": 86, "loss": 0.8699511289596558}, {"step": 87, "loss": 0.8779494762420654}, {"step": 88, "loss": 0.8841904997825623}, {"step": 89, "loss": 0.8887885808944702}, {"step": 90, "loss": 0.8933967351913452}, {"step": 91, "loss": 0.89702308177948}, {"step": 92, "loss": 0.9009832739830017}, {"step": 93, "loss": 0.9048527479171753}, {"step": 94, "loss": 0.9068139791488647}, {"step": 95, "loss": 0.9083170294761658}, {"step": 96, "loss": 0.9079004526138306}]} } ``` # Citation Information ``` @article{liu2024training, title={On Training Data Influence of GPT Models}, author={Liu, Qingyi and Chai, Yekun and Wang, Shuohuan and Sun, Yu and Wang, Keze and Wu, Hua}, journal={arXiv preprint arXiv:2404.07840}, year={2024} } ```