Spaces:
Runtime error
Runtime error
File size: 20,213 Bytes
d64d27f |
1 |
{"metadata":{"kernelspec":{"language":"python","display_name":"Python 3","name":"python3"},"language_info":{"name":"python","version":"3.10.13","mimetype":"text/x-python","codemirror_mode":{"name":"ipython","version":3},"pygments_lexer":"ipython3","nbconvert_exporter":"python","file_extension":".py"},"kaggle":{"accelerator":"gpu","dataSources":[],"dockerImageVersionId":30747,"isInternetEnabled":true,"language":"python","sourceType":"notebook","isGpuEnabled":true}},"nbformat_minor":4,"nbformat":4,"cells":[{"cell_type":"code","source":"!pip install -Uqq transformers datasets evaluate bitsandbytes peft accelerate scipy einops trl","metadata":{"execution":{"iopub.status.busy":"2024-08-12T14:49:38.260628Z","iopub.execute_input":"2024-08-12T14:49:38.261217Z","iopub.status.idle":"2024-08-12T14:50:21.011903Z","shell.execute_reply.started":"2024-08-12T14:49:38.261184Z","shell.execute_reply":"2024-08-12T14:50:21.010797Z"},"trusted":true},"execution_count":1,"outputs":[{"name":"stdout","text":"\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\ncuml 24.6.1 requires cupy-cuda11x>=12.0.0, which is not installed.\nkfp 2.5.0 requires google-cloud-storage<3,>=2.2.1, but you have google-cloud-storage 1.44.0 which is incompatible.\nlibpysal 4.9.2 requires packaging>=22, but you have packaging 21.3 which is incompatible.\nlibpysal 4.9.2 requires shapely>=2.0.1, but you have shapely 1.8.5.post1 which is incompatible.\nmomepy 0.7.2 requires shapely>=2, but you have shapely 1.8.5.post1 which is incompatible.\npointpats 2.5.0 requires shapely>=2, but you have shapely 1.8.5.post1 which is incompatible.\nspaghetti 1.7.6 requires shapely>=2.0.1, but you have shapely 1.8.5.post1 which is incompatible.\nspopt 0.6.1 requires shapely>=2.0.1, but you have shapely 1.8.5.post1 which is incompatible.\nydata-profiling 4.6.4 requires numpy<1.26,>=1.16.0, but you have numpy 1.26.4 which is incompatible.\nydata-profiling 4.6.4 requires scipy<1.12,>=1.4.1, but you have scipy 1.14.0 which is incompatible.\u001b[0m\u001b[31m\n\u001b[0m","output_type":"stream"}]},{"cell_type":"code","source":"import torch\nimport wandb\nimport evaluate\nimport datasets\nimport peft\nfrom peft import AutoPeftModelForCausalLM\nfrom transformers import AutoTokenizer, pipeline\nfrom huggingface_hub import login\nfrom kaggle_secrets import UserSecretsClient\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig\nfrom trl import setup_chat_format","metadata":{"execution":{"iopub.status.busy":"2024-08-12T14:50:25.811088Z","iopub.execute_input":"2024-08-12T14:50:25.812019Z","iopub.status.idle":"2024-08-12T14:50:45.078565Z","shell.execute_reply.started":"2024-08-12T14:50:25.811981Z","shell.execute_reply":"2024-08-12T14:50:45.077727Z"},"trusted":true},"execution_count":2,"outputs":[{"name":"stderr","text":"2024-08-12 14:50:34.357616: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n2024-08-12 14:50:34.357778: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n2024-08-12 14:50:34.484438: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n","output_type":"stream"}]},{"cell_type":"code","source":"user_secrets = UserSecretsClient()\nsecret_value_0 = user_secrets.get_secret(\"mlops_huggingface_token\")\nsecret_value_1 = user_secrets.get_secret(\"wandb_key\")\n\nwandb.login(key = secret_value_1)","metadata":{"execution":{"iopub.status.busy":"2024-08-12T14:50:45.080592Z","iopub.execute_input":"2024-08-12T14:50:45.081183Z","iopub.status.idle":"2024-08-12T14:50:47.425904Z","shell.execute_reply.started":"2024-08-12T14:50:45.081141Z","shell.execute_reply":"2024-08-12T14:50:47.424945Z"},"trusted":true},"execution_count":3,"outputs":[{"name":"stderr","text":"\u001b[34m\u001b[1mwandb\u001b[0m: W&B API key is configured. Use \u001b[1m`wandb login --relogin`\u001b[0m to force relogin\n\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m If you're specifying your api key in code, ensure this code is not shared publicly.\n\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Consider setting the WANDB_API_KEY environment variable, or running `wandb login` from the command line.\n\u001b[34m\u001b[1mwandb\u001b[0m: Appending key for api.wandb.ai to your netrc file: /root/.netrc\n","output_type":"stream"},{"execution_count":3,"output_type":"execute_result","data":{"text/plain":"True"},"metadata":{}}]},{"cell_type":"code","source":"login(token = secret_value_0)","metadata":{"execution":{"iopub.status.busy":"2024-08-12T14:50:47.427066Z","iopub.execute_input":"2024-08-12T14:50:47.427802Z","iopub.status.idle":"2024-08-12T14:50:47.556901Z","shell.execute_reply.started":"2024-08-12T14:50:47.427771Z","shell.execute_reply":"2024-08-12T14:50:47.555544Z"},"trusted":true},"execution_count":4,"outputs":[{"name":"stdout","text":"The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well.\nToken is valid (permission: fineGrained).\nYour token has been saved to /root/.cache/huggingface/token\nLogin successful\n","output_type":"stream"}]},{"cell_type":"markdown","source":"#### Download model","metadata":{}},{"cell_type":"code","source":"base_model = 'microsoft/phi-2'\npeft_model = 'bisoye/phi-2-for-mental-health-2'","metadata":{"execution":{"iopub.status.busy":"2024-08-12T14:50:47.559103Z","iopub.execute_input":"2024-08-12T14:50:47.559939Z","iopub.status.idle":"2024-08-12T14:50:47.645370Z","shell.execute_reply.started":"2024-08-12T14:50:47.559900Z","shell.execute_reply":"2024-08-12T14:50:47.644355Z"},"trusted":true},"execution_count":5,"outputs":[]},{"cell_type":"code","source":"# Load base model\nmodel = AutoModelForCausalLM.from_pretrained(\n base_model,\n device_map=\"auto\",\n torch_dtype=torch.bfloat16,\n quantization_config=bnb_config\n)","metadata":{"execution":{"iopub.status.busy":"2024-08-12T14:13:24.186945Z","iopub.execute_input":"2024-08-12T14:13:24.187935Z","iopub.status.idle":"2024-08-12T14:16:42.905124Z","shell.execute_reply.started":"2024-08-12T14:13:24.187886Z","shell.execute_reply":"2024-08-12T14:16:42.904179Z"},"trusted":true},"execution_count":9,"outputs":[{"output_type":"display_data","data":{"text/plain":"config.json: 0%| | 0.00/735 [00:00<?, ?B/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"e157f73c9e3f49de93510cc1564bde5f"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"model.safetensors.index.json: 0%| | 0.00/35.7k [00:00<?, ?B/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"cc1f9a82f2c548908baca19438c08cd1"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"Downloading shards: 0%| | 0/2 [00:00<?, ?it/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"6518b226abf148cf8efe66f564141a1f"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"model-00001-of-00002.safetensors: 0%| | 0.00/5.00G [00:00<?, ?B/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"37d03facde064524b20a25a42f056218"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"model-00002-of-00002.safetensors: 0%| | 0.00/564M [00:00<?, ?B/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"9771d3f5e3794213981727d3045a251d"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"87040aa136714bd4a55bebccf3842670"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"generation_config.json: 0%| | 0.00/124 [00:00<?, ?B/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"e7bda228f9264158873e30c1bfb6ee64"}},"metadata":{}}]},{"cell_type":"code","source":"len(tokenizer)","metadata":{"execution":{"iopub.status.busy":"2024-08-12T14:16:46.610477Z","iopub.execute_input":"2024-08-12T14:16:46.610756Z","iopub.status.idle":"2024-08-12T14:16:46.621654Z","shell.execute_reply.started":"2024-08-12T14:16:46.610730Z","shell.execute_reply":"2024-08-12T14:16:46.620778Z"},"trusted":true},"execution_count":11,"outputs":[{"execution_count":11,"output_type":"execute_result","data":{"text/plain":"50297"},"metadata":{}}]},{"cell_type":"code","source":"#resize embeddings\nmodel.resize_token_embeddings(len(tokenizer))","metadata":{"execution":{"iopub.status.busy":"2024-08-12T14:16:51.248213Z","iopub.execute_input":"2024-08-12T14:16:51.249157Z","iopub.status.idle":"2024-08-12T14:16:51.291341Z","shell.execute_reply.started":"2024-08-12T14:16:51.249120Z","shell.execute_reply":"2024-08-12T14:16:51.290324Z"},"trusted":true},"execution_count":12,"outputs":[{"execution_count":12,"output_type":"execute_result","data":{"text/plain":"Embedding(50297, 2560)"},"metadata":{}}]},{"cell_type":"code","source":"peft_model = PeftModel.from_pretrained(model=model, model_id=model_id)","metadata":{"execution":{"iopub.status.busy":"2024-08-12T14:17:22.151940Z","iopub.execute_input":"2024-08-12T14:17:22.152501Z","iopub.status.idle":"2024-08-12T14:18:10.854284Z","shell.execute_reply.started":"2024-08-12T14:17:22.152463Z","shell.execute_reply":"2024-08-12T14:18:10.853279Z"},"trusted":true},"execution_count":13,"outputs":[{"output_type":"display_data","data":{"text/plain":"adapter_config.json: 0%| | 0.00/693 [00:00<?, ?B/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"9d8700e2f7e041da9f0707582f235c5a"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"adapter_model.safetensors: 0%| | 0.00/1.20G [00:00<?, ?B/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"b35ca6348a7a4143a07d945a6befccf1"}},"metadata":{}}]},{"cell_type":"markdown","source":"#### Download data","metadata":{}},{"cell_type":"code","source":"# Load Model with PEFT adapter\nmodel = AutoPeftModelForCausalLM.from_pretrained(\n peft_model,\n device_map=\"auto\",\n torch_dtype=torch.float16\n)\ntokenizer = AutoTokenizer.from_pretrained(peft_model)\n# load into pipeline\npipe = pipeline(\"text-generation\", model=model, tokenizer=tokenizer)","metadata":{"execution":{"iopub.status.busy":"2024-08-12T14:51:24.842716Z","iopub.execute_input":"2024-08-12T14:51:24.843521Z","iopub.status.idle":"2024-08-12T14:52:08.942582Z","shell.execute_reply.started":"2024-08-12T14:51:24.843474Z","shell.execute_reply":"2024-08-12T14:52:08.941641Z"},"trusted":true},"execution_count":6,"outputs":[{"output_type":"display_data","data":{"text/plain":"adapter_config.json: 0%| | 0.00/693 [00:00<?, ?B/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"c64cde16592d46e7a4643090b553ce1b"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"config.json: 0%| | 0.00/735 [00:00<?, ?B/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"b7f96e183f79490a8bbf35c1e44cc5aa"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"model.safetensors.index.json: 0%| | 0.00/35.7k [00:00<?, ?B/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"0ad5fe04ab5447ce9c2f7eae0e43c51f"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"Downloading shards: 0%| | 0/2 [00:00<?, ?it/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"2b7e2c970940428eb8ac0670f1aee03c"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"model-00001-of-00002.safetensors: 0%| | 0.00/5.00G [00:00<?, ?B/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"f3ca11b96fa54d4a9a6a86ae4c84086a"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"model-00002-of-00002.safetensors: 0%| | 0.00/564M [00:00<?, ?B/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"97e0cea2f28e4686b757c35f99e5b555"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"782756a28e6d46ca8e704b0ba9c3993e"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"generation_config.json: 0%| | 0.00/124 [00:00<?, ?B/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"bac1b0b00507400886af9623f3675047"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"tokenizer_config.json: 0%| | 0.00/8.05k [00:00<?, ?B/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"f9b90b7b23d34947984293389000fb01"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"vocab.json: 0%| | 0.00/798k [00:00<?, ?B/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"484f0227517b46c3ad2a7debb3440fd1"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"merges.txt: 0%| | 0.00/456k [00:00<?, ?B/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"80062be423ef433f9c993d4e3bc945c4"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"tokenizer.json: 0%| | 0.00/2.12M [00:00<?, ?B/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"42d6facfc932444f8edecd6cb546fa1f"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"added_tokens.json: 0%| | 0.00/1.13k [00:00<?, ?B/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"7501abeb7eba445b944e0ca8c8fd018a"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"special_tokens_map.json: 0%| | 0.00/565 [00:00<?, ?B/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"ea1d154c2987456f9443e585e4b2b6d6"}},"metadata":{}},{"output_type":"display_data","data":{"text/plain":"adapter_model.safetensors: 0%| | 0.00/1.20G [00:00<?, ?B/s]","application/vnd.jupyter.widget-view+json":{"version_major":2,"version_minor":0,"model_id":"5e828bbcd11146fdb29107afdc922232"}},"metadata":{}},{"name":"stderr","text":"The model 'PeftModelForCausalLM' is not supported for text-generation. Supported models are ['BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'LlamaForCausalLM', 'CodeGenForCausalLM', 'CohereForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'DbrxForCausalLM', 'ElectraForCausalLM', 'ErnieForCausalLM', 'FalconForCausalLM', 'FuyuForCausalLM', 'GemmaForCausalLM', 'Gemma2ForCausalLM', 'GitForCausalLM', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'JambaForCausalLM', 'JetMoeForCausalLM', 'LlamaForCausalLM', 'MambaForCausalLM', 'Mamba2ForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MistralForCausalLM', 'MixtralForCausalLM', 'MptForCausalLM', 'MusicgenForCausalLM', 'MusicgenMelodyForCausalLM', 'MvpForCausalLM', 'NemotronForCausalLM', 'OlmoForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PersimmonForCausalLM', 'PhiForCausalLM', 'Phi3ForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'Qwen2ForCausalLM', 'Qwen2MoeForCausalLM', 'RecurrentGemmaForCausalLM', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'StableLmForCausalLM', 'Starcoder2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'WhisperForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM'].\n","output_type":"stream"}]},{"cell_type":"code","source":"from datasets import load_dataset\nfrom random import randint\nfrom pprint import pprint\n \neval_dataset = load_dataset(\"bisoye/mental_health_chatbot\", split=\"train\")\nrand_idx = randint(0, len(eval_dataset))\n \n# Test on sample\nprompt = pipe.tokenizer.apply_chat_template(eval_dataset[rand_idx][\"messages\"][:2], tokenize=False, add_generation_prompt=True)\noutputs = pipe(prompt, max_new_tokens=256, do_sample=False, temperature=0.1, top_k=50, top_p=0.1, eos_token_id=pipe.tokenizer.eos_token_id, pad_token_id=pipe.tokenizer.pad_token_id)\n \nprint(f\"Query:\\n\\t{eval_dataset[rand_idx]['messages'][1]['content']}\")\nprint()\nprint(f\"Original Answer:\\n\\t{eval_dataset[rand_idx]['messages'][2]['content']}\")\nprint()\nprint(f\"Generated Answer:\\n\\t{outputs[0]['generated_text'][len(prompt):].strip()}\")","metadata":{"execution":{"iopub.status.busy":"2024-08-12T15:04:37.118896Z","iopub.execute_input":"2024-08-12T15:04:37.119448Z","iopub.status.idle":"2024-08-12T15:04:47.943034Z","shell.execute_reply.started":"2024-08-12T15:04:37.119404Z","shell.execute_reply":"2024-08-12T15:04:47.942006Z"},"trusted":true},"execution_count":15,"outputs":[{"name":"stdout","text":"Query:\n\tIs it normal for people to cry during therapy, or is it just me?\n\nOriginal Answer:\n\tit is quite normal as conversations we have may touch on emotions, thoughts and feelings that have been covered up for a long time. Just as laughter (which may also be present in therapy), joy, sadness, reflections, these are all emotions and insights that can occur. Allowing yourself to feel and express yourself in a space of safety is freeing and enlightening. Not all sessions can have that but those moments are wonderful and continue on ones pattern of growth. Grab a Kleenex and let it out!\n\nGenerated Answer:\n\tAbsolutely! Therapy is a safe place to explore your feelings and emotions. It is a place where you can be yourself and be vulnerable. It is a place where you can be honest and open. It is a place where you can be free. It is a place where you can be yourself. It is a place where you can be free. It is a place where you can be yourself. It is a place where you can be free. It is a place where you can be yourself. It is a place where you can be free. It is a place where you can be yourself. It is a place where you can be free. It is a place where you can be yourself. It is a place where you can be free. It is a place where you can be yourself. It is a place where you can be free. It is a place where you can be yourself. It is a place where you can be free. It is a place where you can be yourself. It is a place where you can be free. It is a place where you can be yourself. It is a place where you can be free. It is a place\n","output_type":"stream"}]},{"cell_type":"code","source":"","metadata":{},"execution_count":null,"outputs":[]},{"cell_type":"code","source":"","metadata":{},"execution_count":null,"outputs":[]}]} |