khulnasoft committed on
Commit
31707af
1 Parent(s): 1954e11

Upload llm-01-how-to-use-llms-with-hugging-face.ipynb

+ {"metadata":{"kernelspec":{"language":"python","display_name":"Python 3","name":"python3"},"language_info":{"name":"python","version":"3.10.10","mimetype":"text/x-python","codemirror_mode":{"name":"ipython","version":3},"pygments_lexer":"ipython3","nbconvert_exporter":"python","file_extension":".py"}},"nbformat_minor":4,"nbformat":4,"cells":[{"cell_type":"markdown","source":"## ⚡ Further Notebooks In This Course ⚡","metadata":{}},{"cell_type":"markdown","source":"**Notebooks:**\n1. [LLM 01 - How to use LLMs with Hugging Face](https://www.kaggle.com/code/aliabdin1/llm-01-llms-with-hugging-face)\n2. [LLM 02 - Embeddings, Vector Databases, and Search](https://www.kaggle.com/code/aliabdin1/llm-02-embeddings-vector-databases-and-search)\n3. [LLM 03 - Building LLM Chain](https://www.kaggle.com/code/aliabdin1/llm-03-building-llm-chain)\n4. [LLM 04a - Fine-tuning LLMs](https://www.kaggle.com/code/aliabdin1/llm-04a-fine-tuning-llms)\n4. [LLM 04b - Evaluating LLMs](https://www.kaggle.com/code/aliabdin1/llm-04b-evaluating-llms)\n5. [LLM 05 - LLMs and Society](https://www.kaggle.com/code/aliabdin1/llm-05-llms-and-society)\n6. [LLM 06 - LLMOps](https://www.kaggle.com/code/aliabdin1/llm-06-llmops)\n\n**Hands-on Lab Notebooks:**\n1. [LLM 01L - How to use LLMs with Hugging Face Lab](https://www.kaggle.com/code/aliabdin1/llm-01l-llms-with-hugging-face-lab)\n2. [LLM 02L - Embeddings, Vector Databases, and Search Lab](https://www.kaggle.com/code/aliabdin1/llm-02l-embeddings-vector-databases-and-search)\n3. [LLM 03L - Building LLM Chains Lab](https://www.kaggle.com/code/aliabdin1/llm-03l-building-llm-chains-lab)\n4. [LLM 04L - Fine-tuning LLMs Lab](https://www.kaggle.com/code/aliabdin1/llm-04l-fine-tuning-llms-lab)\n5. [LLM 05L - LLMs and Society Lab](https://www.kaggle.com/code/aliabdin1/llm-05l-llms-and-society-lab)","metadata":{}},{"cell_type":"markdown","source":"\n\n<div style=\"text-align: center; line-height: 0; padding-top: 9px;\">\n <img src=\"https://databricks.com/wp-content/uploads/2018/03/db-academy-rgb-1200px.png\" alt=\"Databricks Learning\" style=\"width: 600px\">\n</div>","metadata":{"application/vnd.databricks.v1+cell":{"showTitle":false,"cellMetadata":{},"nuid":"74ccf15a-e1ec-47f6-8fb1-eb0e0f991309","inputWidgets":{},"title":""}}},{"cell_type":"markdown","source":"# LLMs with Hugging Face\n\nIn this notebook, we'll take a whirlwind tour of some top applications using Large Language Models (LLMs):\n* Summarization\n* Sentiment analysis\n* Translation\n* Zero-shot classification\n* Few-shot learning\n\nWe will see how existing, open-source (and proprietary) models can be used out-of-the-box for many applications. For this, we will use [Hugging Face models](https://huggingface.co/models) and some simple prompt engineering.\n\nWe will then look at Hugging Face APIs in more detail to understand how to configure LLM pipelines.\n\n### ![Dolly](https://files.training.databricks.com/images/llm/dolly_small.png) Learning Objectives\n1. Use a variety of existing models for a variety of common applications.\n1. Understand basic prompt engineering.\n1. Understand search vs. sampling for LLM inference.\n1. 
Get familiar with the main Hugging Face abstractions: datasets, pipelines, tokenizers, and models.","metadata":{"application/vnd.databricks.v1+cell":{"showTitle":false,"cellMetadata":{},"nuid":"3696585c-137d-4e7b-8e8b-e12cea3a641b","inputWidgets":{},"title":""}}},{"cell_type":"markdown","source":"## Classroom Setup","metadata":{"application/vnd.databricks.v1+cell":{"showTitle":false,"cellMetadata":{},"nuid":"148d6576-ee3c-41c2-9ff6-85be6927adab","inputWidgets":{},"title":""}}},{"cell_type":"markdown","source":"Libraries:\n* [sacremoses](https://github.com/alvations/sacremoses) is for the translation model `Helsinki-NLP/opus-mt-en-es`","metadata":{"application/vnd.databricks.v1+cell":{"showTitle":false,"cellMetadata":{},"nuid":"25d2b047-1767-4e7b-9c1c-4342fa48ba60","inputWidgets":{},"title":""}}},{"cell_type":"code","source":"%pip install sacremoses==0.0.53","metadata":{"application/vnd.databricks.v1+cell":{"showTitle":false,"cellMetadata":{"rowLimit":10000,"byteLimit":2048000},"nuid":"d222572b-a549-402a-91e9-1f336397304d","inputWidgets":{},"title":""},"execution":{"iopub.status.busy":"2023-06-22T14:56:39.537572Z","iopub.execute_input":"2023-06-22T14:56:39.537972Z","iopub.status.idle":"2023-06-22T14:56:57.890597Z","shell.execute_reply.started":"2023-06-22T14:56:39.537943Z","shell.execute_reply":"2023-06-22T14:56:57.889082Z"},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"code","source":"#%run ../Includes/Classroom-Setup","metadata":{"application/vnd.databricks.v1+cell":{"showTitle":false,"cellMetadata":{"rowLimit":10000,"byteLimit":2048000},"nuid":"7d85d1c0-d6b9-4718-adc1-b6ad8ebfd590","inputWidgets":{},"title":""},"execution":{"iopub.status.busy":"2023-06-22T13:49:00.644085Z","iopub.execute_input":"2023-06-22T13:49:00.644492Z","iopub.status.idle":"2023-06-22T13:49:00.650042Z","shell.execute_reply.started":"2023-06-22T13:49:00.644454Z","shell.execute_reply":"2023-06-22T13:49:00.648976Z"},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"## Common LLM applications\n\nThe goal of this section is to get your feet wet with several LLM applications and to show how easy it can be to get started with LLMs.\n\nAs you go through the examples, note the datasets, models, APIs, and options used. 
These simple examples can be starting points when you need to build your own application.","metadata":{"application/vnd.databricks.v1+cell":{"showTitle":false,"cellMetadata":{},"nuid":"560eb7b0-9ecb-4bef-8be3-34845642a0e5","inputWidgets":{},"title":""}}},{"cell_type":"code","source":"!pip install -U accelerate --quiet","metadata":{"execution":{"iopub.status.busy":"2023-06-22T14:56:57.892742Z","iopub.execute_input":"2023-06-22T14:56:57.893169Z","iopub.status.idle":"2023-06-22T14:57:12.048017Z","shell.execute_reply.started":"2023-06-22T14:56:57.893128Z","shell.execute_reply":"2023-06-22T14:57:12.046407Z"},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"code","source":"from datasets import load_dataset\nfrom transformers import pipeline","metadata":{"application/vnd.databricks.v1+cell":{"showTitle":false,"cellMetadata":{"rowLimit":10000,"byteLimit":2048000},"nuid":"a34f4dd0-29c7-4640-86a9-7c0e25875237","inputWidgets":{},"title":""},"execution":{"iopub.status.busy":"2023-06-22T14:57:19.510429Z","iopub.execute_input":"2023-06-22T14:57:19.510851Z","iopub.status.idle":"2023-06-22T14:57:37.539460Z","shell.execute_reply.started":"2023-06-22T14:57:19.510814Z","shell.execute_reply":"2023-06-22T14:57:37.538078Z"},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"### Summarization\n\nSummarization can take two forms:\n* `extractive` (selecting representative excerpts from the text)\n* `abstractive` (generating novel text summaries)\n\nHere, we will use a model which does *abstractive* summarization.\n\n**Background reading**: The [Hugging Face summarization task page](https://huggingface.co/docs/transformers/tasks/summarization) lists model architectures which support summarization. The [summarization course chapter](https://huggingface.co/course/chapter7/5) provides a detailed walkthrough.\n\nIn this section, we will use:\n* **Data**: [xsum](https://huggingface.co/datasets/xsum) dataset, which provides a set of BBC articles and summaries.\n* **Model**: [t5-small](https://huggingface.co/t5-small) model, which has 60 million parameters (242MB for PyTorch). T5 is an encoder-decoder model created by Google which supports several tasks such as summarization, translation, Q&A, and text classification. 
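These pipelines run on CPU by default, but a GPU speeds them up considerably. A quick optional check, as a minimal sketch (assuming PyTorch is installed in this environment, as it is on Kaggle images):

```python
# Optional: check whether a CUDA-capable GPU is visible to PyTorch.
import torch

print("CUDA available:", torch.cuda.is_available())
```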
### Summarization

Summarization can take two forms:
* `extractive` (selecting representative excerpts from the text)
* `abstractive` (generating novel text summaries)

Here, we will use a model which does *abstractive* summarization.

**Background reading**: The [Hugging Face summarization task page](https://huggingface.co/docs/transformers/tasks/summarization) lists model architectures which support summarization. The [summarization course chapter](https://huggingface.co/course/chapter7/5) provides a detailed walkthrough.

In this section, we will use:
* **Data**: [xsum](https://huggingface.co/datasets/xsum) dataset, which provides a set of BBC articles and summaries.
* **Model**: [t5-small](https://huggingface.co/t5-small) model, which has 60 million parameters (242MB for PyTorch). T5 is an encoder-decoder model created by Google which supports several tasks such as summarization, translation, Q&A, and text classification. For more details, see the [Google blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html), [code on GitHub](https://github.com/google-research/text-to-text-transfer-transformer), or the [research paper](https://arxiv.org/pdf/1910.10683.pdf).

```python
# Create a local cache directory for predownloaded data and models (idempotent).
!mkdir -p cache
```

```python
xsum_dataset = load_dataset(
    "xsum", version="1.2.0", cache_dir="../working/cache/"
)  # Note: We specify cache_dir to use predownloaded data.
xsum_dataset  # The printed representation of this object shows the `num_rows` of each dataset split.
```

This dataset provides 3 columns:
* `document`: the BBC article text
* `summary`: a "ground-truth" summary. Note how subjective this "ground-truth" is: is this the same summary you would write? This is a great example of how many LLM applications do not have obvious "right" answers.
* `id`: article ID

```python
xsum_sample = xsum_dataset["train"].select(range(10))
display(xsum_sample.to_pandas())
```
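It can also help to look at one record in full before summarizing. A minimal sketch using the `xsum_sample` object above (indexing a `Dataset` with an integer returns a plain `dict`):

```python
# Inspect a single record: the article, its reference summary, and its ID.
example = xsum_sample[0]
print(f"Article ({len(example['document'])} characters):")
print(example["document"][:300], "...")
print("\nReference summary:", example["summary"])
print("Article ID:", example["id"])
```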
We next use the Hugging Face `pipeline` tool to load a pre-trained model. In this LLM pipeline constructor, we specify:
* `task`: This first argument specifies the primary task. See [Hugging Face tasks](https://huggingface.co/tasks) for more information.
* `model`: This is the name of the pre-trained model from the [Hugging Face Hub](https://huggingface.co/models).
* `min_length`, `max_length`: We want our generated summaries to be between these two token lengths.
* `truncation`: Some input articles may be too long for the LLM to process. Most LLMs have fixed limits on the length of input sequences. This option tells the pipeline to truncate the input if needed.

```python
summarizer = pipeline(
    task="summarization",
    model="t5-small",
    min_length=20,
    max_length=40,
    truncation=True,
    model_kwargs={"cache_dir": "../working/cache/"},
)  # Note: We specify cache_dir to use predownloaded models.
```

```python
# Apply to 1 article
summarizer(xsum_sample["document"][0])
```

```python
# Apply to a batch of articles
results = summarizer(xsum_sample["document"])
```

```python
# Display the generated summary side-by-side with the reference summary and original document.
# We use Pandas to join the inputs and outputs together in a nice format.
import pandas as pd

display(
    pd.DataFrame.from_dict(results)
    .rename({"summary_text": "generated_summary"}, axis=1)
    .join(pd.DataFrame.from_dict(xsum_sample))[
        ["generated_summary", "summary", "document"]
    ]
)
```
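The pipeline returns a list with one dict per input article, each holding the generated text under the `summary_text` key (the same key we rename above). A minimal sketch for pulling out just the strings:

```python
# Each element of `results` is a dict like {"summary_text": "..."}.
generated_summaries = [r["summary_text"] for r in results]
print(generated_summaries[0])
```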
### Sentiment analysis

Sentiment analysis is a text classification task of estimating whether a piece of text is positive, negative, or another "sentiment" label. The precise set of sentiment labels can vary across applications.

**Background reading**: See the Hugging Face [task page on text classification](https://huggingface.co/tasks/text-classification) or [Wikipedia on sentiment analysis](https://en.wikipedia.org/wiki/Sentiment_analysis).

In this section, we will use:
* **Data**: [poem sentiment](https://huggingface.co/datasets/poem_sentiment) dataset, which provides lines from poems tagged with sentiments `negative` (0), `positive` (1), `no_impact` (2), or `mixed` (3).
* **Model**: [fine-tuned version of BERT](https://huggingface.co/nickwong64/bert-base-uncased-poems-sentiment). BERT, or Bidirectional Encoder Representations from Transformers, is an encoder-only model from Google usable for 11+ tasks such as sentiment analysis and entity recognition. For more details, see this [Hugging Face blog post](https://huggingface.co/blog/bert-101) or the [Wikipedia page](https://en.wikipedia.org/wiki/BERT_&#40;language_model&#41;).

```python
poem_dataset = load_dataset(
    "poem_sentiment", version="1.0.0", cache_dir="../working/cache/"
)
poem_sample = poem_dataset["train"].select(range(10))
display(poem_sample.to_pandas())
```

We load the pipeline using the task `text-classification` since we want to classify text with a fixed set of labels.

```python
sentiment_classifier = pipeline(
    task="text-classification",
    model="nickwong64/bert-base-uncased-poems-sentiment",
    model_kwargs={"cache_dir": "../working/cache/"},
)
```

```python
results = sentiment_classifier(poem_sample["verse_text"])
```
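As an aside, the integer-to-name mapping that we hard-code in the next cell can also be read off the dataset itself. A minimal sketch, assuming the `label` column is a `ClassLabel` feature (as it is for most classification datasets on the Hub):

```python
# Recover the label names directly from the dataset's features.
label_feature = poem_dataset["train"].features["label"]
print(label_feature.names)  # expected: ['negative', 'positive', 'no_impact', 'mixed']
```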
```python
# Display the predicted sentiment side-by-side with the ground-truth label and original text.
# The score indicates the model's confidence in its prediction.

# Join predictions with ground-truth data
joined_data = (
    pd.DataFrame.from_dict(results)
    .rename({"label": "predicted_label"}, axis=1)
    .join(pd.DataFrame.from_dict(poem_sample).rename({"label": "true_label"}, axis=1))
)

# Change label indices to text labels
sentiment_labels = {0: "negative", 1: "positive", 2: "no_impact", 3: "mixed"}
joined_data = joined_data.replace({"true_label": sentiment_labels})

display(joined_data[["predicted_label", "true_label", "score", "verse_text"]])
```

### Translation

Translation models may be designed for specific pairs of languages, or they may support more than two languages. We will see both below.

**Background reading**: See the Hugging Face [task page on translation](https://huggingface.co/tasks/translation) or the [Wikipedia page on machine translation](https://en.wikipedia.org/wiki/Machine_translation).

In this section, we will use:
* **Data**: We will use some example hard-coded sentences. However, there are a variety of [translation datasets](https://huggingface.co/datasets?task_categories=task_categories:translation&sort=downloads) available from Hugging Face.
* **Models**:
  * [Helsinki-NLP/opus-mt-en-es](https://huggingface.co/Helsinki-NLP/opus-mt-en-es) is used for the first example of English ("en") to Spanish ("es") translation. This model is based on [Marian NMT](https://marian-nmt.github.io/), a neural machine translation framework developed by Microsoft and other researchers. See the [GitHub page](https://github.com/Helsinki-NLP/Opus-MT) for code and links to related resources.
  * [t5-small](https://huggingface.co/t5-small) model, which has 60 million parameters (242MB for PyTorch). T5 is an encoder-decoder model created by Google which supports several tasks such as summarization, translation, Q&A, and text classification. For more details, see the [Google blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html), [code on GitHub](https://github.com/google-research/text-to-text-transfer-transformer), or the [research paper](https://arxiv.org/pdf/1910.10683.pdf). For our purposes, it supports translation for English, French, Romanian, and German.
Some models are designed for specific language-to-language translation. Below, we use an English-to-Spanish model.

```python
en_to_es_translation_pipeline = pipeline(
    task="translation",
    model="Helsinki-NLP/opus-mt-en-es",
    model_kwargs={"cache_dir": "../working/cache/"},
)
```

```python
en_to_es_translation_pipeline(
    "Existing, open-source (and proprietary) models can be used out-of-the-box for many applications."
)
```

Other models are designed to handle multiple languages. Below, we show this with `t5-small`. Note that, since it supports multiple languages (and tasks), we give it an explicit instruction to translate from one language to another.

```python
t5_small_pipeline = pipeline(
    task="text2text-generation",
    model="t5-small",
    max_length=50,
    model_kwargs={"cache_dir": "../working/cache/"},
)
```

```python
t5_small_pipeline(
    "translate English to French: Existing, open-source (and proprietary) models can be used out-of-the-box for many applications."
)
```

```python
t5_small_pipeline(
    "translate English to Romanian: Existing, open-source (and proprietary) models can be used out-of-the-box for many applications."
)
```
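Since German is also among the languages T5 was trained on (see the list above), the same pipeline should handle that pair too. A minimal sketch:

```python
# Same instruction pattern, different target language.
t5_small_pipeline(
    "translate English to German: Existing, open-source (and proprietary) models can be used out-of-the-box for many applications."
)
```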
### Zero-shot classification

Zero-shot classification (or zero-shot learning) is the task of classifying a piece of text into one of a few given categories or labels, without having explicitly trained the model to predict those categories beforehand. The idea appeared in literature before modern LLMs, but recent advances in LLMs have made zero-shot learning much more flexible and powerful.

**Background reading**: See the Hugging Face [task page on zero-shot classification](https://huggingface.co/tasks/zero-shot-classification) or [Wikipedia on zero-shot learning](https://en.wikipedia.org/wiki/Zero-shot_learning).

In this section, we will use:
* **Data**: a few example articles from the [xsum](https://huggingface.co/datasets/xsum) dataset used in the Summarization section above. Our goal is to label news articles under a few categories.
* **Model**: [nli-deberta-v3-small](https://huggingface.co/cross-encoder/nli-deberta-v3-small), a fine-tuned version of the DeBERTa model. The DeBERTa base model was developed by Microsoft and is one of several models derived from BERT; for more details on DeBERTa, see the [Hugging Face doc page](https://huggingface.co/docs/transformers/model_doc/deberta), the [code on GitHub](https://github.com/microsoft/DeBERTa), or the [research paper](https://arxiv.org/abs/2006.03654).

```python
zero_shot_pipeline = pipeline(
    task="zero-shot-classification",
    model="cross-encoder/nli-deberta-v3-small",
    model_kwargs={"cache_dir": "../working/cache/"},
)


def categorize_article(article: str) -> None:
    """
    This helper function defines the categories (labels) which the model must use to label articles.
    Note that our model was NOT fine-tuned to use these specific labels,
    but it "knows" what the labels mean from its more general training.

    This function then prints out the predicted labels alongside their confidence scores.
    """
    results = zero_shot_pipeline(
        article,
        candidate_labels=[
            "politics",
            "finance",
            "sports",
            "science and technology",
            "pop culture",
            "breaking news",
        ],
    )
    # Print the results nicely
    del results["sequence"]
    display(pd.DataFrame(results))
```

```python
categorize_article(
    """
Simone Favaro got the crucial try with the last move of the game, following earlier touchdowns by Chris Fusaro, Zander Fagerson and Junior Bulumakau.
Rynard Landman and Ashton Hewitt got a try in either half for the Dragons.
Glasgow showed far superior strength in depth as they took control of a messy match in the second period.
Home coach Gregor Townsend gave a debut to powerhouse Fijian-born Wallaby wing Taqele Naiyaravoro, and centre Alex Dunbar returned from long-term injury, while the Dragons gave first starts of the season to wing Aled Brew and hooker Elliot Dee.
Glasgow lost hooker Pat McArthur to an early shoulder injury but took advantage of their first pressure when Rory Clegg slotted over a penalty on 12 minutes.
It took 24 minutes for a disjointed game to produce a try as Sarel Pretorius sniped from close range and Landman forced his way over for Jason Tovey to convert - although it was the lock's last contribution as he departed with a chest injury shortly afterwards.
Glasgow struck back when Fusaro drove over from a rolling maul on 35 minutes for Clegg to convert.
But the Dragons levelled at 10-10 before half-time when Naiyaravoro was yellow-carded for an aerial tackle on Brew and Tovey slotted the easy goal.
The visitors could not make the most of their one-man advantage after the break as their error count cost them dearly.
It was Glasgow's bench experience that showed when Mike Blair's break led to a short-range score from teenage prop Fagerson, converted by Clegg.
Debutant Favaro was the second home player to be sin-binned, on 63 minutes, but again the Warriors made light of it as replacement wing Bulumakau, a recruit from the Army, pounced to deftly hack through a bouncing ball for an opportunist try.
The Dragons got back within striking range with some excellent combined handling putting Hewitt over unopposed after 72 minutes.
However, Favaro became sinner-turned-saint as he got on the end of another effective rolling maul to earn his side the extra point with the last move of the game, Clegg converting.
Dragons director of rugby Lyn Jones said: "We're disappointed to have lost but our performance was a lot better [than against Leinster] and the game could have gone either way.
"Unfortunately too many errors behind the scrum cost us a great deal, though from where we were a fortnight ago in Dublin our workrate and desire was excellent.
"It was simply error count from individuals behind the scrum that cost us field position, it's not rocket science - they were correct in how they played and we had a few errors, that was the difference."
Glasgow Warriors: Rory Hughes, Taqele Naiyaravoro, Alex Dunbar, Fraser Lyle, Lee Jones, Rory Clegg, Grayson Hart; Alex Allan, Pat MacArthur, Zander Fagerson, Rob Harley (capt), Scott Cummings, Hugh Blake, Chris Fusaro, Adam Ashe.
Replacements: Fergus Scott, Jerry Yanuyanutawa, Mike Cusack, Greg Peterson, Simone Favaro, Mike Blair, Gregor Hunter, Junior Bulumakau.
Dragons: Carl Meyer, Ashton Hewitt, Ross Wardle, Adam Warren, Aled Brew, Jason Tovey, Sarel Pretorius; Boris Stankovich, Elliot Dee, Brok Harris, Nick Crosswell, Rynard Landman (capt), Lewis Evans, Nic Cudd, Ed Jackson.
Replacements: Rhys Buckley, Phil Price, Shaun Knight, Matthew Screech, Ollie Griffiths, Luc Jones, Charlie Davies, Nick Scott.
"""
)
```

```python
categorize_article(
    """
The full cost of damage in Newton Stewart, one of the areas worst affected, is still being assessed.
Repair work is ongoing in Hawick and many roads in Peeblesshire remain badly affected by standing water.
Trains on the west coast mainline face disruption due to damage at the Lamington Viaduct.
Many businesses and householders were affected by flooding in Newton Stewart after the River Cree overflowed into the town.
First Minister Nicola Sturgeon visited the area to inspect the damage.
The waters breached a retaining wall, flooding many commercial properties on Victoria Street - the main shopping thoroughfare.
Jeanette Tate, who owns the Cinnamon Cafe which was badly affected, said she could not fault the multi-agency response once the flood hit.
However, she said more preventative work could have been carried out to ensure the retaining wall did not fail.
"It is difficult but I do think there is so much publicity for Dumfries and the Nith - and I totally appreciate that - but it is almost like we're neglected or forgotten," she said.
"That may not be true but it is perhaps my perspective over the last few days.
"Why were you not ready to help us a bit more when the warning and the alarm alerts had gone out?"
Meanwhile, a flood alert remains in place across the Borders because of the constant rain.
Peebles was badly hit by problems, sparking calls to introduce more defences in the area.
Scottish Borders Council has put a list on its website of the roads worst affected and drivers have been urged not to ignore closure signs.
The Labour Party's deputy Scottish leader Alex Rowley was in Hawick on Monday to see the situation first hand.
He said it was important to get the flood protection plan right but backed calls to speed up the process.
"I was quite taken aback by the amount of damage that has been done," he said.
"Obviously it is heart-breaking for people who have been forced out of their homes and the impact on businesses."
He said it was important that "immediate steps" were taken to protect the areas most vulnerable and a clear timetable put in place for flood prevention plans.
Have you been affected by flooding in Dumfries and Galloway or the Borders? Tell us about your experience of the situation and how it was handled. Email us on [email protected] or [email protected].
"""
)
```
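The helper works on any text, not just xsum articles. A minimal sketch with a short, made-up headline (illustrative text, not from the dataset):

```python
# A one-sentence, invented headline to sanity-check the zero-shot labels.
categorize_article(
    "The central bank raised interest rates by half a percentage point on Tuesday."
)
```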
### Few-shot learning

In few-shot learning tasks, you give the model an instruction, a few query-response examples of how to follow that instruction, and then a new query. The model must generate the response for that new query. This technique has pros and cons: it is very powerful and allows models to be reused for many more applications, but it can be finicky and require significant prompt engineering to get good and reliable results.

**Background reading**: See the [Wikipedia page on few-shot learning](https://en.wikipedia.org/wiki/Few-shot_learning_&#40;natural_language_processing&#41;) or [this Hugging Face blog about few-shot learning](https://huggingface.co/blog/few-shot-learning-gpt-neo-and-inference-api).

In this section, we will use:
* **Task**: Few-shot learning can be applied to many tasks. Here, we will do sentiment analysis, which was covered earlier. However, you will see how few-shot learning allows us to specify custom labels, whereas the previous model was tuned for a specific set of labels. We will also show other (toy) tasks at the end. In terms of the Hugging Face `task` specified in the `pipeline` constructor, few-shot learning is handled as a `text-generation` task.
* **Data**: We use a few examples, including a tweet example from the blog post linked above.
* **Model**: [gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B), a version of the GPT-Neo model discussed in the blog linked above. It is a transformer model with 1.3 billion parameters developed by Eleuther AI. For more details, see the [code on GitHub](https://github.com/EleutherAI/gpt-neo) or the [research paper](https://arxiv.org/abs/2204.06745).

```python
# We will limit the response length for our few-shot learning tasks.
few_shot_pipeline = pipeline(
    task="text-generation",
    model="EleutherAI/gpt-neo-1.3B",
    max_new_tokens=10,
    model_kwargs={"cache_dir": "../working/cache/"},
)
```

***Tip***: In the few-shot prompts below, we separate the examples with a special token "###" and use the same token to encourage the LLM to end its output after answering the query. We will tell the pipeline to use that special token as the end-of-sequence (EOS) token below.

```python
# Get the token ID for "###", which we will use as the EOS token below.
eos_token_id = few_shot_pipeline.tokenizer.encode("###")[0]
```
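If you swap in a different separator, it is worth checking that it still encodes to a single token; otherwise the EOS trick will not stop generation where you expect. A minimal sketch:

```python
# "###" should map to exactly one token for the EOS trick above to work.
tokens = few_shot_pipeline.tokenizer.tokenize("###")
print(tokens, "->", len(tokens), "token(s)")
```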
```python
# Without any examples, the model output is inconsistent and usually incorrect.
results = few_shot_pipeline(
    """For each tweet, describe its sentiment:

[Tweet]: "This new music video was incredible"
[Sentiment]:""",
    eos_token_id=eos_token_id,
)

print(results[0]["generated_text"])
```

```python
# With only 1 example, the model may or may not get the answer right.
results = few_shot_pipeline(
    """For each tweet, describe its sentiment:

[Tweet]: "This is the link to the article"
[Sentiment]: Neutral
###
[Tweet]: "This new music video was incredible"
[Sentiment]:""",
    eos_token_id=eos_token_id,
)

print(results[0]["generated_text"])
```

```python
# With 1 example for each sentiment, the model is more likely to understand!
results = few_shot_pipeline(
    """For each tweet, describe its sentiment:

[Tweet]: "I hate it when my phone battery dies."
[Sentiment]: Negative
###
[Tweet]: "My day has been 👍"
[Sentiment]: Positive
###
[Tweet]: "This is the link to the article"
[Sentiment]: Neutral
###
[Tweet]: "This new music video was incredible"
[Sentiment]:""",
    eos_token_id=eos_token_id,
)

print(results[0]["generated_text"])
```
Just for fun, we show a few more examples below.

```python
# The model isn't ready to serve drinks!
results = few_shot_pipeline(
    """For each food, suggest a good drink pairing:

[food]: tapas
[drink]: wine
###
[food]: pizza
[drink]: soda
###
[food]: jalapeno poppers
[drink]: beer
###
[food]: scone
[drink]:""",
    eos_token_id=eos_token_id,
)

print(results[0]["generated_text"])
```

```python
# This example sometimes works and sometimes does not, when sampling. Too abstract?
results = few_shot_pipeline(
    """Given a word describing how someone is feeling, suggest a description of that person. The description should not include the original word.

[word]: happy
[description]: smiling, laughing, clapping
###
[word]: nervous
[description]: glancing around quickly, sweating, fidgeting
###
[word]: sleepy
[description]: heavy-lidded, slumping, rubbing eyes
###
[word]: confused
[description]:""",
    eos_token_id=eos_token_id,
)

print(results[0]["generated_text"])
```
```python
# We override max_new_tokens to generate longer answers.
# These book descriptions were taken from their corresponding Wikipedia pages.
results = few_shot_pipeline(
    """Generate a book summary from the title:

[book title]: "Stranger in a Strange Land"
[book description]: "This novel tells the story of Valentine Michael Smith, a human who comes to Earth in early adulthood after being born on the planet Mars and raised by Martians, and explores his interaction with and eventual transformation of Terran culture."
###
[book title]: "The Adventures of Tom Sawyer"
[book description]: "This novel is about a boy growing up along the Mississippi River. It is set in the 1840s in the town of St. Petersburg, which is based on Hannibal, Missouri, where Twain lived as a boy. In the novel, Tom Sawyer has several adventures, often with his friend Huckleberry Finn."
###
[book title]: "Dune"
[book description]: "This novel is set in the distant future amidst a feudal interstellar society in which various noble houses control planetary fiefs. It tells the story of young Paul Atreides, whose family accepts the stewardship of the planet Arrakis. While the planet is an inhospitable and sparsely populated desert wasteland, it is the only source of melange, or spice, a drug that extends life and enhances mental abilities. The story explores the multilayered interactions of politics, religion, ecology, technology, and human emotion, as the factions of the empire confront each other in a struggle for the control of Arrakis and its spice."
###
[book title]: "Blue Mars"
[book description]:""",
    eos_token_id=eos_token_id,
    max_new_tokens=50,
)

print(results[0]["generated_text"])
```

**Prompt engineering** is a new but critical technique for working with LLMs. You saw some brief examples above. As you use more general and powerful models, constructing good prompts becomes ever more important. Some great resources to learn more are:
* [Wikipedia](https://en.wikipedia.org/wiki/Prompt_engineering) for a brief overview
* [Best practices for prompt engineering with OpenAI API](https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api)
* [🧠 Awesome ChatGPT Prompts](https://github.com/f/awesome-chatgpt-prompts) for fun examples with ChatGPT
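One simple prompt-engineering habit is to factor the fixed examples into a reusable template so that only the query changes between calls. A minimal sketch, reusing the few-shot pipeline and EOS token from above (the helper name is ours, not part of the course code):

```python
def sentiment_prompt(tweet: str) -> str:
    """Build a few-shot sentiment prompt around a new tweet."""
    return f"""For each tweet, describe its sentiment:

[Tweet]: "I hate it when my phone battery dies."
[Sentiment]: Negative
###
[Tweet]: "My day has been 👍"
[Sentiment]: Positive
###
[Tweet]: "{tweet}"
[Sentiment]:"""


results = few_shot_pipeline(
    sentiment_prompt("This new music video was incredible"),
    eos_token_id=eos_token_id,
)
print(results[0]["generated_text"])
```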
## Hugging Face APIs

In this section, we dive into some more details on Hugging Face APIs.
* Search and sampling to generate text
* Auto* loaders for tokenizers and models
* Model-specific loaders

Recall the `xsum` dataset from the **Summarization** section above:

```python
display(xsum_sample.to_pandas())
```

### Search and sampling in inference

You may see parameters like `num_beams`, `do_sample`, etc. specified in Hugging Face pipelines. These are inference configurations.

LLMs work by predicting (generating) the next token, then the next, and so on. The goal is to generate a high probability sequence of tokens, which is essentially a search through the (enormous) space of potential sequences.

To do this search, LLMs use one of two main methods:
* **Search**: Given the tokens generated so far, pick the next most likely token in a "search."
  * **Greedy search** (default): Pick the single next most likely token in a greedy search.
  * **Beam search**: Greedy search can be extended via beam search, which searches down several sequence paths, via the parameter `num_beams`.
* **Sampling**: Given the tokens generated so far, pick the next token by sampling from the predicted distribution of tokens.
  * **Top-K sampling**: The parameter `top_k` modifies sampling by limiting it to the `k` most likely tokens.
  * **Top-p sampling**: The parameter `top_p` modifies sampling by limiting it to the most likely tokens up to probability mass `p`.

You can toggle between search and sampling via the parameter `do_sample`.

For more background on search and sampling, see [this Hugging Face blog post](https://huggingface.co/blog/how-to-generate).

We will illustrate these various options below using our summarization pipeline.

```python
# We previously called the summarization pipeline using the default inference configuration.
# This does greedy search.
summarizer(xsum_sample["document"][0])
```

```python
# We can instead do a beam search by specifying num_beams.
# This takes longer to run, but it might find a better (more likely) sequence of text.
summarizer(xsum_sample["document"][0], num_beams=10)
```

```python
# Alternatively, we could use sampling.
summarizer(xsum_sample["document"][0], do_sample=True)
```
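Because sampling is stochastic, repeated calls on the same article can produce different summaries, which is a quick way to see how it differs from deterministic greedy search. A minimal sketch:

```python
# Run the sampled summarizer twice; the two outputs will usually differ.
for _ in range(2):
    print(summarizer(xsum_sample["document"][0], do_sample=True)[0]["summary_text"])
```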
```python
# We can modify sampling to be more greedy by limiting sampling to the top_k or top_p most likely next tokens.
summarizer(xsum_sample["document"][0], do_sample=True, top_k=10, top_p=0.8)
```

### Auto* loaders for tokenizers and models

We have already seen the `dataset` and `pipeline` abstractions from Hugging Face. While a `pipeline` is a quick way to set up an LLM for a given task, the slightly lower-level abstractions `model` and `tokenizer` permit a bit more control over options. We will show how to use those briefly, following this pattern:

* Take input articles.
* Tokenize them (converting to token indices).
* Apply the model on the tokenized data to generate summaries (represented as token indices).
* Decode the summaries into human-readable text.

We will first look at the [Auto* classes](https://huggingface.co/docs/transformers/model_doc/auto) for tokenizers and model types which can simplify loading pre-trained tokenizers and models.

API docs:
* [AutoTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoTokenizer)
* [AutoModelForSeq2SeqLM](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModelForSeq2SeqLM)

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the pre-trained tokenizer and model.
tokenizer = AutoTokenizer.from_pretrained("t5-small", cache_dir="../working/cache/")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small", cache_dir="../working/cache/")
```

```python
# For summarization, T5-small expects a prefix "summarize: ", so we prepend that to each article as a prompt.
articles = list(map(lambda article: "summarize: " + article, xsum_sample["document"]))
display(pd.DataFrame(articles, columns=["prompts"]))
```
```python
# Tokenize the input
inputs = tokenizer(
    articles, max_length=1024, return_tensors="pt", padding=True, truncation=True
)
print("input_ids:")
print(inputs["input_ids"])
print("attention_mask:")
print(inputs["attention_mask"])
```

```python
# Generate summaries
summary_ids = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    num_beams=2,
    min_length=0,
    max_length=40,
)
print(summary_ids)
```

```python
# Decode the generated summaries
decoded_summaries = tokenizer.batch_decode(summary_ids, skip_special_tokens=True)
display(pd.DataFrame(decoded_summaries, columns=["decoded_summaries"]))
```
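Decoding without `skip_special_tokens` makes the model's raw output visible, which can help when debugging generation settings. A minimal sketch (T5 starts generation from a `<pad>` token and ends sequences with `</s>`):

```python
# Raw decode: special tokens such as <pad> and </s> are kept in the text.
raw_summaries = tokenizer.batch_decode(summary_ids)
print(raw_summaries[0])
```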
### Model-specific tokenizer and model loaders

You can also more directly load specific tokenizer and model types, rather than relying on `Auto*` classes to choose the right ones for you.

API docs:
* [T5Tokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/t5#transformers.T5Tokenizer)
* [T5ForConditionalGeneration](https://huggingface.co/docs/transformers/main/en/model_doc/t5#transformers.T5ForConditionalGeneration)

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small", cache_dir="../working/cache/")
model = T5ForConditionalGeneration.from_pretrained(
    "t5-small", cache_dir="../working/cache/"
)
```

```python
# The tokenizer and model can then be used similarly to how we used the ones loaded by the Auto* classes.
inputs = tokenizer(
    articles, max_length=1024, return_tensors="pt", padding=True, truncation=True
)
summary_ids = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    num_beams=2,
    min_length=0,
    max_length=40,
)
decoded_summaries = tokenizer.batch_decode(summary_ids, skip_special_tokens=True)

display(pd.DataFrame(decoded_summaries, columns=["decoded_summaries"]))
```

## Summary

We've covered some common LLM applications and seen how to get started with them quickly using pre-trained models from the Hugging Face Hub. We've also seen how to tweak some configurations.

But how did we find those models for our tasks? In the lab, you will find new pre-trained models for tasks, using the Hugging Face Hub. You will also explore tweaking model configurations to gain intuition about their effects.

&copy; 2023 Databricks, Inc. All rights reserved.<br/>
Apache, Apache Spark, Spark and the Spark logo are trademarks of the <a href="https://www.apache.org/">Apache Software Foundation</a>.<br/>
<br/>
<a href="https://databricks.com/privacy-policy">Privacy Policy</a> | <a href="https://databricks.com/terms-of-use">Terms of Use</a> | <a href="https://help.databricks.com/">Support</a>