# Building a Chainlit App

What if we want to take our Week 1 Day 2 assignment - [Pythonic RAG](https://github.com/AI-Maker-Space/AIE4/tree/main/Week%201/Day%202) - and bring it out of the notebook?

Well - we'll cover exactly that here!

## Anatomy of a Chainlit Application

[Chainlit](https://docs.chainlit.io/get-started/overview) is a Python package, similar to Streamlit, that lets you write a backend and a front end in a single (or multiple) Python file(s). It is mainly used for prototyping LLM-based, chat-style applications - though it is used in production in some settings with millions of MAUs (Monthly Active Users).

The primary method of customizing and interacting with the Chainlit UI is through a few critical [decorators](https://blog.hubspot.com/website/decorators-in-python).

> NOTE: Simply put, the decorators (in Chainlit) are just ways we can "plug in" to Chainlit's functionality.

We'll be concerning ourselves with three main scopes:

1. On application start - when we start the Chainlit application with a command like `chainlit run app.py`
2. On chat start - when a chat session starts (a user opens the web browser to the address hosting the application)
3. On message - when the user sends a message through the input text box in the Chainlit UI

Let's dig into each scope and see what we're doing!
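
Before we do, here is a minimal sketch (not the full app - the greeting and handler names are just illustrative) of how those three scopes map onto a Chainlit file: module-level code runs on application start, while the `@cl.on_chat_start` and `@cl.on_message` decorators hook into the other two scopes.

```python
import chainlit as cl

# 1. On application start: module-level code runs once when
#    `chainlit run app.py` boots the server.
GREETING = "Hello! Upload a file to get started."

# 2. On chat start: runs every time a user opens (or refreshes) a chat session.
@cl.on_chat_start
async def on_chat_start():
    await cl.Message(content=GREETING).send()

# 3. On message: runs every time the user sends a message from the input box.
@cl.on_message
async def on_message(message: cl.Message):
    await cl.Message(content=f"You said: {message.content}").send()
```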

## On Application Start:

The first thing you'll notice is the traditional "wall of imports" - this ensures we have everything we need to run our application.

```python
import os
from typing import List
from chainlit.types import AskFileResponse
from aimakerspace.text_utils import CharacterTextSplitter, TextFileLoader
from aimakerspace.openai_utils.prompts import (
    UserRolePrompt,
    SystemRolePrompt,
    AssistantRolePrompt,
)
from aimakerspace.openai_utils.embedding import EmbeddingModel
from aimakerspace.vectordatabase import VectorDatabase
from aimakerspace.openai_utils.chatmodel import ChatOpenAI
import chainlit as cl
```

Next up, we have some prompt templates. Since every session will use the same prompt templates without modification - and we don't need them to differ per session - we can set them up here, at the application scope.

```python
system_template = """\
Use the following context to answer a users question. If you cannot find the answer in the context, say you don't know the answer."""
system_role_prompt = SystemRolePrompt(system_template)

user_prompt_template = """\
Context:
{context}

Question:
{question}
"""
user_role_prompt = UserRolePrompt(user_prompt_template)
```

> NOTE: You'll notice that these are the exact same prompt templates we used from the Pythonic RAG Notebook in Week 1 Day 2!
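
To make the "application scope" point concrete, here's roughly how these templates get filled in later on - the `create_message` calls mirror how the pipeline below uses them, and the question/context strings here are just placeholders:

```python
# The system prompt needs no per-request values.
formatted_system_prompt = system_role_prompt.create_message()

# The user prompt is filled with the retrieved context and the user's question.
formatted_user_prompt = user_role_prompt.create_message(
    question="What is retrieval augmented generation?",
    context="Retrieval augmented generation (RAG) pairs a retriever with an LLM...",
)
```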

Following that - we can create the Python class definition for our RAG pipeline - or _chain_, as we'll refer to it for the rest of this walkthrough.

Let's look at the definition first:

```python
class RetrievalAugmentedQAPipeline:
    def __init__(self, llm: ChatOpenAI, vector_db_retriever: VectorDatabase) -> None:
        self.llm = llm
        self.vector_db_retriever = vector_db_retriever

    async def arun_pipeline(self, user_query: str):
        ### RETRIEVAL
        context_list = self.vector_db_retriever.search_by_text(user_query, k=4)

        context_prompt = ""
        for context in context_list:
            context_prompt += context[0] + "\n"

        ### AUGMENTED
        formatted_system_prompt = system_role_prompt.create_message()

        formatted_user_prompt = user_role_prompt.create_message(question=user_query, context=context_prompt)


        ### GENERATION
        async def generate_response():
            async for chunk in self.llm.astream([formatted_system_prompt, formatted_user_prompt]):
                yield chunk

        return {"response": generate_response(), "context": context_list}
```

Notice a few things:

1. We have modified this `RetrievalAugmentedQAPipeline` from the initial notebook to support streaming.
2. In essence, our pipeline is _chaining_ a few events together:
   1. We take our user query, and chain it into our Vector Database to collect related chunks
   2. We take those contexts and our user's questions and chain them into the prompt templates
   3. We take that prompt template and chain it into our LLM call
   4. We chain the response of the LLM call to the user
3. We are using a lot of `async` again!
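
To see what the return value looks like in use, here's a rough sketch of consuming the pipeline outside of Chainlit - it assumes an `llm` and `vector_db` have already been built elsewhere:

```python
import asyncio

async def demo():
    # Assumes `llm` (ChatOpenAI) and `vector_db` (VectorDatabase) already exist.
    pipeline = RetrievalAugmentedQAPipeline(llm=llm, vector_db_retriever=vector_db)
    result = await pipeline.arun_pipeline("What does the document say about X?")

    # result["response"] is an async generator of text chunks;
    # result["context"] holds the retrieved (chunk, score) pairs.
    async for chunk in result["response"]:
        print(chunk, end="", flush=True)

asyncio.run(demo())
```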

#### QUESTION #1:

Why do we want to support streaming? What about streaming is important, or useful?


#### Question #1 Answer:

Streaming lets the user watch the response as it is being generated, rather than waiting for the whole thing to finish. It makes for a better UX - perceived latency drops because users can start reading as soon as the first tokens arrive, instead of staring at a blank screen until the entire response is loaded in.


## On Chat Start:

The next scope is where "the magic happens". On Chat Start is when a user begins a chat session. This will happen whenever a user opens a new chat window, or refreshes an existing chat window.

You'll see that our code is set up to immediately show the user a prompt in the chat window asking them to upload a file.

```python
files = None

# Keep asking until the user actually uploads a file.
while files == None:
    files = await cl.AskFileMessage(
        content="Please upload a Text File file to begin!",
        accept=["text/plain"],
        max_size_mb=2,
        timeout=180,
    ).send()
```
Once we've obtained the text file - we'll use our processing helper function to process our text!
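
That processing helper isn't shown above, but a sketch of it might look like the following. Note the hedges: the exact `aimakerspace` method names (`load_documents`, `split_texts`) are assumptions based on how the imports are used elsewhere, and whether the upload exposes raw bytes (`file.content`) or a path (`file.path`) depends on your Chainlit version.

```python
import tempfile

def process_text_file(file: AskFileResponse) -> List[str]:
    # Persist the upload so TextFileLoader can read it from disk.
    # (Adjust to `file.path` if your Chainlit version provides a path instead of bytes.)
    with tempfile.NamedTemporaryFile(mode="wb", suffix=".txt", delete=False) as tmp:
        tmp.write(file.content)
        temp_path = tmp.name

    # Load the raw text, then split it into chunks ready for embedding.
    documents = TextFileLoader(temp_path).load_documents()
    return CharacterTextSplitter().split_texts(documents)
```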

After we have processed our text file - we'll need to create a VectorDatabase and populate it with our processed chunks and their related embeddings!

```python
vector_db = VectorDatabase()
vector_db = await vector_db.abuild_from_list(texts)
```
Once we have that piece completed - we can create the chain we'll be using to respond to user queries!

```python
retrieval_augmented_qa_pipeline = RetrievalAugmentedQAPipeline(
    vector_db_retriever=vector_db,
    llm=chat_openai
)
```
Now, we'll save that into our user session!
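
Concretely, that's a single call to Chainlit's user-session API - the `"chain"` key is simply the name we'll use to fetch it back later:

```python
cl.user_session.set("chain", retrieval_augmented_qa_pipeline)
```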

> NOTE: Chainlit has some great documentation about the User Session.

#### QUESTION #2:

Why are we using User Session here? What about Python makes us need to use this? Why not just store everything in a global variable?

#### Question #2 Answer:

We use the User Session so that everything we build for a given chat (like our chain) is stored with that session rather than shared across all of them. Because the Chainlit application is a single Python process serving every user, anything kept in a global variable would be shared - and overwritten - across concurrent sessions. Storing the chain in the user session keeps it scoped to the current chat, and starting a new chat simply builds a fresh one in the on-chat-start scope.

## On Message

First, we load our chain from the user session:

```python
chain = cl.user_session.get("chain")
```
Then, we run the chain on the content of the message - and stream it to the front end - that's it!

```python
msg = cl.Message(content="")
result = await chain.arun_pipeline(message.content)

async for stream_resp in result["response"]:
    await msg.stream_token(stream_resp)
```
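
Put together, the whole handler is just a few lines. Here's a sketch of how it assembles - the handler name is arbitrary, and the final `msg.send()` flushes the completed message to the UI:

```python
@cl.on_message
async def main(message: cl.Message):
    # Fetch the chain we built and stored during on_chat_start.
    chain = cl.user_session.get("chain")

    msg = cl.Message(content="")
    result = await chain.arun_pipeline(message.content)

    # Stream each token to the UI as it arrives, then finalize the message.
    async for stream_resp in result["response"]:
        await msg.stream_token(stream_resp)

    await msg.send()
```
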
🎉
With that - you've created a Chainlit application that brings our Pythonic RAG notebook to the web!

### 🚧 CHALLENGE MODE 🚧

For an extra challenge - modify the behaviour of your application by integrating changes you made to your Pythonic RAG notebook (using new retrieval methods, etc.)

If you're still looking for a challenge, or didn't make any modifications to your Pythonic RAG notebook:

1. Allow users to upload PDFs (this will require you to build a PDF parser as well)
2. Modify the VectorStore to leverage Qdrant

> NOTE: The motivation for these challenges is simple - the beginning of the course is extremely information dense, and people come from all kinds of different technical backgrounds. To ensure that all learners can engage with the content confidently and comfortably, we focus on the basic units of technical competency required. That means some learners, who came in with more robust technical skills, may find the introductory material too simple - and these open-ended challenges give them something to stretch into!