url
stringlengths 36
73
| transcription
stringlengths 31
481k
| title
stringlengths 8
99
| duration
int64 95
31.1k
| uploader
stringlengths 4
44
| upload_date
stringlengths 8
19
| description
stringlengths 0
4.04k
| datetime
stringlengths 26
26
|
---|---|---|---|---|---|---|---|
https://www.youtube.com/watch?v=example | This is a sample transcription. | Sample Title | 123 | Sample Uploader | 2024-06-10T12:34:56 | Sample description. | 2024-06-10T01:34:31.859268 |
https://www.youtube.com/live/Brwhbjh3boU | Hey, fun fact. Guys, did you know that max sequence input length is actually not the same as the context window? Wiz, Leo, isn't that right? Yeah. Pretty true, Greg. Yeah, yeah, yeah, yeah, yeah. Fun fact, and we will dive into all of the details that we need to go long on context today. You guys ready to get into it? Let's go. Let's do it. Ready. Yep. All right. We'll have you guys back in just a sec. Today, we go long on context with the chief scientist from Gradient, Leo Pichelis, and of course, the Wiz. Today, we're talking about exactly how these context windows should be thought about, how we can make them longer, what this requires, and how it's happening in real time. And of course, once we have the long context window, how do we know that it's actually performing well? How can we evaluate the long context window performance? So we're going to set this off by providing some context. Then we're going to talk about what goes into actually creating an off-the-shelf model that then gets expanded with its context window. And we're going to talk about some of the popular frameworks for evaluating long context window LLMs. We're going to talk about some of the popular frameworks for evaluating long context window LLMs. So I want to sort of first provide a little background here, a little context. In April, of course, Meta released Lama 3. And what was so interesting is when they put out the blog, they said, well, we're not going to do the long context window for a little while. We're working on it. We've got a bigger model. We've got some longer context versions. And everybody said, man, it's only 8K input sequence length. I don't know. I'm really left wanting with this small amount that I can put into this thing. And the race was on in the industry. Gradient, friends of ours, were instantly off to the races, crushing it. Somehow, within just a few days, working maniacally over the weekend, we understand, released 160K context length version of the 8 billion parameter model before they then released a few days later a 1 million context length version of the 8B. And then they released a 262 context length version of the 70B before they just went straight buck wild on May the 4th with their 1 million context length version of 70B. They've actually released a 4 million length version of 8B now, and it's really just kind of hard to keep track of all the cool stuff that they're doing so i wanted to sort of get leo up on stage to tell you a little bit about what this has been like behind the scenes at gradient so leo i mean how did you guys pull this off before meta and so quickly what's it been like down in the trenches and like what should we expect to see from you guys next yeah yeah thanks um it's definitely been a a lot of work kind of like as you mentioned and you know a number of uh nights and weekends i think a a lot of it is being in the right place at the right time um we were coincidentally already working on some long context stuff before Llama 3 dropped. We were working with our compute partner, actually, Crusoe, and they had just spun up, I think it's a thousand GPU L40s cluster. And we were kind of like throwing the ball around, trying to figure out uh what's something uh really interesting that we could work on together and we settled on on long context right i think over the past few months like really just this year people been uh putting out long context models there was like the large world model uh maybe like a month or two ago up to like a million context uh google gemini has been talking about it too um and so we were coincidentally working on some stuff um and it was kind of the perfect storm too because it wasn't just us it was kind of like the whole open source community uh there's this great easy context github repo out there like again literally like released maybe like two weeks or three weeks before llama 3 and so it we were in the right place at the right time working on the right stuff heard about llama 3 dropping and we're like great let's let's see how it would work and it all kind of just fell together it wasn't much of a pivot then it was just sort of like an acceleration of some of you guys were already working on behind the scenes? Yeah, yeah, exactly. I mean, we were mostly working on like Lama 2 architectures. And so like there's a little bit of like a hope and pray that all of this would just kind of work for Lama 3 and like credit to Meta and their team, it did. It kind of worked flawlessly. Nice, nice. So, I mean, you know, we saw the 4 million input sequence length. What's next? What are you guys working on now? And I mean, how long are we going here exactly? Yeah. I mean, you could potentially take it as long as you want. But, you know, at a certain point, there's diminishing returns, right? Like 4 million context length. I think it's something like 1.25 tokens per word or something like that. But anyways, at like 4 million, you've got like the whole Harry Potter book series a number of times over. And you figure at the end of the day, Gradient serves enterprise customers, right? Like we build agent systems for enterprise company. And something like four or five million is pretty much kind of like where the demand that we've been seeing is. And so I think probably the pivot from here is making those models better. I think in that direction, there's tons of room to improve, both on kind of like evaluating and figuring out, well, okay, it can like read long context, but maybe you want it to like reason about long context. And then also just working on the applications, right? I think we're just barely scratching the surface of all the cool stuff we could do with it. Yeah, yeah, yeah. I like this idea of sort of enterprise and they have sort of a kind of context length that seems to be sort of this diminishing returns point. This might be an interesting place for us to come back to in our discussions today. And then, you know, this idea of exactly what better means, I think we'll cross that path as well. Thanks for giving us some initial context, Leo. We'll have you back in just a little bit to keep the discussion rolling. For now, what we're going to talk about is a little background. And it's important to understand sort of this idea in general of in-context learning is kind of fundamental. This came out with the GPT-3 paper called Language Models or Few Shot. And it's it's all in the figure caption right here. Larger models make increasingly efficient use of in context information in the previous GPT one and GPT two paradigms long time ago now, but they were really requiring a lot of this fine tuning. And what we saw with the in-context learning enabled GPT-3 is that if we just did zero shot, that is instruction only, one shot, meaning one example of the kind of thing you wanted to do into few shot, especially at large enough model sizes, we're getting very, very, very performant results across many tasks. And in fact, if you've watched our channel before, you've noticed that we oftentimes talk about RAG as fundamentally requiring this in-context learning piece. And, you know, it's a bit jargony to even say in-context learning. What we're really talking about is we're talking about using the prompts, right? We're talking about doing some prompt engineering, using the input to the LLM, giving clear and specific instructions. And of course, we can talk about providing a context during our prompting. We can say, hey, you're a specific role. You should act a specific way. You have a specific feeling. But more importantly, when we want to fact checkable things, we're oftentimes saying, hey, we actually want to worry about this retrieved context piece. Because, of course, as we move from prompt engineering into RAG, we're sort of optimizing the context. That's fundamentally what we're doing. And as we go from zero shot to one shot to shot, to many shot learning, we're just giving more and more examples. These are sort of task-specific things. But in RAG, we're kind of giving more reference material. In either case, we're sort of giving the LLM access to knowledge that it wasn't really trained on in the first place. And the way that this works in a retrieval augmented generation system is we ask a question, it gets converted to a vector representation, and we've stored all of our potentially relevant data locally in a database called a vector store. locally in a database called a vector store. And then we look for stuff that's similar to our question. The stuff that we find that's similar to our question, we put and inject directly into our prompt using a prompt template that says something like use the provided context to answer the user's query. You context to answer the user's query. You may not answer the user's query unless there's specific context. If you don't know the answer, say I don't know. And then what we return is we return natural language directly to this context. And this retrieval process is the R in RAG, but this augmentation that we're doing to the prompt, that's in context learning. And so the RA into the G that is finally the generation, that's really the whole process here to get our answer. Now, in terms of what we're talking about doing now is we're saying, well, what if we just had a question and then we just copy pasted the entire Harry Potter series a couple of times over, right? Then how about just like, bam, let's just shove it into the model and get our answer. This is kind of the long context approach. And you might say, well, why do people want to do this? And I mean, clearly it's a no brainer, right? Because it doesn't get easier than doing something like this. So let's bring our experts back up to the stage here to have a little discussion about the big question that we are constantly getting here, guys. Why do we even need RAG if we have long context windows? They're going to kill RAG, right? Let's start off with that. Leo, what are your thoughts on this? Yeah, I mean, I don't think RAG is dead if you get long context, right? But I do think it looks a little bit different. You know, like one of the things that RAG really depends on is you got to pull out the right information. And that's why, like, people are working on a lot of, like, re-rankers. Like, you pull out more information than you think you need and then you kind of, like, sift through it. That stuff is, like, pretty complicated. And like, isn't it great if you could just pull out more information than you need throw it all into the model and let it kind of like figure out what it uses and what it doesn't use um and i think like especially the part that like rag is like pretty it would be pretty hard to achieve with just rag is when the pieces of information are interrelated, right? It's like, one example is say, and it's kind of a silly example, but, you know, maybe in one document, you have people's birthdays, right? Like maybe Sally's birthday is May 22nd. And in another document, you have Sally went to the store and bought like apples and oranges. And you ask the rag, what does someone whose birthday is in May, like buy at the store? Because those two pieces are interrelated, I think it would take an LLM multiple queries, but if you had it all in the context, it'd be just like, well, apples and oranges. So I think like that kind of interrelated stuff is really where the long context shines. Yeah. Yeah, so this sort of long range interrelations and correlations, I mean, I think this is something you've thought quite a lot about all the questions we get from students was, and maybe you can kind of comment on this quote that I took directly from you here and to kind of color the discussion if we built a perfect retrieval system then we wouldn't need long context is this true or false in your opinion and talk a little bit about why you think so well i feel like because i said it i'm gonna have to say that it's true uh The idea, I think, is right. If we have perfect retrieval, then we always get the information that we need to answer the question correctly. So we don't really need long context. We don't necessarily need long context, assuming that the context we need to answer the question isn't super long, right? And for most questions, that's going to be reasonably true. But that perfect retrieval system is not a trivial problem. Let's like, you know, be a little silly about it. But it's a very, very difficult problem, in fact. And I think, you know, long context can help us fudge that perfect context by just including more information so that we're more likely to have the correct context when we're looking to do anything with that context. Okay. To qualify a bit. That's what it is. Yeah, yeah, yeah, yeah. I mean, like, so this is clearly a hot clearly a hot take. Now I want to bring in, Leo, you mentioned enterprise earlier and you mentioned this four to five million length. I mean, enterprises got to be asking themselves this question. How are you thinking about providing enterprise the best solution, you know, that sort of balances great retrieval with the right amount of context are you typically leveraging both of these tools uh in any given use case uh maybe just tell us a little bit about um something that uh you know maybe you can tell us about right no happy happy to to share uh i know people people might get mad at me at work afterwards, but happy to share whatever. Yeah, I think the one thing I was going to say about the perfect retrieval is I think you can think about long context as a way to get at perfect retrieval. I think that's kind of what you're getting at as well um you know it's like i always like to think about analogies and like for us like in the human world and you know i could ask um i don't know like like a like a researcher or like a lawyer or someone to find me the yeah maybe like lawyer right like ask them to find me like the best counter argument for this particular case and and yeah like what they would probably do is like read through like a whole ton of different like prior cases find out like the few examples that are really worthwhile and like compile them and and return them right and like that one piece of information they return might very well be pretty short context right you know lawyers get get paid bait by the hour it's probably you know worthwhile to a lot of folks to to have it be a concise summary but there's a ton of work that went into that right and and so maybe if you have the llm go through all those pages and pages of briefs to to end up with that perfect retrieval that's that's like one way to do it. You know, you were asking about like the cost considerations for enterprise and totally that's like something to think about because I think the trade-off that you're making by using a long context window is it is probably more costly than doing a RAG query. You know, it takes a certain number of GPUs. I think on eight GPUs with an 8B model, we're getting up to 500K or maybe 600K context. Anything past that, we got to get beefier GPUs, we've got to get more of them. And so, you know, for any kind of like where the rubber meets the road system, you got to think about ROI, right? So, you know, maybe you don't want to throw like the whole internet or like, you know, Harry Potter five times over into the context, maybe some of it you want in the rag and some of it you want in the long context. Okay. Okay. Okay. So long context then sort of in spirit is kind of moving towards this idea of a perfect retrieval system. Something like, you know, we were chatting the other day about, Wiz and I were chatting about kind of like whether using semantic chunking or you're using a hybrid retrieval mechanism or you're using re-ranking you're sort of trying to figure out what is most relevant in the context right and when we do the assessment with tools like rag assessment we're sort of saying we don't want very duplicative information we don't want uh you know stuff that's going to be sort of just copy paste of the sameness, but we also want sort of rank ordered relevant information. I mean, you know, we talk about the problem RAG is solving as sort of being fact checkable. Leo, do you think that's the same problem long context is solving or can we sort of state it a little bit differently? I mean, are we kind of moving in the same direction with these two tools? Yeah, I think it's definitely moving in the same direction. I mean, fact checkable is such an interesting term because if you think about like all of the data that a language model is trained on it's it's trained on facts right it's trained on information that's out there um but like you know hallucination the model maybe is is extrapolating or it's like connecting pieces of information that maybe it shouldn't be connecting right and and all of this kind of like learning is trying to trying to teach the model both about new facts and how it maybe like shouldn't connect pieces of information that it has. And so like fact checking, I think you can totally do with long context because the facts are in the context. And I think there's been plenty of research to show that kind of like you mentioned in context learning, way more sample efficient than trying to pre-train or fine tune the model, right? And so that can be kind of like your last step fact checking that is very sample efficient, I think similar to RAG. I think similar to RAG. I think one of the things that I like about putting more and more into the language model is this kind of principle of just getting out of the way of the model and letting it learn, right? Like there's something beautiful about the fact that you have a very sophisticated AI model. You feed it examples and information, and it is able to synthesize together how it retrieves information or how it ranks pieces of data versus necessarily needing to code up the algorithm by hand, right? Like let the model figure it out. So one of the things we did before is we sort of said rag versus fine tuning. And it's sort of like you need both to kind of get to this sort of production grade LM application. Like, is it similar here with rag and long context? Like, what's at the end of this tunnel? Is this an illusion of choice or is it real? You know, Wiz, give me your take on this. And then we'll go to Leo before we kind of wrap this rag versus long it's just so tough to grok this sort of like when do I use either exactly you know and I think it's the new hottest question out there yeah I mean I mean uh for me this question is is pretty easy. It's both. I think that there are times when we're going to be able to use, you know, RAG and long context to make sure that we're doing, you know, getting everything that we need or up to everything that we need, especially if we're thinking about kind of API-based solutions where we're not actually needing to host those long context models, right, especially if we're thinking about kind of API based solutions where we're not actually needing to host those long context models, right, so that the flexibility of the context window is almost explicitly free for us, right? We're only paying if we use it. And especially if we're, you know, we're thinking about RAG across very specific sets of documents that don't change much or don't change frequently that we want to do retrieval across, we can kind of exploit the fact that we can hold this long context in memory, right? So we don't actually have to do these repeated loads and unloads. So we're going to wind up with less of a latency hit. The idea is that at some point, you know, especially as we continue to discover more about, you know, efficient long context, we're going to be able to do both. And, you know, it's one of those things where if one is 2% better on one task and the other is, you know, 2% better on the other task, and there's not a serious cost to use both then we should probably we should probably just use both right like it'll it'll be better than uh than either individually yeah yeah okay because you can use caching with both you can use sort of the these these cost memory saving sort of things with both you can uh are they are they both sort of becoming just, you know, table stakes here, Leo? Is that is that sort of fair to say? How would you like to sort of wrap this for the audience here? Yeah, yeah. I mean, I think that's like actually very probably like like a pretty deep statement in the fact that they're actually not that dissimilar at the end of the day. And yes, definitely using both. And my suspicion is that as we, again, super at the frontier of all of this stuff, but as we go forward, I imagine that rag and long context will just be parts of a spectrum. And the one piece of evidence I have in my mind is, again, don't mean to sound like a broken record, but like thinking about like people and humans, like the way our minds work is there is like some piece of information that we have like packed up back there, like somewhere in our brains that we reference and we look up and there's some pieces of information that we have at the forefront. Right. Like I was reviewing all the work we did for long context before this talk. And that's like in my current long context, we do both. So I think it stands to reason that AIs are going to do both just the same. Yeah. Yeah. You're studying for the exam. That's the long context. Right. But like the sort of like TED talk nuggets that like entered your brain and solidified and crystallized maybe that's the rag yeah super exactly things stuff guys okay I'm gonna give a little more context here before the next discussion thank you so much guys we'll have you back in just a moment so I want to talk about the context window now, because this is sort of an important point that I think goes misunderstood and understated. When we are thinking about the problem that we're solving with long context, we're talking about the fact that the model has only learned correlations across sequences of length, whatever it was trained on. So the Harry Potter books keep coming up. It doesn't know things across the whole series necessarily because it's never seen a single input sequence of that length. And this is where we get into this sort of distinction between context window and input sequence length. Let's take a look at some of the latest models. Llama3 said, we trained the models, quote, "'trained the models on sequences of 8,000 tokens." Trained the model. They didn't say the context window was this length. Whereas GPT-4 Omni, they say the context window, 128K. The thing that I want to tell all of you today is that really there is no sort of in and out when it comes to context window. There is only context window. So I was looking through the GPT, I guess, forums, the OpenAI forums, and there was just a great sort of upvoted, you know, hey, what exactly is the difference in input? The context window includes all input, output, and control tokens. Now, this is very important because the context window is to be thought of as all the stuff that goes in and all the stuff that can be held that comes out. So this is kind of a difficult thing to conceptualize. And I want to bring our experts back up to the stage to talk a little bit about how they think about this context window. I think oftentimes, guys, we see that there's a sort of context window length that you can put into an LLM and everybody thinks like that's the end of the story here but actually it's more than that it's the input and output that need to be considered and so like how do you think about context window and balancing your understanding of input i.e. sort of max input sequence length with context window in general. Let's kick it off with you, Leo, here. Yeah. I think like on a like slightly more like technical side of things, the whole like you're absolutely right, right? Like the context window is everything, right? It's like the system prompt it's what you put in and it's what you get out uh the reason that's true is is exactly what makes like these AI models so powerful is that it is able in its memory for figuring out the next token or the next word reference everything that's been it's seen before right and so like when it when it gets to its output in order to to tell you the right thing uh it needs to know about everything that you've inputted and also you know you mentioned open ai like everything open ai put in the system prompt um and and so that's exactly why like technically speaking the the context window is is everything and like you know when it gets to the very last word the output it needs to know everything that it's written before and everything that you gave it and also the system prompt um yeah and so like yeah but well go for it yeah keep going you're the scientist here we'd love to hear more of your perspective you know oh i was i was very rapidly reaching the end of what i wanted to say. I think probably you all have more to say about the relationship between input and output sizes. Yeah, so go for it. Yeah, I mean, if we're sort of thinking of this predicting the next token thing and this sort of idea of predicting the next word, everybody's like, oh, these GPT models, they're just next word predictors. oh, these GPT models, they're just next word predictors. I mean, Wiz, can you cast this into sort of an input and output context for us, let's say, as we think about the next token and sort of like, how many next tokens can we print out before we start to run into this issue of, I've now filled up my context window. I need a longer context window. How should we think about this? Yeah, I mean, the idea is it's like a tank that can fill up, right? If you add more tokens to the context window by, say, generating them, and then you look at all the tokens that you've had as input, as your system prompt, and that you've generated, eventually you'll have generated enough tokens that you're no longer, you've used all the context window. And something's got to give at that point, right? The models are not built to be hyper flexible in this sense. So we're going to have to tackle the fact that we're going to just start dropping context from the, say, the beginning of our input, right? So we can slide it along. We can continue for as long as we'd like, but if you have, say, a context window of 100, right? And you generate, you give it an input of 50 tokens, right? And it generates 100 tokens. When it's generating that 100th token, it's only looking at half of what you gave it, right? So this is the idea that's kind of floating out of context. And I think that, now I want to be clear, because there are literal hard-coded API limits for a lot of these services that you're going to use whenever you're serving models, whenever you're interacting with API models. Like the API or the application layer is going to put limits on how much stuff you can send it and then how much stuff it can generate. So you're going to see things like maximum input and maximum output, but those are, are, are artificial limits or it, I would say arbitrary, but that feels a little bit crude, but the idea is they are, they're well-reasoned limits that are, that are imposed by people. It's nothing to do with the actual nature of the model. It's just to do with the fact that like, typically, we don't want models to generate for very long time, especially if your business is paying for the compute, right? You don't want the model to generate 700,000 tokens because you're paying for it, and it's probably going to impact latency of other users. And then you're also, you know, you can't let people put in too much because you know, you're going to generate X number of tokens, right. Or up to X number of tokens. So you can't let people put in literally the entire context window, or you're immediately gonna lose context. I'm assuming that you're handling it gracefully even which some, uh, serving platforms don't do. So, okay. So quick follow-up on that. So if I'm generating tokens and then I'm sort of running out of context window, does that mean I'm losing the system prompt? Or can I be more flexible about the way I'm doing this? Is this kind of the way that most of these systems work? I think most of them just don't handle it, to be honest with you. So if you look at the popular serving platforms, they just don't let you do that when the context window gets too full, quote-unquote gets too full uh quote unquote gets too full they'll they'll simply stop working and in fact most of them are designed so that can't happen uh under reasonable circumstances so they limit the amount of input tokens and they limit the amount of potential output tokens so we never reach that point uh but of course with with the brilliant engineering teams that we have uh you could certainly design the system so it preferentially drops the input tokens versus a system. But at that point, what are we doing? Sure, sure, sure. Yeah, yeah. Okay, so then Leo, these long context windows can actually slide and have more input or more output. Is that right? Yeah. I mean, I actually think it might be a little hard to say exactly what would happen if you kind of like give it more context than it was trained on. Like, I don't think it's like necessarily clear what, like, you know, this kind of like goes back to, you know, what a model knows and what it doesn't know. You know, imagine if you yourself have like only ever read, I don't know, 100 page books, and someone gave you like a thousand page book and said, hey, go figure it out. Like you might do something reasonable, but you also might just start doing something completely unhinged, right? And so I think like, you know, trying to put my like product hat or like engineering hat on, I think imposing a clear restriction of, hey, let's keep the context within bounds of what the model knows about. Like maybe is a more like it is a clearer experience or a more expected experience than, you know, go ham and but but God help you with like what the model is going to say. That's right. That's right. Okay, so if the whole point is to be fact checkable and avoid hallucinations, we'd still want to get into this mess. Yeah, yeah, yeah, yeah, yeah. All right. That and just like a point of clarification here, Leo, is it true that like we put all of the tokens that are currently in the context window that is all input and all output each time to predict the next token? Is that sort of what's happening here? Sort of running everything back through each time? Yeah, yeah. There's like caching things that you can get into where like you don't have to like redo all of the computation from scratch. But yeah, exactly. Like you generate the next token by giving it everything that it's seen before. Yeah. OK. And yeah, join us next week. We're talking next token. Exactly how to do these predictions. So I want to go into training now a little bit, and I want to talk about how you guys actually created this long context setup. So like, Leo, I'd love for you to just talk a little bit about the training process and sort of technically, I know you guys had to get access to all this compute. How should we even think about what you did? You took this model that was massive off the shelf, and then you sort of, is this more of an unsupervised pre-training thing, more of a supervised fine-tuning thing, or is this sort of somewhere in between in this sort of continued pre-training space? Yeah, yeah, great question. So in training, kind of like a long context model, there's basically like two main challenges. One is just the pure computational challenge. You may have heard about attention is like quadratic complexity in the context link uh that's exactly because every token needs to be able to reference every other token and so like things that are quadratic don't necessarily scale well and so you like very quickly start like running out of memories these things get huge um and then the other side i think we already touched on is just the fact that like the model has never had to deal with tokens that are, you know, a million length apart. And so like there's probably some amount of like instruction or training. And you need to give it examples for it to like now be able to figure that out. And so those those two pieces, you know, we can get into it in more detail, but, you know, those are kind of like the two things that we had to figure it out. And then the other thing that you asked was about kind of the, you know, the unsupervised pre-training versus the fine tuning. tuning. And yeah, it's a bit of both, right? Like one of the things that we found works pretty well is to do unsupervised pre-training on long contexts, right? Like typically how the pre-training works is you give the model some text, you know, whether it's like code or a book or what have you, and you have it like use the previous part of the book to predict the next token. Do the same thing for long context, right? that's like a very straightforward thing to do um one of the uh issues with with just doing that is it becomes a kind of like a single a single purpose model right like all it knows is how to like give you the next part of whatever you give to it. And so if you want it to then be able to like chat with you, for example, you got to layer on other stuff on top of it. And so so that's what we did. Like we first did the pre-training and then we layered on. It's called like instruction tuning. But you tell it how to chat. That's right. So did you take the base model off the shelf then or did you take the instruction? OK. Oh, no. So sorry. Sorry. Sorry. Take that back. We took the instruction to model. I preempted the question. Yeah, yeah, yeah. Okay. Okay. All right. Yeah, yeah. That's, that's in line with I think all the things that are kind of viewers have heard from us, like, you know, take the instruction tuned off the shelf when, you know, why not when possible? Why not? It knows more stuff. Yeah. It can do more things. Yeah. Can you give us a sense too, of like the, the kind of like problem that you're solving when you really need this long context thing? I mean, you know, tell us Harry's entire life story or something. I can imagine, right. That's a sort of the Harry Potter example, but like, can you give us some sort of sense of what these data sets look like that you're using or the problem spaces that Enterprise is thinking about with these longer context windows? Yeah, they're definitely related. I would say on the on the data set side, one of the things that we ran into is there's just not that many uh like pre-existing long contexts out there like you know at one point we were like scouring like project gutenberg to see what are the longest books out there and it's like yeah like you got like war and peace i think it clocks in at like 700 and something k um I think you have like Les Mis. It's also like 600 K. There's only a few of them out there. So pretty quickly you have to get into like synthetic data generation. And yeah, we can chat about that too. On the use case side, there's a few that that we've thought of um maybe one of the more compelling ones is is on coding um you know for like right now like coding assistants they kind of like know pretty locally like around the the code that you're writing and and maybe like slightly more than that and so they're pretty good at like uh giving you like the the next part of the line that you're writing in Python or whatever. Right. But now imagine all of a sudden you can throw like your entire code base into it. Now we can basically if you get like much closer to this idea of describing the feature you want, maybe giving it enough detail, and it's able to pull in all the stuff that it needs. So that's one use case. There's a couple. Yeah, Wiz, what do you think about that? You think we got the AI programmer coming soon? That is a great use case. I mean, because it's like, I can't really think of things that are that long. You know, it's like the entire code base of something that that's potentially massive yeah so good um anything to add on this Wiz no I think yeah I think it's well covered I mean uh you know the AI programmer is always I feel like six weeks away that's it you know so yeah brought out to Alan and Kevin by the way yeah that's right uh watch out here comes kevin uh we got that on our youtube channel as of yesterday we'll still be six weeks away six weeks from now but i think it is that's right it's better right i mean like uh and when it comes to uh when it comes to something like the the long context uh in this idea of like basically using the the as like a glorified cache. It is an entire code base is a great thing to have it in cache if your company has a very large code base, right? It becomes worthwhile to answer really, you know, really complex questions that require that level of context, you know, quickly. So absolutely. I mean, I can see it being a super, super valid use case. Yeah. Yeah. Okay. Very cool, guys. Well, we're going to go ahead and start the last portion of our presentation today, where we talk about eval next. And what we have prepared a short demo for you guys today because we don't want to make it all panel discussion to give you some some more context on how people are evaluating these llms today a lot of times you'll see for instance this is from gradients post on the 8 billion 1 million context length and you'll see this sort of plot. And this is sort of called the needle in a haystack, where you have here on the y-axis, you have depth percent, sort of how far into the context window are we going? And are we looking for this needle in our haystack? looking for this needle in our haystack. And then we have sort of this token limit, token number that we're sort of looking versus this depth percent. Okay, so we're sort of how far into the different size context windows are we going to look? And you see here, we want this thing to kind of be all green and green of course is this one here. And it means you found it, right? Red down here means, ah, you missed it, right? And so we put this little sort of nugget, this little needle in the haystack, the sort of famous example is the sort of like, you know, kind of magic number. And we sort of just put it in the context and we see if the LLM can find it. So what we did today is we prepared a short demo where we're actually going to see if we can do this needle in a haystack approach on GPT-4 Omni, this 128,000 context length, long context, let's say, LLM that just came out last week. And Wiz is going to present this for us to give you guys a little bit of an insight into how this is happening and then we'll come back and we'll talk about exactly how gradient was doing this with all the models that it came out we'll talk about the future of evaluation and we'll wrap up for more q a in the slido, over to you. Oh yeah, okay, so this is pretty straightforward, thankfully, to some work done by this Greg. The basic idea of needle in a haystack is very straightforward, right? We are going to fill up the context window with stuff, and then we are going to ask a very specific question about something that we've added to the stuff. So first of all, let's just let's just talk about, you know, some dependencies, we need this needle haystack, we need this laying chain community. And we're going to grab the repo that has this, this library, because we also want to yep sorry you guys I will share the the link to the notebook here but the idea is right we're just gonna grab this repo so that we have all of Paul Graham's essays this is like a kind of the the classic example of this right is we use Paul Graham's essays as the haystack so the stuff we're to fill our context window up with is those essays. Now, we're going to need to use this nest async apply so that we can run the async loop in Colab, just necessary. We're going to use OpenAI as both our model that we're testing and our evaluator, right? So in order to prevent situations where the model answers with the correct response, but perhaps using slightly different wording and yada yada, we're going to use an LLM to determine whether or not the response contained the correct answer. So that's why we're using an LLM as the evaluator here. We're going to use a bunch of different parameters. The basic idea is pretty straightforward here. Like we need to make sure that we set everything up and then we can just kind of run this. It is going to take like a while because we are testing a lot. So in this case, we're only going to test a total of 10 intervals and 5 depth percents. So let's talk about what these things are doing, right? So model to test, straightforward. We're going to use OpenAI, and we're going to test GPT-4.0. Our evaluator is going to be GPT-4.0. It doesn't matter that these are the same because they're not doing the same thing. We're just using this evaluator model to see if our response contained the correct answer, so if it contained the needle. The true answer is Flaming Hot Cheetos. The question asked is, what specific food does Jerry like? This is homage to Jerry from Llamaspace, who used this prompt when doing some NIH a little while ago. And then our needle is Jerry likes Flaming Hot Cheetos. So this is basically, the needle is just some text. And the idea of needle and haystack is we fill the context up to some amount with uh you know paul graham's essays and then we place the phrase jerry likes flaming hot cheetos somewhere in that context now the the important piece is that we can place it at various depths which means say for instance zero depth is the start of the sequence and 100% depth is the end of the sequence. And then we can place it at various intervals along that, you know, beginning to end, right? So this is the idea that we want to see, does the model have blind spots potentially in, you know, in the middle, or, you know, towards, towards the end, or towards the beginning, right, this is kind of seeing where does this retrieval task, where is this retrieval task successful. Now, we can also set up the context lengths, so we go from 100 tokens, so that's only 100 tokens of Paul Graham's essays, plus our little phrase, all the way up to 118,000 tokens. 128K is the maximum for GBT4O, but we just chose 118 to give it some wiggle room, as the library sometimes overestimates and overshoots this number. And we're going to split it into 10 intervals. And then we're going to do this, you know, we're going to basically place the token along every 20% as we do the evaluation in that input sequence. Easy peasy. That is it. So the whole idea here is we fill up some amount of context, depending on what interval we're on, with polygram essays, and then we plop that token in at the according depth. And then we see if we can answer this question and get this response. So we run this. It runs for a while. It took about five minutes. And it can cost, right? Because we're just running an API endpoint. Luckily, GBT4O is cheap. So it's not costing too much. And then, of course, we can use our, we can create the table, right? So we'll create a data frame frame convert it to a pivot table and then we're going to use plot lead to plot it you'll notice that there's some blank spaces this is just where uh the the library failed to retrieve a response so they're nans so they show up as clear uh but the idea is and that's the not the model's fault that's the uh library's fault, but this is it, right? So the idea is, and how to read this chart, right, is at 80K, right, at each of these tested depths from zero to 100, the model always answered the question successfully in each of these trials, right? So it didn't matter where we placed the needle in 80K context, it always found it and same with up up to the maximum uh which is going to be 113k is the actual uh value that it used we are able to we are able to uh retrieve or fetch the token in all cases, all the way up to 118K, right? So what this means is that up to its maximum context length, we are always able to successfully answer the question. We'll just scroll up to double check that it's what specific food does Jerry like? And we always get the response, Jerry, or a response close to it, Jerry likes flaming hot Cheetos. So we always have something that says that Jerry likes flaming hot Cheetos. So we always have something that says that Jerry enjoys flaming hot Cheetos. And so what this means is that on this, albeit very simple task, this model is always able to retrieve the correct sequence that we've injected into those polygram essays. Now I do wanna talk about something very quickly, which is that this test is awesome, but it's only testing for that retrieval task. And so we're gonna go back to Greg and we're gonna hear a little bit more about, other ways that we can evaluate and what this truly means for evaluation. So I'll send it back to Greg. Yeah, Flamin' Hot Cheetos are fire. Very cool that Jerry's a fan too. So the needle in a haystack is all about putting something in there and trying to find it, right? We're looking how far in are we able to find it? And then we can sort of expand this idea. The sort of multi-needle in a haystack, we can look at, and this is from a blog that LankChain put out, more needles, right? Longer contexts, you see sort of number of needles is the number of unique facts here, context length, smaller one is green, larger one is red here. And so we sort of see this decreasing performance of how many needles we're able to actually retrieve with the longer context and more needles, right? Makes sense. And we can sort of extend this even further. So there was a paper put out called Ruler, and this was actually from NVIDIA. And the Ruler paper sort of combined the retrieval aspects here, the multi-needle in a haystack sort of retrieval aspects with then this kind of variable tracking approach that was sort of aimed at doing this multi-hop tracing. And so what we're trying to do here is we're trying to look at sort of multi-hop connections. We're trying to look at, you know, a step beyond sort of, Hey, do we found this sort of one thing? And then we can also kind of look collectively and do some aggregation as well, where we're looking at common or really, really frequent words, and we're kind of able to look a little bit more at the long range context through the variable tracking and looking at common and frequent words. And together, these sort of constitute this idea of this ruler approach. And there's a paper link in the slides. It's not popping up right here, but it'll be available post-session. I want to sort of bring our guests back up and kick off the Q&A for today. We're going to stick around for a couple extra minutes to answer some of your questions. But when we think about eval, and Leo, start with you on this, the needle in the haystack thing sort of is towards this kind of perfect retrieval system at least but it does seem that there's this real kind of you know if the motivation is to solve the problem of very long range you know correlations across big context then that's really a little bit different than the perfect retrieval, right, that we talked about earlier. It's, you know, are we evaluating both of these things well today? And if not, what's missing? Yeah, I think the discussion we've been having previously kind of like perfectly sets this up because when you were describing when you're both describing needle and haystack i was thinking about it and like yeah this is exactly a test that rags should just like completely knock out of the park right like literally what rag is designed to do um and so i think you know it's it's a very important primitive for long context, right? Like it's got to get this right. It has any hopes of doing anything more sophisticated, but it is very much the first step, right? Like being able to grab like information in history and replicate a pattern. I think this is like similar to tests folks used to do on language models like a couple of years ago. It's called these like induction head tests. And I think like there was I don't remember the name of the research paper, but it basically showed like a causal link between doing well on needle in a haystack like things and then being able to build up to in context learning so it's like an important first step um but like exactly as you put it for like perfect retrieval and all the stuff that we've been talking about um it's not just pulling out one fact but it's being able to pull out multiple interrelated facts and being able to reason about them um I really like that you talked about ruler because I think it's like 100% a step in the right direction, especially on some of those like later tasks that you mentioned, like the variable tracking, right? You have somewhere in the haystack that like x1 equals 10, somewhere else that x2 equals x1, somewhere else that x3 equals x2. And so like, again, it's got to be able to reason kind of like across these pieces X two. And so like, again, it's gotta be able to reason kind of like across these pieces of information and the same thing with all the other ones too. So yes on ruler and also yes on kind of like bridging that continuum between the very easy things and then kind of like, as you mentioned, perfect retrieval or where we're trying to head with kind of these like real world use cases. Yeah. Yeah. So can we expect like a paper on evaluation from you guys anytime soon or what's going on inside a gradient that is sort of related to this? Are you guys coming up with custom things or are you forced to confront this issue with enterprise? Kind? How's this manifesting for you specifically? Yeah, totally. I mean, I've actually just been talking to the ruler folks at Nvidia very recently. We've been talking about collaborating on a few things. I think they're maybe slightly tangential. But one of the interesting things that that comes up is this distinction between uh like the prompt and and the the task um and and and basically it's that um not every model like expects prompts to be written the same way there's actually some really interesting like uh like Lama3 and how it's actually better to have the instruction at the end of the prompt instead of before the example. And it's actually reverse of what GPT expects. But anyways, if you want to be testing the model's performance on long context, maybe it's like you should be optimizing the prompt a little bit to the model. So we've been talking about that kind of stuff to maybe give like a slightly maybe like clearer or more nuanced picture as to like how do you compare all of these different models. But like for us internally, again, like what we're super excited about is actually like using these models in the real world. And so that's the direction that we've been going. You know, like I think for learning how to improve the models, evals are great. And for comparing models, evals are great. But like I always say, hey, let's let's iterate quickly. Let's let's throw code base into this thing. Let's throw all of Shakespeare's works into this context and see if they can write like more poetry. So that's kind of the direction we've been going in. That's right. That's right. Yeah. Show everything in and then see if the person that does the job now says, yeah, I did a good job on that or not. Right. Very, very cool. Okay. So, you know, just we'll start Q&A now. In conclusion, rag and long context, both necessary. We're talking about this long range, also about retrieval. We heard this sort of four to 5 million is where we hit this diminishing return. So don't expect to see 10 million from them anytime soon, but we're expecting to see great things coming out of gradient and this eval is definitely still emerging. So please upvote your favorite questions. You know, In the spirit of building, I wanted to kick off with one of the questions that I saw from anonymous here that says, my app works-ish. Why should I consider an LLM with a long context? Won't this cost more and take longer? Leo, what do you think about that? Leo Sack I mean, I think it probably depends on the use case. You know, I think if you have an app that works-ish with short context, you know, maybe you don't need to add in kind of like superfluous or unnecessary context and pay for it. I do think there are other cases where adding in a bunch of context is helpful. So it's probably like fairly task dependent at this point. Maybe in the future, once these things get more efficient, once there's more kind of like engineering built on top of it, once the model itself is able to discern maybe a little bit more how much context to use potentially, then maybe the answer is like, just use it for everything. But yeah, I think for now it is a little bit on the user to decide where to employ this. Yeah, yeah, absolutely, absolutely. And I think this gets back to the performance versus cost issue. You know, if you're not gonna increase performance that much, maybe you shouldn't spend more, you know? Yeah, it's like, hey, let's do stuff that makes sense from an application standpoint. So technical question here, Leo, with attention scaling quadratically with context length, for one million context context are dense attention matrices still used or is there some sort of different uh tension setup that you're using yeah um great great question um so the way the way that we trained our models, it does do the full quadratic attention, right? So this is like as granular as you can get, like the model can directly compare every token kind of like on that full context window. And the tradeoff, right, we're talking about kind of like cost versus benefit. And this is, I think, like kind of like the really exciting piece of work that we're like about kind of like cost versus benefit um and this is i think like kind of like the really exciting piece of work that we're like now starting to look into is like you know maybe maybe that's that's not entirely necessary for all tasks uh maybe the context that that's pretty far back uh maybe you can compress that a little bit more right um and and at that point uh you start getting closer to the realm of uh efficiency that that's a little bit near kind of like the short context models um and where this kind of comes up is maybe a little bit less on the training because you know like yeah sure like there are uh you know races to speak, of like getting like the first whatever long context model out. But at the end of the day, you're doing training offline where this really comes into practice is for serving the model. Right. Like if you you probably don't want to give the model a question and have to go take like a 10 minute coffee break every time you do. And so like for these kinds of like actual serving use cases, I think sort of maybe like where this question is implying or like leading to looking at ways to compress information further back in time is like pretty important. Hmm, yeah, Wiz, this is something we've been talking about a lot recently, right? And a lot of people are paying attention to this, you know, with Grok coming on the scene and all this other stuff happening like there's training and then there's inference and like it's different and so like how should we be thinking about this wiz in terms of like long context versus rag training versus inference it seems like we covered a lot of questions today that are in the chat, and maybe this is a nice one to end on. Give us your thoughts, and then Leo, you can sort of close us up. Training versus inference, long context versus RAG. How do we sort of navigate this decision space, thinking about computing and leveraging these tools? Yeah, I mean, so the way I understand what you're saying, I think the big thing to look at is going to be inference, right? So for training compute, unless you're, you know, you're someone like Gradient where your job is serving awesome models to do awesome things, I don't know that you're going to be, like, wanting to spend highly in the training realm. You'll wanna do fine tuning for sure. You'll wanna leverage these PEF methods in order to do that cost efficiently. But otherwise I think you're not gonna have a ton of training budget. When it comes to inference though, you're not going to have a ton of training budget. When it comes to inference, though, inference is a space that is very quickly moving, right? So the folks at VLLM, folks at NVIDIA, of course, folks at even AMD, like everyone right now is, and of course, Grok and these kind of bespoke LLM chips or whatever, you know, they're going to call them. And I'm just going to not mention Google's TPUs, I guess. But the idea is, you know, there are so many different platforms that are doing so much great work right now. I think we're spoiled for choice right now uh and you know it it doesn't take much to get a 10 percent uh you know improvement right in in inference latency so uh versus like this rag uh context window thing i think that's a much more nuanced conversation um where you know we're not seeing kind of those huge jumps in performance. We don't even know how to evaluate base non-long context models well yet, right? That's still not a solved space. We have ways we can kind of do it. And we have golden data sets and this kind of thing that help us to do it. But in reality, we don't have even the tools to evaluate like small, smaller context models well, right? And so this idea that we're then going to be able to evaluate a longer context model super well becomes, you know, like Leo said, step in the right direction with ruler. We're always marching forward, right? We haven't marched back yet, but you don't even know how to evaluate them. And then with RAG, it's the same problem, but now we have six moving parts. So, you know, that's a very nuanced conversation. But the way I think about it is start with RAG and start with whatever your favorite, whatever platform you're currently on has inference optimization solutions, AWS, GCP, Azure, whatever system you're working through, they're going to have a system you can use to run model fast and then start with RAG. If you find yourself running against this wall where retrieval is just not enough and you have a pile of contacts that you need to be referencing I think long context becomes worth the cost very quickly um but it is a anyway I'll stop there yeah yeah and I just want to end like that was sort of a a messy question I think the fine point on it, Leo, is like, you guys released like six models. How should we pick which one to use? And how fast are they doing inference? Exactly. Do we have to wait five minutes unless we get a bunch of GPUs on the million or, you know, most people watching are going to pick up what and then most enterprises are going to pick up what? what and then most enterprises are going to pick up what yeah uh like like really good questions um you know i i think like and this i've been playing around with it recently for for some demos i think on a like you know eight uh gpu l40s cluster um it's doing like the 8B model is doing like 600K context length with like a 10 to 20 seconds, maybe closer to 20 second response time. Take that as you will, right? Like that's something that's like pretty easy to spin up just using like VLLM currently currently so you don't need a ton of like additional coding or magic on top of it um for for kind of like faster and and more efficient inference i i'm gonna say stay tuned like i know um you know we're we're definitely working on that um and in making these models a lot cheaper and easier to run inference on um I know other people are working on that as well. And I think, you know, like, it's a really interesting point of, like, thinking about, like, where to use RAG and long context and pre-training and fine-tuning. The thing that I'll add on top of that is putting examples in context, like in context learning is like way more sample efficient than doing fine tuning. And so like I think as like long context gets a little bit more developed, a little bit more like baked in and efficient, like you have this really cool paradigm paradigm of throw a bunch of examples into the rag, pick out even more of them to more than you could before into the long context and use that instead of fine tuning. And that to me feels like something that's much easier to work with than going through the whole fine tuning paradigm. And so that's personally what I'm like, pretty excited about as far as like upcoming use cases. Retrieve your way into fine tuning through the context window. Something like that. Okay. Very, very like that. Yeah. Yeah. All right, guys. Well, epic discussion. Thanks for sticking around to answer extra questions today. Wiz, Leo, discussion. Thanks for sticking around to answer extra questions today, Wiz, Leo. Really appreciate you guys. Great job. It's time to wrap up today. Thanks for joining us today. If you like this event, let us know. We are interested in getting more awesome speakers and brilliant scientists and engineers like Leo building, shipping, and sharing with us. Of course, like and subscribe. We will be back on YouTube next week talking about the details of next token prediction. It will really connect sort of this big zoomed out view we had today with a very, very zoomed in depth view. If you haven't joined the AIM community yet on Discord, jump in, start building, shipping and sharing with us. We don't bite and neither does anybody else in the community. If you have questions, you run into issues, error codes, go ahead and jump in and ask your question. There's people around to help. And then if you're interested in joining our very next cohort that kicks off next week on Tuesday. This is going to be AI Engineering Cohort 3. We just had an absolute banger of a Cohort 2 demo day last week. Check out those videos on YouTube. That is available. We still have seats left for that cohort. So check it out, get your application started. And even if you don't actually enroll, you'll learn a lot just going through the application process. But that's a wrap for today, everybody. Thanks for sticking around late with us. Please share any feedback that you have about our event in the feedback forms that you'll get post-event. And as always, until next time, keep building, shipping, and sharing, and we, as well as Gradient, will certainly keep doing the same. Thanks, everybody. Have a great rest of your week. We'll see you soon. | Long Context Windows: Extending Llama 3 | 4,351 | AI Makerspace | 20240523 | Join us live to discover how Gradient AI is pushing the boundaries of LLM technology with groundbreaking long-context versions of Llama 3! We'll explore how Gradient's small team outpaced Meta by releasing Llama 3 models with unprecedented context windows, reaching up to 4 million tokens. Learn the technical intricacies of expanding context window sizes from 8B to 70B parameters, the compute requirements, and the challenges faced. We'll delve into popular benchmarks like Greg Kamradt’s Needle in a Haystack and discuss with Gradient's Chief Scientist, Leo Pekelis, the future of RAG versus long-context LLMs. Don't miss this chance to understand the cutting-edge advancements in AI and get your questions answered live.
Event page: https://lu.ma/longllama
Have a question for a speaker? Drop them here:
https://app.sli.do/event/9kSLiGTxM1CzkJKmpk3VDS
Speakers:
Leonid Pekelis, Chief Scientist at Gradient
https://www.linkedin.com/in/leonid-pekelis/
Dr. Greg, Co-Founder & CEO
https://www.linkedin.com/in/gregloughane
The Wiz, Co-Founder & CTO
https://www.linkedin.com/in/csalexiuk/
Apply for our new AI Engineering Bootcamp on Maven today!
https://bit.ly/aie1
For team leaders, check out!
https://aimakerspace.io/gen-ai-upskilling-for-teams/
Join our community to start building, shipping, and sharing with us today!
https://discord.gg/RzhvYvAwzA
How'd we do? Share your feedback and suggestions for future events.
https://forms.gle/dRMWrwHM9kjGc4A97 | 2024-06-10T01:43:59.013198 |
https://www.youtube.com/watch?v=ulTvNAXI_1E&ab_channel=AIMakerspace | Hey, Wiz. Hey Wiz, so agents, they're pretty dope and we've explored them before. Does that mean multi-agents are even more dope? Yeah, Greg, I think it does mean that. They're multi-dope. We've reached the next level of dopeness so you're saying that we can build something dope today and then use multi-agents to even up the dopeness technically speaking all of that is true yes okay so we're gonna technically increase the dopeness of agents. Wiz, I cannot wait to see what you've got in store for us. We'll see you back in a bit, my man. Welcome everybody. I'm Dr. Gregg. That's the Wiz. We're talking multi-agent rag today. We're talking multi-agents today, multi-agent frameworks. We're going to root this discussion in the patterns of GenAI that you need to know if you're going to build LLM applications. There's a lot of complexity to untangle, and we've got a short time to spend together today. So if you've got questions along the way, please drop them in the YouTube live chat or in the Slido link that we're dropping below. We will prioritize Slido at the end. Let's go ahead and get right into it today. We're talking multi-agents. We're talking multi-agent RAG. We're going to be using Lang Chain and Lang Graft to do our build today. So there are some things we want to make sure that we get a handle on today. And as we align ourselves to what we'll get out of this session, we really want to sort of get under and understand multi-agent workflows as an LLM prototyping pattern. Of course, we want to build one and we are going to walk you through exactly how to build a multi-agent application. That's quite complex today. So to get into this, we want to sort of root ourselves in the patterns that we're all so familiar with if you're joining us for multi-agent rag today the patterns of spongebob here and then we want to extend it these are so important because they don't go anywhere just because we add a bunch of agents to our applications let's take a closer look Let's take a closer look. When we talk about the patterns, we have to start with prompting. The definition of prompting is to lead, to do something, to instruct. Done well, we might call this teaching or training even. If we take teaching and training far enough into an LLM, we provide one-shot, two-shot, few-shot examples, we run out of context window, where are we left? We're left with fine-tuning as a method to teach the LLM how to act on the other side of the task-specific spectrum. Of course, optimizing how the LLM acts is one part of the puzzle. We also want to optimize what we're putting into the context window. We want to use as much relevant reference material and knowledge that we can get our hands on. We want our applications to incorporate context well. context well. And of course, RAG is the focal point of so many of our applications, especially the ones that we're actually trying to use to create business value today. When we talk about agents, what we're typically talking about today is we're talking about giving the LLM access to tools of various kinds, but not incredibly various. There's sort of a category, a main category of tools that starts to connect some of our patterns back together again. But let's think about this fact that agents are a pattern. What pattern are they? Well, simply put, they are the reasoning action or the react pattern. And the way we want to think about this is we want to consider a typical simple agent loop. We ask a question. The question is routed to our LLM. This is where the reasoning takes place. LLM might decide, hey, I know the answer already. Boom. Done. Don't even need to pick up tools. I can solve this with my bare hands, the LLM says. Or maybe we need to go in and pick up a tool. Now, if you squint closely in, you can see we have tools like Archive, tools like Search, like Wikipedia, like, what is that, DuckDuckGo right there? right there now what all these tools have in common we're going to go get some information you might say retrieve some information and we're going to collect it and try to then incorporate it into our reasoning that we're doing about answering this question. We might need to go back and grab another tool. We might need to see what it gives us when we engage with it and then incorporate that back in to our reasoning before we give a final answer. So the LLM here is where we're sort of doing the reasoning part of the reasoning action pattern. And this is important. pattern. And this is important. Now, when we go to the tools, when we go to the retrieval of information, what are we doing? We're actually sort of augmenting the thread that's going that our LLM is considering and reasoning about with retrieved information. We're kind of doing rag, aren't we? In fact, I'm going to put to you today that Today, in most cases, agents are just fancy rag. And we'll see this as we walk through exactly what we will build today. Armed with the patterns of prompting, of fine-tuning, of rag, and of agents, we can put together a more complex picture of what a typical multi-agent system looks like. Let's think about multi-agents. Why would I need more than one agent, you might ask? Why would I need more than one agent? You might ask. Well, remember how the agent in our picture just a minute ago was doing the reasoning. Well, consider that we might want our reasoning machines to be specialized, right? We want our reasoning machines to be specialists, just like out there in the world. Now, if the reasoning machines are to be specialists, and I want a bunch of them, where does that ultimately lead to in the context of AI? Where does that ultimately lead to in the context of AI? One question you might ask is, well, does that mean that if I had an AGI LLM that I could just use one agent? I want to bring Wiz up to the stage here for a minute to comment on this. So if I'm looking at specialists and connecting them all up together, isn't it kind of in the limit that the artificial general intelligence is going to understand everything that all of the specialists understand? So it sort of makes the idea of multi-agents not necessary. Is this a crazy way to think about it, Wiz, or is this technically sound? No, I think that's probably true. I mean, eventually, right, when we have some AGI, we could just have one agent do everything. I mean, there's a lot to be said about potentially, you know, this idea of expertise might not ever leave and so maybe we have you know specialized intelligences that are better than these uh generalized intelligences but i think the way that people use the term agi right is uh means that we would only need that agent or that system right right? We wouldn't need those specialists because they would be no better than this AGI. I mean, of course, it depends on how you define AGI, right? I mean, it's like, yeah. Okay. Okay. Okay. All right. Let's get off our high horse. Thanks, Wiz. Thanks for the two cents on that. Let's go back down to earth, everybody, two cents on that. Let's go back down to earth, everybody, because we have a job to do today. Let's talk about multi-agent frameworks, because presumably we don't have AGI today. What we're talking about when we're talking about multi-agent frameworks is we're talking about using multiple independent agents that are each powered by a language model, an LLM, let's say, potentially an SLM in the future, a little small specialized language model potentially. And we basically want to consider two things. What are the agents and what are they good at? And how are they connected? Now, if you think too deeply about this, you start to get lost in the patterns a little bit. So we're going to try to make it easy for you to understand why this is useful. to do things a little bit more cleanly in short. We can group tools and sort of responsibilities, almost like job responsibilities together. We can separate prompts instead of having, of course the infinite context window issue will tell you that you can sort of just dump everything in there and maybe you can, but it makes it really hard to sort of debug exactly where things are going wrong and this separation of prompts can also actually not just provide a cleaner architecture but potentially even give better results and then just conceptually it's going to be a lot easier for other people to understand what you've built. Now, there are many ways to accomplish lots of things you might try to build. I wouldn't immediately jump to multi-agent in almost any case. In fact, I would love to hear if you're in the audience today, if you've heard of an actual use case that you're connected to creating massive business value from a multi-agent use case. These things are still rare today and they are difficult to implement, but there are tools out there. And some of the tools include things like AutoGen. In fact, some of the things we'll talk about today in Landgraf were inspired by the AutoGen paper. This came from Microsoft and they call this a conversation framework. We essentially want to allow these multiple agents to converse with one another. That's what AutoGen is all about. You might have also seen Crew AI. of getting the crew of agents together and operating in a real cohesive unit that's crushing it together. Just like if you're on crew. And obviously lots of people are using this stuff. This is more of a low-code solution. Now we're gonna use LandGraph today. And LandGraph is all about, quote, building stateful multi-actor applications. This is like you can put many agents within graphs, directed cyclic graphs that track the state of affairs as you move through the graph. We've talked about Langraph before, we won't belabor the point, but it's all about adding cycles to applications built on Langchain. And in terms of cognitive architectures that you can leverage within Langraph, the one we're gonna focus on today is the router. It's going to be particularly useful. Now you might say, what's a router? Well, the TLDR on routers is that they choose the action. Remember that reasoning that we did in the beginning to sort of choose the tool. You might choose the tool. You might choose the RAG system. You might choose to go to another agent that is actually just a different prompt. And so when we think about the flows of multi-agent set up, these are the ones that you'll see if you go to the Landgraf repo on GitHub. There's the multi-agent collaboration, the hierarchical team, and the agent supervisor. When we do multi-agent collaboration, we're essentially trying to get two agents to share the same context. And just as we heard in the autogen paper, to have a conversation. Here we have a researcher and a chart generator, as well as a router. All three of these guys are agents. But I'm gonna sort of highlight the routers in our story. The research agent is simply sort of designed with the prompt. You should provide accurate data for the chart generator to use. The research agent is simply sort of designed with the prompt. You should provide accurate data for the chart generator to use. Chart agent is designed with the prompt, any charts you display will be visible to the user. This is a quite simple setup. The router decides which tool to call, be it the chart generator or the researcher. We can make this into a slightly different setup by thinking about our kind of router as being a supervisor. And the supervisor might choose to go to any given agent to delegate specific tasks that the user asks. Now, if we combine these two ideas, we get into something that's a little bit more hierarchical, where we actually have different graphs nested within our top level graph, where our top level supervisor stands. So this is a supervisor that is a router and the research team and document authoring team here are also both represented as supervisor routers at the mid level. All of these have prompts associated with them. We're going to simplify this slightly and use it for our build today. We're going to build what we're calling AIM Editorial, AI Makerspace Editorial. And it is a cut across the hierarchical team setup we just saw that combines the supervisor as well as the sort of collaboration idea. What we'll do is we'll have a top level supervisor. We'll have a research team that's going to leverage Tavoli search, which is a search API specifically designed for use with LLMs and agents. We will build our own custom rag system and then we will create an initial writer, a researcher, a copy editor, and an editor that will make sure our writing is super dope in our document team. At its core, if we think about what's actually going on here, at the top level, our supervisor is directing. It's deciding, it's instructing, it's delegating. These slightly more specialized research team and document team supervisors are doing something similar. We've got retrieval that we can do through Tavoli search or through our custom rag system. And fundamentally, each of the agents in our document team are using prompts. So you can see here is that if we look close enough, we can start to see why we might say something like agents are just fancy rag. We have a goal today and we want to write a dope blog on something relevant. Well, what's something relevant? We want to talk about long context. We saw this pretty sweet paper, extending Lama3's context tenfold overnight. As Lama3 told us, they would do this over the next few months. Also, shout out to Gradient AI releasing the one million context link a week ago. That was a pretty gangster move. And then this one is a formal paper on it, though, 8K to 80k with Qlora fine-tuning what we're gonna do is we're gonna set up our research team we're going to use tabily search and we're gonna use this long context window paper to build a custom rag system we're gonna set up our document team and the writer we're gonna tell it something like you are an expert writing you are an expert in writing blogs below are your files in the current directory the note taker will tell it you are a senior researcher tasked with writing a technical blog outline and taking notes to craft a perfect technical blog the copywriter will be our grammar, copy, punctuation editor, very tactical. And then our dopeness editor is going to be an expert at punching up technical blogs so that they're more dope, extra lit, more cool. Let's see how we can set up these teams using Lang graph right now. Wiz, show us how to do it. Oh, yeah. Okay, so here we go. This is going to be a notebook. So I'm gonna zoom in a little bit. So you first this is what we want, right? This is like the this is the desired output, right? So we have some input to some kind of supervisor, agent, and then we receive output from it, right? It does the task does the thing. So I think when it comes to the, the goal here, we want to combine two things, right? We want to combine kind of this idea of rag that we have. And we want to add this uh you know this potential agentic uh you know piece on top of it and the way we're going to achieve this is in parts and we're going to start with uh some basic parts and then we're going to slowly get a little bit more crazier and crazier now uh this is Now, this is an adapted thing from Langchain's own documentation. They have a great hierarchical agent example. We've made some changes here and there just to demonstrate how easy it is to work with these Lang graph systems. And we're just going to go ahead and get started. And of course, it's based off of the auto gen research that was done. So first things first, we need a bunch of dependencies. It's not crazy, though, right? So we need lang graph, lang chain, lang chain open AI, and lang chain experimental. And then of course, we want Qdrink client or quadrant, sorry, client, client, PyMOO PDF and tick token. This is, for those of you familiar with it, this feels a lot like a rag dependencies and it sure is. We're also gonna grab two API keys. We have our OpenAI API key and our Tivili API key. Now the Tivili API key is something that you'll have to sign up for. And on their free version, you get like a thousand requests per unit time. And basically the idea is it's like Google search, right? But through it through a clean API. There you go. So it's free to trial. And then if you want to go crazy with this, you're gonna have to pay pay the piper as it were. Okay, so the first thing we're gonna do, right, right. So I mean, multi agent rag, if we don't do rag, you know, it feels like we missed the mark. So we're just going to set up some simple rag here. Just gonna be over a single PDF, right? So we've got a lot of content on more advanced rag and everything like that for this notebook that's already quite long. So we're going to just keep it to simple rag. But the way that Langraph, LCEL work, you can extend this to however absurd of a chain or rag system that you want, as long as it's wrappable in a Python function, and it can take in some text and it returns some text, right? So I think this is a very important piece of the puzzle. All of these components that you see are hot swappable, right? Like we can change them however we'd like. That's kind of the beauty of Landgraf, right? So first thing we're going to do, put the R in RAG, we need retrieval. We're just going to load up this long context paper from archive. So I'll show you here. It's just like, you know this this whole context window thing is is it the rag killer right all of this this other stuff right so let's write a little blog on extending rag or extending context windows next we're going to chunk it down to size this is classic we're just going to do some uh some naive splitting nothing fancy going on here uh some naive splitting nothing fancy going on here uh we're gonna turn our one uh document into 15 smaller documents let's go then of course we need an embedding model right if we're gonna do rag we need to have some vector representation of our text assuming that we want we care about semantic uh retrieval which in this case we definitely do uh we're also going to create a quadrant uh backed vector stores this power by quadrant Quadrant is just a great vector store I mean that's the reason we're using it that's really it uh it's very good at its job um and uh it can it can scale very well so even though this is clearly like a toy example um you know Quadrant can handle very much non-toy examples, which is great. And then, of course, we're going to grab a retriever. So, we're just going to modify our vector store into a retriever. Thanks, LaneChain. Nice and easy. Then we're going to do the ANRAG, which is augmented, right? So, this is where we're going to add our context to our question. And we're going to give some instructions, talk about how it's a helpful assistant. It uses available context to answer the question. And if you don't, if it doesn't know how to answer it, it should say, I don't know. And then finally, of course, generation, because this task doesn't need like a ton of like incredible reasoning skills. We can just use GPT-3, 5 turbo for this part. There's, there's no need to use like a GPT-4 for this. And then we build a simple rag chain, right? So this is just an LCL chain. It's going to take some, you know, question, pass it into the retriever to get the context. And then it's just going to pass the question forward to the next step, which is going to pipe into the rag prompt, which is going to pipe into the chat model, which is going to be parsed as a string. So we can do things like this, ragchain.invoke on a question. What does context along context refer to? And then we get a string response. The context along context refers to a coherent text such as a book or a long paper that contains multiple independent text. Yada, yada, yada. You get the idea. Okay. So first of all, there's some limitation to this particular approach. So I just want to acknowledge those. Again, it's just an illustrative example. But, you know, this is a specific PDF, right? So we'd love it if it could take dynamic PDFs. And it's obviously very naive rag. So we'd love for it to be a little bit more complex. And you can do all of those things. And as long as the ending result is an LCL chain, nothing else will change, right? So if you want to tinker around with this, make a more involved rag chain, you can do that. Absolutely. As long as you can get the output, you know, as long as you can get the whole object to be an LCL chain, you're going to be just fine, which is pretty dope. Then we're going to make some helper functions. We need some helper functions. We're going to do the same thing over and over again. Let's just wrap it in a helper function, right? So first of all, we're going to create agent nodes. So these are nodes with agents, right? So you'll notice all this agent node does is it wraps calling the agent in a function, and it takes what the agent returns, and it names it, you know, or it adds to the state, we're going to get into state in just a second, but it adds to the status human message with the output. And that's it. That's all it does. Right? So this is the idea. We want to wrap those nodes. The reason we wrap the nodes is so that they work as expected with LandGraph, right? Where it's gonna take some state agent name, and then it's gonna give us this object that's gonna be compatible with our state. Very cool. So we have this idea of an agent node and we're invoking an agent, but how are we creating these agents, right? With a create agent helper function, of course, let's go. A few key things to keep in mind here. Number one, you know, we want to have kind of this boilerplate prompt on our system prompt for all of our agents, right? Because all of our agents that we're creating with this are going to be, you know, very, very similar under the hood in terms of their prompts. This doesn't have to be true, right? You can custom make each agent, but for us, it makes a lot of sense to just use the same boilerplate at the end of every agent, right? Your other team members and other teams will collaborate with you during, with their own specialties. You were chosen for a reason. You're one of the following team members. And then this classic, do not ask for clarification, right? We just want to go to the agent, get some response based on the tools that it has, and then send that up the chain. So this is the idea. Of course, we're going to be able to modify that with a system prompt. So we're going to be able to define more exactly what this agent is. We just have this kind of suffix that's on each agent prompt. There you go. Okay. Next piece is big. We have our agent scratch pad. This is unique to this agent right here, right? Whatever agent we've created, this is unique to it, right? OpenAI function agent, this is unique, right? So in our executor, right? This is it's one executor, which has, or which is comprised of this create open AI functions agent, right? And these tools, which has its own scratchpad. Now, this is something that's super important. Okay. So each of these little sub agents are their own agent. So already we're in, we're in, we're in multi-agent before we even get into the first draft. Right. But the idea is that they all have their own scratch pad and we're just going to populate the important stuff up the chain to the supervisors. And this is the idea of how these systems can work so effectively, right? Each agents can be able to do like a bunch of stuff. But that stuff largely just doesn't matter to the supervisor agent, right? The super right, just like in real life, supervisors care about the output. They're like, What do I get at the end here guy, right? So that's why we have this individual scratchpad for each agent. And then we're going to see another layer of that as we continue through the notebook. So that's going to create all of our agents. Then we need this idea of a supervisor. Now, I hate to be the bearer of mundane news. Supervisor is just a router. It just routes from one thing to the next thing. So it takes in, you know, it takes in current context, and then it makes a decision, where do we go next? Which which tool agent or which agent do we go to next? Right? What's worker do we go to next? And then, you know, if the answer is we don't go to a another team member, we're just gonna straight up go, we're gonna finish, we're gonna be done, right? So the idea of this particular team supervisor is just to act as a router, right? So say, hey, this looks like it needs this work done. And then it gets response back. Now it looks like it needs this work done. Gets response back. And now it's done, right? This is all it's doing. It's a lot of code to get there. but basically this is just a function. And we're once again going to create this open AI function situation. This is all it's doing. It's not crazy. It's not insane. It's just straight up routing. Where do we go next? So now that we've got those helper functions, that's just the helper functions. It's a bit of a doozy, even notebook, I know. But we're now going to talk about the research team. So remember, our total system is going to be comprised of, and we'll just go, we'll go back to the diagram for a second here. This supervisor agent, which is going to interact with this research team agent. Okay. And this document team agent. So what we're going to talk about next, back down the notebooks, I know the scrollings, you know, not flying. So sorry about that guys, but to just want to reference that document. So first things first, we have a tool using agent. What do we need to do? We need to give us some tools, right? Batman's going to have his utility belt or else he's not Batman. So we're going to start by creating a tool for Tivilli. Now, you'll notice we don't really have to do anything here, right? We're just pulling it from the pre-made integrations from Langchain tool, but we can create our own tools if we want, right? So we're going to show that next. Now this is, so technically we don't need to create a tool from our RAG LCL chain because LCL components can be nodes in a graph. However, we're just going to show it so that you can see how that tool creation happens. There's no real reason other than that to do this. You can just use the LCL chain. That's going to be fine as long as you make sure that the inputs are coming in correctly. So you might have to put a formatting component on the top of your chain. But the idea is here, we're just going to show it in the tool so you can see how easy it is to make the tools. So we just wrap it in our tool decorator, right? This just modifies the function below. And then we create an annotated parameter with, you know know it expects a string and the annotations query to ask uh the retrieve information tool and then we give it this doc string this doc string is important right so one of the things you're going to notice whenever you're working with agents graphs lane chain ai engineering we're always prompting right we're always prompting all the time right this is this is a prompt the lm is going to see this and the time. Right? This is a prompt. The LLM is going to see this and it's going to use it as a prompt. So remember, when we're writing those doc strings, it's not just random text for humans. Right? This is how the LLM is going to interact with our system. So it's important to write clear doc strings here. And then all we do is return that chain invoked. Okay. So now we have our two tools, our Tivoli search tool, and we have our retrieve information tool, which is our RAG pipeline. Next up, we're going to create some state. So we're going to add three objects under state. We're going to have messages, which is just a list of messages. So everything we've done up to this point. Team members, that's the members we have on our team unsurprisingly and then who's up next right so this is going to help decide where where are we going next right who who am i passing the ball to next uh so this we just write about that a little bit there we're going to be using gbt 1106 preview uh gbt oh one uh i can't remember the rest of the numbers right right this exact time but the newer version, the January version, is actually a little bit worse than 1106 at this task. For some reason, it just gets lazy. I think they attempted to fix that. It didn't work, so we're going to use 1106. So this is our LLM. You'll notice, though, that we are using GBT-4 here. This is no longer like GBT-3.5 is going to do. We need a strong reasoner. We need an LLM that can do a great job. That's why we're using GBT4 here. So now that we have our LLM, we need to start creating some agents. So we're going to create first our search agent, which is going to be based on that GBT4 LLM. It's going to have access to this Tavilli tool, and it's going to have this description that explains when we should be using this tool. And then of course we're going to convert that to a node. So now we have this search agent and we have it tucked inside of a node. That's awesome. We're going to do the same thing for our rag pipeline and then tuck it inside of a node. You love to see it. Next we create our supervisor agent. We're going to pass in that same LLM, GBT4. We're going to give it this text. Now, remember, this text is in addition to the other text that exists in the boilerplate, but the idea is that it's going to be able to pass to these separate tools or finish. We're going to let it know which tools it has access to. And then we can move on to the best part, right? Making a graph. We initialize a graph with the research team state graph. We're going to add the node that we created to our graph. We're going to name it search. We're going to add the research node, which is the LLM or the rag node, right, to our graph. We're going to name it paper information retriever. These names are actually pretty important. They have to be in this format. They can't have spaces and this kind of thing. So, make sure that you're naming these correctly. And then, of course, we're going to add our supervisor node. So, now we just have, have like three nodes chilling in a graph. You know, they're not connected to each other at all. Okay. So we're going to then create edges. The edges are pretty straightforward, right? If we're at the search node, we're going to return to the supervisor node. If we're at the paper information retriever node, we're going to return to the supervisor node, right? These nodes all lead to back to the supervisor. Now, from the supervisor, dependent on what's next in our state, remember we defined this next in our state up here, right? Dependent on what's next is going to determine where we go next. So, if it's search, we'll go to the search node. If it's paper information retriever, we'll go to the paper information retriever node. And if it's finish, we'll go to the end node. Now, two things I want to be very clear about here, right? Basically, we can go to the search or paper information retriever nodes, which are powered by those agents, which have those tools. And then they return to the supervisor or we can end in the graph. Now, this graph has state. And that state is its own state. So now we have agents with their own scratch pads. And then we have these nodes which represent those agents. And the entire graph has its own state, right? So we've got two layers of kind of keeping information apart here, right? Very important to think about. And then we just compile it and that part's great. And we set an entry point, right? We enter through the supervisor, easy peasy. And then we can use Mermaid to display our graph. It doesn't look beautiful, but it is right, right? So we have this idea that we can go from our JSON output function parser, which is like, you know, where do I go next, we can go to the paper information retriever, which goes back to the supervisor agent, or we can go to search, which goes back. And then this can also lead us to the end or finish. So that's the mermaid image of our, of our graph of our research team graph, right? Now, because we intend this to operate with another graph, right? We have to have some way to get in here. And the way that we're going to do that is through this enter chain. And we're going to create an LCL chain from our entire graph. This is the beauty of LCL, right? This chain represents our entire graph, our whole graph. But we could just straight, you know, just tack on another LCL component. Easy peasy. And then we can test it out. And we can see things like, you know, we enter, the supervisor says, we're going to search. We do some kind of retrieval with search. We come back and the supervisor says, we're going back to search. We do that. And then eventually the supervisor says, hey, you know what? Actually, we're going, now we're going to the paper information retriever, right? So we, the graph decided we would go to search twice and paper information retriever right so we the the graph decided we would go to search twice and paper information retriever once then it felt like it had all the information that it would need um dope okay now so that's the research team side right we created a graph the graph does stuff we're now gonna immediately think this is a single unit. This entire research team now is just this bit right here. Where it does this thing. We give it some query and it tells us which tools to use to research information and then eventually we come up with a final response that we are going to pass back to our supervisor. So this is the research team supervisor. This next one is going to this the CEO or however you want to think about it. So next up, we have the document writing team, the document writing team, we're going to go through this a little bit quicker. It's the same idea exactly, except instead of tools that relate to search and information retrieval, it's related to document creation and document editing. So we have our create outline tool, which is gonna open a file and put an outline into it and then save that file. Then we have a read document tool, which is going to open a file and read it. Then we have our write document tool, which is going to unsurprisingly open a document and write to it. And then we have our edit document tool, which is gonna, it's gonna blow your mind, open a document and edit to it. And then we have our edit document tool, which is going to, it's going to blow your mind, open a document and edit it, right? So we have these few different tools that we can use, right? So we have the ability to create outlines, which are going to be a document. Then we can read those documents. We can write new documents, or we can edit documents. All awesome stuff to be able to do when you're trying to write a blog, right? We're going to create this state for our document writing team. And it's going to be exactly the same as our research team, except we're going to add this current files additional parameter. And what this is going to do is it's going to just tell us what files currently exist in the directory it's working in. We're going to also have this prelude. All this is doing is it's saying, hey, by the way, this is the current files you have, right? This is how many files that you have. And that's it. We create the graph. It's exactly the same as the one we created before, but with some additional nodes. The idea is that every node goes back to the supervisor. So all paths from all of the sub-agents lead back to the supervisor. And then the supervisor can send it to any one of those particular agents. And that's it, right? So this is the idea. Then we can look at this and we can see it happening, right? So we see you can come in here and it can go to the doc writer, the note taker, the copy editor, the dopeness editor, and then eventually it can finish. Now, one thing that I do want to just keep in mind when we add these tools up here, right, we're going to, for each of these entities, right, we're going to have access to specific abilities, right? So this is the idea, is that we want to give our specific team members, sorry about all the scrolling here again, specific team members are going to have access to specific abilities and that's important. Okay. Now that's all great so far. Next step, right? We're just going to wrap this in the same thing we did before for our team, our research team writer. to wrap this in the same thing we did before uh for our team our research team writer and then we're going to see it work you can see here we ask it to write a short outline a linear regression write it to disk and what does it do it goes to the doc writer node which does exactly that and then we get a short outline that's written to disk and then if we look in our this is this is the worst for sure but if we look here we can see there is a linear regression outline that's created in text right in a text file in that temp directory that we pointed it to pretty awesome okay so that's what we've done up to this point we've created our research team and we've created our document writing team. And now we're going to go back to Greg, who's going to show us how we can tie these two together into a truly multi-agentic experience. Back to you, Greg. Awesome. Okay. So we've got our research and our doc team set up, the ICs are ready to do their work. So you know what time it is. It's time for the supervisors. And the thing about the supervisors that's so interesting, we talked about this before, they're just routing. They all have the same prompt. You are a supervisor tasked with managing a conversation between the following workers. Worker one, worker two, worker three, worker four, whatever. Given the following user request, respond with the worker to act next. Each worker will perform a task and respond with their results and status. When finished, respond with finish. Okay. Seems simple enough. So supervisors are just routers then, right? It they doing any reasoning? Are they taking any action? What's the role they're playing, these supervisors, exactly? I'll leave that as a thought experiment for all of you watching. But it is worthwhile to think about in the 21st century. We know what the ICs are doing. We can see their output. But for now, it's time to create these supervisors, make sure the work is going to the right place, being routed properly for both the research team and the documents team, up to the meta supervisor who is oh so agentic at the very top. Wiz, back to you to close it out. muted. Sorry, guys. Sorry about that. Thanks, Greg. But, yes. All we need to do is we need to, you know, get a new LLM. It's just going to be the same one, right? But then we're going to create our supervisor node. And the supervisor node, thanks for all the reminders and chat guys sorry about that uh but the the idea is we have uh am i still muted uh if i'm still muted let me know okay good so the idea is we just need to create one more layer. And all that layer does is it takes us from, right? So before we created two different graphs, instead of graphs, let's just consider those nodes, right? So this new supervisor, all it does is it tells us when to go to the research team or the blog writing team. That's it. I mean, you can't make this stuff up, right? This is all it's doing. Like Greg said, it's a router. We create new state, right? Which is just going to, you know, we have less things we need to keep track of since we're not so worried about the team members. There's only two team members and we have our messages. And then of course we have our next. So that's who we're going to next. And then this next piece is the best. We only care about the last message going into the new graph. And we only care about the last message from that subgraph. So we can think of it this way. We have this parent graph and we have care about the last message from that subgraph. So we can think of it this way. We have this parent graph and we have this child graph. But the only communication between those two layers is going to be the most recent message from either of them, which means that the parent graph or the meta supervisor, the ultimate supervisor, the one right at the top, CEO, supervisor, right? The one right at the top, CEO, whatever you're going to call it, it only sees the last piece of work from the research team supervisor or the blog writing supervisor, right? So this is huge, right? We only pass that relevant information. This keeps this state very clean, lets it be a very effective reasoner and powerful tool. And then of course, we need to build the graph. Well, the graph is very easy to build because there's only two nodes and they both go back to the supervisor and then the supervisor decides if it's gonna go to the blog writing team, the research team, or it's gonna be done. And we can finally use the thing. And ultimately when finally use the thing. And ultimately, when we use this, it's going to, you know, send a, it's going to do this whole flow, right? And this whole flow is going to go through research team, blog writing team, research team, blog writing team. You know, it probably won't do that many iterations, to be honest with you. Usually it does two or three, but at the end we get this right this output just a full blog on the paper i mean listen is this the best blog that's ever been created okay i'm not gonna say that but it is a blog it was created from the paper it did go through dopeness editing copy editing right uh we can see that this is uh you know pretty dope results are nothing short of revolutionary that's pretty hype language. That's something that our dopeness editor helped with. This is the idea. This part is very straightforward. Each of those sub nodes, right? Each of the sub nodes or sub graphs, we just consider a node. That's the power of laying graph it's an entire agent graph but we're just like it's a node you know who cares uh and that is it uh so good with that we'll pass you back to greg so sorry about being muted guys thanks for calling me on the chat and uh also don't forget to like comment subscribe smash the notification bell i know it's kind of kind of silly but it does help we're here every wednesday we love talking about this kind of stuff and uh i'll pass it back to dr dr greg to uh bring us to q a bring that bell baby yeah you crushed it wiz thanks man so we got a gentic with the meta supervisor and now we can think about this idea of multi-agent RAG in the context of the patterns of generative AI that we're using to build LLM applications. We saw how this all came together in a single build, been through a lot in an hour together. And we can also start to see why we can say things like agents are just fancy rag. Now, remember, these are useful because the grouping is very helpful. the separating of prompts is very helpful, the conceptual models are very helpful. Again, let us know if you come up with any sort of must-have multi-agent use cases. I would love to hear about them. But the patterns, the patterns, the patterns, they're just everywhere. Tools all have prompts and supervisors or routers and searches, retrieval. It's a lot to get a handle on. And we hope that this helped you out today. If you have questions, please let us know in Slido. We're going to start taking them now. And we look forward to creating more multi-agent content in the future. We want you guys to take this notebook and create your own blogs with it. We will liken some to those and maybe we will. If we can get this good enough, dope enough, Chris, create our own AI Makerspace auto blogger, the AIM editorial. Okay. So two, Slido, which platform is better for multi-agent rag? Langchain or Lomindex? Langchain. Boom. Okay. All right. And can we get just a why real quick? How come? I mean, stuff like LCL is just, it's such an effort multiplier, right? We make one thing, we can just straight use it in the next thing. Yeah. It's tough to beat that right now. Yeah. I love the second question so much. Seems that everything can be done with a single agent. Only difference is the forced sequence of agents, of tools. Is there something else I missed? Anonymous. Yeah, I think maybe a little bit. So there's no forced sequence of tools here. The agent is free to select which tool to use when, in which order, how many times. Yeah, that's the idea. So I would say the different sequence of agents is kind of where we get this. Could it all be done with a single agent? Maybe, right? So you could just add all these tools to one agent. But the idea is that this compartmentalization is supposed to make the LLM has one fewer decision or sometimes four fewer decisions right if we're using the four writer tools right this is the idea is that we instead of choosing between like 12 tools is choosing between two tools or three tools or four tools and that is supposed to make it better yeah okay. I go back to the child at the grocery store. Which kind of mustard do you want, sweetie? Do you want it to have a hundred different mustards to choose from or three? And I think it is a great question to always ask though. Can it be done with a single agent? Can it be done with no agents? Of course, we were doing multi-agent RAG today, so we used multiple agents. Next question, is it possible to share variables like DICs, data frames, or any other between agents instead of just making them communicate with natural language? Yeah, yes, absolutely. So we can do that by passing different parts of state, different components of state. As you saw in this example, we only pass the last message into state, but we could add more things and add even more state. And I think that's going to be a decision that you need to make depending on your use case. But yes, the answer is you can absolutely pass. I'm not going to say whatever you want, because that's of course literally not true, but you can pass basically whatever you'd like. Okay. Nice, nice, nice. Okay. So when dealing with multi-agent RAG, it gets hard to cite or source responses in line. Is there an effective way to do this across all the receipt retrieved sources in line yeah okay so for citation that's a little bit harder uh you could add like a state that just keeps track of the various sources and citations and then populate those at the end in some kind of dump uh that would be the uh that would be the base way that I would want to approach this if you want to be sure that you're citing everything that you can. Some of these agents aren't going to produce citations because they're not really looking for those things. But yeah, with state, basically, you'd want to manage that context as you're passing it around. You can add that fairly straightforwardly. Okay. Can agents work in parallel? Yes, of course. Yeah. So the idea would be just to make sure I understand, like you can, some of these flows can be paralyzed, right? So if you need to search two different tools, you can search them at the same time and then synthesize a response once you receive a response from both of them, right? So that's already built in through LCEL. I believe it's coming to Landgraf soon, TM, but for right now, it's built into the LCEL components, and then I believe it's going to enter into Landgraf soon enough. Okay. And what are the techniques or design patterns to make our system faster and more responsive? This multi-agent setup can be potentially slow. Can't it? Oh, yeah, for sure. It's going to be slow. I'm not going to, you know, tell you that it's going to be fast. You can make it feel faster by using a lot of streaming, right? So streaming the responses is going to mean that the time to first token is very low, but it's still going to take the same amount of time to generate the full final answer. So it is going to be something that takes a little while, especially when we have like these six to seven different calls. Also one thing to touch on from that point of view. Right. This is where a tool and integration like Langsmith, which we didn't touch on in the notebook, but is easy to integrate, comes in and answers a lot of the questions we've seen in chat. How do we know how many tokens, how many calls? What path does it take? All of those can be added or tracked through through Langsmith. If you if you use that integration. Yeah. And I just want to sort of mention shout out to Garrett, big homie in our community. He's building deep writer and if you want to talk about how it's slow and potentially expensive to do multi agent stuff, He'd be a great resource for you to follow and to start a DM thread with. He's all about all of this and constantly building every day. So it looks like we've got a lot of other questions, but we are about at time. We will collect these questions and we will try to make a few posts in the week to come on LinkedIn. So give us a follow there. But that's it for today. We wanted to make sure that we end on time. We'll be back with more multi-agent stuff soon. You can count on that. Thanks so much, Wiz, for walking us through that. That was incredible. We will wait on publishing our first blog until we think it is truly dope enough. And let's go ahead and close it out for the day. If you guys enjoyed this and you don't know AI Makerspace yet, we'd love to see you on Discord real soon. We're building, shipping, and sharing with folks all the time. And we'd love to have you as a part of our community starting now. You can start learning for free of course on YouTube. We've got an open source course on LLM Ops that we taught last year. We look forward to open sourcing another course here very soon and we are always running our bootcamp courses. Our flagship one is the AI Engineering Bootcamp. It is an eight-week course that walks you through everything from your first LLM application through the patterns that you need to leverage and build up to multi-agent frameworks. We are starting our next cohort on May 28th. It's kind of a high bar and quite a bit of friction to get in. So apply now and start working your way through the AI Engineering Bootcamp Challenge. To check out more events, if you aren't familiar with us, check out our awesome AIM Index on GitHub. You get direct access to all of the code. There's always concepts and code with every event that you will join on YouTube. Again, we're back every week on Wednesday, same time, same place. We hope to see you again soon. Like and sub and in the meantime, keep building, shipping and sharing, and we will most certainly do the same. Have a great week, everybody. We'll see you soon. | Multi-Agent RAG | 3,649 | AI Makerspace | 20240508 | Discover how to integrate multiple independent agents to tackle complex problems effectively using the latest frameworks like AutoGen, Crew AI, and LangGraph. We'll dive into the innovative multi-agent systems, particularly focusing on the shared scratchpad approach in LangChain, and demonstrate building an advanced Agent Supervisor model. This system enhances problem-solving by coordinating agents, each with their own scratchpads, under a supervisor that manages the final outputs. Whether you're a developer or just fascinated by AI's potential, join us to learn, interact, and get hands-on with the future of collaborative AI technology. Click now to be part of this journey into multi-agent systems!
Event page: https://lu.ma/agentrag
Have a question for a speaker? Drop them here:
https://app.sli.do/event/wPDemMAc9nzV96DFmBzXz5
Speakers:
Dr. Greg, Co-Founder & CEO
https://www.linkedin.com/in/gregloughane
The Wiz, Co-Founder & CTO
https://www.linkedin.com/in/csalexiuk/
Join our community to start building, shipping, and sharing with us today!
https://discord.gg/RzhvYvAwzA
Apply for our new AI Engineering Bootcamp on Maven today!
https://bit.ly/aie1
How'd we do? Share your feedback and suggestions for future events.
https://forms.gle/6NNYtu7MiSUcnWAh6 | 2024-06-10T01:54:11.914263 |
https://www.youtube.com//watch?v=xmfPh1Fv2kk&t=1s&ab_channel=AIMakerspace | Hey, whiz, as we've been saying in class, as goes retrieval, so goes generation when it comes to rag. Is there like a right way to do retrieval? I don't know about right way, but there are certainly lots of awesome ways. Yeah, so once we get like a RAG system set up, we want to take to the next level. And how exactly are we supposed to do that? Well, it depends a lot what you're trying to do, the kind of data you have, the kind of information you want to retrieve. It turns out there's a lot of awesome ways to do it. And as always, we got this performance versus cost thing. Methods, methods, methods, algos, algos, algos coming out all the time. Do we need to care about all of them? Or are there just a few that we really should be focused on today for our tool belt? I think, you know, as it stands right now, there's a few we should be focused on making sure we have in our tool belt. Absolutely. Rock it. Yeah. All right. That's what we're going to do today. We're going to try to break down exactly which of these you should know about. And we're trying to give you some context for you to optimize your own context. Sound good, Wiz? Sounds awesome. All right. We'll see you back in a little bit man so today we want to talk advanced retrieval everybody and welcome this is the sort of second step of our kind of advanced rag sequence we talked about chunking last week and you'll realize today if you joined us for that one retrieval really builds directly on the back of chunking i'm excited for this my name is greg i'm co-founder and ceo of ai makerspace that's chris the wiz alexia co-founder and cto of ai makerspace and we're pumped to have you with us today throughout today's presentation if you have any questions along the way, please throw them into the Slido link that will drop in the YouTube chat now, or just go ahead and smash the chat. Let us know what you're thinking, and we'll see if we can answer each and every question along the way today. We've got quite a bit of background to get into today, so let's go ahead and rock and roll. Advanced retrieval, that's the name of the game today. And as we align towards today, we want to understand that it's not exactly a figured out science, this idea of how do you do retrieval in any given context. So we want to understand, first of all, how retrieval in any given context. So we wanna understand, first of all, how retrieval fits into RAG, and we're kind of zooming in on specific RAG pieces, but we wanna have the big picture in mind when we do that. We wanna be able to look at the different algorithms and compare performance across them. And importantly, we want to be able to understand the fine lines between things like chunking and retrieval and ranking or re-ranking. There's quite a few methods, but we have to set the context of our discussion first. We're going to talk about those methods. We're going to show you re-ranking. We're going to discuss it a little bit. But really, at the end of the day, this is about trying to get the big picture and understand where advanced retrieval methods fit in. We're going to walk you through what you need to know to get there now. Let's start with RAG. One of the common threads that we're going to run through today is we're going to be dealing with one of our favorite types of data, some movie review data, and we're going to be doing John Wick movie reviews. Now, one of the things that you could easily do is you could, for instance, use a simple chat with your PDF style model, and you could upload, let's say, some Wikipedia data that you just printed from the top page on, say, John Wick 4, and you could ask some questions. wick for postponed due to covid and how long for instance and we can use this simple retrieval chain i don't have any any information was it postponed due to covid and we we can read the page and we can see that it actually was postponed was the movie postponed due to covid and we can we can sort of start to understand a little bit about how important retrieval is because we can look at john wick 4 and we can look at the wikipedia page and we can see like it clearly was postponed due to covid the original we're just not able to see the screen that you're sharing right now yeah thanks a lot man and so what we'll what we'll want to do is we'll want to go ahead and So what we'll want to do is we'll want to go ahead and do this full share here. So what we want to do is we want to sort of say, okay, if I want to do a little John Wick 4, If I want to do a little John Wick 4 upload here, and I want to say, okay, was the movie postponed due to COVID? We can sort of see, like, yes, indeed it was. And we can see this directly on Wikipedia, for instance. but we can also see it directly in our source data and we can go and look at the different sources now we got three we got four sources here and not all of them mention covid in fact only one of them the one at the end mentions covid this is important because we're still returning source 0, source 1, source 2, and the question is, are these really adding value? We can look at exactly what's going on in this application, and we can look and see, for instance, what exactly is the chunk size that we're using in this application. We can see it's actually 1,000 out of the box, in this application. We can see it's actually a thousand out of the box and it's a zero or it's a 100 overlap. So that's kind of, we talked about recursive character text splitter last time. The question is, is this really enough? And the answer is potentially, you know, no, it's not enough for any given context. And so what we want to do is we want to make sure that we're returning all the right information for any specific thing that we want to be able to do. And we want to make sure that all this is towards the end in mind we have of avoiding hallucinations, making our answers fact checkable, returning the right reference material, and improving our answers as a result. But we want to avoid redundant reference material. We want to avoid useless reference material. We want to avoid basically everything in our context window that's not relevant because we're just pumping tokens in that aren't going to help us, that's additional cost. And so to really understand this process, we want to make sure that we have the correct overview of exactly what RAG is and where advanced retrieval fits in. So we can break down RAG into two key components. The first one is dense vector retrieval. This is, of course, the retrieval piece. And the other one is in context learning. What exactly are we putting in the context window? Both of these matter when it comes to looking at advanced retrieval methods. because we ask a question, that question gets turned in to a representation and embedding space. We search for similar stuff in embedding space to our question, for instance, in the example we just saw, and the similar stuff is returned. Now, what does similar mean? Well, we saw not necessarily exactly what we might expect, although some of the chunks are very relevant, not all of them are. When we find those similar chunks, we can set up our prompt template and we can return the reference material from the chunks in natural language. So this right here is the retrieval process using dense vectors. The piece where we're actually improving our generation is the part where we augment our context, we augment our prompt. This is in context learning. So this is sort of the RAG process, but it all happens prior to the generation. And the generation is where we actually get our better answer and we can, you know, yay, be happy that RAG was the right solution for us. So as we think about this R in RAG, we think about this retrieval process, we really need to break this thing down. We ask a question, we search this vector store, this vector database, this index for stuff related to our question that's similar in vector embedding space, we return that stuff we found in natural language. And the first piece of this is the piece where we're actually chunking and building our vector store. Okay, we need to set this up so we can ask a question in the first place. When we look at vector stores, there's many of them. We're going to use Qdrant today, and it's one of the hottest ones out there. If you're going to pick one off the shelf, go ahead and pick this one. But regardless of which ones you pick, it matters what vectors you're storing and why. So to create any vector store, you're going to have to take your documents. You're going to have to chunk them. You have to create embeddings for each chunk. And those embeddings are what gets stored. Chunking is simply breaking things into small pieces. And as we've said before, the last event we did on semantic rag, it's so important to remember that whatever gets chunked gets retrieved. And this is where we need to understand the fine line that exists at the vector store. Because how we chunk is one piece of the puzzle, but what we know about the chunks is a piece we're going to continue to explore today. We talked about chunking methods previously. Today we're going to continue to explore today. We talked about chunking methods previously. Today, we're going to leverage the baseline recursive character text splitter best practice chunking strategy that really takes the fixed size approach, the chunk size and overlap, and augments it slightly. Augments it so that we can make sure that we're not stopping in the middle of words. Rather, we're more likely to stop on double new lines, which would be sort of a new paragraph, perhaps a new chapter in a novel or single new lines. Alternatively, we wanna chunk where we have a space. We wanna really avoid chunking mid-word on character count, if at all possible. This recursive text splitting allows us to get close to our chunk size, but do a little bit better job. So here's an example from an Alice in Wonderland book, Down the Rabbit Hole, and we see we chunk on a double new line here and we chunk on a space here. This is with a simple chunk size 200 and overlap zero example at the beginning of this novel. Now there are more sophisticated ways to do this chunking. And one of those ways is to look at the meaning of each chunk. The meaning of things semantically is something we want to be leveraging during chunking as well as during retrieval. If we can afford the additional computation cost to see that improvement in performance. We talked about semantic chunking last time. It worked very well. We talked about semantic chunking last time. It worked very well. What we're going to talk about this time is we're going to talk about this retrieval piece where we're returning the stuff. In the retriever that is essentially a wrapper for the vector store, we're returning natural language and the chunking happens upstream where retrieval happens downstream. We can kind of look at these two things as separate, although keep in, there is a fine line. And as we mentioned in the beginning here, as goes retrieval, so goes generation. And when we do retrieval right, we get better at generation. better at generation. That's because we're augmenting the context. We're giving the model more high-quality context to be able to do its generation with. This idea of in-context learning goes back to the GPT-3 paper called Language Models Are Few-Shot Learn learners. As we move from instruction only zero shot to one shot with big and performance enough models, we get a huge increase in accuracy for our generations across many different tasks. And so this idea of in-context learning really comes from scaling up the LLMs big enough to see prompting become a very powerful pattern that we can use without fine tuning. And of course, when we do prompt engineering, we want to make sure that we have those clear and specific instructions. We're giving it the best possible context. And this is our focus today. Specifically, we're focused on the retrieved context. And another way to think about this is as you prototype, you want to always start with prompting. You can always start with prompting, but as you then move beyond prompting, you're often picking up RAG as the next pattern to leverage. This is simply this context optimization that we're doing here. When we move from prompting to retrieval augmented generation. People often ask, well, do I even need RAG because I have these super long context windows? Well, what do you mean by optimization? Are you trying to optimize how much context you can actually put in? Or are you trying to optimize how high quality each piece of context is? And as a result, optimize the efficiency of your inference, as well as the cost, if it's, say, on a per-token basis. And so this context optimization, what we're focused on is we're focused on this piece specifically, getting this right. We're focused on this piece, We're focused on this piece, getting this right. And in fact, this is the whole shebang. Whether you talk to Langchain, who's really focused on enabling LLM applications that leverage context and reasoning, or Lama Index, who will tell you that there are data framework for LLM applications that benefit from context augmentation. Both chunking and retrieval, both of them affect this context. Now, what is good context? Well, there's a number of ways we can measure this, and we've talked about this before. This is a leading framework called the RAG assessment framework. We encourage you guys to check it out if you haven't yet. But ultimately, what good context means is your problem as the AI engineer building this system or the AI engineering leader. If you want to look at Ragas a bit closer, we'll see it in action today. We encourage you to really get this retrieved context right, we want to get the most relevant information. We want to minimize redundance. We want to make sure that it is the stuff that's going to help us do a better generation. It's meaningful and relevant. And this is what we're focused on when we do advanced retrieval. So today we're going to focus on looking at this through the lens of IMDB movie review data from the John Wick movies. We're going to take this data and we're going to create a vector store. Now, the way we do this kind of matters as well as the way we engage with it. And so let's talk about the different stages of retrieval. We've already discussed this, but this is sort of broken down to a slightly more zoomed in level. When we get our data from IMDB, we're going to load it, we're going to transform it. This chunking, this deciding how to tokenize, how to turn it into embeddings before putting it into the vector store, these are all key components. You might say chunking happens here and retrieval happens here. But we would put forth for you today that there is a fine line that exists at the vector store. What is this fine line exactly and how should we think about it? Well, this is where we can look at each of the most useful advanced retrieval methods and get some insight. So first off, simple, super naive, regular retrieval. Let's base on ourselves with that. Then let's look at parent document retrieval, followed by self-query retrieval, followed by time-weighted retrieval, followed by contextual compression retrieval. And hopefully we can start to see this fine line come into focus for us. When we're doing naive retrieval, all we're doing is we're finding nearest neighbors through a simple cosine similarity. It's a simple dot product. And we're returning naively anything that's similar in embedding space. As we saw in the John Wick example in the beginning, not everything is necessarily super duper relevant to the question that we asked. So maybe we can do better. And that really motivates this entire space. When you think of naive retrieval, there are a few things you can tweak, but it's kind of what Langchain calls vector store backed retrieval. It's just simple vector store, cosine similarity based. You can set the similarity threshold. You can also specify that you only want the top K results back. You don't want 50 or a hundred. You want, let's say the top five or 10. This is a great place to start. And if you're prototyping, you should start here. But after you've prototyped, after you've set the baseline for your evaluation, after you've started to make sure that you understand what the performance is in the current state, and you wanna take it to the next level, you can start investigating things like the parent document retriever. And the big idea here is that small docs, small chunks, they're pretty good because they're more precise. They really are a little bit more accurate in terms of semantic meaning and what is going on in that particular part of the document. But big chunks are good too, because, you know, there's a lot more context, you know, and we're back to this word here. So think about a sentence is always within a paragraph. A paragraph is typically within a chapter or a section, chapters within a novel. I mean, there's a hierarchy to this. And this idea of leveraging the hierarchy is what the parent document retriever is all about. And the technique here is pretty simple. You search for the small chunks and you return the big ones. So it's interesting here, right? Because we're still talking chunks. We're doing retrieval. I thought retrieval happened after chunking. Well, what needs to happen here is that during the chunking process, we actually have to generate metadata for each of the parent-child chunks. Specifically, the child chunks are the thing that are real. They're in the vector store. Parent chunks are held in memory. That's really a separate issue. But you want to think of the child chunk as knowing who his parents are. And so what you do is you look for the child chunks and you return the parent chunks. But you have to have the metadata created for each chunk at the point of creating the vector store. This metadata is essential. Having this metadata created at this point allows us to do an advanced retrieval technique. And so these two things are really inherently linked. As we can see in multiple methods, when we look at the self-query retrieval, for instance, self-query is essentially a way to think about this is to use this text-to-SQL type functionality, meaning kind of chat with your SQL database a little bit. You want to think about this self-query as being able to chat with your database a little bit. And the way that that's done is through, again, metadata filtering. What did Barr say about Foo, for instance? We're going to filter on some metadata we have. This is sort of too general to make it kind of useful to really understand in terms of what we're doing today. But what we're able to leverage in our example with the John Wick movies is if you look at a simple movie review, we have metadata on each of our chunks. Let's say each chunk is an entire John Wick review. They're not very long, so we can make them just one size. We also have star rating. We have the title. We have the reviewer, the date, the actual review text. That's our chunk. And then how many people found it helpful? There's lots of metadata that we can draw from. And so if we give our retriever permission and access to look at that metadata, we can potentially very much improve our results. And this is a very nice, very clean, sophisticated way to take retrieval to the next level. Again, it's all about that metadata that we're creating at the step that we're generating our vector store. Thinking about that fine line, we can also do things like time-weighted retrieval. Time-weighted retrieval out of the box essentially says, what was the date of the last lookup that was similar to this? So kind of the most recently accessed is the big idea. Is the most useful with the kind of the most recently accessed is the big idea is the most useful with the kind of idea that if you're accessing something very frequently it's probably pretty useful now for us in this movie review example we're gonna want to answer based on what movies came out most recently in the spirit of the most recently accessed data. So we're going to modify it a little bit and you'll see how this works. And that one's pretty straightforward. Time weighted retrieval. It's probably kind of what you thought. Maybe not exactly, but you're sort of looking at what is most recent to retrieve, potentially very useful for your application. And then finally, we can think about contextual compression retrieval. Now, this is exactly what it sounds like. We're compressing the context. And so when we think about this returning of the natural language, the thing that's interesting here is we're doing this to the natural language, the language that we return specifically, the reference material to the context. So, you know, you can think about compressing a paragraph down to a few words or many chunks down to just a few chunks. And this compression idea, especially when we think many chunks down to a few chunks, could start to really give us insight into this idea of what is the proper rank order of chunks and how many chunks should I have if I want to let's say If I want to, let's say, minimize redundancy, maximize signal to noise, and really get the best possible generation, best to be defined by you and your application. Well, this is kind of heading us down the path of the re-ranking algorithm. And re-ranking is potentially one of the easiest things to apply, and we would highly encourage you to try it out of the box, as we'll show you in today's build. Again, we're going to use the data from John Wick movies. We're going to use Langchain and a Qdrant vector store for this. We're going to use OpenAI models for both embeddings and for our chat model. And we're going to walk through each of the methods of retrieval from naive to parent document to self query to time weighted and ultimately to contextual compression and re-ranking get ready everybody it's time for advanced retrieval for rag with the whiz. Let's walk them through the notebook, my man. Oh, yeah. Okay. So advanced retrieval with laying chain is conceptually maybe difficult, but, you know, the actual code, thanks, laying chain, pretty straightforward. So what we're going to do today is we're going to do a couple things. We're going to look at this advanced retrieval system, how we, you know, use it, how we think about it. And then we're going to see it on a number of different RAG pipelines. Now, we're going to be exploring a lot of different options. We will be releasing as well a notebook that's going to talk about uh how to assess these things against each other we're not going to go over that in the uh in the demo today just because it's going to be rather dense but the idea is that if we pit all these things together we should be able to figure out which one is best for our use case the first thing we're going to want to do is just grab straight up uh dependencies. So we do need our dependencies. We're going to use Langchain, Langchain OpenAI, and Langchain Cohere. This is because we're going to be using OpenAI as our LLM and our embedding model, and Cohere as our re-ranker when we get to contextual compression. We're also going to be using Quadrant today as our vector DB just because I like it, and Lark because we need it for our self-query retriever. We're going to provide our open AI key and cohere API key. So far, so good. This is all the normal stuff. So now we're going to collect some data. And what data do we need? Well, we need John Wick review data. So we have available on our GitHub a bunch of different John Wick review CSVs, so JW1234 that you're welcome to use. You can create your own as well. If you do create your own, just make sure that the metadata is aligned properly. That will be a necessary step. So we get all this data. That's awesome. Okay, once we have all this data, we're gonna wanna do something with it. And what we're gonna do is we're gonna create documents out of it. Now we're gonna get kind of a little bit manual here in our document creation because we do want to make sure that we have all of the appropriate metadata. So what we're gonna do is we're going to do is we're going to add a bunch of metadata to our CSV documents. So we're going to leave the review itself as its own, you know, that's like the base content, right? Then we're going to add these metadata columns of review date, review title, review URL, author, and rating, right? And then we're going to add the movie title for each movie. Now we're going to be kind of naive about this. We're just going to call them John Wick 1, 2, 3, and 4. They technically have different names, but good enough for right now. We're also going to cast our rating to an int. If it exists, else we'll just make it a zero. If they didn't provide a rating, you can think about this a little bit more and make a different decision here, or maybe get the, did they like it or not from some kind of classifier. So we could build that and then determine the actual review score, but we're just going to set it to zero because it's straightforward. And then we're going to do this last accessed at. We're kind of cheating here, right? We're just kind of declaring when these were last accessed at. And we're going to say that the first movie was last accessed, you know, basically three days ago. And then the last movie or the fourth movie was accessed today or now. And so the reason we're doing this is to kind of illustrate what the time-weighted vector store is going to be doing. Obviously, this is not literally what it's for. It doesn't really care when the documents were added. It more cares when they're accessed last so that, you know, very, very frequently accessed documents are going to be kind of boosted. But we're just going to use this to illustrate what's happening behind the scenes. So let's just make sure that our documents worked out. And indeed, they worked out. We have our page content, which is our review. And then we have a bunch of metadata. We have the source. This is the CSV it's from. We have the row content, which is our review. And then we have a bunch of metadata. We have the source. This is the CSV it's from. We have the row it's from. We have the date that it's from. We have the title. We have the URL. We have the author. We have the rating. We've got the title of the actual movie. And we have this last access stat. So let's go. Next, we're going to set up our quadrant vector store. And I did confirm it is called Quadrant, not Qdrant. So I've been calling it Qdrant this whole time. It happens. We're going to realize that it's called Quadrant now and apologize deeply to the company. But other than that, we need an embeddings model. We're just going to use TextEmbedding3.small because it's small. It's cost effective. Easy peasy. Small, it's cost effective, easy peasy. Yes, Vignesh review title was part of the original data. So if we look at the kind of original CSV here, so if we go to the data, I'll zoom way in here so we can see it. You can see we have these columns that already existed. We have this row, which is our index. We have a review date, author, rating, review title, and review. All of those were already populated for us. And so we're just indicating in the CSV loader that we want those to be metadata. We do not want them to be part of what's called page content. So that's why we did that. Great question, though. So we're going to create our embeddings model and then we're going to create our vector store, which is going to be quadrant from documents. And it's easy enough, right? We're going to use the location being memory, right? So this is an in-memory vector store. You can use a hosted self or otherwise quadrant vector store, but we're just going to keep it simple with memory so that no one's got API issues. And we're gonna name the collection John Wick because why not? I mean, it's about John Wick. So the first chain we're gonna create is the naive rag chain. Since we're focusing on the R in rag today, we'll create our retriever first. We're gonna have a retrieval. We're gonna retrieve 10 documents. Now this is just gonna use the simple cosine similarity. That's it, right? That's all this is doing. It's looking at your query. It's looking at all the documents or a subset of the documents because it's using approximate nearest neighbors. And then it's saying, these are the 10 most related. Here you go. So easy peasy there. We love to see it. We are retrieving 10 documents, which is a lot of documents. And we're doing this because we know down the line we're going to use contextual compression to weed that down to the smallest number of documents. Right. So we're going to we're going to compress that 10 down to a to a smaller number. Then we put the A in RAG, which is just our prompt. We're just kind of sticking with a simple one, right? The star of the show today is R. So A, you know, just going to be kind of normal. And then G, the generator, going to be kind of normal, just using a GPT-35 turbo. It's a heck of a model. We're going to create our naive retrieval chain, and you'll see we've got this big old blob. This is a familiar blob to some of you if you've been following along with us as we've gone through everything. But, you know, the idea is, I've put on comments, I should help explain what this is doing. Because we're, because we care so deeply about retrieval, right? We really want to make sure that we are getting the documents that we are retrieving. So we have to make sure that we populate those documents at the end. We can't just get the output because that's only going to let us see what's happening end to end, which is useful, but not super useful. So we're going to need to do this kind of slightly more complicated pattern to make sure that at the end, we get both our response from the model and we get the context that we retrieved, right? Makes sense. We want to think about retrieval, so we're going to retrieve the documents and populate them. Now, we're going to see how this symbol chain does in a few different prompts, right? Put a little comment here. You might think that we've cherry-picked prompts that showcase the individual skill. We sure have, right? So that's, you'd be correct. The first query, did people generally like John Wick? And we're just going to look at the response because only Ragus really is going to care about the context in a way that's meaningful. So did people generally like it? And yeah, it seems like they did, right? Did any reviews have a rating of 10? If so, can I have the URLs to those reviews? And we get a rating of 10. We get a URL that has a rating of 10? If so, can I have the URLs to those reviews? And we get a rating of 10. We get a URL that has a rating of 10. That's great, right? Let's go. And then what happened in John Wick? And then we get the, basically, this is the plot of John Wick 1. Okay, so that's important to keep in mind when we get into a later retriever. This is the plot of John Wick 1. Okay, so that retriever is like, it's fine. It does the job. I mean, we're happy about it, right? Nothing's super odd. The results are not spectacular. All right, fine. So what about parent document retriever? Well, as Greg said, basically what we're going to do is we're going to take each raw document and or a large chunk of document, okay? Then we're going to store those larger chunks or raw documents in our memory store. So the memory store is not a vector store. It's just kind of chilling in memory, right? So we're not going to do any semantic similarity searching on those big chunks or those parent chunks. We're going to chunk each of those parent chunks into smaller documents, and we're going to associate them with the parents, and then store those in a vector store. Okay, so the idea is we're going to take one big document, turn it into a bunch of smaller documents, and associate it back to the big document. Okay, then we're going to put those into our vector store. So how this looks is we're going to search for those smaller chunks that are in our vector store, but we're going to return the parent chunk that they're associated with, right? So let's say we have a parent chunk that's been split into four different chunks, and our semantic similarity search returns that, you know, three of the four of those chunks are the most related documents. We're just going to return the one big parent document chunk, right? We don't have to return those smaller child chunks. And this is basically the way that we're going to think about this. I got a question in chat. Is RAG only for Q&A or can it be used effectively for summarization? You can definitely use RAG to augment summarization pipelines. It is not just for Q&A, though, of course, that is the cleanest way to demonstrate a lot of its power. So how do we actually make this whole child chunks, parent chunks, associate, all that? Well, basically, we're going to first just define our documents to be our parent documents. Then we're going to implement a recursive character text splitter. And that's going to be our child splitter. So that's what's going to turn our parent documents into our child documents. We're going to, again, create a quadrant vector store. This time we're going to do kind of the verbose way to do this. So we're going to create a quadrant client. We're going to add a collection. We're going to define the vectors that we need. So this is going to be the size of the text embedding three model from OpenAI. And then we're going to define our distance metric. Then we're going to create that vector store with quadrant. We're going to point it to full documents. And we're going to point it to full documents and we're going to point it to uh you know the correct embedding model and the correct client easy peasy then we're going to create a uh parent document retriever we're going to have our vector store as our parent document vector store our doc store as store this is in memory store and our child splitter as child splitter easy peasy thanks laying chain and then we add our documents. Now, as we add our documents into this retriever, what's going to happen? It's going to get a chunk. It's going to turn that into a bunch of small chunks and associate them through metadata. So now when we actually call this, under the hood, the content is going to be, we're going to get the full chunks back, but we're going to search for the smaller chunks, and that's it. And again, I mean, it looks fine. It looks like it's not too much, you know, so, okay. So far, so good. We're liking that. I assume that, a question from chat, I assume that in production, you'd want to store parent docs and something other than memory on disk vector. Yeah, that's correct. We can start however we want. Probably just leaving a bunch of the memory is less than ideal though. Okay, so that's parent document retriever. Makes sense. Seems cool. We search for small, we get big. The idea right here is that what we want is we want to search little pieces of information because they're likely to contain single ideas, right, which is likely to be best captured by the embedding process. But we know that, like, even if a sentence is very related to our query, likely the sentences or the structure or the text around it is going to be useful context for that small piece of information. And that's the idea of parent document retriever. For self-query, it's a lot more straightforward. We have all of this amazing metadata, right? We have movie title, review date, review title, review URL, author, rate. We have all this stuff and we're just not using it, right? Like we're just not using it at all. And so instead of not using it, which seems just not using it at all. Instead of not using it, which seems maybe not great, we should use it. That's what self-query does. It helps to understand the structure for metadata and then make filtered queries based on that metadata. Whenever you see this description, you want to make sure that you write a natural language description of that because it might be used by the LLM, right? And we are going to define the types so we know what kind of filtering we can do. The kind of filtering we can do is different if it's an integer, then if it's a string, then if it's a date, and so on, etc. We also want a default description. So this is the actual page content, right? So if you remember, our page content is just the review. So we're going to define that as the review. And then, of course, we're going to use GBT35 Turbo. We're going to pass in the vector store we already created. This is the same vector store that we used in our in our actual original naive implementation, right? Because all that metadata is already in there. We don't need a different vector store. We just need a different layer on top of it. We're going to pass in that document content description and then all of our metadata field info, which is here. And then when we ask questions, you know, did people generally like John Wick? Okay, we get kind of the same thing, right? you know, did people generally like John Wick? Okay, we get kind of the same thing, right? We get, yes, they generally like John Wick. But if we ask a question like, do any reviews have a rating of 10? If so, can I have the URLs of those reviews? We get a much more, you know, correct response because we're actually able to filter based on movies that are reviews that have a rating of 10 since we have rating as one of our metadata fields, right? So we get a much better answer here. Just by looking at it, you can tell it's a better answer. And then of course, you know, what happened to John Wick? I mean, this is not really meant to help this perform better. And so it doesn't really, at least by looking at it. So that's our self-query, basically smart filtering. You love to see it. Time-weighted vector store, this one's pretty straightforward as well. We want to penalize pieces of context that aren't often used. And so what we're gonna do is we're going to set this up the same way we did before. This is just creating a new quadrant client. So we've already seen this. You know, this is just the verbose way to do it. We're going to basically scale our semantic similarity scores based on this little formula here. The idea being that the higher we put our decay rate, the more aggressively we're going to penalize our old data, right? So the basic idea here is that if the data is very new, its score is going to be higher. If the data is very old, its score will be lower. And its score is just a combination of these two things, the semantic similarity, sorry about that, the semantic similarity, and this little formula. Now you can set a aggressive or a non-aggressive decay rate, right? If you set the decay rate to be close to one, that means that it's going to very quickly return to just kind of base semantic similarity. If you set the decay rate very close to zero, it's going to very quickly return to just kind of base semantic similarity. If you set the decay rate very close to zero, it's going to take a very long time for it to taper off over the course of the document's lifetime. And so that's it. The way we set this up is, again, very straightforward. We're going to use a time-weighted vector store retriever. We're going to pass under decay rate as a fairly aggressive 0.6. And then we're going to set this k equal to 2, so we retrieve two relevant documents for this, and then we just have to add our documents, and we get a little confirmation that these documents have been added. Sounds great. Now, when we use our time-weighted vector, our retrieval chain, right, so our time-weighted retriever here. You can see, did people generally like John Wick? Yes. People generally like John Wick 4 based on the reviews provided. Well, that's interesting, right? Because we didn't ask about John Wick 4, but because all of that John Wick 4 data is weighted more highly, right? We get that John Wick 4 data. Do any reviews have a rating of 10? If so, can I have the URLs? I'm sorry, but there are no reviews with a rating of 10 in the provided context because there were no reviews of John Wick 4 that were rated a 10. And again, what happened in John Wick? In John Wick 4, there is a lot of action, intense fight scenes, seems to impress some viewers with high energy of fast-paced sequences. Again, the idea is because we waited for higher, it's retrieving John Wick 4 information. And the last one that we have is contextual compression. Contextual compression is, again, the words sound intimidating, but it's really quite straightforward. We retrieve a lot of documents that are very likely related to our query vector, and then we compress those documents into a smaller set of more related documents. More here meaning that they're more related, they have higher relatedness. And that's it, right? You can do this in a number of different ways. So you can actually shrink the context itself. So the text only extracts the relevant text out of each of the contexts. Or you can shrink the number of contexts. So we're going to be using this re-ranking method, which is working on that number of contexts access. And basically, we're going to make a basic retriever, just like we had in our naive example. And then we're going to add this compressor, aka our re-ranker for this example. And the way that looks is straightforward. Again, we define this compressor, and then we define this compression retriever. and then we can we define this compression retriever Bob's your uncle there you go again when we pass information in we ask questions we get good responses you know we get basically what you would hope to get and what we're going to do is post in the YouTube comments a notebook that describes all of the all of the performance as defined by ragas um but uh with that i've we've gone through a lot of content so i'm gonna pass you back to greg uh don't forget to like uh comment subscribe ring the bell notification i know it's kind of a little bit uh a little bit funny but it does does really help because we go live every Wednesday, and we love to see you out here. So with that, I'll pass you back to Greg. Yes, awesome. Thank you so much. Wiz, in case you missed that last piece, that last little fine line, I just want to double-click in on this re-ranking idea. This is maybe the easiest thing to pick up and use straightaway as you start going after naive and into the next sort of advanced piece. And to understand really exactly what re-ranking is doing, we can just recall this semantic rag piece we discussed last week, where the hypothesis was that we could use embeddings of individual sentence to make more meaningful chunks. Well, when you look at Cohere's RERANK algorithm, the idea is that you can optimize your retrieval to improve generations through this quote semantic boost, which is fundamentally sort of giving us a more meaningful list of chunks retrieved. Semantic meaning is at the center of all things NLP, and it's no different here. Finally, it's important to note as a takeaway and a quick trailhead that we can use embedding models directly for re-ranking. Now that we've got this screen share dialed in, I can show you directly on the massive text embedchmark Leaderboard, you'll see there's a re-ranking benchmark list. So I encourage you guys to check this out if this is something that you're potentially interested in looking more into. And finally, we have made our way through quite a lot of content and advanced methods today. We see that there is a fine line, not just between contextual compression and re-ranking, but there's also a quite fine line between the stages of chunking and retrieval that meet at the vector store. Keep this fine line in mind as you're building and you're designing how your data is chunked, what metadata you're gonna leverage to do retrieval. And given your specific questions, given your specific things you want to be able to answer or summarize, you're gonna potentially wanna choose different metadata, chunking strategies, retrievers. And again, we'll drop the quantitative analysis we did. That was just too much for today in the YouTube comments. If you want us to go deeper on any of this stuff, we'd love to hear from you. But otherwise, we've got a few minutes to answer questions that we'll go ahead and get started with from the Slido. Whiz, that was a lot today, man. That was a lot. Okay, maybe we overshot the mark a little bit, but we've got some questions. Paolo asks, hey, nice hat. Thanks, Paolo. Does chunking work for tables in documents? Mute it. And we can't hear you, Chris. Oh, sorry. Yes, I think it can. It kind of depends on what you mean. I would say loosely, though, we don't really want to chunk tables. We would like to keep them together, right? Because they describe a kind of single way to think about something, right? So if you had a very massive table that you wanted to chunk, you would kind of want to retain that structure. So I'm not sure that we would want to chunk tables, but you certainly can. Yeah, why not? Okay. Okay. Manny asks, what's up Manny? When returning results, is there a command for the prompt template that can make the LLM return citations it has with the probability of accuracy in percentage? I guess I'm not quite sure what accuracy is in this case. So kind of. What we can do is we can say things like, so in the case of this notebook, we're returning two things. We're returning the response and the contexts. So basically, that's the citations that it used. That context is what it used to give you the actual original answer. So that's why we set up our chain in that way. When it comes to this probability of accuracy in percent, what we can do is we can see whatever measure we use, we can forward that score, right? So like if we're talking about cosine similarity, we can forward that score. The meaningfulness of that, or a way to directly port it to like percentage accuracy based, that part's a little bit less clear, but it's absolutely true that we can forward the score from our retrieval pipeline, even though it might not mean much in a vacuum. Okay. Manny has a couple either or here. Any vector store specifically required to achieve these retrieval methods or can we do an independent vector store? You do have to use a vector store that supports things like metadata, metadata filtering. Absolutely. Yes. So Quadrant, Chroma, kind of all of the normal ones, even Face, as long as you set it up correctly with an index. So you can make a vector sort of powered by face, but it is not going to work out of the box with every single vector store you can imagine. Quadrant, shout out to quadrant. Learn something new every day. Yeah. All right. Got it. And then can we do both of the, all of these methods in both Langchain and Lama index today? Yeah. Okay. All right. Cool. Cool. Then Richard asks, which are the best rag techniques when we have an app that will add new documents all the time and which for very long documents? Yeah. So when we're adding a lot of new documents, you could see something like the time weighted vector store being useful, right? The idea is that you could add documents to it, and those would be the most recent versions of the documents. So if you're adding, say, blogs to your vector store, right, you could weight the new ones higher. And obviously, that's going to depend on how people interact with them, but that kind of thing is going to work very well. Otherwise, it's up to you about, you know, how that kind of thing is going to work very well. Otherwise, it's up to you about, you know, how that information fits in. You can do, you can add it with specific metadatas and you self-query, you know, just depends on what you need to do with new documents. You know, is it that new ones are more important? Is it that new ones should be categorized based on existing categories? You know, you get, you get the idea. And for long, uh, documents, it's, they're, they're all going to be great. Uh, obviously we can retrieve less of them depending on our context window. Um, but, uh, you know, long documents, something like contextual compression is probably going to be very useful for you as you're going to be able to squeeze out just the valuable juice from each of those, uh, those long chunks or documents. Okay. And then, you know, I're going to be able to squeeze out just the valuable juice from each of those long chunks or documents. Okay. And then I'm going to ask one of these, we got a bunch of straight classic questions, fine tuning, Langchamber's Lama index, et cetera, et cetera. Tell us more about why you chose Quadrant. What are the technical reasons here people want to know it's just so good uh it's very efficient it's very good big scale uh there you go that's that's like i mean quadrant will will will be good for you from when you have one uh you know daily active user to when you have 10 million yeah you know also the most recent company to raise money so that says anything to you it says something to me um go quadrant all right uh if you have more questions please uh throw them on to the youtube comments after the session we will get to them thank you so much everybody uh wiz thanks for walking us through that heck of a sesh today all right time to close out and thank you so much for walking us through that heck of a session today. All right, time to close out. And thank you so much for joining us today. If you enjoyed the session, then you might really enjoy continuing to join us every week. We're back on Wednesday all the time. You can join us same time, same place. We'd love to see you both now on LinkedIn, as well as on YouTube. And if you also want to go a little bit deeper, maybe check out joining our Discord channel. We almost have a thousand people in our Discord and it's pretty popping most of the time. We've got a lot of great projects people are working on. If you're looking for one, it might be a cool place to make some connections. And then if you want to start really taking your game to the next level and you want to start for free, we recommend checking out our open source LLM Ops course. It's free on GitHub and on YouTube. You can get all the boilerplate code and concepts you need straight through just a few videos. You can do it in really a day or two and get that really base of LLM ops. If you're ready to truly accelerate and take it to the next level, figure out all these details, Langchain Lama Index, what's fine tuning? What's the deal? How do I make these decisions? Maybe check out and consider applying to our AI engineering bootcamp. We just kicked off cohort two and cohort three is coming up in May. We'd love to have you if you're interested in getting more one-on-one time with us, with peers that have graduated in the past, getting access to hiring partners and learning the core concepts and code of AI engineering today, then it might be a good option for you. And that's a wrap for today, everybody. Any feedback that you have, we'd love to hear it. I know we had a couple of snags today and we'll continue to improve on our end as we keep building, shipping and sharing. We hope that you will do the same to build, ship and share something awesome this week and tell not just everybody in your network, but maybe even everybody in the AI Makerspace community. So until next time, keep getting after it, keep building, shipping and sharing, and we'll do the same. See you soon, everybody. Bye, guys. | Advanced Retrieval Methods for RAG | 3,671 | AI Makerspace | 20240411 | In this event, we will break down the retrieval algorithms that AI Engineering practitioners should know and have at hand within their toolbox. Algorithms known to provide greater precision and results at the retrieval step of RAG include the Parent Document Retriever, Self Query Retriever, Contextual Compression, and Time-Weighted Vector Store.
RSVP: https://lu.ma/retrieval4rag
Have a question for a speaker? Drop them here:
https://app.sli.do/event/3eFnpQg7xrgcnb3TgMGQL6
Speakers:
Dr. Greg, Co-Founder & CEO
https://www.linkedin.com/in/gregloughane
The Wiz, Co-Founder & CTO
https://www.linkedin.com/in/csalexiuk/
Join our community to start building, shipping, and sharing with us today!
https://discord.gg/RzhvYvAwzA
Apply for our new AI Engineering Bootcamp on Maven today!
https://bit.ly/aie1
How'd we do? Share your feedback and suggestions for future events.
https://forms.gle/jKdAm5kLb4fRMa8UA | 2024-06-10T02:00:11.043606 |
https://www.youtube.com/watch?v=dt1Iobn_Hw0&t=1s&ab_channel=AIMakerspace | " Hey Wiz, we've talked quite a lot about the black art of chunking in our courses over the past six(...TRUNCATED) | Semantic Chunking for RAG | 3,795 | AI Makerspace | 20240328 | "In this event, we’ll learn how the semantic chunking algorithm works! Text is split into sentence(...TRUNCATED) | 2024-06-10T02:06:06.423209 |
https://www.youtube.com/live/SEA3eJrDc-k | " how they work and what it means to be agentic. Yeah, that's absolutely true, Craig. Absolutely tru(...TRUNCATED) | Agentic RAG with LangChain | 3,717 | AI Makerspace | 20240320 | "In this event, we’ll provide a brief history of Agents before spending time on the details of (...TRUNCATED) | 2024-06-10T02:14:38.693446 |
https://youtube.com/live/K_8a056X4ys | " A-whiz. So this REFT, this new reasoning with reinforced fine-tuning, is it basically just RLHF or(...TRUNCATED) | Aligning LLMs: ReFT | 3,630 | AI Makerspace | 20240314 | "In this event, we’ll break down the steps of ReFT, which consists of two stages: the warm-up stag(...TRUNCATED) | 2024-06-10T02:20:56.509046 |
https://www.youtube.com/live/Jp-6hyf_CoE | " Hey, Wiz. So supervised fine tuning and prompt engineering are kind of on like a spectrum, right? (...TRUNCATED) | Practical Fine-Tuning of LLMs | 3,737 | AI Makerspace | 20240307 | "GPT-4 Summary: Unravel the complexities of fine-tuning in LLM applications at our enlightening even(...TRUNCATED) | 2024-06-10T02:32:08.913282 |
https://youtube.com/live/Anr1br0lLz8 | " Hey, Wiz, is there a way to know what comes out of any RAG application that we build is right or c(...TRUNCATED) | RAG with LangChain v0.1 and RAG Evaluation with RAGAS (RAG ASessment) v0.1 | 3,842 | AI Makerspace | 20240207 | "GPT-4 Summary: Join us for an enlightening YouTube event that delves into the critical art of evalu(...TRUNCATED) | 2024-06-10T02:37:31.643024 |
https://youtube.com/live/XOb-djcw6hs | " Hey Chris, is it true that we can improve on our PEFT-LORA approach with this quantization thing? (...TRUNCATED) | Fine-tuning with QLoRA (Quantized Low-Rank Adaptation) | 3,710 | AI Makerspace | 20240111 | "GPT-4 Summary: Discover how to supercharge your LLM application development by mastering quantiz(...TRUNCATED) | 2024-06-10T02:44:23.704976 |
End of preview. Expand
in Dataset Viewer.
README.md exists but content is empty.
Use the Edit dataset card button to edit it.
- Downloads last month
- 64
Size of downloaded dataset files:
3.44 MB
Size of the auto-converted Parquet files:
3.44 MB
Number of rows:
182