About experiment configuration details
In the RAG application section, two parameters are introduced: W (split chunk size) and K (top_k similar documents). Could you share the exact values used for these two parameters?
In addition, in the LlamaIndex implementation for parsing the documents, I notice that the documents are split based on token length. Were any semantic parsing methods adopted as well (e.g., using MarkdownNodeParser to parse the documents first, and then TokenTextSplitter to limit the chunk size)?
The W introduced in the paper refers to the embedding model's context size. For top-k similarity retrieval, we used values in the range of 3 to 5.
Document splitting was done in a way that respects the document's structure. The implementation you suggested would likely produce results similar to what we deployed.
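The structure-first approach discussed above (split on document structure, then cap chunk length) can be sketched in plain Python. This is an illustrative stand-in, not the authors' implementation: it splits a Markdown document at heading boundaries, then breaks any oversized section into smaller chunks, using whitespace tokenization as a rough proxy for a real tokenizer.

```python
import re


def split_markdown_by_headings(text):
    """Split a Markdown document into sections at heading boundaries
    (a structure-aware first pass, similar in spirit to MarkdownNodeParser)."""
    # Split before every line that starts with 1-6 '#' characters.
    sections = re.split(r"(?m)^(?=#{1,6} )", text)
    return [s for s in sections if s.strip()]


def cap_chunk_size(sections, max_tokens):
    """Break any section longer than max_tokens into smaller chunks
    (a second pass akin to TokenTextSplitter limiting chunk size).
    Whitespace tokens approximate model tokens here."""
    chunks = []
    for section in sections:
        tokens = section.split()
        if len(tokens) <= max_tokens:
            chunks.append(section)
        else:
            for i in range(0, len(tokens), max_tokens):
                chunks.append(" ".join(tokens[i:i + max_tokens]))
    return chunks


doc = "# Intro\none two three\n# Method\nfour five six seven eight"
sections = split_markdown_by_headings(doc)
chunks = cap_chunk_size(sections, max_tokens=3)
```

In LlamaIndex itself, the equivalent pipeline would chain `MarkdownNodeParser` and `TokenTextSplitter` (with `TokenTextSplitter`'s chunk size set to W, the embedding model's context size), which is what the question proposes.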