Unbounded context with memory
I have implemented a memory plugin (https://github.com/codelion/optillm/blob/main/optillm/plugins/memory_plugin.py) in optillm that adds short-term memory and unbounded context to LLMs. We used the FRAMES benchmark to evaluate the memory plugin with Google's Gemma2 model. Gemma2 has a context window of 8192 tokens, so when Google reported results in the FRAMES paper, they reported them only for the naive prompt, which doesn't include the text retrieved via RAG.
However, with the memory plugin in optillm, the context of any LLM becomes effectively unbounded. We boosted Gemma2's accuracy on FRAMES to 30.1%, versus the 5.1% reported by Google in the paper.
We were also able to nearly match Gemini's accuracy with just gpt-4o-mini using optillm's memory plugin, even though gpt-4o-mini's context window is one-tenth the size of Gemini's.
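If you want to try this yourself, here is a minimal usage sketch (not the plugin's internals). It assumes optillm is running locally as an OpenAI-compatible proxy on port 8000 and that the memory plugin is selected by prefixing the model name with its plugin slug; the host, port, file name, and exact prefix here are assumptions, so check the optillm README for your setup.

```python
# Minimal sketch: call gpt-4o-mini through optillm with the memory plugin.
# Assumptions: optillm proxy at localhost:8000, "memory-" model-name prefix.
from openai import OpenAI

# Point the standard OpenAI client at the locally running optillm proxy.
client = OpenAI(
    api_key="sk-...",                     # your upstream provider key
    base_url="http://localhost:8000/v1",  # assumed optillm proxy address
)

# An arbitrarily long document; with the memory plugin it no longer has to
# fit inside the model's native context window. (Placeholder file name.)
long_document = open("retrieved_articles.txt").read()

# The "memory-" prefix routes the request through the memory plugin, which
# stores the long context in short-term memory and retrieves the relevant
# pieces at answer time instead of stuffing everything into one prompt.
response = client.chat.completions.create(
    model="memory-gpt-4o-mini",
    messages=[
        {"role": "system", "content": long_document},
        {"role": "user", "content": "Answer the question using the document above."},
    ],
)
print(response.choices[0].message.content)
```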