What You Say = What You Want? Teaching Humans to Articulate Requirements for LLMs (arXiv 2409.08775)
A prompt is text-based memory, and System II prompting is updating that memory. Parametric memory is long-term, while prompt-based memory is short-term.
Note Projecting a document chunk's embedding vector directly into the hidden space, as in xRAG! Explicit memory is expensive and dumb for RAG; mid-term memory relies on a 'projector', and long-term memory updates the language-decoding part of the model. I guess that could be the next step here.
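A minimal sketch of the projector idea: a retrieved chunk's embedding is mapped into the LLM's hidden space and prepended as a single soft "memory" token instead of pasting the chunk text into the prompt. The dimensions and the single linear map are hypothetical stand-ins for xRAG's trained projector module.

```python
import numpy as np

# Hypothetical sizes: retriever embedding dim and LLM hidden dim.
RETRIEVER_DIM = 768
LLM_HIDDEN = 4096

rng = np.random.default_rng(0)

# The "projector": in practice a small trained module (e.g. an MLP);
# a single random linear map stands in for it here.
W = rng.standard_normal((RETRIEVER_DIM, LLM_HIDDEN)) * 0.01

def project_chunk(chunk_embedding: np.ndarray) -> np.ndarray:
    """Map one retrieved-chunk embedding into the LLM's hidden space."""
    return chunk_embedding @ W

# One document chunk compressed to a single soft token.
chunk_emb = rng.standard_normal(RETRIEVER_DIM)
soft_token = project_chunk(chunk_emb)

# Prepend the soft token to the prompt's hidden states (5 prompt tokens here),
# so the whole chunk costs one token of context instead of hundreds.
prompt_hidden = rng.standard_normal((5, LLM_HIDDEN))
inputs = np.vstack([soft_token[None, :], prompt_hidden])
print(inputs.shape)
```

The point of the sketch is the interface: only the projector is trained, while the retriever and the LLM stay frozen.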
Note Addition of a new concept to a VLM via soft-prompt tuning. An extra identifier token in the vocabulary plus k visual feature embeddings enables customizing the VLM with personalized knowledge.
Note Same idea: personalization with a soft-prompt user embedding, but in the text modality, which makes it less exciting than Yo'LLaVA in some sense.