Merge branch 'main' into manage-conversational-memory

Files changed:
- CHANGELOG.md (+31, -10)
- README.md (+6, -2)
CHANGELOG.md (CHANGED)

@@ -4,27 +4,49 @@ All notable changes to this project will be documented in this file.
 
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
 
+## [0.3.1] - 2023-11-22
+
+### Added
+
++ Include biblio in embeddings by @lfoppiano in #21
+
+### Fixed
+
++ Fix conversational memory by @lfoppiano in #20
+
+## [0.3.0] - 2023-11-18
+
+### Added
+
++ add zephyr-7b by @lfoppiano in #15
++ add conversational memory in #18
+
+## [0.2.1] - 2023-11-01
+
+### Fixed
+
++ fix env variables by @lfoppiano in #9
 
 ## [0.2.0] – 2023-10-31
 
 ### Added
+
 + Selection of chunk size on which embeddings are created upon
 + Mistral model to be used freely via the Huggingface free API
 
 ### Changed
-
+
++ Improved documentation, adding privacy statement
 + Moved settings on the sidebar
 + Disable NER extraction by default, and allow user to activate it
 + Read API KEY from the environment variables and if present, avoid asking the user
 + Avoid changing model after update
 
-
-
 ## [0.1.3] – 2023-10-30
 
 ### Fixed
 
 + ChromaDb accumulating information even when new papers were uploaded
 
 ## [0.1.2] – 2023-10-26
 
@@ -36,9 +58,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
 
 ### Fixed
 
 + Github action build
 + dependencies of langchain and chromadb
-
 
 ## [0.1.0] – 2023-10-26
 
@@ -54,8 +75,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
 + Kick off application
 + Support for GPT-3.5
 + Support for Mistral + SentenceTransformer
 + Streamlit application
 + Docker image
 + pypi package
 
 <!-- markdownlint-disable-file MD024 MD033 -->
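Several of the 0.2.0 "Changed" entries describe runtime behaviour rather than API changes. As a rough illustration of the "Read API KEY from the environment variables and if present, avoid asking the user" item, a minimal Streamlit sketch could look like the following; the environment-variable name and the sidebar widget are assumptions for illustration, not code taken from the repository:

```python
# Hypothetical sketch only: the variable name and widget placement are assumptions.
import os

import streamlit as st


def get_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    """Return the API key from the environment, prompting the user only if it is missing."""
    key = os.environ.get(env_var, "")
    if key:
        # Key found in the environment: skip the prompt, as the changelog entry describes.
        return key
    # No key in the environment: ask for it in the sidebar (settings moved there in 0.2.0).
    return st.sidebar.text_input(env_var, type="password")


api_key = get_api_key()
```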
README.md (CHANGED)

@@ -14,6 +14,8 @@ license: apache-2.0
 
 **Work in progress** :construction_worker:
 
+<img src="https://github.com/lfoppiano/document-qa/assets/15426/f0a04a86-96b3-406e-8303-904b93f00015" width=300 align="right" />
+
 ## Introduction
 
 Question/Answering on scientific documents using LLMs: ChatGPT-3.5-turbo, Mistral-7b-instruct and Zephyr-7b-beta.
@@ -25,9 +27,11 @@ Additionally, this frontend provides the visualisation of named entities on LLM
 
 The conversation is kept in memory up by a buffered sliding window memory (top 4 more recent messages) and the messages are injected in the context as "previous messages".
 
+(The image on the right was generated with https://huggingface.co/spaces/stabilityai/stable-diffusion)
+
 **Demos**:
-- (
-- (
+- (stable version): https://lfoppiano-document-qa.hf.space/
+- (unstable version): https://document-insights.streamlit.app/
 
 ## Getting started
 
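The README's description of the conversational memory (a buffered sliding window over the four most recent messages, injected into the context as "previous messages") can be made concrete with a short sketch. It assumes LangChain's ConversationBufferWindowMemory, since langchain appears in the changelog only as a dependency; the actual class and key names used by the app are not shown in this diff:

```python
# Sketch of a k=4 sliding-window conversation memory; illustrative, not the app's code.
from langchain.memory import ConversationBufferWindowMemory

# Keep only the four most recent question/answer exchanges.
memory = ConversationBufferWindowMemory(
    k=4,
    memory_key="previous_messages",  # key name chosen to mirror the README wording
    return_messages=True,
)

# Simulate six exchanges; the two oldest fall out of the window.
for i in range(6):
    memory.save_context({"input": f"question {i}"}, {"output": f"answer {i}"})

# What would be injected into the prompt as "previous messages".
print(memory.load_memory_variables({})["previous_messages"])
```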