Merge branch 'master' into huggingface
- .gitignore +1 -0
- README.md +13 -8
- config.py +4 -1
- crazy_functional.py +35 -2
- crazy_functions/crazy_functions_test.py +9 -3
- crazy_functions/对话历史存档.py +42 -0
- crazy_functions/解析JupyterNotebook.py +145 -0
- crazy_functions/解析项目源代码.py +50 -9
- crazy_functions/询问多个大语言模型.py +29 -0
- crazy_functions/谷歌检索小助手.py +17 -15
- docs/WithFastapi.md +43 -0
- main.py +27 -10
- request_llm/README.md +2 -2
- request_llm/bridge_all.py +6 -6
- request_llm/bridge_chatglm.py +18 -4
- request_llm/bridge_chatgpt.py +10 -7
- toolbox.py +88 -8
- version +2 -2
.gitignore
CHANGED
@@ -145,3 +145,4 @@ cradle*
 debug*
 private*
 crazy_functions/test_project/pdf_and_word
+crazy_functions/test_samples
README.md
CHANGED
@@ -32,20 +32,20 @@ If you like this project, please give it a Star. If you've come up with more use
 一键中英互译 | 一键中英互译
 一键代码解释 | 可以正确显示代码、解释代码
 [自定义快捷键](https://www.bilibili.com/video/BV14s4y1E7jN) | 支持自定义快捷键
-[配置代理服务器](https://www.bilibili.com/video/BV1rc411W7Dr) |
-模块化设计 |
+[配置代理服务器](https://www.bilibili.com/video/BV1rc411W7Dr) | 支持代理连接OpenAI/Google等,秒解锁ChatGPT互联网[实时信息聚合](https://www.bilibili.com/video/BV1om4y127ck/)能力
+模块化设计 | 支持自定义强大的[函数插件](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions),插件支持[热更新](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
 [自我程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [函数插件] [一键读懂](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)本项目的源代码
 [程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [函数插件] 一键可以剖析其他Python/C/C++/Java/Lua/...项目树
 读论文 | [函数插件] 一键解读latex论文全文并生成摘要
-Latex
+Latex全文[翻译](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[润色](https://www.bilibili.com/video/BV1FT411H7c5/) | [函数插件] 一键翻译或润色latex论文
 批量注释生成 | [函数插件] 一键批量生成函数注释
 chat分析报告生成 | [函数插件] 运行后自动生成总结汇报
-Markdown中英互译 | [函数插件] 看到上面5种语言的[README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md)了吗?
+Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [函数插件] 看到上面5种语言的[README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md)了吗?
 [arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [函数插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
 [PDF论文全文翻译功能](https://www.bilibili.com/video/BV1KT411x7Wn) | [函数插件] PDF论文提取题目&摘要+翻译全文(多线程)
-[谷歌学术统合小助手](https://www.bilibili.com/video/BV19L411U7ia) | [函数插件] 给定任意谷歌学术搜索页面URL,让gpt
-公式/图片/表格显示 | 可以同时显示公式的tex
-多线程函数插件支持 | 支持多线调用chatgpt
+[谷歌学术统合小助手](https://www.bilibili.com/video/BV19L411U7ia) | [函数插件] 给定任意谷歌学术搜索页面URL,让gpt帮你[写relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/)
+公式/图片/表格显示 | 可以同时显示公式的[tex形式和渲染形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png),支持公式、代码高亮
+多线程函数插件支持 | 支持多线调用chatgpt,一键处理[海量文本](https://www.bilibili.com/video/BV1FT411H7c5/)或程序
 启动暗色gradio[主题](https://github.com/binary-husky/chatgpt_academic/issues/173) | 在浏览器url后面添加```/?__dark-theme=true```可以切换dark主题
 [多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持,[API2D](https://api2d.com/)接口支持 | 同时被GPT3.5、GPT4和[清华ChatGLM](https://github.com/THUDM/ChatGLM-6B)伺候的感觉一定会很不错吧?
 huggingface免科学上网[在线体验](https://huggingface.co/spaces/qingxu98/gpt-academic) | 登陆huggingface后复制[此空间](https://huggingface.co/spaces/qingxu98/gpt-academic)
@@ -183,6 +183,8 @@ docker run --rm -it --net=host --gpus=all gpt-academic bash
 2. 使用WSL2(Windows Subsystem for Linux 子系统)
 请访问[部署wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
 
+3. 如何在二级网址(如`http://localhost/subpath`)下运行
+请访问[FastAPI运行说明](docs/WithFastapi.md)
 
 ## 安装-代理配置
 1. 常规方法
@@ -278,12 +280,15 @@ docker run --rm -it --net=host --gpus=all gpt-academic bash
 
 <div align="center">
 <img src="https://user-images.githubusercontent.com/96192199/233575247-fb00819e-6d1b-4bb7-bd54-1d7528f03dd9.png" width="800" >
+<img src="https://user-images.githubusercontent.com/96192199/233779501-5ce826f0-6cca-4d59-9e5f-b4eacb8cc15f.png" width="800" >
+
 </div>
 
 
 
 ## Todo 与 版本规划:
-- version 3.
+- version 3.3+ (todo): NewBing支持
+- version 3.2: 函数插件支持更多参数接口 (保存对话功能, 解读任意语言代码+同时询问任意的LLM组合)
 - version 3.1: 支持同时问询多个gpt模型!支持api2d,支持多个apikey负载均衡
 - version 3.0: 对chatglm和其他小型llm的支持
 - version 2.6: 重构了插件结构,提高了交互性,加入更多插件
config.py
CHANGED
@@ -57,6 +57,9 @@ CONCURRENT_COUNT = 100
 # [("username", "password"), ("username2", "password2"), ...]
 AUTHENTICATION = []
 
-# 重新URL重新定向,实现更换API_URL
+# 重新URL重新定向,实现更换API_URL的作用(常规情况下,不要修改!!)
 # 格式 {"https://api.openai.com/v1/chat/completions": "重定向的URL"}
 API_URL_REDIRECT = {}
+
+# 如果需要在二级路径下运行(常规情况下,不要修改!!)(需要配合修改main.py才能生效!)
+CUSTOM_PATH = "/"
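For illustration, the two new options might be set as follows; the redirect target and the `/gpt` sub-path are placeholder values, not part of this commit:

```python
# Hypothetical example values for the new config.py options.
# "my-openai-proxy.example.com" is a placeholder, not a real endpoint.
API_URL_REDIRECT = {
    "https://api.openai.com/v1/chat/completions": "https://my-openai-proxy.example.com/v1/chat/completions"
}

# Serve the app under http://host:port/gpt instead of the root path; per the
# comment in the diff, this only takes effect together with the main.py edit.
CUSTOM_PATH = "/gpt"
```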
crazy_functional.py
CHANGED
@@ -19,12 +19,25 @@ def get_crazy_functions():
     from crazy_functions.解析项目源代码 import 解析一个Lua项目
     from crazy_functions.解析项目源代码 import 解析一个CSharp项目
     from crazy_functions.总结word文档 import 总结word文档
+    from crazy_functions.解析JupyterNotebook import 解析ipynb文件
+    from crazy_functions.对话历史存档 import 对话历史存档
     function_plugins = {
 
         "解析整个Python项目": {
             "Color": "stop",    # 按钮颜色
             "Function": HotReload(解析一个Python项目)
         },
+        "保存当前的对话": {
+            "AsButton":False,
+            "Function": HotReload(对话历史存档)
+        },
+        "[测试功能] 解析Jupyter Notebook文件": {
+            "Color": "stop",
+            "AsButton":False,
+            "Function": HotReload(解析ipynb文件),
+            "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
+            "ArgsReminder": "若输入0,则不解析notebook中的Markdown块", # 高级参数输入区的显示提示
+        },
         "批量总结Word文档": {
             "Color": "stop",
             "Function": HotReload(总结word文档)
@@ -168,7 +181,7 @@ def get_crazy_functions():
             "AsButton": False, # 加入下拉菜单中
             "Function": HotReload(Markdown英译中)
         },
-
+
     })
 
     ###################### 第三组插件 ###########################
@@ -181,7 +194,7 @@ def get_crazy_functions():
             "Function": HotReload(下载arxiv论文并翻译摘要)
         }
     })
-
+
     from crazy_functions.联网的ChatGPT import 连接网络回答问题
     function_plugins.update({
         "连接网络回答问题(先输入问题,再点击按钮,需要访问谷歌)": {
@@ -191,5 +204,25 @@ def get_crazy_functions():
         }
     })
 
+    from crazy_functions.解析项目源代码 import 解析任意code项目
+    function_plugins.update({
+        "解析项目源代码(手动指定和筛选源代码文件类型)": {
+            "Color": "stop",
+            "AsButton": False,
+            "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
+            "ArgsReminder": "输入时用逗号隔开, *代表通配符, 加了^代表不匹配; 不输入代表全部匹配。例如: \"*.c, ^*.cpp, config.toml, ^*.toml\"", # 高级参数输入区的显示提示
+            "Function": HotReload(解析任意code项目)
+        },
+    })
+    from crazy_functions.询问多个大语言模型 import 同时问询_指定模型
+    function_plugins.update({
+        "询问多个GPT模型(手动指定询问哪些模型)": {
+            "Color": "stop",
+            "AsButton": False,
+            "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False)
+            "ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&api2d-gpt-4", # 高级参数输入区的显示提示
+            "Function": HotReload(同时问询_指定模型)
+        },
+    })
     ###################### 第n组插件 ###########################
     return function_plugins
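The new `AdvancedArgs`/`ArgsReminder` keys are what opt a plugin into the advanced-parameter textbox added in this commit. A minimal sketch of a hypothetical plugin using them; `my_plugin` is illustrative and not part of the commit, while `HotReload` is the project's wrapper that re-imports the plugin module on each call:

```python
# Sketch: registering a hypothetical plugin that reads the advanced-args textbox.
from toolbox import HotReload

def my_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    arg = plugin_kwargs.get("advanced_arg", "")   # text typed into the advanced-args box
    chatbot.append((txt, f"advanced_arg = {arg!r}"))
    yield                                          # plugins are generators that refresh the UI

function_plugins = {
    "我的插件(示例)": {
        "Color": "stop",       # red button
        "AsButton": False,     # dropdown menu only
        "AdvancedArgs": True,  # reveal the advanced-parameter input area when selected
        "ArgsReminder": "示例:这里写给用户看的参数说明",
        "Function": HotReload(my_plugin),
    },
}
```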
crazy_functions/crazy_functions_test.py
CHANGED
@@ -108,6 +108,13 @@ def test_联网回答问题():
     print("当前问答:", cb[-1][-1].replace("\n"," "))
     for i, it in enumerate(cb): print亮蓝(it[0]); print亮黄(it[1])
 
+def test_解析ipynb文件():
+    from crazy_functions.解析JupyterNotebook import 解析ipynb文件
+    txt = "crazy_functions/test_samples"
+    for cookies, cb, hist, msg in 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+        print(cb)
+
+
 # test_解析一个Python项目()
 # test_Latex英文润色()
 # test_Markdown中译英()
@@ -116,9 +123,8 @@ def test_联网回答问题():
 # test_总结word文档()
 # test_下载arxiv论文并翻译摘要()
 # test_解析一个Cpp项目()
-
-test_
-
+# test_联网回答问题()
+test_解析ipynb文件()
 
 input("程序完成,回车退出。")
 print("退出。")
crazy_functions/对话历史存档.py
ADDED
@@ -0,0 +1,42 @@
+from toolbox import CatchException, update_ui
+from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
+
+def write_chat_to_file(chatbot, file_name=None):
+    """
+    将对话记录history以Markdown格式写入文件中。如果没有指定文件名,则使用当前时间生成文件名。
+    """
+    import os
+    import time
+    if file_name is None:
+        file_name = 'chatGPT对话历史' + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.html'
+    os.makedirs('./gpt_log/', exist_ok=True)
+    with open(f'./gpt_log/{file_name}', 'w', encoding='utf8') as f:
+        for i, contents in enumerate(chatbot):
+            for content in contents:
+                try:    # 这个bug没找到触发条件,暂时先这样顶一下
+                    if type(content) != str: content = str(content)
+                except:
+                    continue
+                f.write(content)
+                f.write('\n\n')
+            f.write('<hr color="red"> \n\n')
+
+    res = '对话历史写入:' + os.path.abspath(f'./gpt_log/{file_name}')
+    print(res)
+    return res
+
+@CatchException
+def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+    """
+    txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
+    llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
+    plugin_kwargs   插件模型的参数,暂时没有用武之地
+    chatbot         聊天显示框的句柄,用于显示给用户
+    history         聊天历史,前情提要
+    system_prompt   给gpt的静默提醒
+    web_port        当前软件运行的端口号
+    """
+
+    chatbot.append(("保存当前对话", f"[Local Message] {write_chat_to_file(chatbot)}"))
+    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
+
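`write_chat_to_file` has no UI dependencies beyond the chatbot list itself, so it can be exercised standalone. A quick sketch with a hand-built list of (question, answer) pairs; the real plugin receives a live `ChatBotWithCookies` instead:

```python
# Sketch: exercising write_chat_to_file outside the Gradio UI.
from crazy_functions.对话历史存档 import write_chat_to_file

fake_chatbot = [
    ("什么是tokenizer?", "tokenizer把文本切分成token……"),
    ("保存当前对话", "[Local Message] ……"),
]
# Writes ./gpt_log/chatGPT对话历史<timestamp>.html and returns the absolute path.
print(write_chat_to_file(fake_chatbot))
```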
crazy_functions/解析JupyterNotebook.py
ADDED
@@ -0,0 +1,145 @@
+from toolbox import update_ui
+from toolbox import CatchException, report_execption, write_results_to_file
+fast_debug = True
+
+
+class PaperFileGroup():
+    def __init__(self):
+        self.file_paths = []
+        self.file_contents = []
+        self.sp_file_contents = []
+        self.sp_file_index = []
+        self.sp_file_tag = []
+
+        # count_token
+        from request_llm.bridge_all import model_info
+        enc = model_info["gpt-3.5-turbo"]['tokenizer']
+        def get_token_num(txt): return len(
+            enc.encode(txt, disallowed_special=()))
+        self.get_token_num = get_token_num
+
+    def run_file_split(self, max_token_limit=1900):
+        """
+        将长文本分离开来
+        """
+        for index, file_content in enumerate(self.file_contents):
+            if self.get_token_num(file_content) < max_token_limit:
+                self.sp_file_contents.append(file_content)
+                self.sp_file_index.append(index)
+                self.sp_file_tag.append(self.file_paths[index])
+            else:
+                from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
+                segments = breakdown_txt_to_satisfy_token_limit_for_pdf(
+                    file_content, self.get_token_num, max_token_limit)
+                for j, segment in enumerate(segments):
+                    self.sp_file_contents.append(segment)
+                    self.sp_file_index.append(index)
+                    self.sp_file_tag.append(
+                        self.file_paths[index] + f".part-{j}.txt")
+
+
+
+def parseNotebook(filename, enable_markdown=1):
+    import json
+
+    CodeBlocks = []
+    with open(filename, 'r', encoding='utf-8', errors='replace') as f:
+        notebook = json.load(f)
+    for cell in notebook['cells']:
+        if cell['cell_type'] == 'code' and cell['source']:
+            # remove blank lines
+            cell['source'] = [line for line in cell['source'] if line.strip()
+                              != '']
+            CodeBlocks.append("".join(cell['source']))
+        elif enable_markdown and cell['cell_type'] == 'markdown' and cell['source']:
+            cell['source'] = [line for line in cell['source'] if line.strip()
+                              != '']
+            CodeBlocks.append("Markdown:"+"".join(cell['source']))
+
+    Code = ""
+    for idx, code in enumerate(CodeBlocks):
+        Code += f"This is {idx+1}th code block: \n"
+        Code += code+"\n"
+
+    return Code
+
+
+def ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
+    from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
+
+    enable_markdown = plugin_kwargs.get("advanced_arg", "1")
+    try:
+        enable_markdown = int(enable_markdown)
+    except ValueError:
+        enable_markdown = 1
+
+    pfg = PaperFileGroup()
+
+    for fp in file_manifest:
+        file_content = parseNotebook(fp, enable_markdown=enable_markdown)
+        pfg.file_paths.append(fp)
+        pfg.file_contents.append(file_content)
+
+    # <-------- 拆分过长的IPynb文件 ---------->
+    pfg.run_file_split(max_token_limit=1024)
+    n_split = len(pfg.sp_file_contents)
+
+    inputs_array = [r"This is a Jupyter Notebook file, tell me about Each Block in Chinese. Focus Just On Code." +
+                    r"If a block starts with `Markdown` which means it's a markdown block in ipynbipynb. " +
+                    r"Start a new line for a block and block num use Chinese." +
+                    f"\n\n{frag}" for frag in pfg.sp_file_contents]
+    inputs_show_user_array = [f"{f}的分析如下" for f in pfg.sp_file_tag]
+    sys_prompt_array = ["You are a professional programmer."] * n_split
+
+    gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
+        inputs_array=inputs_array,
+        inputs_show_user_array=inputs_show_user_array,
+        llm_kwargs=llm_kwargs,
+        chatbot=chatbot,
+        history_array=[[""] for _ in range(n_split)],
+        sys_prompt_array=sys_prompt_array,
+        # max_workers=5,  # OpenAI所允许的最大并行过载
+        scroller_max_len=80
+    )
+
+    # <-------- 整理结果,退出 ---------->
+    block_result = " \n".join(gpt_response_collection)
+    chatbot.append(("解析的结果如下", block_result))
+    history.extend(["解析的结果如下", block_result])
+    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+
+    # <-------- 写入文件,退出 ---------->
+    res = write_results_to_file(history)
+    chatbot.append(("完成了吗?", res))
+    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+
+@CatchException
+def 解析ipynb文件(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+    chatbot.append([
+        "函数插件功能?",
+        "对IPynb文件进行解析。Contributor: codycjy."])
+    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+
+    history = []    # 清空历史
+    import glob
+    import os
+    if os.path.exists(txt):
+        project_folder = txt
+    else:
+        if txt == "":
+            txt = '空空如也的输入栏'
+        report_execption(chatbot, history,
+                         a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
+        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+        return
+    if txt.endswith('.ipynb'):
+        file_manifest = [txt]
+    else:
+        file_manifest = [f for f in glob.glob(
+            f'{project_folder}/**/*.ipynb', recursive=True)]
+    if len(file_manifest) == 0:
+        report_execption(chatbot, history,
+                         a=f"解析项目: {txt}", b=f"找不到任何.ipynb文件: {txt}")
+        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+        return
+    yield from ipynb解释(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, )
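`parseNotebook` relies only on the `.ipynb` JSON schema (`cells`, `cell_type`, `source`), so it can be tried on its own. A sketch; `demo.ipynb` is a placeholder path:

```python
# Sketch: flattening a notebook into the prompt text the plugin sends to the model.
from crazy_functions.解析JupyterNotebook import parseNotebook

# enable_markdown=0 skips markdown cells, matching the ArgsReminder hint
# ("若输入0,则不解析notebook中的Markdown块"). "demo.ipynb" is a placeholder.
prompt_text = parseNotebook("demo.ipynb", enable_markdown=0)
print(prompt_text[:500])
```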
crazy_functions/解析项目源代码.py
CHANGED
@@ -11,7 +11,7 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
     history_array = []
     sys_prompt_array = []
     report_part_1 = []
-
+
     assert len(file_manifest) <= 512, "源文件太多(超过512个), 请缩减输入文件的数量。或者,您也可以选择删除此行警告,并修改代码拆分file_manifest列表,从而实现分批次处理。"
     ############################## <第一步,逐个文件分析,多线程> ##################################
     for index, fp in enumerate(file_manifest):
@@ -63,10 +63,10 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
         current_iteration_focus = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)])
         i_say = f'根据以上分析,对程序的整体功能和构架重新做出概括。然后用一张markdown表格整理每个文件的功能(包括{previous_iteration_files_string})。'
         inputs_show_user = f'根据以上分析,对程序的整体功能和构架重新做出概括,由于输入长度限制,可能需要分组处理,本组文件为 {current_iteration_focus} + 已经汇总的文件组。'
-        this_iteration_history = copy.deepcopy(this_iteration_gpt_response_collection)
+        this_iteration_history = copy.deepcopy(this_iteration_gpt_response_collection)
         this_iteration_history.append(last_iteration_result)
         result = yield from request_gpt_model_in_new_thread_with_ui_alive(
-            inputs=i_say, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot,
+            inputs=i_say, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot,
             history=this_iteration_history, # 迭代之前的分析
             sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。")
         report_part_2.extend([i_say, result])
@@ -222,8 +222,8 @@ def 解析一个Golang项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
     yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-
+
+
 @CatchException
 def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []    # 清空历史,以免输入溢出
@@ -243,9 +243,9 @@ def 解析一个Lua项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
         report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何lua文件: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
-    yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
-
-
+    yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
+
+
 @CatchException
 def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     history = []    # 清空历史,以免输入溢出
@@ -263,4 +263,45 @@ def 解析一个CSharp项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
         report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何CSharp文件: {txt}")
         yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
         return
-    yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
+    yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
+
+
+
+@CatchException
+def 解析任意code项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+    txt_pattern = plugin_kwargs.get("advanced_arg")
+    txt_pattern = txt_pattern.replace(",", ",")
+    # 将要匹配的模式(例如: *.c, *.cpp, *.py, config.toml)
+    pattern_include = [_.lstrip(" ,").rstrip(" ,") for _ in txt_pattern.split(",") if _ != "" and not _.strip().startswith("^")]
+    if not pattern_include: pattern_include = ["*"] # 不输入即全部匹配
+    # 将要忽略匹配的文件后缀(例如: ^*.c, ^*.cpp, ^*.py)
+    pattern_except_suffix = [_.lstrip(" ^*.,").rstrip(" ,") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^*.")]
+    pattern_except_suffix += ['zip', 'rar', '7z', 'tar', 'gz'] # 避免解析压缩文件
+    # 将要忽略匹配的文件名(例如: ^README.md)
+    pattern_except_name = [_.lstrip(" ^*,").rstrip(" ,").replace(".", "\.") for _ in txt_pattern.split(" ") if _ != "" and _.strip().startswith("^") and not _.strip().startswith("^*.")]
+    # 生成正则表达式
+    pattern_except = '/[^/]+\.(' + "|".join(pattern_except_suffix) + ')$'
+    pattern_except += '|/(' + "|".join(pattern_except_name) + ')$' if pattern_except_name != [] else ''
+
+    history.clear()
+    import glob, os, re
+    if os.path.exists(txt):
+        project_folder = txt
+    else:
+        if txt == "": txt = '空空如也的输入栏'
+        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
+        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+        return
+    # 若上传压缩文件, 先寻找到解压的文件夹路径, 从而避免解析压缩文件
+    maybe_dir = [f for f in glob.glob(f'{project_folder}/*') if os.path.isdir(f)]
+    if len(maybe_dir)>0 and maybe_dir[0].endswith('.extract'):
+        extract_folder_path = maybe_dir[0]
+    else:
+        extract_folder_path = project_folder
+    # 按输入的匹配模式寻找上传的非压缩文件和已解压的文件
+    file_manifest = [f for pattern in pattern_include for f in glob.glob(f'{extract_folder_path}/**/{pattern}', recursive=True) if "" != extract_folder_path and \
+        os.path.isfile(f) and (not re.search(pattern_except, f) or pattern.endswith('.' + re.search(pattern_except, f).group().split('.')[-1]))]
+    if len(file_manifest) == 0:
+        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何文件: {txt}")
+        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+        return
+    yield from 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
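To see what the include/exclude syntax in `解析任意code项目` actually compiles to, here is a standalone sketch of the same parsing logic applied to a sample input (the input string follows the `ArgsReminder` example above):

```python
# Sketch: what the include/exclude pattern syntax compiles to.
txt_pattern = "*.c, ^*.cpp, config.toml, ^README.md"

pattern_include = [p.strip(" ,") for p in txt_pattern.split(",")
                   if p != "" and not p.strip().startswith("^")]
pattern_except_suffix = [p.lstrip(" ^*.,").rstrip(" ,") for p in txt_pattern.split(" ")
                         if p != "" and p.strip().startswith("^*.")]
pattern_except_suffix += ['zip', 'rar', '7z', 'tar', 'gz']   # never parse archives
pattern_except_name = [p.lstrip(" ^*,").rstrip(" ,").replace(".", "\\.") for p in txt_pattern.split(" ")
                       if p != "" and p.strip().startswith("^") and not p.strip().startswith("^*.")]

pattern_except = '/[^/]+\\.(' + "|".join(pattern_except_suffix) + ')$'
pattern_except += '|/(' + "|".join(pattern_except_name) + ')$' if pattern_except_name != [] else ''

print(pattern_include)  # ['*.c', 'config.toml'] -> glob patterns to include
print(pattern_except)   # regex rejecting .cpp/.zip/... suffixes and README.md by name
```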
crazy_functions/询问多个大语言模型.py
CHANGED
@@ -25,6 +25,35 @@ def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
         retry_times_at_unknown_error=0
     )
 
+    history.append(txt)
+    history.append(gpt_say)
+    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
+
+
+@CatchException
+def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
+    """
+    txt             输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
+    llm_kwargs      gpt模型参数,如温度和top_p等,一般原样传递下去就行
+    plugin_kwargs   插件模型的参数,如温度和top_p等,一般原样传递下去就行
+    chatbot         聊天显示框的句柄,用于显示给用户
+    history         聊天历史,前情提要
+    system_prompt   给gpt的静默提醒
+    web_port        当前软件运行的端口号
+    """
+    history = []    # 清空历史,以免输入溢出
+    chatbot.append((txt, "正在同时咨询ChatGPT和ChatGLM……"))
+    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
+
+    # llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔
+    llm_kwargs['llm_model'] = plugin_kwargs.get("advanced_arg", 'chatglm&gpt-3.5-turbo') # 'chatglm&gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔
+    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
+        inputs=txt, inputs_show_user=txt,
+        llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
+        sys_prompt=system_prompt,
+        retry_times_at_unknown_error=0
+    )
+
     history.append(txt)
     history.append(gpt_say)
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
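The advanced-args string here is simply an `&`-joined list of backend names that gets written into `llm_kwargs['llm_model']`. A small sketch using the example names from the diff:

```python
# Sketch: how the advanced-args string selects models in 同时问询_指定模型.
plugin_kwargs = {"advanced_arg": "chatglm&gpt-3.5-turbo&api2d-gpt-4"}

llm_kwargs = {}
llm_kwargs['llm_model'] = plugin_kwargs.get("advanced_arg", 'chatglm&gpt-3.5-turbo')
models = llm_kwargs['llm_model'].split('&')
print(models)  # ['chatglm', 'gpt-3.5-turbo', 'api2d-gpt-4'] -> queried in parallel
```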
crazy_functions/谷歌检索小助手.py
CHANGED
@@ -70,6 +70,7 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
     # 尝试导入依赖,如果缺少依赖,则给出安装建议
     try:
         import arxiv
+        import math
         from bs4 import BeautifulSoup
     except:
         report_execption(chatbot, history,
@@ -80,25 +81,26 @@ def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
 
     # 清空历史,以免输入溢出
     history = []
-
     meta_paper_info_list = yield from get_meta_information(txt, chatbot, history)
+    batchsize = 5
+    for batch in range(math.ceil(len(meta_paper_info_list)/batchsize)):
+        if len(meta_paper_info_list[:batchsize]) > 0:
+            i_say = "下面是一些学术文献的数据,提取出以下内容:" + \
+                "1、英文题目;2、中文题目翻译;3、作者;4、arxiv公开(is_paper_in_arxiv);4、引用数量(cite);5、中文摘要翻译。" + \
+                f"以下是信息源:{str(meta_paper_info_list[:batchsize])}"
 
-
-
-
-
-
-
-    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
-        inputs=i_say, inputs_show_user=inputs_show_user,
-        llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
-        sys_prompt="你是一个学术翻译,请从数据中提取信息。你必须使用Markdown格式。你必须逐个文献进行处理。"
-    )
+        inputs_show_user = f"请分析此页面中出现的所有文章:{txt},这是第{batch+1}批"
+        gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
+            inputs=i_say, inputs_show_user=inputs_show_user,
+            llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
+            sys_prompt="你是一个学术翻译,请从数据中提取信息。你必须使用Markdown表格。你必须逐个文献进行处理。"
+        )
 
-
-
+        history.extend([ f"第{batch+1}批", gpt_say ])
+        meta_paper_info_list = meta_paper_info_list[batchsize:]
 
-    chatbot.append(["状态?",
+    chatbot.append(["状态?",
+                    "已经全部完成,您可以试试让AI写一个Related Works,例如您可以继续输入Write a \"Related Works\" section about \"你搜索的研究领域\" for me."])
     msg = '正常'
     yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面
     res = write_results_to_file(history)
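The new loop consumes `meta_paper_info_list` five entries at a time, issuing one model request per batch. The same pattern, stripped of the UI plumbing:

```python
# Sketch: the head-of-list batching pattern introduced above.
import math

meta_paper_info_list = [f"paper-{i}" for i in range(12)]  # stand-in for real metadata
batchsize = 5
for batch in range(math.ceil(len(meta_paper_info_list) / batchsize)):
    current = meta_paper_info_list[:batchsize]              # head of the remaining list
    if current:
        print(f"第{batch+1}批:", current)                    # one GPT request per batch
    meta_paper_info_list = meta_paper_info_list[batchsize:]  # drop the consumed head
```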
docs/WithFastapi.md
ADDED
@@ -0,0 +1,43 @@
+# Running with fastapi
+
+We currently support fastapi in order to solve sub-path deploy issue.
+
+1. change CUSTOM_PATH setting in `config.py`
+
+``` sh
+nano config.py
+```
+
+2. Edit main.py
+
+```diff
+    auto_opentab_delay()
+    - demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
+    + demo.queue(concurrency_count=CONCURRENT_COUNT)
+
+    - # 如果需要在二级路径下运行
+    - # CUSTOM_PATH, = get_conf('CUSTOM_PATH')
+    - # if CUSTOM_PATH != "/":
+    - #     from toolbox import run_gradio_in_subpath
+    - #     run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
+    - # else:
+    - #     demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
+
+    + 如果需要在二级路径下运行
+    + CUSTOM_PATH, = get_conf('CUSTOM_PATH')
+    + if CUSTOM_PATH != "/":
+    +     from toolbox import run_gradio_in_subpath
+    +     run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
+    + else:
+    +     demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
+
+if __name__ == "__main__":
+    main()
+```
+
+
+3. Go!
+
+``` sh
+python main.py
+```
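The sub-path support ultimately rests on Gradio's FastAPI integration. A minimal standalone sketch of that mechanism, assuming a Gradio 3.x version that provides `gr.mount_gradio_app`:

```python
# Minimal sketch of the mechanism behind run_gradio_in_subpath (toolbox.py):
# mount a Gradio Blocks app under a sub-path of a FastAPI app.
import gradio as gr
import uvicorn
from fastapi import FastAPI

with gr.Blocks() as demo:
    gr.Markdown("running under /subpath")

app = FastAPI()

@app.get("/")
def read_main():
    # Root path just points visitors at the mounted sub-path.
    return {"message": "Gradio is running at: /subpath"}

app = gr.mount_gradio_app(app, demo, path="/subpath")
uvicorn.run(app, host="0.0.0.0", port=8000)
```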
main.py
CHANGED
@@ -45,7 +45,7 @@ def main():
 
     gr_L1 = lambda: gr.Row().style()
     gr_L2 = lambda scale: gr.Column(scale=scale)
-    if LAYOUT == "TOP-DOWN":
+    if LAYOUT == "TOP-DOWN":
         gr_L1 = lambda: DummyWith()
         gr_L2 = lambda scale: gr.Row()
         CHATBOT_HEIGHT /= 2
@@ -89,9 +89,12 @@ def main():
                 with gr.Row():
                     with gr.Accordion("更多函数插件", open=True):
                         dropdown_fn_list = [k for k in crazy_fns.keys() if not crazy_fns[k].get("AsButton", True)]
-                        with gr.
+                        with gr.Row():
                             dropdown = gr.Dropdown(dropdown_fn_list, value=r"打开插件列表", label="").style(container=False)
-                        with gr.
+                        with gr.Row():
+                            plugin_advanced_arg = gr.Textbox(show_label=True, label="高级参数输入区", visible=False,
+                                                             placeholder="这里是特殊函数插件的高级参数输入区").style(container=False)
+                        with gr.Row():
                             switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary")
                 with gr.Row():
                     with gr.Accordion("点击展开“文件上传区”。上传本地文件可供红色函数插件调用。", open=False) as area_file_up:
@@ -101,7 +104,7 @@ def main():
                     top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01,interactive=True, label="Top-p (nucleus sampling)",)
                     temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature",)
                     max_length_sl = gr.Slider(minimum=256, maximum=4096, value=512, step=1, interactive=True, label="Local LLM MaxLength",)
-                    checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "底部输入区", "输入清除键"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区")
+                    checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "底部输入区", "输入清除键", "插件参数区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区")
                     md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False)
 
         gr.Markdown(description)
@@ -123,11 +126,12 @@ def main():
             ret.update({area_input_secondary: gr.update(visible=("底部输入区" in a))})
             ret.update({clearBtn: gr.update(visible=("输入清除键" in a))})
             ret.update({clearBtn2: gr.update(visible=("输入清除键" in a))})
+            ret.update({plugin_advanced_arg: gr.update(visible=("插件参数区" in a))})
             if "底部输入区" in a: ret.update({txt: gr.update(value="")})
             return ret
-        checkboxes.select(fn_area_visibility, [checkboxes], [area_basic_fn, area_crazy_fn, area_input_primary, area_input_secondary, txt, txt2, clearBtn, clearBtn2] )
+        checkboxes.select(fn_area_visibility, [checkboxes], [area_basic_fn, area_crazy_fn, area_input_primary, area_input_secondary, txt, txt2, clearBtn, clearBtn2, plugin_advanced_arg] )
         # 整理反复出现的控件句柄组合
-        input_combo = [cookies, max_length_sl, md_dropdown, txt, txt2, top_p, temperature, chatbot, history, system_prompt]
+        input_combo = [cookies, max_length_sl, md_dropdown, txt, txt2, top_p, temperature, chatbot, history, system_prompt, plugin_advanced_arg]
         output_combo = [cookies, chatbot, history, status]
         predict_args = dict(fn=ArgsGeneralWrapper(predict), inputs=input_combo, outputs=output_combo)
         # 提交按钮、重置按钮
@@ -154,14 +158,19 @@ def main():
         # 函数插件-下拉菜单与随变按钮的互动
        def on_dropdown_changed(k):
             variant = crazy_fns[k]["Color"] if "Color" in crazy_fns[k] else "secondary"
-
-
+            ret = {switchy_bt: gr.update(value=k, variant=variant)}
+            if crazy_fns[k].get("AdvancedArgs", False): # 是否唤起高级插件参数区
+                ret.update({plugin_advanced_arg: gr.update(visible=True, label=f"插件[{k}]的高级参数说明:" + crazy_fns[k].get("ArgsReminder", [f"没有提供高级参数功能说明"]))})
+            else:
+                ret.update({plugin_advanced_arg: gr.update(visible=False, label=f"插件[{k}]不需要高级参数。")})
+            return ret
+        dropdown.select(on_dropdown_changed, [dropdown], [switchy_bt, plugin_advanced_arg] )
         def on_md_dropdown_changed(k):
             return {chatbot: gr.update(label="当前模型:"+k)}
         md_dropdown.select(on_md_dropdown_changed, [md_dropdown], [chatbot] )
         # 随变按钮的回调函数注册
         def route(k, *args, **kwargs):
-            if k in [r"打开插件列表", r"请先从插件列表中选择"]: return
+            if k in [r"打开插件列表", r"请先从插件列表中选择"]: return
             yield from ArgsGeneralWrapper(crazy_fns[k]["Function"])(*args, **kwargs)
         click_handle = switchy_bt.click(route,[switchy_bt, *input_combo, gr.State(PORT)], output_combo)
         click_handle.then(on_report_generated, [file_upload, chatbot], [file_upload, chatbot])
@@ -179,7 +188,7 @@ def main():
         print(f"如果浏览器没有自动打开,请复制并转到以下URL:")
         print(f"\t(亮色主题): http://localhost:{PORT}")
         print(f"\t(暗色主题): http://localhost:{PORT}/?__dark-theme=true")
-        def open():
+        def open():
             time.sleep(2)       # 打开浏览器
             webbrowser.open_new_tab(f"http://localhost:{PORT}/?__dark-theme=true")
         threading.Thread(target=open, name="open-browser", daemon=True).start()
@@ -189,5 +198,13 @@ def main():
     auto_opentab_delay()
     demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", share=False, favicon_path="docs/logo.png")
 
+    # 如果需要在二级路径下运行
+    # CUSTOM_PATH, = get_conf('CUSTOM_PATH')
+    # if CUSTOM_PATH != "/":
+    #     from toolbox import run_gradio_in_subpath
+    #     run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
+    # else:
+    #     demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
+
 if __name__ == "__main__":
     main()
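The `on_dropdown_changed` callback above returns a dict keyed by components, the standard Gradio pattern for updating several outputs at once. A minimal standalone sketch of the same pattern; the component names here are illustrative, not taken from main.py:

```python
# Sketch: the gr.update-dict callback pattern used by on_dropdown_changed.
import gradio as gr

with gr.Blocks() as demo:
    dropdown = gr.Dropdown(["插件A", "插件B"], label="")
    switchy_bt = gr.Button("请先从插件列表中选择")
    advanced_arg = gr.Textbox(label="高级参数输入区", visible=False)

    def on_changed(k):
        ret = {switchy_bt: gr.update(value=k)}
        # only 插件B wants the advanced-args box, mirroring the AdvancedArgs flag
        ret.update({advanced_arg: gr.update(visible=(k == "插件B"))})
        return ret

    dropdown.select(on_changed, [dropdown], [switchy_bt, advanced_arg])

demo.launch()
```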
request_llm/README.md
CHANGED
@@ -1,4 +1,4 @@
-#
+# 如何使用其他大语言模型
 
 ## ChatGLM
 
@@ -15,7 +15,7 @@ LLM_MODEL = "chatglm"
 
 
 ---
-## Text-Generation-UI (TGUI)
+## Text-Generation-UI (TGUI,调试中,暂不可用)
 
 ### 1. 部署TGUI
 ``` sh
request_llm/bridge_all.py
CHANGED
@@ -1,12 +1,12 @@
 
 """
-    该文件中主要包含2
+    该文件中主要包含2个函数,是所有LLM的通用接口,它们会继续向下调用更底层的LLM模型,处理多模型并行等细节
 
-
-    1. predict
+    不具备多线程能力的函数:正常对话时使用,具备完备的交互功能,不可多线程
+    1. predict(...)
 
-
-    2. predict_no_ui_long_connection
+    具备多线程调用能力的函数:在函数插件中被调用,灵活而简洁
+    2. predict_no_ui_long_connection(...)
 """
 import tiktoken
 from functools import lru_cache
@@ -210,7 +210,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
     return_string_collect.append( f"【{str(models[i])} 说】: <font color=\"{colors[i]}\"> {future.result()} </font>" )
 
     window_mutex[-1] = False # stop mutex thread
-    res = '<br/>\n\n---\n\n'.join(return_string_collect)
+    res = '<br/><br/>\n\n---\n\n'.join(return_string_collect)
     return res
 
 
request_llm/bridge_chatglm.py
CHANGED
@@ -32,6 +32,7 @@ class GetGLMHandle(Process):
         return self.chatglm_model is not None
 
     def run(self):
+        # 子进程执行
         # 第一次运行,加载参数
         retry = 0
         while True:
@@ -53,17 +54,24 @@ class GetGLMHandle(Process):
             self.child.send('[Local Message] Call ChatGLM fail 不能正常加载ChatGLM的参数。')
             raise RuntimeError("不能正常加载ChatGLM的参数!")
 
-        # 进入任务等待状态
         while True:
+            # 进入任务等待状态
             kwargs = self.child.recv()
+            # 收到消息,开始请求
             try:
                 for response, history in self.chatglm_model.stream_chat(self.chatglm_tokenizer, **kwargs):
                     self.child.send(response)
+                    # # 中途接收可能的终止指令(如果有的话)
+                    # if self.child.poll():
+                    #     command = self.child.recv()
+                    #     if command == '[Terminate]': break
             except:
                 self.child.send('[Local Message] Call ChatGLM fail.')
+            # 请求处理结束,开始下一个循环
             self.child.send('[Finish]')
 
     def stream_chat(self, **kwargs):
+        # 主进程执行
         self.parent.send(kwargs)
         while True:
             res = self.parent.recv()
@@ -92,8 +100,8 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
 
     # chatglm 没有 sys_prompt 接口,因此把prompt加入 history
     history_feedin = []
+    history_feedin.append(["What can I do?", sys_prompt])
     for i in range(len(history)//2):
-        history_feedin.append(["What can I do?", sys_prompt] )
         history_feedin.append([history[2*i], history[2*i+1]] )
 
     watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可
@@ -130,11 +138,17 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
         if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)  # 获取预处理函数(如果有的话)
         inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
 
+    # 处理历史信息
     history_feedin = []
+    history_feedin.append(["What can I do?", system_prompt] )
     for i in range(len(history)//2):
-        history_feedin.append(["What can I do?", system_prompt] )
         history_feedin.append([history[2*i], history[2*i+1]] )
 
+    # 开始接收chatglm的回复
     for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
         chatbot[-1] = (inputs, response)
-        yield from update_ui(chatbot=chatbot, history=history)
+        yield from update_ui(chatbot=chatbot, history=history)
+
+    # 总结输出
+    history.extend([inputs, response])
+    yield from update_ui(chatbot=chatbot, history=history)
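The ChatGLM context fix above is easy to miss in the diff noise: the `("What can I do?", sys_prompt)` seed pair used to be appended inside the loop, once per history turn, and is now added exactly once before it. A standalone sketch of the corrected construction:

```python
# Sketch of the ChatGLM history fix: the seed pair is added once, then the
# existing conversation is paired up as (user, assistant) turns.
history = ["q1", "a1", "q2", "a2"]
sys_prompt = "You are helpful."

history_feedin = [["What can I do?", sys_prompt]]          # seed pair, exactly once
for i in range(len(history) // 2):
    history_feedin.append([history[2*i], history[2*i+1]])  # (user, assistant) turns

print(history_feedin)
# [['What can I do?', 'You are helpful.'], ['q1', 'a1'], ['q2', 'a2']]
```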
request_llm/bridge_chatgpt.py
CHANGED
@@ -21,7 +21,7 @@ import importlib
 
 # config_private.py放自己的秘密如API和代理网址
 # 读取时首先看是否存在私密的config_private配置文件(不受git管控),如果有,则覆盖原config文件
-from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys
+from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history
 proxies, API_KEY, TIMEOUT_SECONDS, MAX_RETRY = \
     get_conf('proxies', 'API_KEY', 'TIMEOUT_SECONDS', 'MAX_RETRY')
 
@@ -145,7 +145,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
         yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面
         return
 
-    history.append(inputs); history.append("
+    history.append(inputs); history.append("")
 
     retry = 0
     while True:
@@ -198,14 +198,17 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
             chunk_decoded = chunk.decode()
             error_msg = chunk_decoded
             if "reduce the length" in error_msg:
-
-                history = []
+                if len(history) >= 2: history[-1] = ""; history[-2] = "" # 清除当前溢出的输入:history[-2] 是本次输入, history[-1] 是本次输出
+                history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'],
+                                       max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token'])) # history至少释放二分之一
+                chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)")
+                # history = []    # 清除历史
             elif "does not exist" in error_msg:
-                chatbot[-1] = (chatbot[-1][0], f"[Local Message] Model {llm_kwargs['llm_model']} does not exist.
+                chatbot[-1] = (chatbot[-1][0], f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在, 或者您没有获得体验资格.")
             elif "Incorrect API key" in error_msg:
-                chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY
+                chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY为由, 拒绝服务.")
             elif "exceeded your current quota" in error_msg:
-                chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. OpenAI
+                chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由, 拒绝服务.")
             elif "bad forward key" in error_msg:
                 chatbot[-1] = (chatbot[-1][0], "[Local Message] Bad forward key. API2D账户额度不足.")
             elif "Not enough point" in error_msg:
toolbox.py
CHANGED
@@ -24,23 +24,23 @@ def ArgsGeneralWrapper(f):
     """
     装饰器函数,用于重组输入参数,改变输入参数的顺序与结构。
     """
-    def decorated(cookies, max_length, llm_model, txt, txt2, top_p, temperature, chatbot, history, system_prompt, *args):
+    def decorated(cookies, max_length, llm_model, txt, txt2, top_p, temperature, chatbot, history, system_prompt, plugin_advanced_arg, *args):
         txt_passon = txt
         if txt == "" and txt2 != "": txt_passon = txt2
         # 引入一个有cookie的chatbot
         cookies.update({
-            'top_p':top_p,
+            'top_p':top_p,
             'temperature':temperature,
         })
         llm_kwargs = {
             'api_key': cookies['api_key'],
             'llm_model': llm_model,
-            'top_p':top_p,
+            'top_p':top_p,
             'max_length': max_length,
             'temperature':temperature,
         }
         plugin_kwargs = {
-
+            "advanced_arg": plugin_advanced_arg,
         }
         chatbot_with_cookie = ChatBotWithCookies(cookies)
         chatbot_with_cookie.write_list(chatbot)
@@ -219,7 +219,7 @@ def markdown_convertion(txt):
             return content
         else:
             return tex2mathml_catch_exception(content)
-
+
     def markdown_bug_hunt(content):
         """
         解决一个mdx_math的bug(单$包裹begin命令时多余<script>)
@@ -227,7 +227,7 @@ def markdown_convertion(txt):
         content = content.replace('<script type="math/tex">\n<script type="math/tex; mode=display">', '<script type="math/tex; mode=display">')
         content = content.replace('</script>\n</script>', '</script>')
         return content
-
+
 
     if ('$' in txt) and ('```' not in txt):  # 有$标识的公式符号,且没有代码段```的标识
         # convert everything to html format
@@ -248,7 +248,7 @@ def markdown_convertion(txt):
 def close_up_code_segment_during_stream(gpt_reply):
     """
     在gpt输出代码的中途(输出了前面的```,但还没输出完后面的```),补上后面的```
-
+
     Args:
         gpt_reply (str): GPT模型返回的回复字符串。
 
@@ -511,7 +511,7 @@ class DummyWith():
     它的作用是……额……没用,即在代码结构不变得情况下取代其他的上下文管理器。
     上下文管理器是一种Python对象,用于与with语句一起使用,
     以确保一些资源在代码块执行期间得到正确的初始化和清理。
-    上下文管理器必须实现两个方法,分别为 __enter__()和 __exit__()。
+    上下文管理器必须实现两个方法,分别为 __enter__()和 __exit__()。
     在上下文执行开始的情况下,__enter__()方法会在代码块被执行前被调用,
     而在上下文执行结束时,__exit__()方法则会被调用。
     """
@@ -520,3 +520,83 @@ class DummyWith():
 
     def __exit__(self, exc_type, exc_value, traceback):
         return
+
+def run_gradio_in_subpath(demo, auth, port, custom_path):
+    def is_path_legal(path: str)->bool:
+        '''
+        check path for sub url
+        path: path to check
+        return value: do sub url wrap
+        '''
+        if path == "/": return True
+        if len(path) == 0:
+            print("ilegal custom path: {}\npath must not be empty\ndeploy on root url".format(path))
+            return False
+        if path[0] == '/':
+            if path[1] != '/':
+                print("deploy on sub-path {}".format(path))
+                return True
+            return False
+        print("ilegal custom path: {}\npath should begin with \'/\'\ndeploy on root url".format(path))
+        return False
+
+    if not is_path_legal(custom_path): raise RuntimeError('Ilegal custom path')
+    import uvicorn
+    import gradio as gr
+    from fastapi import FastAPI
+    app = FastAPI()
+    if custom_path != "/":
+        @app.get("/")
+        def read_main():
+            return {"message": f"Gradio is running at: {custom_path}"}
+    app = gr.mount_gradio_app(app, demo, path=custom_path)
+    uvicorn.run(app, host="0.0.0.0", port=port) # , auth=auth
+
+
+def clip_history(inputs, history, tokenizer, max_token_limit):
+    """
+    reduce the length of history by clipping.
+    this function search for the longest entries to clip, little by little,
+    until the number of token of history is reduced under threshold.
+    通过裁剪来缩短历史记录的长度。
+    此函数逐渐地搜索最长的条目进行剪辑,
+    直到历史记录的标记数量降低到阈值以下。
+    """
+    import numpy as np
+    from request_llm.bridge_all import model_info
+    def get_token_num(txt):
+        return len(tokenizer.encode(txt, disallowed_special=()))
+    input_token_num = get_token_num(inputs)
+    if input_token_num < max_token_limit * 3 / 4:
+        # 当输入部分的token占比小于限制的3/4时,裁剪时
+        # 1. 把input的余量留出来
+        max_token_limit = max_token_limit - input_token_num
+        # 2. 把输出用的余量留出来
+        max_token_limit = max_token_limit - 128
+        # 3. 如果余量太小了,直接清除历史
+        if max_token_limit < 128:
+            history = []
+            return history
+    else:
+        # 当输入部分的token占比 > 限制的3/4时,直接清除历史
+        history = []
+        return history
+
+    everything = ['']
+    everything.extend(history)
+    n_token = get_token_num('\n'.join(everything))
+    everything_token = [get_token_num(e) for e in everything]
+
+    # 截断时的颗粒度
+    delta = max(everything_token) // 16
+
+    while n_token > max_token_limit:
+        where = np.argmax(everything_token)
+        encoded = tokenizer.encode(everything[where], disallowed_special=())
+        clipped_encoded = encoded[:len(encoded)-delta]
+        everything[where] = tokenizer.decode(clipped_encoded)[:-1]    # -1 to remove the may-be illegal char
+        everything_token[where] = get_token_num(everything[where])
+        n_token = get_token_num('\n'.join(everything))
+
+    history = everything[1:]
+    return history
version
CHANGED
@@ -1,5 +1,5 @@
 {
-  "version": 3.
+  "version": 3.2,
   "show_feature": true,
-  "new_feature": "添加支持清华ChatGLM和GPT-4 <-> 改进架构,支持与多个LLM模型同时对话 <-> 添加支持API2D(国内,可支持gpt4
+  "new_feature": "保存对话功能 <-> 解读任意语言代码+同时询问任意的LLM组合 <-> 添加联网(Google)回答问题插件 <-> 修复ChatGLM上下文BUG <-> 添加支持清华ChatGLM和GPT-4 <-> 改进架构,支持与多个LLM模型同时对话 <-> 添加支持API2D(国内,可支持gpt4)"
 }