
KwaiAgents (Github) is a series of Agent-related works open-sourced by KwaiKEG from Kuaishou Technology. The open-sourced content includes:

  1. KAgentSys-Lite: an experimental Agent Loop built on open-source search engines, browsers, and time, calendar, weather, and other tools; compared with the system in the paper, it lacks only the memory mechanism and some search capabilities.
  2. KAgentLMs: a series of large language models with Agent capabilities such as planning, reflection, and tool-use, obtained through the Meta-agent tuning proposed in the paper.
  3. KAgentInstruct: instruction-tuning data generated by the Meta-agent in the paper.
  4. KAgentBench: over 3,000 human-edited, automated evaluation instances for testing Agent capabilities, covering planning, tool-use, reflection, concluding, and profiling.

User Guide

Direct usage

For a usage tutorial, you can refer to baichuan-inc/Baichuan2-13B-Base:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model (trust_remote_code is required for Baichuan2-based checkpoints)
tokenizer = AutoTokenizer.from_pretrained("kwaikeg/kagentlms_baichuan2_13b_mat", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("kwaikeg/kagentlms_baichuan2_13b_mat", device_map="auto", trust_remote_code=True)

# Tokenize the prompt and move it to the GPU
inputs = tokenizer('登鹳雀楼->王之涣\n夜雨寄北->', return_tensors='pt')
inputs = inputs.to('cuda:0')

# Generate a continuation and decode it
pred = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))

AgentLMs as service

We recommend using vLLM and FastChat to deploy the model inference service. First, you need to install the corresponding packages (for detailed usage, please refer to the documentation of the two projects):

pip install "fschat[model_worker,webui]"
pip install vllm==0.2.0
pip install transformers==4.33.2

To deploy KAgentLMs, you first need to start the controller in one terminal.

python -m fastchat.serve.controller

Second, run the following command in another terminal to deploy a single-GPU inference service:

python -m fastchat.serve.vllm_worker --model-path $model_path --trust-remote-code

Here $model_path is the local path of the downloaded model. If the GPU does not support bfloat16, you can add --dtype half to the command line.
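For example, a half-precision launch might look like the following (the model path shown is only illustrative):

python -m fastchat.serve.vllm_worker --model-path /path/to/kagentlms_baichuan2_13b_mat --trust-remote-code --dtype half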

Third, start the REST API server in a third terminal.

python -m fastchat.serve.openai_api_server --host localhost --port 8888

Finally, you can invoke the model with curl, using the same calling format as the OpenAI API. Here's an example:

curl http://localhost:8888/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model": "kagentlms_baichuan2_13b_mat", "messages": [{"role": "user", "content": "Who is Andy Lau"}]}'

Citation

@article{pan2023kwaiagents,
  author    = {Haojie Pan and
               Zepeng Zhai and
               Hao Yuan and
               Yaojia Lv and
               Ruiji Fu and
               Ming Liu and
               Zhongyuan Wang and
               Bing Qin
               },
  title     = {KwaiAgents: Generalized Information-seeking Agent System with Large Language Models},
  journal   = {CoRR},
  volume    = {abs/2312.04889},
  year      = {2023}
}