
csg-wukong-1B-VL-v0.1

OpenCSG

[OpenCSG Community] [github] [wechat] [Twitter]

In OpenCSG, 'Open' stands for openness and open source. The 'C' represents Converged resources, indicating the integration and full utilization of hybrid, heterogeneous resources. The 'S' stands for Software refined, signifying software delivery redefined and driven by large models. The 'G' represents Generative LM, denoting widely accessible, inclusive, and democratized generative large models.

The vision of OpenCSG is to empower every industry, every company, and every individual to own their models. We adhere to the principles of openness and open source, making OpenCSG's large-model software stack available to the community. Everyone is welcome to use it, provide feedback, and contribute.

Model Description

CSG-VL is a family of small but strong multimodal models. It offers multiple plug-and-play vision encoders, such as EVA-CLIP and SigLIP, and language backbones, including Wukong-1B, Llama-3-8B, Phi-1.5, StableLM-2, Qwen1.5, and Phi-2.
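
To check which backbone and vision encoder a given checkpoint pairs, you can inspect its config. This is a minimal sketch; the field name mm_vision_tower follows LLaVA-style conventions and is an assumption, not something this card guarantees:

from transformers import AutoConfig

config = AutoConfig.from_pretrained('opencsg/csg-wukong-1B-VL-v0.1',
                                    trust_remote_code=True)
print(config.model_type)                         # language backbone family
print(getattr(config, 'mm_vision_tower', None))  # vision encoder, if the config exposes it (assumed field name)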

Quickstart

Here is a code snippet showing how to use the model with transformers.

Before running the snippet, you need to install the following dependencies:

pip install torch transformers accelerate pillow

If there is enough CUDA memory, the snippet runs faster with CUDA_VISIBLE_DEVICES=0 and the default device set to 'cuda'.
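
For example, to pin the snippet to the first GPU (standard PyTorch/CUDA behavior, nothing model-specific), set the variable before CUDA is initialized:

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'  # must be set before torch initializes CUDA

import torch
torch.set_default_device('cuda')  # 'cuda' now refers to the single visible GPU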

Users, especially those in mainland China, may want to refer to OpenCSG.com.

import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
import warnings

# disable some warnings
transformers.logging.set_verbosity_error()
transformers.logging.disable_progress_bar()
warnings.filterwarnings('ignore')

# set device
torch.set_default_device('cpu')  # or 'cuda'

model_name = 'opencsg/csg-wukong-1B-VL-v0.1'
# create model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map='auto',
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    trust_remote_code=True)

# text prompt
prompt = 'What is the astronaut holding in his hand?'
text = f"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\n{prompt} ASSISTANT:"
# tokenize the text around the <image> placeholder, then splice in the image
# placeholder token id (-200) that the model expects
text_chunks = [tokenizer(chunk).input_ids for chunk in text.split('<image>')]
input_ids = torch.tensor(text_chunks[0] + [-200] + text_chunks[1], dtype=torch.long).unsqueeze(0).to(model.device)
# load and preprocess the image on the same device/dtype as the model
image = Image.open('example_1.png')
image_tensor = model.process_images([image], model.config).to(dtype=model.dtype, device=model.device)

# generate
output_ids = model.generate(
    input_ids,
    images=image_tensor,
    max_new_tokens=100,
    use_cache=True)[0]

print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
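
The prompt-building steps above can be wrapped in a small helper for asking further questions about the same image; this is only a refactoring of the snippet above, not a separate API:

def build_inputs(tokenizer, question, device):
    # same chat template as above, with <image> marking where the image goes
    text = ("A chat between a curious user and an artificial intelligence assistant. "
            "The assistant gives helpful, detailed, and polite answers to the user's questions. "
            f"USER: <image>\n{question} ASSISTANT:")
    chunks = [tokenizer(chunk).input_ids for chunk in text.split('<image>')]
    # -200 is the image placeholder token id spliced between the text chunks
    return torch.tensor(chunks[0] + [-200] + chunks[1], dtype=torch.long).unsqueeze(0).to(device)

input_ids = build_inputs(tokenizer, 'What color is his suit?', model.device)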

Hardware

  • GPUs: 16 H800
  • Training time: 43 days

Software
