|
---
license: other
license_name: deepseek-license
license_link: LICENSE
---
|
|
|
|
|
|
|
### 1. Introduction of Deepseek Coder |
|
|
|
Deepseek Coder comprises a series of code language models, each pre-trained on 2T tokens composed of 87% code and 13% natural language in English and Chinese. We provide various sizes of the code model, ranging from 1.3B to 33B parameters. Each model is pre-trained on a project-level code corpus with a 16K window size and an extra fill-in-the-blank task, so that it supports project-level code completion and infilling. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.
|
|
|
- **Massive Training Data**: Trained on 2T tokens, including 87% code and 13% natural language data in both English and Chinese.
|
|
|
- **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements. |
|
|
|
- **Superior Model Performance**: State-of-the-art performance among publicly available code models on HumanEval, MultiPL-E, MBPP, DS-1000, and APPS benchmarks. |
|
|
|
- **Advanced Code Completion Capabilities**: A window size of 16K and a fill-in-the-blank task, supporting project-level code completion and infilling tasks. |
|
|
|
|
|
|
|
### 2. Model Summary |
|
deepseek-coder-1.3b-base is a 1.3B-parameter model with Multi-Head Attention, trained on 1 trillion tokens.
|
- **Home Page:** [DeepSeek](https://deepseek.com/) |
|
- **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder) |
|
- **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/) |
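
As a quick sanity check on the size quoted above, the parameter count can be read directly from the loaded checkpoint. The snippet below is a minimal sketch assuming the `transformers` library is installed and the weights fit in memory; it is illustrative rather than part of the official instructions.

```python
from transformers import AutoModelForCausalLM

# Load the base checkpoint and report its total parameter count.
model = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True
)
print(f"{model.num_parameters() / 1e9:.2f}B parameters")  # expected to be roughly 1.3B
```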
|
|
|
|
|
### 3. How to Use |
|
Here are some examples of how to use our model.
|
#### 1) Code Completion
|
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the tokenizer and model, and move the model to the GPU.
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True).cuda()

# Prompt with a comment describing the code to be completed.
input_text = "#write a quick sort algorithm"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
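
Note that `max_length` above counts the prompt tokens as well as the completion. If you prefer to budget the completion independently of prompt length, a variant using `max_new_tokens` (as the repository-level example below does) is a reasonable alternative; the value of 256 is an illustrative choice, not a recommendation from the model card.

```python
# Generate up to 256 new tokens regardless of prompt length.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```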
|
|
|
#### 2) Code Insertion
|
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True).cuda()

# Fill-in-the-middle prompt: the code before and after the gap is wrapped in FIM
# sentinel tokens, and the model generates the missing middle.
input_text = """<|fim▁begin|>def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    left = []
    right = []
<|fim▁hole|>
        if arr[i] < pivot:
            left.append(arr[i])
        else:
            right.append(arr[i])
    return quick_sort(left) + [pivot] + quick_sort(right)<|fim▁end|>"""
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
# Print only the generated infill, stripping the echoed prompt.
print(tokenizer.decode(outputs[0], skip_special_tokens=True)[len(input_text):])
```
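
The insertion prompt above is just the code before and after the gap wrapped in the three sentinel tokens. A small helper like the one below (a hypothetical convenience function, not part of the DeepSeek Coder API) makes that structure explicit; the model's completion is the text to insert at the hole, which is why the example strips the first `len(input_text)` characters before printing.

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Wrap a prefix/suffix pair in DeepSeek Coder's fill-in-the-middle sentinel tokens."""
    return f"<|fim▁begin|>{prefix}<|fim▁hole|>{suffix}<|fim▁end|>"

# Example: rebuild the quick_sort prompt from its prefix and suffix.
prefix = "def quick_sort(arr):\n    if len(arr) <= 1:\n        return arr\n    pivot = arr[0]\n    left = []\n    right = []\n"
suffix = "\n        if arr[i] < pivot:\n            left.append(arr[i])\n        else:\n            right.append(arr[i])\n    return quick_sort(left) + [pivot] + quick_sort(right)"
input_text = build_fim_prompt(prefix, suffix)
```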
|
|
|
#### 3) Repository-Level Code Completion
|
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True).cuda()

# Repository-level prompt: several files are concatenated, each preceded by a `#<filename>` comment.
input_text = """#utils.py
import torch
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

def load_data():
    iris = datasets.load_iris()
    X = iris.data
    y = iris.target

    # Standardize the data
    scaler = StandardScaler()
    X = scaler.fit_transform(X)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

    # Convert numpy data to PyTorch tensors
    X_train = torch.tensor(X_train, dtype=torch.float32)
    X_test = torch.tensor(X_test, dtype=torch.float32)
    y_train = torch.tensor(y_train, dtype=torch.int64)
    y_test = torch.tensor(y_test, dtype=torch.int64)

    return X_train, X_test, y_train, y_test

def evaluate_predictions(y_test, y_pred):
    return accuracy_score(y_test, y_pred)
#model.py
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

class IrisClassifier(nn.Module):
    def __init__(self):
        super(IrisClassifier, self).__init__()
        self.fc = nn.Sequential(
            nn.Linear(4, 16),
            nn.ReLU(),
            nn.Linear(16, 3)
        )

    def forward(self, x):
        return self.fc(x)

    def train_model(self, X_train, y_train, epochs, lr, batch_size):
        criterion = nn.CrossEntropyLoss()
        optimizer = optim.Adam(self.parameters(), lr=lr)

        # Create DataLoader for batches
        dataset = TensorDataset(X_train, y_train)
        dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True)

        for epoch in range(epochs):
            for batch_X, batch_y in dataloader:
                optimizer.zero_grad()
                outputs = self(batch_X)
                loss = criterion(outputs, batch_y)
                loss.backward()
                optimizer.step()

    def predict(self, X_test):
        with torch.no_grad():
            outputs = self(X_test)
            _, predicted = outputs.max(1)
            return predicted.numpy()
#main.py
from utils import load_data, evaluate_predictions
from model import IrisClassifier as Classifier

def main():
    # Model training and evaluation
"""
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=140)
print(tokenizer.decode(outputs[0]))
```
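
The repository-level prompt is nothing more than the relevant source files concatenated, each preceded by a `#<filename>` comment line. The sketch below shows one way to pack real files in that format; the helper name and the choice to use each file's base name are assumptions rather than part of the model card, and the packed prompt still has to fit within the 16K window.

```python
from pathlib import Path

def pack_repository(paths):
    """Concatenate source files into one prompt, each prefixed with a '#<filename>' line."""
    parts = [f"#{Path(p).name}\n{Path(p).read_text()}" for p in paths]
    return "\n".join(parts)

# Mirror the utils.py / model.py / main.py layout used in the example above.
input_text = pack_repository(["utils.py", "model.py", "main.py"])
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
```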
|
|
|
|
|
|
|
### 4. License |
|
This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use. |
|
|
|
See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details. |
|
|
|
### 5. Contact |
|
|
|
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]). |
|
|