
The DeepSeek-1.3B model fine-tuned on CVEFix for automated vulnerability fixing.

Training Details

  • batch size: 2
  • learning rate: 3e-5
  • epochs: 2
  • training steps: 5516
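As a back-of-envelope consistency check (not stated on the card): with batch size 2 and 2 epochs, 5516 optimizer steps implies a training set of roughly 5,516 examples, assuming one optimizer step per batch with no gradient accumulation on a single device.

```python
# Reported hyperparameters from the card above.
batch_size = 2
epochs = 2
total_steps = 5516

# Assumes one optimizer step per batch and no gradient
# accumulation (an assumption, not stated on the card).
steps_per_epoch = total_steps // epochs          # 2758
implied_examples = steps_per_epoch * batch_size  # roughly 5516 examples

print(implied_examples)
```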

Prompt

{at most 20 lines before the buggy lines} {buggy lines} {at most 20 lines after the buggy lines}
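Assembling that template from a source file is straightforward list slicing; a minimal sketch, where the function name is mine and the marker comments follow the example below (the exact formatting used during training is an assumption):

```python
def build_prompt(lines, bug_start, bug_end, context=20):
    """Build the prompt: up to `context` lines before the buggy span
    [bug_start, bug_end), the buggy lines wrapped in markers, up to
    `context` lines after, then the cue for the model's completion."""
    before = lines[max(0, bug_start - context):bug_start]
    buggy = lines[bug_start:bug_end]
    after = lines[bug_end:bug_end + context]
    return "\n".join(
        before
        + ["// buggy lines start:"]
        + buggy
        + ["// buggy lines end"]
        + after
        + ["// fixed lines:"]
    )
```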

TfLiteStatus HardSwishEval(TfLiteContext* context, TfLiteNode* node) {
  HardSwishData* data = static_cast<HardSwishData*>(node->user_data);
// buggy lines start:
  const TfLiteTensor* input = GetInput(context, node, 0);
  TfLiteTensor* output = GetOutput(context, node, 0);
// buggy lines end
  switch (input->type) {
    case kTfLiteFloat32: {
      if (kernel_type == kReference) {
        reference_ops::HardSwish(
            GetTensorShape(input), GetTensorData<float>(input),
            GetTensorShape(output), GetTensorData<float>(output));
      } else {
        optimized_ops::HardSwish(
            GetTensorShape(input), GetTensorData<float>(input),
            GetTensorShape(output), GetTensorData<float>(output));
      }
      return kTfLiteOk;
    } break;
    case kTfLiteUInt8: {
      HardSwishParams& params = data->params;
      if (kernel_type == kReference) {
        reference_ops::HardSwish(
            params, GetTensorShape(input), GetTensorData<uint8_t>(input),
            GetTensorShape(output), GetTensorData<uint8_t>(output));
      } else {
// fixed lines:

The model is trained to take the prompt above and generate the fixed lines that replace the buggy lines:

  const TfLiteTensor* input;
  TF_LITE_ENSURE_OK(context, GetInputSafe(context, node, 0, &input));
  TfLiteTensor* output;
  TF_LITE_ENSURE_OK(context, GetOutputSafe(context, node, 0, &output));
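Applying the model's output is then a matter of splicing the generated lines in place of the buggy span; a minimal sketch (the helper name and half-open line-range convention are mine):

```python
def apply_fix(lines, bug_start, bug_end, fixed_lines):
    """Return a new line list with the buggy span [bug_start, bug_end)
    replaced by the model-generated fixed lines."""
    return lines[:bug_start] + fixed_lines + lines[bug_end:]
```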