---
license: mit
pipeline_tag: text-generation
tags:
- code
model-index:
- name: VeriGen
  results:
  - task:
      type: text-generation
    dataset:
      type: 
      name: 
    
extra_gated_prompt: >-
  ## Model License Agreement

  Please read the BigCode [OpenRAIL-M
  license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
  agreement before accepting it.
    
extra_gated_fields:
  I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
---


# VeriGen


## Table of Contents

1. [Dataset Summary](#dataset-summary)
2. [Use](#use)
3. [Intended Use](#intended-use)
4. [License](#license)
5. [Citation](#citation)

## Dataset Summary

- The dataset comprises Verilog modules as entries, retrieved from the GitHub dataset on BigQuery.
- For training [models](https://huggingface.co/shailja/fine-tuned-codegen-2B-Verilog), we filtered out entries exceeding 20,000 characters and removed duplicates (exact duplicates, ignoring whitespace); a sketch of this kind of filtering appears below.

- **Paper:** [Benchmarking Large Language Models for Automated Verilog RTL Code Generation](https://arxiv.org/abs/2212.11140)
- **Point of Contact:** [contact@shailja](mailto:[email protected])
- **Languages:** Verilog (Hardware Description Language)
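
The exact preprocessing script is not part of this card. Below is a minimal sketch of the filtering described above, assuming a hypothetical `text` column that holds each module's source (the real column name may differ; check `ds.column_names`):

```python
# pip install datasets

from datasets import load_dataset

TEXT_COLUMN = "text"   # hypothetical column name; verify against ds.column_names
MAX_CHARS = 20_000     # length cutoff described in the summary

ds = load_dataset("shailja/Verilog_GitHub", split="train")

# Drop entries whose source exceeds the character limit
ds = ds.filter(lambda ex: len(ex[TEXT_COLUMN]) <= MAX_CHARS)

# Remove exact duplicates, ignoring whitespace differences
seen = set()

def keep_first_occurrence(ex):
    key = "".join(ex[TEXT_COLUMN].split())  # collapse all whitespace
    if key in seen:
        return False
    seen.add(key)
    return True

ds = ds.filter(keep_first_occurrence)
print(ds)
```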

### Data Splits

The dataset only contains a train split.

### Use
```python
# pip install datasets

from datasets import load_dataset

# Stream the train split so the full dataset is not downloaded up front
ds = load_dataset("shailja/Verilog_GitHub", streaming=True, split="train")

# Print the first entry
print(next(iter(ds)))

# OUTPUT:
```
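
For ad-hoc analysis, the split can also be materialized as a pandas DataFrame; a small sketch (non-streaming, so it downloads the full split):

```python
from datasets import load_dataset

# Non-streaming load, then convert to pandas for inspection
ds = load_dataset("shailja/Verilog_GitHub", split="train")
df = ds.to_pandas()

print(df.shape)             # number of rows and columns
print(df.columns.tolist())  # column names
```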
 
### Intended Use

The dataset consists of source code from a range of GitHub repositories. As such, it can potentially include non-compilable, low-quality, and vulnerable code.

### Attribution & Other Requirements

The dataset was not filtered for permissively licensed code only. Models trained on it can nevertheless generate source code verbatim from the dataset, and that code's license might require attribution and/or impose other requirements that must be respected.


## License
The dataset is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
## Citation
```
@misc{https://doi.org/10.48550/arxiv.2212.11140,
  doi = {10.48550/ARXIV.2212.11140},
  url = {https://arxiv.org/abs/2212.11140},
  author = {Thakur, Shailja and Ahmad, Baleegh and Fan, Zhenxing and Pearce, Hammond and Tan, Benjamin and Karri, Ramesh and Dolan-Gavitt, Brendan and Garg, Siddharth},
  title = {Benchmarking Large Language Models for Automated Verilog RTL Code Generation},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```