polish, Santacoder emissions calculation
#8 by sebpaquet - opened
README.md CHANGED
The overarching technical goal envisioned before the project was announced was to train and release a 12-billion-parameter model that matches Codex as described in [this research paper](https://arxiv.org/abs/2107.03374). This OpenAI model has not been released and is only available as an API service under the name code-cushman-001, although it is not entirely clear whether the served model matches the one described in the paper. It has also been [suggested](https://thakkarparth007.github.io/copilot-explorer/posts/copilot-internals#other-random-tidbits) that this model is used in GitHub Copilot. Our original plan was to compare model performance on HumanEval and APPS, but along the way we recognized the need to create an extensive evaluation suite for code LLMs.
The project ended up breaking the challenge into development phases, starting with the collection of permissively licensed repositories from GitHub. This initial phase was carried out by the ServiceNow team over several months prior to the official launch of BigCode. It involved inventorying active GitHub repository names, managing the effort to download those repositories, filtering to exclude large files and duplicates, and detecting the licenses used for each repository. This effort ultimately resulted in the creation of [The Stack](https://arxiv.org/abs/2211.15533), a source code dataset that marked the first milestone for the project.
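As a rough illustration of the size filtering and exact-deduplication steps described above, the sketch below drops oversized files and keeps only the first copy of identical file contents. The threshold and helper names are illustrative assumptions, not the actual values or code used to build The Stack.

```python
import hashlib
from pathlib import Path

MAX_FILE_SIZE = 1_000_000  # illustrative 1 MB cutoff, not the threshold used for The Stack

def iter_kept_files(repo_dir: str):
    """Yield files from a cloned repository, skipping oversized files and exact duplicates."""
    seen_hashes = set()
    for path in Path(repo_dir).rglob("*"):
        if not path.is_file() or path.stat().st_size > MAX_FILE_SIZE:
            continue
        content = path.read_bytes()
        digest = hashlib.sha256(content).hexdigest()
        if digest in seen_hashes:  # exact duplicate of a file we already kept
            continue
        seen_hashes.add(digest)
        yield path, content
```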
Two cycles of model development were conducted by the BigCode community. The first cycle took place in November and December 2022 and culminated in the release of SantaCoder, a 1.1B parameter model trained on the Java, JavaScript, and Python code from The Stack. In the next cycle, held from January to April 2023, the community scaled up its efforts and trained 15.5B parameter models on 1T tokens from The Stack. The resulting StarCoder models match or surpass the code-cushman-001 model on a variety of coding benchmarks.
### Social Impact Dimensions and Considerations
BigCode has 675 participants with 629 members across the research community (including from Hugging Face and ServiceNow) from 62 countries. The top 5 countries include USA (222), India (60), UK (36), Canada (35), and Germany (30). The community communicates across a total of 48 Slack channels, including Steering Committee (3 channels), Working Groups (7 channels), Task Forces (25 channels), and General Community (13 channels).
Everyone who joins the project is required to follow the [BigCode Code of Conduct](https://www.bigcode-project.org/docs/about/code_of_conduct/) and to understand [how we manage intellectual property](https://www.bigcode-project.org/docs/about/ip/), and is encouraged to introduce themselves and to join any working group or task force that aligns with their interests. If no existing group covers their interests, they are encouraged to pitch their ideas and to take a leadership role in a new working group or task force, subject to the approval of the Steering Committee. Researchers who wish to cite StarCoder are asked to use the DOI link at the top of this page.
### Project Governance
On [December 22, 2022](https://twitter.com/BigCodeProject/status/1605958778330849281?s=20) we released [SantaCoder](https://huggingface.co/bigcode/santacoder), a 1.1B multilingual large language model for code that outperforms much larger open-source models on both left-to-right generation and infilling.
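As an illustration of left-to-right generation with the released checkpoint, here is a minimal sketch along the lines of the model card; the exact arguments (for example, trust_remote_code for the model's custom modeling code) should be checked against the current model card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/santacoder"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# trust_remote_code is needed because the checkpoint ships custom modeling code
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

inputs = tokenizer("def print_hello_world():", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```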
The SantaCoder models are licensed under an open and responsible AI model license (CodeML [OpenRAIL-M v0.1](https://huggingface.co/spaces/bigcode/license)). These are AI-specific licenses enabling free use and distribution of the model while setting specific use restrictions (e.g. malware generation). We published a [detailed technical report](https://arxiv.org/abs/2301.03988) that describes all the key contributions to the development of the model.
On [February 1, 2023](https://twitter.com/utopiah/status/1620722505664319488?s=20) members of the BigCode core team were invited to meet with the European Parliament Innovation Lab. At this meeting we [shared details](https://twitter.com/utopiah/status/1620735424351322114?s=20) of the project and answered questions from members of the Lab. Engaging with policymakers and regulators is an important part of the journey to inform and educate key stakeholders from the broader AI ecosystem.
On [April 13, 2023](https://twitter.com/harmdevries77/status/1646524056538316805?s=20), inspired by discussions in the training working group, Harm de Vries shared an analysis of the Chinchilla scaling laws, examining how much additional compute is needed to train smaller LLMs. These insights suggest we have not reached the limit of training smaller models on more tokens, an important consideration for future research.
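As a rough sketch of the kind of back-of-the-envelope reasoning involved, the snippet below uses the common C ≈ 6·N·D approximation for training FLOPs and the Chinchilla-optimal rule of thumb of roughly 20 tokens per parameter; the figures are illustrative and not taken from the analysis referenced above.

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute with the common C ~= 6 * N * D rule of thumb."""
    return 6 * n_params * n_tokens

# Chinchilla-optimal training uses roughly 20 tokens per parameter; training a
# smaller model well past that point costs extra compute but yields a model
# that is cheaper to serve.
for n_params in (1.1e9, 15.5e9):
    for n_tokens in (20 * n_params, 1e12):
        print(f"{n_params / 1e9:.1f}B params, {n_tokens / 1e9:.0f}B tokens: "
              f"{training_flops(n_params, n_tokens):.2e} FLOPs")
```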
On May 4, 2023 BigCode announced StarCoder and StarCoderBase, two code LLMs trained on permissively licensed data from GitHub, including data from 80+ programming languages, Git commits, GitHub issues, and Jupyter notebooks. Similar to [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/), StarCoderBase is a ~15B parameter model trained on 1 trillion tokens. On top of StarCoderBase, the StarCoder variant was trained on an additional 35B tokens of Python.
### Supporting Resources and Funding
Understanding the costs of a project like BigCode can help ground conversations about the trade-offs involved in developing code LLM technology more broadly, and can help clarify how various private and public institutions may participate in this development and allocate resources to maximize its overall benefits. We outline the major costs in terms of computation resources, human participation, and organization.
**Data collection**
ServiceNow handled the data collection effort, assembling a raw dataset of 5.28B files with a total size of 92 TB and filtering it down to build The Stack.
**Compute and emissions**
We trained SantaCoder on the ServiceNow cluster using 96 Tesla V100 GPUs, and StarCoder on a Hugging Face GPU cluster with 512 A100 80GB GPUs distributed across 64 nodes.
We report the carbon footprint of training these models:
* SantaCoder: Based on the total number of GPU hours that training took (14,284) and an average power usage of 300W per GPU, this adds up to 4285 kWh of electricity consumed during the training process. Multiplied by the carbon intensity of the energy of the Montreal location (0.029 kgCO2e per kWh) and assuming an average Power Usage Effectiveness of 1.2, this results in 124 kg of CO2eq emitted.
* StarCoderBase: Based on the total number of GPU hours that training took (320,256) and an average power usage of 280W per GPU, this adds up to 89,671.68 kWh of electricity. Multiplied by the carbon intensity of the energy of the us-west-2 AWS location (0.15495 kgCO2e per kWh) and assuming an average Power Usage Effectiveness of 1.2 across AWS datacenters, this results in total emissions of 16.68 tonnes of CO2eq.
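The arithmetic behind these estimates is straightforward to reproduce: GPU hours times average power gives energy in kWh, which is then scaled by the grid's carbon intensity and the datacenter PUE. The sketch below does exactly that; note that the reported 124 kg figure for SantaCoder matches the calculation without the PUE factor, while the StarCoderBase figure includes it.

```python
def co2_kg(gpu_hours: float, avg_power_w: float, carbon_intensity_kg_per_kwh: float,
           pue: float = 1.2) -> float:
    """GPU energy (kWh) scaled by datacenter PUE and grid carbon intensity (kgCO2e/kWh)."""
    energy_kwh = gpu_hours * avg_power_w / 1000
    return energy_kwh * pue * carbon_intensity_kg_per_kwh

print(co2_kg(14_284, 300, 0.029, pue=1.0))  # SantaCoder: ~124 kg (PUE not applied)
print(co2_kg(14_284, 300, 0.029))           # SantaCoder: ~149 kg with a PUE of 1.2
print(co2_kg(320_256, 280, 0.15495))        # StarCoderBase: ~16,674 kg, i.e. ~16.7 tonnes
```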
**ServiceNow and Hugging Face employees working on BigCode**
The estimated time commitment from employees of the host institutions corresponds to 6 full-time employees for the duration of the project.
**Estimated volunteer hours across the project**
The time commitment from volunteers is harder to estimate given the large number of participants and the variety of time investments across phases and participants. At a minimum, we estimate that the overall time commitment from volunteers matched the time commitment from employees of the host institutions.
**Community events and appreciation** ServiceNow and Hugging Face organized a community meetup that coincided with NeurIPS 2022 in New Orleans, USA. The budget for the event was approximately $6,000 from ServiceNow Research for the venue and hospitality. Hugging Face also provided promotional items, including stickers and t-shirts, at the event, and sent named contributors to the research paper complimentary BigCode-branded t-shirts.
**Data annotation** Hugging Face funded the data annotation services from Toloka, with a total outlay of $39,000 paid to crowd workers. Since this was a research project, Toloka provided free consulting and agreed to waive the fees for running the annotation tasks on their platform.
**The Stack Dataset Access and Management** The StarCoder model was trained on The Stack v1.2, which exclusively contains 6.4TB of [permissively licensed](https://blueoakcouncil.org) data from GitHub repositories, processed from an original source dataset of 102TB. Access and management follow this schema:
* **What data can be accessed:** the 6.4TB of processed data can be accessed through the Hugging Face Hub (see the access sketch after this list), while the original 102TB are only accessible to the stewards of the project for the purposes of enabling the research and supporting future internal and external requirements that may arise, for example to search the full dataset to recall licenses and to determine code provenance and attribution.
* **What are the conditions for accessing the data:** users are able to inspect the dataset via the Dataset Card and embedded Dataset Preview, but are required to agree to the [Terms of Use](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) for The Stack before being able to download it. This includes the requirements to 1) abide by the terms of original source code licenses, including attribution clauses when required (The Stack provides provenance information for each data point), 2) agree to update copies of The Stack to the most recent usable version specified [here](https://huggingface.co/datasets/bigcode/the-stack/discussions/7), and 3) include the Terms of Use and require users to agree to it if a copy is to be hosted, shared, or otherwise provided. As of May 3, 2023, The Stack had been downloaded 50,200 times.
* **How can a data subject request that their data be removed:** we provide an opt-out form that lets people opt out of having any code or text they put on GitHub be included in The Stack. Additionally, anyone who is concerned about specific data they have encountered in The Stack, for example relating to PII, malicious code, or code that has an incorrect license or attribution, can email contact@bigcode-project.org. At the time of the data processing for the StarCoder model training, 44 people had opted out of The Stack and the associated repositories were removed.
* **How often is the data updated:** for as long as we are maintaining The Stack dataset, we will provide regular updates to the dataset to remove data that has been flagged since the last version. This includes data that has been opted out, and data that was flagged as containing PII, malicious code, or a non-permissive license since the previous release. The current plan is to update the dataset every 3 months, although the schedule may change based on the volume of requests received. If we are not in a position to continue maintaining the dataset, we plan to stop distributing it in its current format and update its terms of use to limit its range of applications further.
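As referenced above, here is a minimal sketch of accessing the processed data from the Hub once the Terms of Use have been accepted. The per-language data_dir layout and the content column shown here follow the dataset card at the time of writing and should be checked against the current version.

```python
from datasets import load_dataset

# Requires a Hugging Face account that has accepted The Stack's Terms of Use
# (authenticate first, e.g. with `huggingface-cli login`).
ds = load_dataset(
    "bigcode/the-stack",
    data_dir="data/python",  # one subset per language, as described on the dataset card
    split="train",
    streaming=True,          # stream records instead of downloading the full 6.4TB
)
print(next(iter(ds))["content"][:200])
```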
**PII Dataset Access and Management** In order to support our efforts to mitigate the risk that the model may leak private information, we selected 12,000 samples of code from The Stack and annotated them to detect PII using crowd-sourcing. The resulting dataset was used to train a PII detection model that we used to detect and then mask PII (Names, Emails, IP addresses, Keys, Passwords) from our StarCoder training dataset.
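The trained detection model itself is not reproduced here; the sketch below is a deliberately simplified, regex-based illustration of the masking step, replacing detected spans with placeholder tokens. The actual pipeline relied on the crowd-annotated dataset and a learned PII detection model, and the patterns and placeholder names below are illustrative only.

```python
import re

# Illustrative patterns only: the real pipeline used a trained PII detection model,
# not regexes, and covered names, emails, IP addresses, keys, and passwords.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IP_ADDRESS": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def mask_pii(code: str) -> str:
    """Replace detected PII spans with placeholder tokens before training."""
    for label, pattern in PII_PATTERNS.items():
        code = pattern.sub(f"<{label}>", code)
    return code

print(mask_pii('DB_HOST = "192.168.0.12"  # ask admin@example.com for access'))
```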
* **What data can be accessed:** the data is hosted as a gated dataset on the Hugging Face Hub. The dataset will be made available to researchers on a case-by-case basis for research projects that require access, in addition to the original team who developed the dataset.
* **What are the conditions for accessing the data:** researchers who want to access the dataset need to request access and be approved by the maintainers, as well as agree to the dataset's Terms of Use.
* **How can a data subject request that their data be removed:** as a derived dataset of The Stack, the PII dataset will be updated to reflect data that has been opted out from the source dataset.
* **How often is the data updated:** similarly, following The Stack's Terms of Use, the PII dataset will be updated as often as The Stack if any of the files it contains have been opted out.