Update README.md
README.md
CHANGED
@@ -20,10 +20,6 @@ This model is based on https://huggingface.co/bigcode/starcoderbase and is fine
 
 
 ## Bias, Risks, and Limitations
-- Inherits bias, risks, and limitations from the LLaMA model, as described in the [LLaMA Model Card Bias Evaluation](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md#quantitative-analysis) and [Ethical Considerations](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md#ethical-considerations).
-- Retains biases present in the Stack Exchange dataset. Per the [latest developer survey for Stack Overflow](https://survey.stackoverflow.co/2022/),
-which constitutes a significant part of the StackExchange data,
-most users who answered the survey identified themselves as [White or European, men, between 25 and 34 years old, and based in the US (with a significant part of responders from India).](https://survey.stackoverflow.co/2022/#developer-profile-demographics)
 - May generate answers that are incorrect or misleading.
 - May copy answers from the training data verbatim.
 - May generate language that is hateful or promotes discrimination ([example](https://huggingface.co/trl-lib/llama-7b-se-rl-peft/discussions/7#64376083369f6f907f5bfe4c)).