Jason Zheng committed on
Commit
7561615
1 Parent(s): 3181619

modify the github link

Files changed (2)
  1. app.py +1 -1
  2. text_content.py +2 -2
app.py CHANGED
@@ -63,7 +63,7 @@ with demo:
     """<div style="text-align: center;"><h1> 🏎️RACE Leaderboard</h1></div>\
     <br>\
     <p>Based on the 🏎️RACE benchmark, we demonstrated the ability of different LLMs to generate code that is <b><i>correct</i></b> and <b><i>meets the requirements of real-world development scenarios</i></b>.</p>
-    <p>Model details about how to evalute the LLM are available in the <a href="https://github.com/test/test">🏎️RACE GitHub repository</a>.</p>
+    <p>More details about how to evalute the LLM are available in the <a href="https://github.com/jszheng21/RACE">🏎️RACE GitHub repository</a>.</p>
     """,
     elem_classes="markdown-text",
     )
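
For context, the edited markdown lives inside a Gradio `Blocks` layout (`with demo:`) and is rendered through `gr.Markdown` with the `markdown-text` CSS class. Below is a minimal, self-contained sketch of that pattern; the surrounding app structure and the CSS class definition are assumptions, not part of this commit — only the markdown content mirrors the diff.

```python
# Minimal sketch of the pattern around the edited lines (Blocks setup is an assumption;
# the "markdown-text" class is presumed to be styled via the app's CSS).
import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown(
        """<div style="text-align: center;"><h1> 🏎️RACE Leaderboard</h1></div>\
        <br>\
        <p>More details about how to evaluate the LLM are available in the
        <a href="https://github.com/jszheng21/RACE">🏎️RACE GitHub repository</a>.</p>""",
        elem_classes="markdown-text",
    )

if __name__ == "__main__":
    demo.launch()
```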
text_content.py CHANGED
@@ -10,7 +10,7 @@ The specific factors are as follows:
 - **Readability**: The code should be easy to read and understand.
   - `Comment`
   - `Naming Convention`
-  - `Code Length`
+  - `Length`
 - **Maintainability**: The code should be easy to maintain and extend.
   - `MI Metric`
   - `Modularity`
@@ -19,7 +19,7 @@ The specific factors are as follows:
   - `Space Complexity`
 
 # How to evaluate?
-To facilitate evaluation on the RACE benchmark, we provide the evaluation data and easy-to-use evaluation scripts in our 🏎️RACE GitHub repository.
+To facilitate evaluation on the RACE benchmark, we provide the evaluation data and easy-to-use evaluation scripts in our [🏎️RACE GitHub repository](https://github.com/jszheng21/RACE).
 Additionally, factors involving execution-based evaluation are conducted in a virtual environment to ensure evaluation security.
 
 # Contact
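
The app presumably pulls these markdown strings out of `text_content.py` and renders them next to the header edited above. The sketch below illustrates that two-file split under stated assumptions: the constant name `ABOUT_TEXT` and the import pattern are hypothetical, not taken from this commit.

```python
# Hypothetical layout: text_content.py exposes the "How to evaluate?" markdown as a
# module-level constant (the name ABOUT_TEXT is an assumption), and app.py renders it.

# text_content.py
ABOUT_TEXT = """
# How to evaluate?
To facilitate evaluation on the RACE benchmark, we provide the evaluation data and easy-to-use
evaluation scripts in our [🏎️RACE GitHub repository](https://github.com/jszheng21/RACE).
"""

# app.py
import gradio as gr
# from text_content import ABOUT_TEXT  # in the real two-file layout

with gr.Blocks() as demo:
    gr.Markdown(ABOUT_TEXT, elem_classes="markdown-text")

if __name__ == "__main__":
    demo.launch()
```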