Update README.md
README.md
@@ -49,6 +49,16 @@ The highest score a model can get on this benchmark is 100%, you can see the ora
| gpt-4o + fine-tuned with [synth-vuln-fixes](https://huggingface.co/datasets/patched-codes/synth-vuln-fixes) | 61.06 | [link](ft_gpt-4o-2024-08-06_patched_patched_9yhZp9nn-0-shot_semgrep_1.85.0_20240821_084452.log) |


+## Mixture of Agents (MOA)
+
+We also benchmarked gpt-4o with [Patched MOA](https://arxiv.org/abs/2407.18521). This demonstrates that an inference optimization
+technique like MOA can improve performance without fine-tuning.
+
+| Model | Score | Logs |
+|:-----:|:-----:|:----:|
+| gpt-4o-moa + 3-shot prompt | 60.18 | [link]() |
+| gpt-4o-moa + rag (embedding & reranking) | 61.06 | [link]() |
+
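For readers unfamiliar with the technique, Mixture of Agents samples several proposer completions and then has an aggregator call synthesize them into a single answer. The sketch below illustrates that pattern only; it is not the Patched MOA implementation, and the model name, prompts, proposer count, and the `moa_fix` helper are illustrative assumptions (the OpenAI Python SDK and an `OPENAI_API_KEY` in the environment are assumed).

```python
# Illustrative Mixture-of-Agents (MOA) sketch -- not the Patched MOA implementation.
# Model name, prompts, and proposer count are placeholders for illustration.
from openai import OpenAI

client = OpenAI()

def moa_fix(vulnerable_code: str, n_proposers: int = 3) -> str:
    prompt = f"Fix the security vulnerability in this Python code:\n\n{vulnerable_code}"

    # 1) Proposer layer: sample several independent candidate fixes.
    proposals = []
    for _ in range(n_proposers):
        resp = client.chat.completions.create(
            model="gpt-4o",
            temperature=0.7,
            messages=[{"role": "user", "content": prompt}],
        )
        proposals.append(resp.choices[0].message.content)

    # 2) Aggregator layer: synthesize the candidates into one final fix.
    numbered = "\n\n".join(f"Candidate {i + 1}:\n{p}" for i, p in enumerate(proposals))
    final = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,
        messages=[{
            "role": "user",
            "content": (
                f"{prompt}\n\nHere are candidate fixes:\n{numbered}\n\n"
                "Combine their strengths into one corrected program. "
                "Return only the final code."
            ),
        }],
    )
    return final.choices[0].message.content
```

Because the gain comes from extra inference-time calls rather than new weights, no fine-tuning of the base model is needed.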
# Static Analysis Eval Benchmark

A dataset of 76 Python programs taken from real Python open source projects (top 100 on GitHub),