---
title: README
emoji: 💻
colorFrom: green
colorTo: red
sdk: streamlit
pinned: false
---

# HumanEval-V: A Lightweight Visual Understanding and Reasoning Benchmark for Evaluating LMMs through Coding Tasks

📄 Paper • 🏠 Home Page • 💻 GitHub Repository • 🏆 Leaderboard • 🤗 Dataset • 🤗 Dataset Viewer

HumanEval-V is a novel and lightweight benchmark designed to evaluate the visual understanding and reasoning capabilities of Large Multimodal Models (LMMs) through coding tasks. The dataset comprises 108 entry-level Python programming challenges, adapted from platforms like CodeForces and Stack Overflow. Each task includes visual context that is indispensable to the problem, requiring models to perceive, reason, and generate Python code solutions accordingly.
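
The dataset is hosted on the Hugging Face Hub (see the 🤗 Dataset link above). Below is a minimal loading sketch using the `datasets` library; the repository id and split name are assumptions for illustration, so check the dataset card for the exact values:

```python
from datasets import load_dataset

# Hypothetical repository id and split name; see the dataset card for the real ones.
ds = load_dataset("HumanEval-V/HumanEval-V-Benchmark", split="test")

sample = ds[0]
# Each task pairs an image (the indispensable visual context) with a coding
# problem; a model is expected to return a Python solution that the
# handcrafted test cases can then execute.
print(sample.keys())
```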

Key features:

  • Visual coding tasks that cannot be solved without understanding the accompanying image.
  • Entry-level difficulty, making it ideal for assessing the baseline performance of foundational LMMs.
  • Handcrafted test cases for evaluating code correctness via the execution-based pass@k metric (see the sketch below).
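
For reference, pass@k is commonly computed with the unbiased estimator introduced in the original HumanEval paper. The snippet below is a minimal sketch of that estimator, not necessarily the exact evaluation code used by HumanEval-V:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k completions,
    drawn from n generated samples of which c are correct, passes."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 20 samples per task, 4 of them correct; report pass@1 and pass@10.
print(pass_at_k(20, 4, 1), pass_at_k(20, 4, 10))
```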