```python
# Load model directly
from transformers import AutoProcessor, AutoModelForVisualQuestionAnswering

processor = AutoProcessor.from_pretrained("microsoft/OmniParser")
model = AutoModelForVisualQuestionAnswering.from_pretrained("microsoft/OmniParser")
```
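A minimal usage sketch is below, assuming the Auto classes above resolve to a BLIP-2-style generative VQA model whose processor accepts an image plus a text prompt; the screenshot path and the question are placeholders, and since the hub actually ships a finetuned YOLOv8 detector together with a finetuned BLIP-2 model, the repository's own inference scripts may be the more reliable route.

```python
# Minimal usage sketch (assumption: the checkpoint loads as a generative
# VQA model whose processor takes an image plus a text prompt).
from PIL import Image

image = Image.open("screenshot.png").convert("RGB")  # placeholder screenshot path
prompt = "What is the function of this icon?"        # placeholder question

inputs = processor(images=image, text=prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```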
README.md CHANGED
```diff
@@ -2,6 +2,11 @@
 library_name: transformers
 license: mit
 pipeline_tag: image-text-to-text
+datasets:
+- nvidia/HelpSteer2
+base_model:
+- nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
+new_version: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
 ---
 📢 [[Project Page](https://microsoft.github.io/OmniParser/)] [[Blog Post](https://www.microsoft.com/en-us/research/articles/omniparser-for-pure-vision-based-gui-agent/)]
 
@@ -19,6 +24,4 @@ This model hub includes a finetuned version of YOLOv8 and a finetuned BLIP-2 mod
 ## Limitations
 - OmniParser is designed to faithfully convert a screenshot image into structured elements describing the interactable regions and semantics of the screen. It does not detect harmful content in its input (just as users are free to choose the input of any LLM), so users are expected to provide input to OmniParser that is not harmful.
 - While OmniParser only converts a screenshot image into text, it can be used to construct an actionable GUI agent based on LLMs. When developing and operating such an agent with OmniParser, developers need to act responsibly and follow common safety standards.
 - OmniParser-BLIP2 may incorrectly infer the gender or other sensitive attributes (e.g., race, religion) of individuals in icon images. Inference of sensitive attributes may rely on stereotypes and generalizations rather than information about specific individuals, and is more likely to be incorrect for marginalized people. Incorrect inferences may result in significant physical or psychological injury, or restrict, infringe upon, or undermine the ability to realize an individual's human rights. We do not recommend using OmniParser in any workplace-like use case scenario.
-
-
```