Ayush8120 committed
Commit e156e1e
1 Parent(s): 9652b1f

Upload README.md

Files changed (1): README.md (+89, -0)
README.md ADDED
<h1 align="center">
Commonsense Object Affordance Task [COAT]
</h1>

<p align="center">
<br>
<a href="https://openreview.net/pdf?id=xYkdmEGhIM">OpenReview</a> | <a href="https://drive.google.com/drive/u/4/folders/1reH0JHhPM_tFzDMcAaJF0PycFMixfIbo">Datasets</a>
</p>

<p align="center">
<img src="https://github.com/com-phy-affordance/COAT/blob/main/utility-intro(1).png" alt="Paper Summary Flowchart">
<em>A three-level framework outlining human commonsense-style reasoning for estimating object affordance across various tasks</em>
</p>

### Experimental Setup:
- Task List: [tasks](https://github.com/com-phy-affordance/com-affordance/blob/main/tasks.json)
- Object List: [objects](https://github.com/com-phy-affordance/com-affordance/blob/main/concepts.json)
- Utility List[^1]: [utilities](https://github.com/com-phy-affordance/com-affordance/blob/main/concepts.json)
- Variables Used: `temperature`, `mass`, `material`, `already-in-use`, `condition` (an illustrative object-state record using these variables is sketched below)

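To make the variable list concrete, here is a minimal sketch of what a single object-state record could look like. The field names follow the variables above, but the concrete values and the exact schema of the released JSON files are illustrative assumptions, not the dataset's actual format.

```python
# Hypothetical object-state record built from the variables listed above.
# The real dataset schema may differ; this is only an illustration.
example_state = {
    "object": "coffee mug",
    "temperature": "hot",        # e.g. cold / room-temperature / hot
    "mass": "light",             # e.g. light / heavy
    "material": "ceramic",
    "already-in-use": False,     # whether the object is currently occupied
    "condition": "clean",        # e.g. clean / dirty / broken
}
print(example_state["material"])
```
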
### Utility Level Pruning:
This gives us `Utility` to `Object` mappings, also called `utility objects`; a minimal loading sketch follows the list below.
- GT Object-Utility Mappings: [utility-mappings](https://github.com/com-phy-affordance/com-affordance/blob/main/objects.json)

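Below is a minimal sketch of how the ground-truth mappings might be loaded and inverted into utility-to-object lists. The assumed structure of `objects.json` (object name mapped to a list of utilities) and the example utility name are assumptions for illustration; check the linked file for the actual schema.

```python
import json
from collections import defaultdict

# Assumed (not verified) structure: {"object_name": ["utility_1", "utility_2", ...], ...}
with open("objects.json") as f:
    object_to_utilities = json.load(f)

# Invert the mapping to get utility -> list of objects ("utility objects").
utility_objects = defaultdict(list)
for obj, utilities in object_to_utilities.items():
    for utility in utilities:
        utility_objects[utility].append(obj)

print(utility_objects.get("carrying-water", []))  # hypothetical utility name
```
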
### Task-u (Utility Level):
Here we evaluate models on their ability to narrow the candidate set down to appropriate objects on the basis of utility; a hypothetical scoring sketch follows the list below.
- GT (Utility)-(Object) Mappings: [utility-objects](https://github.com/com-phy-affordance/com-affordance/blob/main/objects.json)
- Task-u Dataset: [4 Variations](https://drive.google.com/drive/folders/1JJSIicKGp0a7ThsenKl0XWKsTtPL_b5z?usp=sharing)

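As a rough illustration of the evaluation idea, and not the paper's actual metric, one could score a model's selections for a given utility against the ground-truth utility objects with an F1-style measure; the `utility_selection_score` helper and the example object names below are hypothetical.

```python
def utility_selection_score(predicted, ground_truth):
    """Hypothetical F1-style score between the model's predicted objects and
    the ground-truth utility objects (the paper's actual metric may differ)."""
    predicted, ground_truth = set(predicted), set(ground_truth)
    if not predicted or not ground_truth:
        return 0.0
    overlap = len(predicted & ground_truth)
    precision = overlap / len(predicted)
    recall = overlap / len(ground_truth)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(utility_selection_score(["mug", "bucket"], ["mug", "bucket", "bottle"]))  # ~0.8
```
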
### Task-0 (Context Level):
Here we evaluate models on their ability to select appropriate objects on the basis of context. This gives us `(Task, Utility)` to `Object` mappings, also called `context objects`.
- GT (Task-Utility)-(Object) Mappings: [context-objects](https://github.com/com-phy-affordance/com-affordance/blob/main/oracle.json)
- Task-0 Dataset: [4 Variations](https://drive.google.com/drive/folders/1reH0JHhPM_tFzDMcAaJF0PycFMixfIbo?usp=sharing)

### Task-1 (Physical State Level):
Here we evaluate models on their ability to pick out the `ideal` configuration when presented with a number of `context object` configurations (something that is usually obvious to humans); an illustrative check is sketched below the list.
- All Possible Common Configurations: [possible configurations](https://github.com/com-phy-affordance/com-affordance/blob/main/task-1/possible_configurations_v1.json)
- Ideal Configurations: [ideal configurations](https://github.com/com-phy-affordance/com-affordance/blob/main/task-1/pouch_config_oracle.json)
- Commonsense Common Occurrence Variables: [common variables values](https://github.com/com-phy-affordance/com-affordance/blob/main/task-1/common_var_responses.json)
- Task-1 Dataset: [12 Variations](https://drive.google.com/drive/folders/1reH0JHhPM_tFzDMcAaJF0PycFMixfIbo?usp=sharing)

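A minimal sketch of how a chosen configuration could be checked against the ideal one is shown below; the variable names mirror the list in the Experimental Setup, while the `matches_ideal` helper and the example values are illustrative assumptions rather than the released evaluation code.

```python
# Variables from the Experimental Setup section.
VARIABLES = ["temperature", "mass", "material", "already-in-use", "condition"]

def matches_ideal(chosen, ideal):
    """Return True if the chosen configuration agrees with the ideal one
    on every physical-state variable (illustrative check only)."""
    return all(chosen.get(v) == ideal.get(v) for v in VARIABLES)

ideal = {"temperature": "room-temperature", "mass": "light",
         "material": "steel", "already-in-use": False, "condition": "clean"}
chosen = dict(ideal, condition="dirty")
print(matches_ideal(chosen, ideal))  # False: a dirty object is not the ideal pick
```
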
### Task-2 (Physical State Level):
Here we evaluate models on their ability to pick the most appropriate `sub-optimal` configuration when presented with a number of sub-optimal configurations of `context objects` (again, something that is usually obvious to humans); a hypothetical ranking sketch follows the list below.
- Suboptimal Configurations: [suboptimal configurations](https://github.com/com-phy-affordance/com-affordance/blob/main/task-2/pouch_suboptimal.json)
- Human Preference Material Order: [material preference](https://github.com/com-phy-affordance/com-affordance/blob/main/task-2/material_preference.json)
- Task-2 Dataset: [14 Variations](https://drive.google.com/drive/folders/1reH0JHhPM_tFzDMcAaJF0PycFMixfIbo?usp=sharing)
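
Since Task-2 asks for the best choice among sub-optimal configurations, the sketch below shows one hedged way a human material-preference order could be used to rank candidates; the preference list, `preference_rank` helper, and example configurations are assumptions for illustration, not the released penalty schema.

```python
# Hypothetical preference order (most to least preferred) for a water-carrying task.
material_preference = ["steel", "plastic", "ceramic", "paper"]

def preference_rank(config):
    """Lower rank = more preferred material; unknown materials go last."""
    material = config.get("material")
    if material in material_preference:
        return material_preference.index(material)
    return len(material_preference)

suboptimal = [
    {"object": "paper cup", "material": "paper"},
    {"object": "plastic bottle", "material": "plastic"},
]
best = min(suboptimal, key=preference_rank)
print(best["object"])  # plastic bottle
```
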
---------------------------------------------------------------------------------------------------------------

### Finetuning Datasets

Please refer to [Appendix F.1](https://openreview.net/pdf?id=xYkdmEGhIM) for dataset details.

- Finetuning Dataset for Object Level Selection: [Google Drive Link](https://drive.google.com/drive/folders/1GtrGQxTTtYEczYK1ytB71Y2HGxM1TEu5?usp=drive_link)
- Finetuning Dataset for Physical State Level Selection: [Google Drive Link](https://drive.google.com/drive/folders/1FiZc8u_G8wUrN4NroZmIgmcTe0jor72T?usp=drive_link)

### Full Pipeline Evaluation Datasets

Please refer to [Appendix F.2](https://openreview.net/pdf?id=xYkdmEGhIM) for dataset details.

- Ideal Object Choice Datasets: [Google Drive Link](https://drive.google.com/drive/folders/1SMM2TU1BKH32oKtfmW0gS3QfyUA68IZ0?usp=drive_link)
- Moderate Object Choice Datasets: [Google Drive Link](https://drive.google.com/drive/folders/1SlZQBp4Iao3VHnmOFZMKfzn_LWOctnVE?usp=drive_link)

### Prompts Used

[Quantitative Examples](https://giant-licorice-a62.notion.site/Prompts-for-Appendix-Examples-d58e0184d1c546bd8632024de3f7ac25)

### Implementations For Language Models:
- PaLM / GPT-3.5-Turbo: accessed via API
- Llama-13B: Hugging Face text-generation pipeline [link](https://huggingface.co/blog/llama2) (a minimal usage sketch follows this list)
- Vicuna-13B: LMSYS FastChat [link](https://github.com/lm-sys/FastChat)
- Vicuna-7B: LMSYS FastChat [link](https://github.com/lm-sys/FastChat)
- Mistral-7B: Hugging Face [link](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
- ChatGLM-6B: Hugging Face [link](https://huggingface.co/THUDM/chatglm-6b)
- ChatGLM2-6B: GitHub [link](https://github.com/THUDM/ChatGLM2-6B)

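For the models served through the Hugging Face text-generation pipeline, a minimal call looks roughly like the sketch below; the prompt string, generation settings, and choice of checkpoint are placeholders rather than the exact setup used in the paper.

```python
from transformers import pipeline

# Any of the Hugging Face checkpoints listed above can be substituted here.
generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.1",
    device_map="auto",
)

# Placeholder prompt, not one of the paper's actual prompts.
prompt = ("Task: carry water across the room. Candidate objects: sieve, steel mug, "
          "paper bag. Which object is most appropriate and why?")
result = generator(prompt, max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
```
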
[^1]: For the purposes of the datasets, we use the terms `concept` and `utility` interchangeably.

----------------------------------------------------------------------------------------------------------------
### Upcoming Stuff:
- Generating object, task, and utility JSONs for your own purposes
- Generating Task-0 datasets for your own task list, object list, and utility list
- Generating Task-1 and Task-2 datasets for your own variables, preferred possible configurations, handcrafted penalty schemas, and preferences

> Play around, create more variables, go for more comprehensive reward structures, and go to any depth you wish. Let's create more agents capable of physical commonsense reasoning!

PS: If you need any help experimenting with this data or curating your own datasets, feel free to create an Issue.