Commonsense Object Affordance Task [COAT]
A 3-level framework outlining human commonsense-style reasoning for estimating object affordances for various tasks
Experimental Setup:
- Task List: tasks
- Object List: objects
- Utility List[^1]: utilities
- Variables Used: temperature, mass, material, already-in-use, condition
Utility Level Pruning:
This gives us Utility-to-Object mappings, also called utility objects
- GT Object-Utility Mappings: utility-mappings
Task-u(Utility Level):
Here we evaluate models on their ability to prune out appropriate objects on the basis of Utility.
- GT (Utility)-(Object) Mappings: utility-objects
- Task-u Dataset: 4 Variations
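As a rough sketch, utility-level pruning can be thought of as filtering candidate objects through a utility-to-object mapping. The utilities and objects below are illustrative assumptions, not entries from the released mapping files:

```python
# Hedged sketch of utility-level pruning. The utilities and objects shown
# here are made-up examples, not the actual GT utility-object mappings.
utility_objects = {
    "cutting": ["knife", "scissors", "saw"],
    "containment": ["bowl", "mug", "bucket"],
}

def prune_by_utility(candidates, utility):
    """Keep only the candidate objects that afford the given utility."""
    allowed = set(utility_objects.get(utility, []))
    return [obj for obj in candidates if obj in allowed]

print(prune_by_utility(["mug", "knife", "bucket"], "containment"))
# -> ['mug', 'bucket']
```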
Task-0(Context Level):
Here we evaluate models on their ability to prune out appropriate objects on the basis of Context. This gives us (Task, Utility)-to-Object mappings, also called context objects
- GT (Task-Utility)-(Object) Mappings: context-objects
- Task-0 Dataset: 4 Variations
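Context-level pruning can similarly be sketched as filtering the utility objects through a mapping keyed by (Task, Utility) pairs. The tasks, utilities, and objects below are made-up examples, not the GT context-object mappings:

```python
# Hedged sketch of context-level pruning: (task, utility) -> objects.
# All keys and values here are illustrative assumptions.
context_objects = {
    ("serve hot soup", "containment"): ["bowl", "mug"],
    ("water a plant", "containment"): ["bucket", "mug"],
}

def prune_by_context(candidates, task, utility):
    """Keep only the utility objects appropriate in this task context."""
    allowed = set(context_objects.get((task, utility), []))
    return [obj for obj in candidates if obj in allowed]

print(prune_by_context(["bowl", "mug", "bucket"], "serve hot soup", "containment"))
# -> ['bowl', 'mug']
```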
Task-1(Physical State Level):
Here we evaluate models on their ability to pick out the ideal
configuration when presented with a number of context-object
configurations (something that is fairly obvious to humans).
- All Possible Common Configurations: possible configurations
- Ideal Configurations: ideal configurations
- Commonsense Common Occurrence Variables: common variable values
- Task-1 Dataset: 12 Variations
Task-2(Physical State Level):
Here we evaluate models on their ability to pick out the most appropriate sub-optimal
configuration when presented with a number of sub-optimal configurations of context
objects (something that is fairly obvious to humans).
- Suboptimal Configurations: suboptimal configurations
- Human Preference Material Order: material preference
- Task-2 Dataset: 14 Variations
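One way to picture the physical-state-level tasks (Task-1 and Task-2) is as ranking object configurations by a penalty over the state variables, with ties broken by a material preference order. The penalty values and preference order below are illustrative assumptions, not the paper's actual penalty schema:

```python
# Hedged sketch of physical-state-level selection. The penalty values and
# the material preference order are illustrative assumptions only.
PENALTY = {
    "temperature": {"hot": 2, "warm": 1, "room-temp": 0},
    "already-in-use": {"yes": 3, "no": 0},
    "condition": {"broken": 4, "worn": 1, "intact": 0},
}
MATERIAL_PREFERENCE = ["ceramic", "glass", "plastic"]  # assumed order

def score(config):
    """Lower score = more appropriate configuration; ties broken by material."""
    penalty = sum(PENALTY[var][val] for var, val in config.items()
                  if var in PENALTY)
    material_rank = MATERIAL_PREFERENCE.index(config.get("material", "plastic"))
    return (penalty, material_rank)

configs = [
    {"temperature": "hot", "already-in-use": "no",
     "condition": "intact", "material": "ceramic"},
    {"temperature": "room-temp", "already-in-use": "no",
     "condition": "worn", "material": "plastic"},
]
best = min(configs, key=score)  # picks the second (lower total penalty)
```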
Finetuning Datasets
Please refer to Appendix F.1 for dataset details
- Finetuning Dataset for Object Level Selection: Google Drive Link
- Finetuning Dataset for Physical State Level Selection: Google Drive Link
Full Pipeline Evaluation Datasets
Please refer to Appendix F.2 for dataset details
- Ideal Object Choice Datasets : Google Drive Link
- Moderate Object Choice Datasets : Google Drive Link
Prompts Used
Implementations For Language Models:
- PaLM/GPT3.5-Turbo: API
- LLama13B: huggingface text generation pipeline link
- Vicuna13B: lmsys link
- Vicuna7B: lmsys link
- Mistral-7B: huggingface link
- ChatGLM-6B: huggingface link
- ChatGLM2-6B: huggingface link
[^1]: For the purposes of the datasets, we use the terms concept and utility
interchangeably.
Upcoming Stuff:
- generating object, task, and utility JSONs for your own purposes
- generating Task-0 datasets for your own task list, object list, and utility list
- generating Task-1 and Task-2 datasets for your own variables, your preferred possible configurations, a handcrafted penalty schema, and your own preferences
Play around: create more variables, design more comprehensive reward structures, and go to whatever depth you wish. Let's create more agents capable of physical commonsense reasoning!
PS: If you need any help experimenting with this data or curating your own datasets, feel free to create an Issue.