grosa1
1 follower · 3 following
https://giovannirosa.com
AI & ML interests
None yet
Recent Activity
Updated a model 22 days ago: molise-ai/pii-detector-ai4privacy
New activity 29 days ago in ai4privacy/pii-masking-400k: "words and labels"
Reacted to singhsidhukuldeep's post with 👍 about 2 months ago:
Researchers have developed a novel approach called Logic-of-Thought (LoT) that significantly enhances the logical reasoning capabilities of large language models (LLMs). Here is how Logic-of-Thought (LoT) is implemented, step by step:

1. Logic Extraction
   1. Use Large Language Models (LLMs) to identify sentences containing conditional reasoning relationships from the input context.
   2. Generate a collection of sentences with logical relationships.
   3. Use LLMs to extract the set of propositional symbols and logical expressions from the collection.
   4. Identify propositions with similar meanings and represent them using identical propositional symbols.
   5. Analyze the logical relationships between propositions based on their natural language descriptions.
   6. Add negation (¬) for propositions that express opposite meanings.
   7. Use implication (→) to connect propositional symbols when a conditional relationship exists.

2. Logic Extension
   1. Apply logical reasoning laws to the collection of logical expressions from the Logic Extraction phase.
   2. Use a Python program to implement logical deduction and expand the expressions (see the sketch after this post).
   3. Apply logical laws such as Double Negation, Contraposition, and Transitivity to derive new logical expressions.

3. Logic Translation
   1. Use LLMs to translate the newly generated logical expressions into natural language descriptions.
   2. Combine the natural language descriptions of propositional symbols according to the extended logical expressions.
   3. Incorporate the translated logical information as a new part of the original input prompt.

4. Integration with Existing Prompting Methods
   1. Combine the LoT-generated logical information with the original prompt.
   2. Use this enhanced prompt with existing prompting methods like Chain-of-Thought (CoT), Self-Consistency (SC), or Tree-of-Thoughts (ToT).
   3. Feed the augmented prompt to the LLM to generate the final answer.

What do you think about LoT?
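To make the Logic Extension phase concrete, here is a minimal sketch of how a Python program might apply the Contraposition and Transitivity laws to a set of implications until no new expressions can be derived. The representation (implications as string pairs, "~" marking negation) and all function names are illustrative assumptions, not the LoT paper's actual implementation.

```python
# Minimal sketch of the LoT Logic Extension phase (illustrative, not the
# paper's code). Expressions are modeled as (premise, conclusion) pairs
# representing implications, with "~" marking negation.

def negate(p: str) -> str:
    """Negate a proposition, applying the Double Negation law (~~A == A)."""
    return p[1:] if p.startswith("~") else "~" + p

def extend(implications: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Expand a set of implications with Contraposition and Transitivity
    until a fixed point is reached."""
    expanded = set(implications)
    changed = True
    while changed:
        changed = False
        # Contraposition: (A -> B) yields (~B -> ~A).
        for a, b in list(expanded):
            contra = (negate(b), negate(a))
            if contra not in expanded:
                expanded.add(contra)
                changed = True
        # Transitivity: (A -> B) and (B -> C) yield (A -> C).
        for a, b in list(expanded):
            for b2, c in list(expanded):
                if b == b2 and a != c and (a, c) not in expanded:
                    expanded.add((a, c))
                    changed = True
    return expanded

# Example: from A -> B and B -> C, derive A -> C and the contrapositives.
exprs = {("A", "B"), ("B", "C")}
for premise, conclusion in sorted(extend(exprs)):
    print(f"{premise} -> {conclusion}")
```

Each derived expression would then feed the Logic Translation phase, where an LLM verbalizes it (e.g. "~C -> ~A" becomes "if not C, then not A" in the context's own wording) before being appended to the prompt.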
grosa1's activity
liked 3 models 3 months ago:
facebook/seamless-m4t-unity-small-s2t • Updated Aug 24, 2023 • 28
facebook/seamless-streaming • Text-to-Speech • Updated Jan 4 • 200
intfloat/multilingual-e5-small • Sentence Similarity • Updated Jul 29 • 806k • 151
liked a model 8 months ago:
andreabac3/Fauno-Italian-LLM-7B • Updated Jul 12, 2023 • 35