librarian-bot committed
Commit: b86a4bd
Parent(s): 88afe43
Scheduled Commit
- data/2401.15687.json +1 -0
- data/2401.15708.json +1 -0
- data/2401.15914.json +1 -0
- data/2401.15975.json +1 -0
- data/2401.15977.json +1 -0
- data/2401.16013.json +1 -0
- data/2401.16158.json +1 -0
- data/2401.16380.json +1 -0
- data/2401.16420.json +1 -0
data/2401.15687.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2401.15687", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [GMTalker: Gaussian Mixture based Emotional talking video Portraits](https://huggingface.co/papers/2312.07669) (2023)\n* [DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models](https://huggingface.co/papers/2312.09767) (2023)\n* [DREAM-Talk: Diffusion-based Realistic Emotional Audio-driven Method for Single Image Talking Face Generation](https://huggingface.co/papers/2312.13578) (2023)\n* [SAiD: Speech-driven Blendshape Facial Animation with Diffusion](https://huggingface.co/papers/2401.08655) (2023)\n* [PMMTalk: Speech-Driven 3D Facial Animation from Complementary Pseudo Multi-modal Features](https://huggingface.co/papers/2312.02781) (2023)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2401.15708.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2401.15708", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [DreamTuner: Single Image is Enough for Subject-Driven Generation](https://huggingface.co/papers/2312.13691) (2023)\n* [InstantID: Zero-shot Identity-Preserving Generation in Seconds](https://huggingface.co/papers/2401.07519) (2024)\n* [Cross Initialization for Personalized Text-to-Image Generation](https://huggingface.co/papers/2312.15905) (2023)\n* [SSR-Encoder: Encoding Selective Subject Representation for Subject-Driven Generation](https://huggingface.co/papers/2312.16272) (2023)\n* [Customization Assistant for Text-to-image Generation](https://huggingface.co/papers/2312.03045) (2023)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2401.15914.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2401.15914", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [COMMA: Co-Articulated Multi-Modal Learning](https://huggingface.co/papers/2401.00268) (2023)\n* [APoLLo : Unified Adapter and Prompt Learning for Vision Language Models](https://huggingface.co/papers/2312.01564) (2023)\n* [Learning to Prompt with Text Only Supervision for Vision-Language Models](https://huggingface.co/papers/2401.02418) (2024)\n* [LAMM: Label Alignment for Multi-Modal Prompt Learning](https://huggingface.co/papers/2312.08212) (2023)\n* [Concept-Guided Prompt Learning for Generalization in Vision-Language Models](https://huggingface.co/papers/2401.07457) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2401.15975.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2401.15975", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [InstantID: Zero-shot Identity-Preserving Generation in Seconds](https://huggingface.co/papers/2401.07519) (2024)\n* [Decoupled Textual Embeddings for Customized Image Generation](https://huggingface.co/papers/2312.11826) (2023)\n* [Cross Initialization for Personalized Text-to-Image Generation](https://huggingface.co/papers/2312.15905) (2023)\n* [PhotoMaker: Customizing Realistic Human Photos via Stacked ID Embedding](https://huggingface.co/papers/2312.04461) (2023)\n* [SSR-Encoder: Encoding Selective Subject Representation for Subject-Driven Generation](https://huggingface.co/papers/2312.16272) (2023)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2401.15977.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2401.15977", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [LivePhoto: Real Image Animation with Text-guided Motion Control](https://huggingface.co/papers/2312.02928) (2023)\n* [I2V-Adapter: A General Image-to-Video Adapter for Video Diffusion Models](https://huggingface.co/papers/2312.16693) (2023)\n* [DreamVideo: High-Fidelity Image-to-Video Generation with Image Retention and Text Guidance](https://huggingface.co/papers/2312.03018) (2023)\n* [Moonshot: Towards Controllable Video Generation and Editing with Multimodal Conditions](https://huggingface.co/papers/2401.01827) (2024)\n* [MotionCrafter: One-Shot Motion Customization of Diffusion Models](https://huggingface.co/papers/2312.05288) (2023)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2401.16013.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2401.16013", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Mastering Stacking of Diverse Shapes with Large-Scale Iterative Reinforcement Learning on Real Robots](https://huggingface.co/papers/2312.11374) (2023)\n* [Open-Source Reinforcement Learning Environments Implemented in MuJoCo with Franka Manipulator](https://huggingface.co/papers/2312.13788) (2023)\n* [RESPRECT: Speeding-up Multi-fingered Grasping with Residual Reinforcement Learning](https://huggingface.co/papers/2401.14858) (2024)\n* [Contact Energy Based Hindsight Experience Prioritization](https://huggingface.co/papers/2312.02677) (2023)\n* [SWBT: Similarity Weighted Behavior Transformer with the Imperfect Demonstration for Robotic Manipulation](https://huggingface.co/papers/2401.08957) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2401.16158.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2401.16158", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [AppAgent: Multimodal Agents as Smartphone Users](https://huggingface.co/papers/2312.13771) (2023)\n* [WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models](https://huggingface.co/papers/2401.13919) (2024)\n* [MobileAgent: enhancing mobile control via human-machine interaction and SOP integration](https://huggingface.co/papers/2401.04124) (2024)\n* [SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents](https://huggingface.co/papers/2401.10935) (2024)\n* [VisualWebArena: Evaluating Multimodal Agents on Realistic Visual Web Tasks](https://huggingface.co/papers/2401.13649) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2401.16380.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2401.16380", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [From Beginner to Expert: Modeling Medical Knowledge into General LLMs](https://huggingface.co/papers/2312.01040) (2023)\n* [Improving Text Embeddings with Large Language Models](https://huggingface.co/papers/2401.00368) (2023)\n* [EcomGPT-CT: Continual Pre-training of E-commerce Large Language Models with Semi-structured Data](https://huggingface.co/papers/2312.15696) (2023)\n* [Fine-Tuning or Retrieval? Comparing Knowledge Injection in LLMs](https://huggingface.co/papers/2312.05934) (2023)\n* [CLAMP: Contrastive LAnguage Model Prompt-tuning](https://huggingface.co/papers/2312.01629) (2023)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2401.16420.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2401.16420", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [UNIMO-G: Unified Image Generation through Multimodal Conditional Diffusion](https://huggingface.co/papers/2401.13388) (2024)\n* [VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation](https://huggingface.co/papers/2312.09251) (2023)\n* [Lyrics: Boosting Fine-grained Language-Vision Alignment and Comprehension via Semantic-aware Visual Objects](https://huggingface.co/papers/2312.05278) (2023)\n* [InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks](https://huggingface.co/papers/2312.14238) (2023)\n* [DocLLM: A layout-aware generative language model for multimodal document understanding](https://huggingface.co/papers/2401.00908) (2023)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
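Each added file holds a single JSON record with two keys, `paper_url` and `comment`. A minimal sketch of reading such a record with Python's stdlib `json` module (the sample string below is abridged from the records above; the id-extraction step is an assumption based on the URL layout, not part of this commit):

```python
import json

# One record per data file: a JSON object with "paper_url" and "comment" keys.
# The comment text here is abridged for illustration.
record = json.loads(
    '{"paper_url": "https://huggingface.co/papers/2401.16420", '
    '"comment": "This is an automated message from the Librarian Bot."}'
)

# The arXiv id is the last path segment of the paper URL.
arxiv_id = record["paper_url"].rsplit("/", 1)[-1]
print(arxiv_id)  # → 2401.16420
```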