title (string) | arxiv_id (string) |
---|---|
SPEECH: Structured Prediction with Energy-Based Event-Centric Hyperspheres | 2305.13617 |
Rule By Example: Harnessing Logical Rules for Explainable Hate Speech Detection | 2307.12935 |
Pruning Pre-trained Language Models Without Fine-Tuning | 2210.06210 |
When Does Translation Require Context? A Data-driven, Multilingual Exploration | 2109.07446 |
Do Androids Laugh at Electric Sheep? Humor “Understanding” Benchmarks from The New Yorker Caption Contest | 2209.06293 |
DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models | 2210.14896 |
Being Right for Whose Right Reasons? | 2306.00639 |
Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages | 2305.12182 |
A Theory of Unsupervised Speech Recognition | 2306.07926 |
DIONYSUS: A Pre-trained Model for Low-Resource Dialogue Summarization | 2212.10018 |
Diverse Demonstrations Improve In-context Compositional Generalization | 2212.06800 |
Self-Adaptive In-Context Learning: An Information Compression Perspective for In-Context Example Selection and Ordering | 2212.10375 |
On the Efficacy of Sampling Adapters | 2307.03749 |
Measuring Progress in Fine-grained Vision-and-Language Understanding | 2305.07558 |
Elaboration-Generating Commonsense Question Answering at Scale | 2209.01232 |
White-Box Multi-Objective Adversarial Attack on Dialogue Generation | 2305.03655 |
Few-shot Adaptation Works with UnpredicTable Data | 2208.01009 |
In-Context Analogical Reasoning with Pre-Trained Language Models | 2305.17626 |
Peek Across: Improving Multi-Document Modeling via Cross-Document Question-Answering | 2305.15387 |
REV: Information-Theoretic Evaluation of Free-Text Rationales | 2210.04982 |
Schema-Guided User Satisfaction Modeling for Task-Oriented Dialogues | 2305.16798 |
SimLM: Pre-training with Representation Bottleneck for Dense Passage Retrieval | 2207.02578 |
From Ultra-Fine to Fine: Fine-tuning Ultra-Fine Entity Typing Models to Fine-grained | 2312.06188 |
What Makes Pre-trained Language Models Better Zero-shot Learners? | 2209.15206 |
Cross2StrA: Unpaired Cross-lingual Image Captioning with Cross-lingual Cross-modal Structure-pivoted Alignment | 2305.12260 |
Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models | 2305.04091 |
Symbolic Chain-of-Thought Distillation: Small Models Can Also “Think” Step-by-Step | 2306.14050 |
Generating EDU Extracts for Plan-Guided Summary Re-Ranking | 2305.17779 |
Gradient-based Intra-attention Pruning on Pre-trained Language Models | 2212.07634 |
DiffusEmp: A Diffusion Model-Based Framework with Multi-Grained Control for Empathetic Response Generation | 2306.01657 |
Summary-Oriented Vision Modeling for Multimodal Abstractive Summarization | 2212.07672 |
InfoMetIC: An Informative Metric for Reference-free Image Caption Evaluation | 2305.06002 |
HistRED: A Historical Document-Level Relation Extraction Dataset | 2307.04285 |
PVGRU: Generating Diverse and Relevant Dialogue Responses via Pseudo-Variational Mechanism | 2212.09086 |
A Survey on Zero Pronoun Translation | 2305.10196 |
MPCHAT: Towards Multimodal Persona-Grounded Conversation | 2305.17388 |
Dual-Alignment Pre-training for Cross-lingual Sentence Embedding | 2305.09148 |
Alleviating Over-smoothing for Unsupervised Sentence Representation | 2305.06154 |
From Characters to Words: Hierarchical Pre-trained Language Model for Open-vocabulary Language Understanding | 2305.14571 |
Code4Struct: Code Generation for Few-Shot Event Structure Prediction | 2210.12810 |
Efficient Semiring-Weighted Earley Parsing | 2307.02982 |
Entity Tracking in Language Models | 2305.02363 |
WACO: Word-Aligned Contrastive Learning for Speech Translation | 2212.09359 |
Knowledge-enhanced Mixed-initiative Dialogue System for Emotional Support Conversations | 2305.10172 |
Parameter-Efficient Fine-Tuning without Introducing New Latency | 2305.16742 |
MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages | 2204.08582 |
DiffusionBERT: Improving Generative Masked Language Models with Diffusion Models | 2211.15029 |
Unified Demonstration Retriever for In-Context Learning | 2305.04320 |
DimonGen: Diversified Generative Commonsense Reasoning for Explaining Concept Relationships | 2212.10545 |
Hidden Schema Networks | 2207.03777 |
Towards Robust Low-Resource Fine-Tuning with Multi-View Compressed Representations | 2211.08794 |
Pre-Training to Learn in Context | 2305.09137 |
Privacy-Preserving Domain Adaptation of Semantic Parsers | 2212.10520 |
KILM: Knowledge Injection into Encoder-Decoder Language Models | 2302.09170 |
Tokenization and the Noiseless Channel | 2306.16842 |
Reasoning with Language Model Prompting: A Survey | 2212.09597 |
DISCO: Distilling Counterfactuals with Large Language Models | 2212.10534 |
SCOTT: Self-Consistent Chain-of-Thought Distillation | 2305.01879 |
Evaluating Open-Domain Question Answering in the Era of Large Language Models | 2305.06984 |
What the DAAM: Interpreting Stable Diffusion Using Cross Attention | 2210.04885 |
Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training | 2206.00621 |
Counterspeeches up my sleeve! Intent Distribution Learning and Persistent Fusion for Intent-Conditioned Counterspeech Generation | 2305.13776 |
What is the best recipe for character-level encoder-only modelling? | 2305.05461 |
Dialect-robust Evaluation of Generated Text | 2211.00922 |
TOME: A Two-stage Approach for Model-based Retrieval | 2305.11161 |
miCSE: Mutual Information Contrastive Learning for Low-shot Sentence Embeddings | 2211.04928 |
Forgotten Knowledge: Examining the Citational Amnesia in NLP | 2305.18554 |
Measuring the Instability of Fine-Tuning | 2302.07778 |
Multilingual LLMs are Better Cross-lingual In-context Learners with Alignment | 2305.05940 |
Long-Tailed Question Answering in an Open World | 2305.06557 |
Did the Models Understand Documents? Benchmarking Models for Language Understanding in Document-Level Relation Extraction | 2306.11386 |
ContraCLM: Contrastive Learning For Causal Language Model | 2210.01185 |
Prompting Language Models for Linguistic Structure | 2211.07830 |
FLamE: Few-shot Learning from Natural Language Explanations | 2306.08042 |
Fact-Checking Complex Claims with Program-Guided Reasoning | 2305.12744 |
Patton: Language Model Pretraining on Text-Rich Networks | 2305.12268 |
Soft Language Clustering for Multilingual Model Pre-training | 2306.07610 |
Dynamic Transformers Provide a False Sense of Efficiency | 2305.12228 |
Multi-target Backdoor Attacks for Code Pre-trained Models | 2306.08350 |
Multi-Level Knowledge Distillation for Out-of-Distribution Detection in Text | 2211.11300 |
MMDialog: A Large-scale Multi-turn Dialogue Dataset Towards Multi-modal Open-domain Conversation | 2211.05719 |
ByGPT5: End-to-End Style-conditioned Poetry Generation with Token-free Language Models | 2212.10474 |
Matching Pairs: Attributing Fine-Tuned Models to their Pre-Trained Large Language Models | 2306.09308 |
Large Language Models Meet NL2Code: A Survey | 2212.09420 |
One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning | 2305.17682 |
WebIE: Faithful and Robust Information Extraction on the Web | 2305.14293 |
Contextual Knowledge Learning for Dialogue Generation | 2305.18200 |
Text Style Transfer Back-Translation | 2306.01318 |
Continual Knowledge Distillation for Neural Machine Translation | 2212.09097 |
CONE: An Efficient COarse-to-fiNE Alignment Framework for Long Video Temporal Grounding | 2209.10918 |
Few-Shot Document-Level Event Argument Extraction | 2209.02203 |
Towards Understanding and Improving Knowledge Distillation for Neural Machine Translation | 2305.08096 |
DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models | 2111.00160 |
Do CoNLL-2003 Named Entity Taggers Still Work Well in 2023? | 2212.09747 |
Weakly Supervised Vision-and-Language Pre-training with Relative Representations | 2305.15483 |
RECAP: Retrieval-Enhanced Context-Aware Prefix Encoder for Personalized Dialogue Response Generation | 2306.07206 |
On “Scientific Debt” in NLP: A Case for More Rigour in Language Model Pre-Training Research | 2306.02870 |
Element-aware Summarization with Large Language Models: Expert-aligned Evaluation and Chain-of-Thought Method | 2305.13412 |
What does the Failure to Reason with “Respectively” in Zero/Few-Shot Settings Tell Us about Language Models? | 2305.19597 |
FormNetV2: Multimodal Graph Contrastive Learning for Form Document Information Extraction | 2305.02549 |
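
The table above reflects the dataset's schema: two string columns, `title` and `arxiv_id`. As a minimal sketch of how the rows could be consumed with the Hugging Face `datasets` library; the repository path used here is a placeholder assumption, since the actual dataset name is not shown on this page.

```python
# Minimal sketch: loading this two-column dataset with Hugging Face `datasets`.
# The repository path "user/acl-2023-papers" is a placeholder assumption;
# substitute the real dataset name.
from datasets import load_dataset

ds = load_dataset("user/acl-2023-papers", split="train")

# Each row carries two string fields: `title` and `arxiv_id`.
for row in ds.select(range(3)):
    print(f"{row['arxiv_id']}  {row['title']}")
```

The `arxiv_id` field can be joined against the arXiv API or turned into a URL directly (e.g. `https://arxiv.org/abs/2305.13617`).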