stereoplegic's Collections
Challenges and Opportunities of Using Transformer-Based Multi-Task Learning in NLP Through ML Lifecycle: A Survey (arXiv:2308.08234)
Understanding and Improving Information Transfer in Multi-Task Learning (arXiv:2005.00944)
Improving Multi-task Learning via Seeking Task-based Flat Regions (arXiv:2211.13723)
Improvable Gap Balancing for Multi-Task Learning (arXiv:2307.15429)
Multi-Task Recommendations with Reinforcement Learning (arXiv:2302.03328)
Heterogeneous Multi-task Learning with Expert Diversity (arXiv:2106.10595)
Adaptive Pattern Extraction Multi-Task Learning for Multi-Step Conversion Estimations (arXiv:2301.02494)
AdaTT: Adaptive Task-to-Task Fusion Network for Multitask Learning in Recommendations (arXiv:2304.04959)
Deep Task-specific Bottom Representation Network for Multi-Task Recommendation (arXiv:2308.05996)
Curriculum-based Asymmetric Multi-task Reinforcement Learning (arXiv:2211.03352)
Efficient Training of Multi-task Combinarotial Neural Solver with Multi-armed Bandits (arXiv:2305.06361)
Multi-task Active Learning for Pre-trained Transformer-based Models (arXiv:2208.05379)
STG-MTL: Scalable Task Grouping for Multi-Task Learning Using Data Map (arXiv:2307.03374)
SkillNet-NLG: General-Purpose Natural Language Generation with a Sparsely Activated Approach (arXiv:2204.12184)
SkillNet-NLU: A Sparsely Activated Model for General-Purpose Natural Language Understanding (arXiv:2203.03312)
MVP: Multi-task Supervised Pre-training for Natural Language Generation (arXiv:2206.12131)
Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts (arXiv:2305.18691)
An Efficient General-Purpose Modular Vision Model via Multi-Task Heterogeneous Training (arXiv:2306.17165)
Hypernetworks for Zero-shot Transfer in Reinforcement Learning (arXiv:2211.15457)
When Giant Language Brains Just Aren't Enough! Domain Pizzazz with Knowledge Sparkle Dust (arXiv:2305.07230)
Making Small Language Models Better Multi-task Learners with Mixture-of-Task-Adapters (arXiv:2309.11042)
TigerBot: An Open Multilingual Multitask LLM (arXiv:2312.08688)
MEGAVERSE: Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks (arXiv:2311.07463)
Differentiable Instruction Optimization for Cross-Task Generalization (arXiv:2306.10098)
MFTCoder: Boosting Code LLMs with Multitask Fine-Tuning (arXiv:2311.02303)
DSelect-k: Differentiable Selection in the Mixture of Experts with Applications to Multi-Task Learning (arXiv:2106.03760)