Hypernetwork
A collection of papers curated by stereoplegic
- Magnitude Invariant Parametrizations Improve Hypernetwork Learning (arXiv:2304.07645)
- HyperShot: Few-Shot Learning by Kernel HyperNetworks (arXiv:2203.11378)
- Hypernetworks for Zero-shot Transfer in Reinforcement Learning (arXiv:2211.15457)
- Continual Learning with Dependency Preserving Hypernetworks (arXiv:2209.07712)
- HyperMixer: An MLP-based Low Cost Alternative to Transformers (arXiv:2203.03691)
- Task-Agnostic Low-Rank Adapters for Unseen English Dialects (arXiv:2311.00915)
- Recomposing the Reinforcement Learning Building Blocks with Hypernetworks (arXiv:2106.06842)
- Continual Model-Based Reinforcement Learning with Hypernetworks (arXiv:2009.11997)
- Continual learning with hypernetworks (arXiv:1906.00695)
- HyperTuning: Toward Adapting Large Language Models without Back-propagation (arXiv:2211.12485)
- Prototype-based HyperAdapter for Sample-Efficient Multi-task Tuning (arXiv:2310.11670)
- Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object Structure via HyperNetworks (arXiv:2312.16218)
- A Brief Review of Hypernetworks in Deep Learning (arXiv:2306.06955)
- Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks (arXiv:2106.04489)
- Polyhistor: Parameter-Efficient Multi-Task Adaptation for Dense Vision Tasks (arXiv:2210.03265)
- Hyper-X: A Unified Hypernetwork for Multi-Task Multilingual Transfer (arXiv:2205.12148)
- Specialized Language Models with Cheap Inference from Limited Domain Data (arXiv:2402.01093)
- HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models (arXiv:2403.13447)
- LoGAH: Predicting 774-Million-Parameter Transformers using Graph HyperNetworks with 1/100 Parameters (arXiv:2405.16287)
- HyperInterval: Hypernetwork approach to training weight interval regions in continual learning (arXiv:2405.15444)
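The papers above share one core idea: a hypernetwork takes some conditioning input (a task embedding, a layer descriptor, a prompt) and emits the weights of a separate target network, so the target's parameters become a function rather than a fixed tensor. A minimal sketch of that mechanism in plain Python is below; every dimension and name here is hypothetical and chosen only for illustration, not taken from any specific paper in the collection.

```python
import random

random.seed(0)

# Illustrative sizes: a 4-d task embedding conditions a 3-in / 2-out target layer.
embed_dim, in_dim, out_dim = 4, 3, 2

def rand_matrix(rows, cols, scale=0.1):
    """Small random matrix, standing in for trained hypernetwork weights."""
    return [[random.uniform(-scale, scale) for _ in range(cols)] for _ in range(rows)]

def matvec(M, v):
    """Plain matrix-vector product."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Hypernetwork: one linear map from the task embedding to the
# flattened (out_dim * in_dim) weights of the target layer.
H = rand_matrix(out_dim * in_dim, embed_dim)

def target_forward(z, x):
    """Generate the target layer's weights from embedding z, then apply them to x."""
    flat = matvec(H, z)                                        # generated weights
    W = [flat[i * in_dim:(i + 1) * in_dim] for i in range(out_dim)]
    return matvec(W, x)

z = [random.gauss(0, 1) for _ in range(embed_dim)]  # task embedding
x = [random.gauss(0, 1) for _ in range(in_dim)]     # input to the target layer
y = target_forward(z, x)
print(len(y))  # 2
```

Changing the embedding `z` changes the generated weights, and hence the target layer's function, without touching `H`; this is the lever the continual-learning, multi-task, and zero-shot-transfer papers above all pull on.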