Hakuna Matata Forever

SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction from Pre-Trained Foundation Models

Jan 1, 2024 · Anke Tang, Li Shen, Yong Luo, Shuai Xie, Han Hu, Lefei Zhang, Bo Du, Dacheng Tao
Type: Journal article
Publication: arXiv preprint arXiv:2408.10174
Last updated on Jan 1, 2024


© 2025 Me. This work is licensed under CC BY-NC-ND 4.0.
