[Paper Introduction] Tuning LayerNorm in Attention: Towards Efficient Multi-Modal LLM Finetuning
By tuning less than 1% of the parameters, an LLM can be turned into a Multimodal LLM.