[Paper Review] RAFT: Adapting Language Model to Domain Specific RAG. Training a RAG-LLM with irrelevant documents. Oct 31 · 15 min read
[Paper Review] Tuning LayerNorm in Attention: Towards Efficient Multi-Modal LLM Finetuning. Turning an LLM into a multimodal LLM by training fewer than 1% of its parameters. Jul 8 · 8 min read