All posts (355)

[2025-1] 임준수 - Self-Adapting Language Models
https://arxiv.org/abs/2506.10943
Abstract: "Large language models (LLMs) are powerful but static; they lack mechanisms to adapt their weights in response to new tasks, knowledge, or examples. We introduce Self-Adapting LLMs (SEAL), a framework that enables LLMs to self-adapt by generating their own.."
2025. 7. 10.

[2025-1] 임재열 - Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, presented at ICML 2017, proposes a meta-learning algorithm that is independent of the model architecture.
[MAML] https://arxiv.org/abs/1703.03400
"We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a vari.."
2025. 7. 5.

[2025-1] 백승우 - GUI Agent by Script-based Automation
2025. 7. 4.

[2025-2] 박지원 - GPTQ
Paper: https://arxiv.org/abs/2210.17323
GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers
"Generative Pre-trained Transformer models, known as GPT or OPT, set themselves apart through breakthrough performance across complex language modelling tasks, but also by their extremely high computational and storage costs. Specifically, due to their mass.."
1. What is GPTQ: GPTQ..
2025. 7. 1.
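Since the MAML entry above only quotes the paper's abstract, here is a minimal sketch of the inner/outer update rule it proposes, applied to toy sine-wave regression. Everything concrete in it (the MLP size, step sizes, task distribution, and helper names such as `sample_task`) is an illustrative assumption, not the paper's reference setup:

```python
import torch
import torch.nn as nn

def sample_task():
    """Sample one toy regression task: y = A * sin(x + phi).
    (Illustrative task family, not the paper's benchmark suite.)"""
    A = torch.empty(1).uniform_(0.1, 5.0)
    phi = torch.empty(1).uniform_(0.0, 3.1416)
    def data(n=10):
        x = torch.empty(n, 1).uniform_(-5.0, 5.0)
        return x, A * torch.sin(x + phi)
    return data

def forward(params, x):
    """Small MLP applied with an explicit parameter list, so adapted
    (non-leaf) parameters can be swapped in during the inner loop."""
    w1, b1, w2, b2, w3, b3 = params
    h = torch.relu(x @ w1 + b1)
    h = torch.relu(h @ w2 + b2)
    return h @ w3 + b3

def init_params():
    def layer(n_in, n_out):
        w = (torch.randn(n_in, n_out) * (2.0 / n_in) ** 0.5).requires_grad_()
        return w, torch.zeros(n_out, requires_grad=True)
    w1, b1 = layer(1, 40)
    w2, b2 = layer(40, 40)
    w3, b3 = layer(40, 1)
    return [w1, b1, w2, b2, w3, b3]

params = init_params()
meta_opt = torch.optim.Adam(params, lr=1e-3)  # outer-loop optimizer (beta)
alpha = 0.01                                  # inner-loop step size
loss_fn = nn.MSELoss()

for step in range(1000):
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):                 # meta-batch of tasks
        data = sample_task()
        x_s, y_s = data()              # support set for the inner step
        x_q, y_q = data()              # query set for the outer loss
        # Inner loop: one gradient step on the support loss.
        # create_graph=True keeps the graph so the outer update can
        # differentiate through this step (second-order MAML).
        grads = torch.autograd.grad(
            loss_fn(forward(params, x_s), y_s), params, create_graph=True)
        adapted = [p - alpha * g for p, g in zip(params, grads)]
        # Outer loss: how well the adapted parameters do on held-out data.
        meta_loss = meta_loss + loss_fn(forward(adapted, x_q), y_q)
    meta_loss.backward()               # grads w.r.t. the initial params
    meta_opt.step()
```

The paper also discusses a first-order approximation; in this sketch that would amount to dropping `create_graph=True` from the inner-loop gradient call, trading some accuracy for memory and speed.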