Natural Language Processing (63)

[2025-1] 현시은 - PlanRAG: A Plan-then-Retrieval Augmented Generation for Generative Large Language Models as Decision Makers
Original paper link: https://arxiv.org/abs/2406.12430
Abstract excerpt: "In this paper, we conduct a study to utilize LLMs as a solution for decision making that requires complex data analysis. We define Decision QA as the task of answering the best decision, $d_{best}$, for a decision-making question $Q$, business rul.."
2025. 3. 6.

[2025-1] 백승우 - A-MEM: Agentic Memory for LLM Agents
Abstract excerpt (arxiv.org): "While large language model (LLM) agents can effectively use external tools for complex real-world tasks, they require memory systems to leverage historical experiences. Current memory systems enable basic storage and retrieval but lack sophisticated memory.."
1. Introduction: With the advancement of LLM agents, they can now interact with their environment, execute tasks, and make decisions. To improve reasoning and planning capabilities..
2025. 3. 5.

[2025-1] 백승우 - LegalAgentBench: Evaluating LLM Agents in Legal Domain
Abstract excerpt (arxiv.org): "With the increasing intelligence and autonomy of LLM agents, their potential applications in the legal domain are becoming increasingly apparent. However, existing general-domain benchmarks cannot fully capture the complexity and subtle nuances of real-wor.."
1. Introduction: With the advancement of LLMs, legal professionals can now handle tasks such as legal research, contract drafting, and case-law analysis more efficiently..
2025. 3. 4.

[2025-1] 백승우 - Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models
Abstract excerpt (arxiv.org): "In this work, we investigate whether small language models can determine high-quality subsets of large-scale text datasets that improve the performance of larger language models. While existing work has shown that pruning based on the perplexity of a large.."
1. Methods: Using a subset of the full dataset, the perplexity is..
2025. 3. 3.