  • We provide a Playground for experiencing the world beyond the desk, and propose a new lifestyle for the transition from passive learning to a life of creation.

All Posts (372)

[2025-1] 이재호 - Titans: Learning to Memorize at Test Time
https://arxiv.org/abs/2501.00663
Ali Behrouz, Peilin Zhong, and Vahab Mirrokni - Google Research
"Over more than a decade there has been an extensive research effort on how to effectively utilize recurrent models and attention. While recurrent models aim to compress the data into a fixed-size memory (called hidden state), attention allows attending to…"
2025. 2. 8.
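The excerpt contrasts recurrent models, which compress history into a fixed-size hidden state, with attention, which keeps the history around to attend over; Titans' pitch is to make that compression itself a learning problem solved during inference. The sketch below is my own simplified illustration of the idea, not the authors' code: a single linear memory updated at test time by gradient descent on an associative-memory loss, with placeholder learning-rate and decay values.

```python
import torch

# Sketch (my own illustration): a memory module whose weights are updated
# at *test time* by gradient descent on an associative-memory loss
# ||M(k) - v||^2, so surprising pairs (large gradient) get memorized.

class NeuralMemory(torch.nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.net = torch.nn.Linear(dim, dim, bias=False)  # M: key -> value

    def forward(self, k):
        return self.net(k)

def test_time_update(mem, k, v, lr=0.01, decay=0.001):
    # One online update step; lr and decay are placeholder values.
    loss = ((mem(k) - v) ** 2).mean()  # "surprise" of the new pair
    grad = torch.autograd.grad(loss, mem.net.weight)[0]
    with torch.no_grad():
        mem.net.weight.mul_(1 - decay)  # forgetting via weight decay
        mem.net.weight.sub_(lr * grad)  # memorize the new association
    return loss.item()

dim = 16
mem = NeuralMemory(dim)
k, v = torch.randn(dim), torch.randn(dim)
for step in range(3):
    print(test_time_update(mem, k, v))  # loss shrinks as the pair is stored
```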
[2025-1] 김유현 - A Style-Based Generator Architecture for Generative Adversarial Networks
https://arxiv.org/abs/1812.04948
"We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identit…"
0. Abstract: StyleGAN draws on style tran…
2025. 2. 8.
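The abstract mentions borrowing from the style transfer literature; the concrete mechanism that lineage usually points to is adaptive instance normalization (AdaIN), where each channel of a feature map is normalized and then re-scaled and re-shifted by style-derived parameters. A minimal sketch under my own simplified assumptions, omitting the mapping network and the rest of the generator:

```python
import torch

def adain(x, style_scale, style_bias, eps=1e-5):
    # x: (N, C, H, W) feature maps; style_scale/style_bias: (N, C),
    # in StyleGAN produced by a learned affine transform of the latent w.
    mu = x.mean(dim=(2, 3), keepdim=True)
    sigma = x.std(dim=(2, 3), keepdim=True)
    x_norm = (x - mu) / (sigma + eps)  # wipe out per-channel statistics
    # ...then impose the style's statistics instead.
    return style_scale[:, :, None, None] * x_norm + style_bias[:, :, None, None]

N, C, H, W = 2, 8, 4, 4
x = torch.randn(N, C, H, W)
y = adain(x, torch.ones(N, C), torch.zeros(N, C))
print(y.shape)  # torch.Size([2, 8, 4, 4])
```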
[2025-1] 염제원 - RankRAG: Unifying Context Ranking with Retrieval-Augmented Generation in LLMs
This post briefly summarizes the paper "RankRAG: Unifying Context Ranking with Retrieval-Augmented Generation in LLMs". Rather than attaching a separate ranking model to conventional RAG (Retrieval-Augmented Generation), the paper proposes a new method in which a single LLM both judges the relevance between the question and each document to select (rerank) the top documents and then generates the answer.
1. Background and Problem Setting: Large language models (LLMs) can answer a wide range of queries thanks to their enormous parameter counts, but internalizing all knowledge into the parameters is in prac…
2025. 2. 5.
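As a reading aid, here is a minimal sketch of the inference flow described above: one model is prompted once per passage to judge relevance, then once more to answer from the top passages. The call_llm stub and both prompt formats are hypothetical placeholders, not the paper's actual prompts or API.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: wire up any chat/completion model here. This dummy
    # marks every passage relevant so the flow can be traced end to end.
    return "True" if "relevant" in prompt else "(model answer)"

def rankrag_answer(question, passages, top_k=3):
    scored = []
    for p in passages:
        judgment = call_llm(
            f"Question: {question}\nPassage: {p}\n"
            "Is this passage relevant? Answer True or False."
        )
        scored.append((judgment.strip().lower().startswith("true"), p))
    # Reranking step: relevant passages first, then truncate to top_k.
    reranked = [p for ok, p in sorted(scored, key=lambda s: not s[0])][:top_k]
    context = "\n\n".join(reranked)
    # Same LLM now generates the final answer from the selected context.
    return call_llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

passages = ["Seoul is the capital of South Korea.", "Cats sleep a lot."]
print(rankrag_answer("What is the capital of South Korea?", passages))
```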
[2025-1] 김경훈 - SAM (Segment Anything Model)
Original paper: https://arxiv.org/abs/2304.02643
"We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M license…"
Code: https://github.com/facebookresearch/segment-anything
2025. 2. 5.
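For reference, prompting the released model through the linked repo looks roughly like the following, per its README; the checkpoint file must be downloaded separately, and the image here is a blank placeholder.

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load the ViT-H SAM model; the checkpoint path assumes you have already
# downloaded the weights from the repo's model zoo.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = np.zeros((512, 512, 3), dtype=np.uint8)  # replace with a real RGB image
predictor.set_image(image)  # computes the image embedding once

# Prompt with a single foreground point (label 1) at pixel (256, 256).
masks, scores, logits = predictor.predict(
    point_coords=np.array([[256, 256]]),
    point_labels=np.array([1]),
)
print(masks.shape)  # (num_masks, H, W) boolean masks
```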