All posts (264)

[2025-1] 김학선 - Code Security Vulnerability Repair Using Reinforcement Learning with Large Language Models
https://arxiv.org/abs/2401.07031
With the recent advancement of Large Language Models (LLMs), generating functionally correct code has become less complicated for a wide array of developers. While using LLMs has sped up the functional development process, it poses a heavy risk to code sec.. 2025. 2. 18.

[2025-1] 차승우 - Titans: Learning to Memorize at Test Time
https://arxiv.org/abs/2501.00663
0. Abstract: Over more than a decade there has been an extensive research effort on how to effectively utilize recurrent models and attention. While recurrent models aim to compress the data into a fixed-size memory (called hidden state), attention allows attending to.. 2025. 2. 17.

[2025-1] 주서영 - Adding Conditional Control to Text-to-Image Diffusion Models
ControlNet, GitHub: lllyasviel/ControlNet ("Let us control diffusion models!") https://github.com/lllyasviel/ControlNet
ICCV 2023, cited 3,626 times.
1. Introduction: Existing text-to-image models (Stable Diffusion, DALL·E 2, MidJourney, etc.) generate images well, but reaching a desired result takes repeated rounds of prompt editing, and fine-tuning runs into problems with dataset and training costs. ControlNet.. 2025. 2. 15.

[2025-1] 임수연 - Large scale distributed neural network training through online distillation | Relational knowledge distillation | Be your own teacher: Improve the performance of convolutional neural networks via self distillation
https://arxiv.org/pdf/1904.05068
https://arxiv.org/pdf/1804.03235
https://arxiv.org/pdf/1905.08094
Hello, this post walks through variants of knowledge distillation in turn. It is organized around the core ideas, so please refer to the papers for details such as experiments. Knowledge perspective, 1. Introduction: proposes RKD (Relational Knowledge Distillation) as a new approach, with two losses, distance-wise and angle-wise distillation losses (a minimal sketch of the two losses follows this list). In metric learning, the student model can outperform.. 2025. 2. 15.
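To make the RKD entry above concrete, here is a minimal PyTorch sketch of the two relational losses mentioned in that excerpt: distance-wise and angle-wise distillation. It follows the paper's general formulation (pairwise distances normalized by the batch mean, triplet angles encoded as cosines, a Huber penalty between teacher and student relations), but the exact normalization epsilon, function names, and loss weighting below are illustrative assumptions, not the authors' reference code.

```python
import torch
import torch.nn.functional as F

def _pairwise_distances(e: torch.Tensor) -> torch.Tensor:
    """(N, D) embeddings -> (N, N) Euclidean distances, scaled by the batch mean distance."""
    d = torch.cdist(e, e, p=2)
    mu = d[d > 0].mean()          # mean over nonzero pairs (the paper's normalizer)
    return d / (mu + 1e-12)       # epsilon is an illustrative guard, not from the paper

def rkd_distance_loss(student: torch.Tensor, teacher: torch.Tensor) -> torch.Tensor:
    """Distance-wise loss: Huber penalty between normalized pairwise distance matrices."""
    with torch.no_grad():
        t_d = _pairwise_distances(teacher)
    s_d = _pairwise_distances(student)
    return F.smooth_l1_loss(s_d, t_d)

def rkd_angle_loss(student: torch.Tensor, teacher: torch.Tensor) -> torch.Tensor:
    """Angle-wise loss: Huber penalty between cosines of angles over embedding triplets."""
    def angles(e: torch.Tensor) -> torch.Tensor:
        diff = e.unsqueeze(0) - e.unsqueeze(1)        # (N, N, D) difference vectors
        unit = F.normalize(diff, p=2, dim=2)
        return torch.bmm(unit, unit.transpose(1, 2))  # (N, N, N) cos of the angle at each vertex
    with torch.no_grad():
        t_a = angles(teacher)
    s_a = angles(student)
    return F.smooth_l1_loss(s_a, t_a)

# Usage sketch: relational losses on a batch of student/teacher embeddings.
if __name__ == "__main__":
    s = torch.randn(8, 128, requires_grad=True)   # student embeddings (trainable)
    t = torch.randn(8, 128)                       # teacher embeddings (frozen)
    loss = rkd_distance_loss(s, t) + 2.0 * rkd_angle_loss(s, t)
    loss.backward()
    print(loss.item())
```

In practice these terms are added to the student's task loss; the 2:1 weighting of angle over distance above is only meant to reflect the rough ratio used in the paper's experiments, and should be tuned per task.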