All posts (78)

[2023-2] 백승우 - LORA: LOW-RANK ADAPTATION OF LARGE LANGUAGE MODELS
Linked paper: LoRA: Low-Rank Adaptation of Large Language Models (arxiv.org): "An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes le…"
0. Abstract: As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. The pre-trained model…
2024. 2. 13.

[2023-2] 김경훈 - Finding Tiny Faces
Original paper link: https://arxiv.org/abs/1612.04402
Linked paper: Finding Tiny Faces (arxiv.org): "Though tremendous strides have been made in object recognition, one of the remaining open challenges is detecting small objects. We explore three aspects of the problem in the context of finding small faces: the role of scale invariance, image resolution…"
0. Introduction: Object detection holds an important place in computer vision and image processing; in particular, digital ima…
2024. 2. 6.

[2023-2] 김동한 - Variable Selection via the Sparse Net
Link: https://www.kci.go.kr/kciportal/ci/sereArticleSearch/ciSereArtiView.kci?sereArticleSearchBean.artiId=ART002484008
Linked paper: Variable Selection via the Sparse Net: "Variable selection is an important problem when the model includes many noisy variables. For years, the sparse penalized approaches have been proposed for the problem. Examples are the least absolute selectio…"
2024. 2. 4.

[2023-2] 백승우 - AN IMAGE IS WORTH 16X16 WORDS: TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE
Linked paper: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (arxiv.org): "While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to rep…"
0. Abstract: While the Transformer architecture has become the de facto standard for natural language processing tasks, in computer vision…
2024. 1. 30.