We provide a playground for experiencing the world beyond the desk, and propose a new lifestyle for moving from passive learning to a life of creation.

All Posts (271)

[2024-1] 염제원 - Meta-Learning in Neural Networks: A Survey (https://arxiv.org/abs/2004.05439) Abstract: Recent research on "Meta-Learning," also framed as "Learning-to-Learn," which has seen a dramatic rise in interest; unlike conventional AI approaches that solve each task from scratch with a fixed learning algorithm, meta-learning aims to improve the learning… 2024. 4. 12.
[2024-1] 박태호 - Large Language Models are Human-Level Prompt Engineers (https://arxiv.org/abs/2211.01910) Abstract: LLMs show strong performance across many tasks, but task performance depends heavily on the quality of the prompt used to steer the model… 2024. 4. 12.
[2024-1] 양소정 - Generative Adversarial Networks (https://arxiv.org/abs/1406.2661) Abstract: Proposes a framework for estimating generative models via an adversarial process, in which a generative model G that captures the data distribution and a discriminative model D are trained simultaneously; this framework corresponds to a minimax game (the objective is written out after this list)… 2024. 4. 10.
[2024-1] 백승우 - You Only Watch Once: A Unified CNN Architecture for Real-Time Spatiotemporal Action Localization. 0. Abstract: Spatiotemporal action localization requires incorporating two sources of information into the architecture: (1) temporal information from the previous frames and (2) spatial information from the key frame (a sketch of this two-branch idea follows the list)… 2024. 4. 4.
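
A note on the GAN entry above: the adversarial process it describes is the two-player minimax game between G and D, where D is trained to distinguish data from generated samples and G is trained to fool D. As commonly written, with p_data the data distribution and p_z a noise prior:

$$
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
$$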
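
And on the YOWO entry: the "two sources of information" naturally map onto a two-branch network, with a 2D-CNN over the key frame for spatial cues and a 3D-CNN over the clip for temporal cues, fused channel-wise before a detection head. The following is a minimal PyTorch sketch of that two-branch idea only; the toy backbones, layer sizes, and the simple 1x1-conv fusion are illustrative stand-ins, not the paper's actual backbones or fusion module.

```python
import torch
import torch.nn as nn

class TwoBranchLocalizer(nn.Module):
    """Illustrative two-branch model: a 2D-CNN encodes the key frame (spatial cues),
    a 3D-CNN encodes the clip (temporal cues), and a 1x1 conv fuses the concatenated
    feature maps before a per-cell detection head. Backbones here are toy stand-ins."""

    def __init__(self, num_outputs=145):
        super().__init__()
        # 2D branch: spatial features from the single key frame (B, 3, H, W)
        self.branch_2d = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # 3D branch: temporal features from the whole clip (B, 3, T, H, W)
        self.branch_3d = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=3, stride=(1, 2, 2), padding=1), nn.ReLU(),
            nn.Conv3d(64, 128, kernel_size=3, stride=(1, 2, 2), padding=1), nn.ReLU(),
        )
        # Channel-wise fusion of the two feature maps, then a 1x1 detection head
        self.fuse = nn.Conv2d(128 + 128, 256, kernel_size=1)
        self.head = nn.Conv2d(256, num_outputs, kernel_size=1)

    def forward(self, clip):
        # clip: (B, 3, T, H, W); the last frame of the clip acts as the key frame
        key_frame = clip[:, :, -1]                  # (B, 3, H, W)
        feat_2d = self.branch_2d(key_frame)         # (B, 128, H/4, W/4)
        feat_3d = self.branch_3d(clip).mean(dim=2)  # collapse time: (B, 128, H/4, W/4)
        fused = torch.relu(self.fuse(torch.cat([feat_2d, feat_3d], dim=1)))
        return self.head(fused)                     # per-cell box/class predictions

model = TwoBranchLocalizer()
out = model(torch.randn(2, 3, 16, 224, 224))        # clip of 16 frames at 224x224
print(out.shape)                                    # torch.Size([2, 145, 56, 56])
```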