
Computer Vision (55)

[2023-2] 백승우 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (arxiv.org)
0. Abstract: The Transformer architecture has become the de facto standard for natural language processing tasks, but in computer vision.. 2024. 1. 30.
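The paper's central move, hinted at in the title, is to treat an image as a sequence of 16x16 patch tokens fed to a standard Transformer. Below is a minimal sketch of that patch embedding, assuming PyTorch; the strided-convolution projection (an equivalent, widely used implementation of "flatten each patch, then linearly project") and the ViT-Base sizes (224x224 input, 768-dim embedding) are common implementation choices rather than details taken from this post.

```python
# Minimal sketch of ViT-style patch embedding (assumes PyTorch).
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided convolution cuts the image into non-overlapping 16x16
        # patches and linearly projects each one in a single operation.
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches + 1, embed_dim))

    def forward(self, x):
        B = x.shape[0]
        x = self.proj(x)                      # (B, D, H/16, W/16)
        x = x.flatten(2).transpose(1, 2)      # (B, N, D): sequence of patch tokens
        cls = self.cls_token.expand(B, -1, -1)
        x = torch.cat([cls, x], dim=1)        # prepend the learnable [class] token
        return x + self.pos_embed             # add learned position embeddings

tokens = PatchEmbedding()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 197, 768]): 196 patches + 1 class token
```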
[2023-2] 주서영 - EEG2IMAGE: Image Reconstruction from EEG Brain Signals (arxiv.org; GitHub - prajwalsingh/EEG2Image)
Reconstructing images using brain signals of imagined visuals may provide an augmented vision to the disabled, leading to the advancement of Brain-Computer Interface (BCI) technology.. 2024. 1. 24.
[2023-2] 김경훈 - Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference
Original paper link: https://arxiv.org/abs/2310.04378
Latent Diffusion Models (LDMs) have achieved remarkable results in synthesizing high-resolution images. However, the iterative sampling process is computationally intensive and leads to slow generation. Inspired by Consistency Models (Song et al.), we pro.. 2024. 1. 23.
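The abstract names the paper's selling point: replacing the diffusion model's long iterative sampling chain with a consistency function that maps a noisy latent at any timestep straight to a clean latent, so generation takes one or a few forward passes. A minimal sketch of that idea, assuming PyTorch; the toy MLP and the c_skip/c_out boundary parameterization follow the general consistency-model recipe and are illustrative assumptions, not LCM's actual architecture or noise schedule.

```python
# Minimal sketch of the consistency-function idea behind few-step sampling
# (assumes PyTorch; toy dimensions and schedule are illustrative).
import torch
import torch.nn as nn

class ConsistencyFunction(nn.Module):
    def __init__(self, dim=64, sigma_data=0.5):
        super().__init__()
        self.sigma_data = sigma_data
        self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.SiLU(),
                                 nn.Linear(256, dim))

    def forward(self, x_t, t):
        # Boundary-condition parameterization: as t -> 0 the skip term
        # dominates, so f(x_0, 0) = x_0 holds by construction.
        c_skip = self.sigma_data**2 / (t**2 + self.sigma_data**2)
        c_out = self.sigma_data * t / (t**2 + self.sigma_data**2).sqrt()
        h = self.net(torch.cat([x_t, t], dim=-1))
        return c_skip * x_t + c_out * h

f = ConsistencyFunction()
x_T = torch.randn(8, 64)            # start from pure noise in latent space
t_T = torch.full((8, 1), 80.0)      # maximum noise level
latent = f(x_T, t_T)                # one-step generation; a VAE decoder
print(latent.shape)                 # would then map latents to pixels
```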
[2023-2] 염제원 - Task2Vec: Task Embedding for Meta-Learning
https://arxiv.org/abs/1902.03545
We introduce a method to provide vectorial representations of visual classification tasks which can be used to reason about the nature of those tasks and their relations. Given a dataset with ground-truth labels and a loss function defined over those label.. Abstract: In a visual classification task, representing the task as a vector.. 2024. 1. 7.
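As the abstract says, Task2Vec turns a labeled dataset plus a loss function into a fixed-length vector; concretely, the paper embeds a task as the diagonal of the Fisher Information Matrix of a fixed probe network estimated on that task's data. A minimal sketch, assuming PyTorch; the tiny probe and random data are stand-ins (the paper uses a pretrained CNN as the probe), and squared empirical gradients serve as the standard empirical approximation of the Fisher diagonal.

```python
# Minimal sketch of a diagonal-Fisher task embedding (assumes PyTorch;
# probe network and data are illustrative stand-ins).
import torch
import torch.nn as nn
import torch.nn.functional as F

def task2vec_embedding(probe, loader):
    fisher = [torch.zeros_like(p) for p in probe.parameters()]
    n = 0
    for x, y in loader:
        probe.zero_grad()
        loss = F.cross_entropy(probe(x), y)
        loss.backward()
        # Accumulate squared gradients: an empirical estimate of the
        # Fisher Information Matrix diagonal.
        for f_diag, p in zip(fisher, probe.parameters()):
            f_diag += p.grad.detach() ** 2
        n += 1
    # Concatenate per-parameter Fisher diagonals into one task vector.
    return torch.cat([f.flatten() / n for f in fisher])

probe = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
data = [(torch.randn(16, 32), torch.randint(0, 10, (16,))) for _ in range(4)]
emb = task2vec_embedding(probe, data)
print(emb.shape)  # one fixed-length vector per task; tasks compare by distance
```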