
Computer Vision (127)

[2023-2] 주서영 - EEG2IMAGE: Image Reconstruction from EEG Brain Signals
"Reconstructing images using brain signals of imagined visuals may provide an augmented vision to the disabled, leading to the advancement of Brain-Computer Interface (BCI) technology. The recent progress in deep learning has boosted the study area of synth…" (arxiv.org)
GitHub: prajwalsingh/EEG2Image
2024. 1. 24.
[2023-2] 김경훈 - Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference
Original paper: https://arxiv.org/abs/2310.04378
"Latent Diffusion models (LDMs) have achieved remarkable results in synthesizing high-resolution images. However, the iterative sampling process is computationally intensive and leads to slow generation. Inspired by Consistency Models (Song et al.), we prop…" (arxiv.org)
2024. 1. 23.
[2023-2] 양소정 - U-Net: Convolutional Networks for Biomedical Image Segmentation
https://arxiv.org/pdf/1505.04597.pdf
Abstract: To make more efficient use of the available annotated samples, the paper presents a strategy of treating one sample as many (data augmentation). The network consists of a symmetric expanding path that enables precise localization. Such a network can be trained end-to-end from very few images. As a result, it outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Extra: Convolutional Neural Network…
2024. 1. 8.
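The U-Net entry above describes a symmetric contracting/expanding architecture trained end-to-end. As a minimal sketch (assuming the original paper's configuration: unpadded 3×3 convolutions, 2×2 pooling, a depth-4 encoder; the function name is my own), the valid-convolution size arithmetic that maps the paper's 572×572 input tile to a 388×388 segmentation map can be checked like this:

```python
def unet_output_size(input_size, depth=4, convs_per_level=2):
    """Spatial output size of the original U-Net (unpadded 3x3 convolutions)."""
    shrink = 2 * convs_per_level   # each 3x3 valid conv loses 2 pixels per side pair
    n = input_size
    skip_sizes = []
    for _ in range(depth):
        n -= shrink                # two 3x3 convs in the contracting path
        skip_sizes.append(n)       # feature map saved for the skip connection
        n //= 2                    # 2x2 max-pool halves the resolution
    n -= shrink                    # two 3x3 convs in the bottleneck
    for _ in reversed(skip_sizes):
        n *= 2                     # 2x2 up-convolution doubles the resolution
        n -= shrink                # two 3x3 convs after concatenating the
                                   # center-cropped skip feature map
    return n

print(unet_output_size(572))  # 388, matching the paper's 572 -> 388 tile mapping
```

Because every convolution is unpadded, the output map is smaller than the input, which is why the paper crops the skip-connection feature maps before concatenation.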
[2023-2] 염제원 - TASK2VEC: Task Embedding for Meta-Learning
https://arxiv.org/abs/1902.03545
"We introduce a method to provide vectorial representations of visual classification tasks which can be used to reason about the nature of those tasks and their relations. Given a dataset with ground-truth labels and a loss function defined over those label…" (arxiv.org)
Abstract: Representing a task as a vector in visual classification tasks…
2024. 1. 7.