  • We provide a Playground for experiencing the world beyond the desk, and propose a new lifestyle for the transition from passive learning to a life of creation.

All posts (263)

[2025-1] 최민서 - Denoising Diffusion Probabilistic Models (DDPM)
https://arxiv.org/abs/2006.11239
"We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound..." (arxiv.org)
Building on the basic framework of existing diffusion models, this paper introduces a new parameterization that .. 2025. 2. 1.
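As a pointer to what the post covers, here is a minimal sketch of the DDPM forward (noising) process in closed form, assuming the paper's linear beta schedule; the array sizes and the toy input are illustrative choices, not values from the paper.

```python
import numpy as np

T = 1000
beta = np.linspace(1e-4, 0.02, T)       # linear noise schedule beta_t
alpha_bar = np.cumprod(1.0 - beta)      # \bar{alpha}_t = prod_{s<=t} (1 - beta_s)

def q_sample(x0, t, rng=np.random):
    """Sample x_t ~ q(x_t | x_0) directly, without iterating t steps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = np.ones((4, 4))                    # toy "image"
xT = q_sample(x0, T - 1)                # near the end, x_t is almost pure noise
```

Because `alpha_bar[T-1]` is tiny, the signal contribution vanishes at the final step, which is why sampling can start from Gaussian noise and denoise backwards.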
[2025-1] 박서형 - Distilling the Knowledge in a Neural Network
https://arxiv.org/abs/1503.02531
"A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome..." (arxiv.org)
1. Introduction: a common way to improve the performance of machine learning algorithms .. 2025. 2. 1.
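The distillation idea in this paper can be sketched as a soft-target loss: the student is trained to match the teacher's temperature-softened class probabilities. The logits below are toy values, not outputs of real models, and the temperature is an illustrative choice.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between teacher and student soft targets at temperature T."""
    p = softmax(teacher_logits, T)              # teacher's softened targets
    log_q = np.log(softmax(student_logits, T))  # student log-probabilities
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return -(p * log_q).sum(axis=-1).mean() * T**2

teacher_logits = np.array([[8.0, 2.0, -1.0]])
student_logits = np.array([[6.0, 3.0, 0.0]])
loss = distill_loss(student_logits, teacher_logits)
```

A higher temperature flattens the teacher's distribution, exposing the relative probabilities of wrong classes ("dark knowledge") that a hard one-hot label would discard.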
[2025-1] 정인아 - Image Super-Resolution via Iterative Refinement
https://arxiv.org/abs/2104.07636
"We present SR3, an approach to image Super-Resolution via Repeated Refinement. SR3 adapts denoising diffusion probabilistic models to conditional image generation and performs super-resolution through a stochastic denoising process. Inference starts with p..." (arxiv.org)
Intro / Problem: existing GAN-based super-resolution models look plausible to the eye, but .. 2025. 2. 1.
[2025-1] 임재열 - Mamba: Linear-Time Sequence Modeling with Selective State Spaces
Mamba is a model proposed by Albert Gu and Tri Dao in 2024. [Mamba]
https://arxiv.org/abs/2312.00752
"Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated conv..." (arxiv.org)
2025. 2. 1.
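The "selective" part of Mamba's state-space model can be sketched as a linear recurrence whose discretization step depends on the current input, unlike a fixed SSM. All shapes, parameter values, and the scalar step-size map here are toy assumptions for illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_state, seq_len = 4, 8
x = rng.standard_normal(seq_len)              # toy 1-D input sequence

W_delta = rng.standard_normal(1) * 0.1        # maps input -> step size
A = -np.abs(rng.standard_normal(d_state))     # stable diagonal continuous-time A
B = rng.standard_normal(d_state)
C = rng.standard_normal(d_state)

h = np.zeros(d_state)
y = np.empty(seq_len)
for t in range(seq_len):
    delta = np.log1p(np.exp(W_delta * x[t]))  # input-dependent step (softplus)
    Abar = np.exp(delta * A)                  # discretized diagonal transition
    h = Abar * h + delta * B * x[t]           # selective recurrence
    y[t] = C @ h                              # readout
```

Because `delta` varies with `x[t]`, the model can effectively "forget" or "retain" state per token, which is what distinguishes it from time-invariant SSMs while keeping the scan linear in sequence length.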