All Posts (263)

[2025-1] 박제우 - A Unified Approach to Interpreting Model Predictions
https://arxiv.org/abs/1705.07874
Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. However, the highest accuracy on large modern datasets is often achieved by complex models that even experts struggle to interpret, such as ensembles or deep learning models.
2025. 2. 8.

[2025-1] 황징아이 - Temporal Feature Alignment and Mutual Information Maximization for Video-Based Human Pose Estimation
Paper: https://arxiv.org/abs/2203.15227
Code: https://github.com/Pose-Group/FAMI-Pose
The official implementation of the CVPR 2022 oral paper "Temporal Feature Alignment and Mutual Information Maximization for Video-Based Human Pose Estimation".
2025. 2. 8.

[2025-1] 유경석 - MAISI: Medical AI for Synthetic Imaging
Paper: https://arxiv.org/pdf/2409.11169v2
Demo: https://build.nvidia.com/nvidia/maisi
MAISI (Medical AI for Synthetic Imaging) is a pre-trained latent diffusion model for generating volumetric (3D) CT images. Its volume compression network enables high-resolution CT generation, and the latent diffusion model supports flexible volume dimensions and voxel spacing.
2025. 2. 8.

[2025-1] 이재호 - Titans: Learning to Memorize at Test Time
https://arxiv.org/abs/2501.00663
Ali Behrouz, Peilin Zhong, and Vahab Mirrokni - Google Research
For more than a decade there has been an extensive research effort on how to effectively utilize recurrent models and attention. While recurrent models aim to compress the data into a fixed-size memory (called the hidden state), attention allows attending to the entire context.
2025. 2. 8.