최승진
Intellicode

Contrastive learning has established itself as a cornerstone of unsupervised representation learning, achieving remarkable success across diverse domains. By leveraging self-supervised signals, contrastive methods learn representations that are both robust and transferable, often rivaling or surpassing supervised learning in performance. This talk begins with a brief overview of contrastive learning, focusing on fundamental principles and key techniques. We then shift focus to unsupervised domain adaptation (UDA), where the goal is to use labeled data from a source domain and unlabeled data from a target domain to train a classifier for the target domain. UDA is a critical area for enhancing model robustness under distribution shifts. We explore several effective strategies, including data reweighting, feature alignment, generative approaches for domain translation, cutting-edge algorithms for test-time domain adaptation, and the role of contrastive pre-training in improving domain adaptation. By the end of the talk, participants will gain a comprehensive understanding of how these advancements are shaping the landscape of representation learning and cross-domain generalization.
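For reference, the sketch below shows the InfoNCE objective that underlies many of the contrastive methods the talk surveys, using SimCLR-style in-batch negatives. The function name, tensor shapes, and temperature value are illustrative assumptions, not taken from the talk.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE loss for a batch of positive pairs (z1[i], z2[i]).

    z1, z2: (N, D) embeddings of two augmented views of the same inputs.
    Every other sample in the batch serves as a negative.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                        # (N, N) similarities
    labels = torch.arange(z1.size(0), device=z1.device)       # positives on the diagonal
    return F.cross_entropy(logits, labels)
```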
박노성
KAIST

Scientific machine learning is attracting much attention these days. One representative example is AlphaFold, which is specialized in predicting 3D protein structures. At the same time, many fundamental problems in the sciences amount to solving partial differential equations (PDEs). Accordingly, recent research aims to build scientific foundation models for solving PDEs. However, PDEs vary widely in their characteristics, so the adaptation and generalization of scientific foundation models are emerging as critical topics. In this talk, I will go over scientific foundation models and their adaptation and generalization issues.
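To make "solving a PDE with a neural network" concrete, here is a minimal physics-informed residual loss for the 1D heat equation. The choice of equation, the model interface, and the coefficient are assumptions for illustration only, not the speaker's method.

```python
import torch

def pde_residual_loss(model, x, t, alpha=0.1):
    """Physics-informed residual for the 1D heat equation u_t = alpha * u_xx.

    model: network mapping (x, t) -> u; x, t: (N, 1) tensors.
    The residual is driven to zero at sampled collocation points.
    """
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = model(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return ((u_t - alpha * u_xx) ** 2).mean()
```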
장준혁
Hanyang University

Speech signal processing technology has become increasingly significant not only in telecommunications but also in all areas requiring speech recognition and language models. With the recent introduction of artificial intelligence, the pace of advancement has accelerated dramatically. This presentation will cover key technologies, including speech separation, bandwidth extension, acoustic echo cancellation, packet loss concealment, and localization, while reviewing the latest AI models applied in each area. Through this, we aim to provide insights into the progress brought by AI adoption in the speech processing field.
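As a flavor of one of these technologies, the sketch below shows mask-based speech separation in the time-frequency domain, a common pattern in neural separation systems. Here `mask_net` and the STFT settings are hypothetical placeholders, not any specific model from the talk.

```python
import torch

def separate_source(waveform, mask_net, n_fft=512, hop=128):
    """Minimal mask-based speech separation sketch.

    waveform: (T,) mono signal; mask_net maps a magnitude spectrogram
    (F, frames) to a mask of the same shape with values in [0, 1].
    """
    window = torch.hann_window(n_fft)
    spec = torch.stft(waveform, n_fft=n_fft, hop_length=hop,
                      window=window, return_complex=True)  # (F, frames)
    mask = mask_net(spec.abs())          # network predicts a soft mask
    est = mask * spec                    # apply mask to the complex STFT
    return torch.istft(est, n_fft=n_fft, hop_length=hop, window=window)
```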
문태섭
Seoul National University

In this talk, I will discuss the refinement of negative sampling strategies for two applications of representation learning: continual self-supervised learning (CSSL) and vision-language pre-training (VLP). Regarding CSSL, I will present a novel loss function that leverages the representations of negative samples obtained from the previous model, thereby facilitating improved continual learning of representations. For VLP, I will introduce a novel GRouped mIni-baTch (GRIT) sampling strategy that effectively groups hard negative samples within each batch. Furthermore, I will demonstrate how combining GRIT sampling with label smoothing and correction methods addresses false negatives. These refinements to the negative sampling process yield substantial enhancements in training efficiency and representation quality for both CSSL and VLP.
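To illustrate the general idea of drawing negatives from a previous model in CSSL, here is a generic contrastive loss with frozen negatives. This is a sketch of the concept only, not the speaker's proposed loss; all names, shapes, and the temperature are assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss_prev_negatives(z, z_pos, prev_model, x_neg, temperature=0.1):
    """Contrastive loss whose negatives are embedded by the previous model.

    z, z_pos: (N, D) current-model embeddings of two views of the same inputs.
    prev_model: frozen checkpoint from the previous task, used only for negatives.
    x_neg: (M, ...) inputs used as negatives.
    """
    with torch.no_grad():
        z_neg = F.normalize(prev_model(x_neg), dim=1)          # (M, D), frozen
    z, z_pos = F.normalize(z, dim=1), F.normalize(z_pos, dim=1)
    pos = (z * z_pos).sum(dim=1, keepdim=True) / temperature   # (N, 1)
    neg = z @ z_neg.t() / temperature                          # (N, M)
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(z.size(0), dtype=torch.long, device=z.device)
    return F.cross_entropy(logits, labels)
```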
박은혁
POSTECH

This presentation introduces practical ways to update and personalize Large Language Models (LLMs) and Diffusion Models more efficiently. These models excel at generating text and images, but their large size and high computational demands often make them difficult to apply across fields. We will explore key methods that have been proposed to reduce the costs of updating and maintaining these models. By sharing these approaches, we aim to deepen understanding of LLMs and Diffusion Models and support their wider use in a more accessible and cost-efficient way.
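One widely known example of this kind of cost reduction is low-rank adaptation (LoRA), sketched below under illustrative assumptions (the rank, scaling, and initialization values are arbitrary); the talk may cover different or additional methods.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Low-rank adapter on top of a frozen linear layer (LoRA-style).

    Only A and B are trained, so an update touches a tiny fraction
    of the model's parameters.
    """
    def __init__(self, base: nn.Linear, rank=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # start at zero update
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.t() @ self.B.t()) * self.scale
```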
이상근
Korea University

For us humans to converse naturally with another person, we need a common language together with background knowledge and context shared with our conversation partner. This background knowledge includes 'common sense,' knowledge that we humans know very well. Surprisingly, although common sense is second nature to humans, AI does not yet understand it sufficiently, and for this reason we have not yet built artificial general intelligence (AGI), that is, AI with human-level or greater capability. Under the theme of 'language + common sense,' this talk addresses three issues. First, why is it important for AI to understand 'common sense' in addition to human language? Second, how can AI come to understand human common sense? Finally, we survey the latest methods for developing AI that understands common sense.
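One established way of giving AI systems access to common sense is a commonsense knowledge graph such as ConceptNet, which stores facts as subject-relation-object triples. The triples and query helper below are illustrative examples only, not material from the talk.

```python
# Commonsense facts as subject-relation-object triples (ConceptNet-style).
# The specific triples and helper are illustrative, not from the talk.
triples = [
    ("rain", "CausesDesire", "carry an umbrella"),
    ("knife", "UsedFor", "cutting"),
    ("ice", "HasProperty", "cold"),
]

def query(subject, relation):
    """Return all objects linked to `subject` by `relation`."""
    return [o for s, r, o in triples if s == subject and r == relation]

print(query("ice", "HasProperty"))  # ['cold']
```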
| | | Early Registration (through Jan 31) | Late Registration |
|---|---|---|---|
| Academy | Faculty | KRW 250,000 | KRW 300,000 |
| Academy | Student | KRW 150,000 | KRW 200,000 |
| Industry | | KRW 250,000 | KRW 300,000 |