August 19 (Mon)
09:00 - 10:30
10:30 - 12:00
16:30 - 18:00
August 20 (Tue)
09:00 - 10:30
10:30 - 12:00
16:30 - 18:00
Prof. Noseong Park
(KAIST)
Biography | |
---|---|
2024-present | KAIST School of Computing (primary appointment) / Graduate School of AI (adjunct) / Graduate School of Data Science (adjunct) |
2023-2024 | Associate Professor, Department of Artificial Intelligence, Yonsei University |
2020-2023 | Assistant Professor, Department of Artificial Intelligence, Yonsei University |
2018-2020 | George Mason University, Assistant Professor |
2016-2018 | University of North Carolina at Charlotte, Assistant Professor |
2021-present | CAIO, Oncocross Co., Ltd. |
Science for Deep Learning & Deep Learning for Science (90 min)
Physical laws, expressed in the form of differential equations, embody the essence of human intelligence. Interestingly, differential equations also play a vital role in the design of modern deep neural networks; for instance, diffusion models and graph convolutional networks have been greatly inspired by them. In the first part of this talk, I will introduce the connection between differential equations and deep learning. In the second part, I will discuss how deep neural networks can be used to solve differential equations in fields such as the natural sciences, social sciences, engineering, and economics.
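To make the second theme concrete, here is a minimal physics-informed-network-style sketch of a neural network solving a differential equation in PyTorch; the specific ODE (du/dx = -u with u(0) = 1), the architecture, and the training setup are illustrative assumptions, not material from the lecture.

```python
# Minimal physics-informed neural network (PINN) sketch: fit u(x) to satisfy
# the ODE du/dx = -u with u(0) = 1 (analytic solution: exp(-x)).
# The ODE, architecture, and hyperparameters are illustrative choices only.
import torch
import torch.nn as nn

torch.manual_seed(0)

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(128, 1, requires_grad=True)   # collocation points in [0, 1]
    u = net(x)
    du_dx = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    residual = du_dx + u                          # residual of du/dx = -u
    x0 = torch.zeros(1, 1)
    # physics loss on the residual + initial-condition loss at x = 0
    loss = (residual ** 2).mean() + (net(x0) - 1.0).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, net(x) should approximate exp(-x) on [0, 1].
```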
Prof. Jinwoo Shin
(KAIST)
Biography | |
---|---|
2019-present | Professor, Kim Jaechul Graduate School of AI, KAIST |
2013-present | Professor, School of Electrical Engineering, KAIST |
2012-2013 | Researcher, IBM T. J. Watson Research Center |
2010-2012 | Researcher, Georgia Institute of Technology |
2005-2010 | Ph.D. in Mathematics, MIT |
Few-shot Tabular Learning: Self-supervised and LLM-based Approaches (90 min)
Learning with limited labeled tabular samples is an important problem for industrial machine learning applications, as acquiring annotations for tabular data is often too costly. In this talk, I will cover recent methods for few-shot tabular learning. In particular, I will introduce two types of methods: (a) self-supervised learning and (b) LLM-based methods, where (a) utilizes unlabeled data for effective tabular representation learning, and (b) leverages the power of pre-trained large language models.
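As a sketch of the LLM-based direction, the example below serializes labeled rows into natural-language demonstrations and builds a few-shot prompt, in the spirit of methods such as TabLLM; the column names, the serialization format, and the llm_complete callable are illustrative assumptions rather than the speaker's own method.

```python
# Sketch of LLM-based few-shot tabular classification: serialize labeled rows
# into natural-language demonstrations, append the query row, and ask an LLM
# for the label. Column names, data, and `llm_complete` are hypothetical.
from typing import Callable

def serialize_row(row: dict) -> str:
    return ", ".join(f"{col} is {val}" for col, val in row.items())

def build_prompt(support: list[tuple[dict, str]], query: dict,
                 label_name: str = "income") -> str:
    lines = [f"Predict the {label_name} for each person."]
    for row, label in support:                    # the few labeled "shots"
        lines.append(f"{serialize_row(row)}. {label_name}: {label}")
    lines.append(f"{serialize_row(query)}. {label_name}:")
    return "\n".join(lines)

def classify(support, query, llm_complete: Callable[[str], str]) -> str:
    # `llm_complete` is any text-completion function (e.g. a hosted LLM API).
    return llm_complete(build_prompt(support, query)).strip()

support = [({"age": 42, "education": "Masters", "hours_per_week": 50}, ">50K"),
           ({"age": 23, "education": "HS-grad", "hours_per_week": 20}, "<=50K")]
query = {"age": 37, "education": "Bachelors", "hours_per_week": 45}
print(build_prompt(support, query))
```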
Dr. Jehyun Lee
(Korea Institute of Energy Research)
Biography | |
---|---|
2018-present | Senior/Principal Researcher, Energy AI and Computational Science Laboratory, Korea Institute of Energy Research |
2024-present | Expert Committee Member for Digital Transformation and Convergence R&D, National Research Council of Science and Technology (NST) |
2023-present | Adjunct/Full-time Professor, University of Science and Technology (UST) |
2023-present | Microsoft MVP (AI) |
2013-2017 | Staff/Senior/Principal Researcher, Samsung Advanced Institute of Technology / Semiconductor R&D Center, Samsung Electronics |
2011-2012 | Research Assistant Professor, Department of Materials Science and Engineering, Seoul National University |
2009-2011 | Ph.D. in Solid State Physics, Vienna University of Technology |
2001-2008 | Combined M.S.-Ph.D. Program, Department of Materials Science and Engineering, Seoul National University |
Leveraging LLMs for Practical Research Applications (90 min)
This lecture explores the practical applications of large language models (LLMs) in research, focusing on information retrieval and paper summarization. We will examine the capabilities and limitations of AI agents like Perplexity.ai and tools such as SciSpace, highlighting the constraints imposed by their databases. The lecture will also introduce the concept of creating personalized GPTs and present a case study from the Korea Institute of Energy Research, showcasing their successful implementation of LLMs. Attendees will gain insights into leveraging LLMs for research, understanding their limitations, and discovering strategies to adapt these tools to their specific requirements.
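As a small, self-contained illustration of the retrieval side of such workflows, the sketch below ranks a local set of abstracts against a query with TF-IDF and assembles a summarization prompt; the documents, query, and prompt wording are made-up assumptions and do not reflect how Perplexity.ai, SciSpace, or the KIER deployment work internally.

```python
# Toy "find relevant papers, then ask an LLM to summarize them" pipeline.
# Uses TF-IDF retrieval over a local list of abstracts; the documents and
# prompt template are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Perovskite solar cells with improved stability under thermal stress.",
    "A transformer-based model for forecasting regional electricity demand.",
    "Self-supervised learning for defect detection in battery electrodes.",
]
query = "machine learning for energy demand forecasting"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(abstracts)
scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
top_k = scores.argsort()[::-1][:2]               # two most similar abstracts

context = "\n".join(f"- {abstracts[i]}" for i in top_k)
prompt = (f"Summarize the following abstracts with respect to the question "
          f"'{query}':\n{context}")
print(prompt)    # this prompt would then be sent to an LLM of your choice
```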
Dr. Saehoon Kim
(Kakao Brain)
Biography | |
---|---|
2020-present | Director, Visual Generative Modelling, Kakao Brain |
2017-2019 | Research Team Lead, AITRICS |
2018 | Ph.D. in Computer Science and Engineering, POSTECH |
2012 | Research Intern, Microsoft Research Asia (MSRA) / Microsoft Research (MSR) |
2009 | B.S. in Computer Science and Engineering, POSTECH |
Opportunities and Challenges of Medical Foundation Models (90 min)
The growing popularity of general-purpose multi-modal foundation models has sparked interest in their application within medical domains. Traditional approaches to developing machine learning models often rely on a limited set of annotations from medical experts, which is not scalable due to the high cost of labeling. In contrast, large-scale self-supervised approaches can utilize the abundant weakly supervised data available in medical institutions, enabling the scalable development of models capable of performing many tasks, rather than focusing on a single specific task. This talk covers recent advances in medical foundation models, including the types of tasks these models can address, the methods for their development, and the strategies for evaluating their performance. I will conclude by outlining the challenges that must be addressed for these models to be truly beneficial to medical professionals and to be integrated into daily practice.
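To illustrate one common way such weak supervision is exploited, the sketch below computes a CLIP-style symmetric contrastive loss between image embeddings and paired report embeddings; the placeholder encoders, dimensions, and random data are assumptions for illustration, not a description of any specific medical foundation model.

```python
# CLIP-style contrastive objective between images and their paired reports,
# the kind of weakly supervised signal abundant in medical institutions.
# The two encoders are placeholder modules; a real system would use, e.g.,
# a vision transformer and a text transformer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PairedEncoder(nn.Module):
    def __init__(self, image_dim=2048, text_dim=768, embed_dim=256):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, embed_dim)  # stands in for an image encoder
        self.text_proj = nn.Linear(text_dim, embed_dim)    # stands in for a report encoder
        self.logit_scale = nn.Parameter(torch.tensor(2.0))

    def forward(self, image_feats, text_feats):
        img = F.normalize(self.image_proj(image_feats), dim=-1)
        txt = F.normalize(self.text_proj(text_feats), dim=-1)
        logits = self.logit_scale.exp() * img @ txt.t()    # pairwise similarities
        targets = torch.arange(logits.size(0))             # matching pairs on the diagonal
        # symmetric cross-entropy: image-to-text and text-to-image
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2

model = PairedEncoder()
loss = model(torch.randn(8, 2048), torch.randn(8, 768))   # a batch of 8 image-report pairs
loss.backward()
```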
Dr. Seungjin Choi
(IntelliCode)
Biography | |
---|---|
2022-present | Research Director, Intellicode |
2019-2021 | CTO, BARO AI & Executive Advisor, BARO AI Academy |
2001-2019 | Professor, Department of Computer Science and Engineering, POSTECH |
2019-2021 | President, Artificial Intelligence Society, Korean Institute of Information Scientists and Engineers (KIISE) |
2018 | Advisory Professor, Samsung Advanced Institute of Technology, Samsung Electronics |
2017-2018 | Advisory Professor, Samsung Research AI Center |
2016-2017 | Advisory Professor, Shinhan Card Big Data Center |
2014-2016 | Founding Chair, Machine Learning Group, KIISE |
CLIP: Contrastive Language-Image Pre-Training and Prompt Tuning (90 min)
Contrastive Language-Image Pre-Training (CLIP) has emerged as a powerful paradigm in machine learning, bridging the gap between natural language understanding and image processing tasks. This tutorial provides a comprehensive overview of CLIP and its underlying principles, beginning with an introduction to contrastive learning techniques. We delve into the architecture and mechanics of CLIP models, highlighting their ability to jointly understand text and image inputs without task-specific fine-tuning. The tutorial then explores advanced techniques for prompt tuning in CLIP models, where a small number of prompt parameters are tuned on downstream data without fully fine-tuning the model. It covers context optimization (CoOp), which refines prompt embeddings to enhance the relevance and specificity of model responses. Conditional context optimization (CoCoOp) techniques are discussed, allowing for tailored adjustments of prompts based on specific contextual conditions or user-defined criteria. Finally, I introduce test-time prompt tuning strategies, enabling dynamic adaptation of prompts during inference to improve model performance in real-world scenarios.
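To make the prompt-tuning idea concrete, here is a minimal CoOp-style sketch in which a few learnable context vectors are prepended to class-name embeddings and are the only parameters optimized, while the (placeholder) encoders stay frozen; the dimensions, encoders, and dummy data are illustrative assumptions rather than the CoOp reference implementation.

```python
# CoOp-style prompt tuning sketch: learn a few context vectors prepended to
# class-name embeddings; the image/text encoders are frozen placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim, n_ctx, n_classes = 128, 4, 3

# Frozen stand-ins for CLIP's encoders and class-name token embeddings.
image_encoder = nn.Linear(512, embed_dim).requires_grad_(False)
text_encoder = nn.Linear(embed_dim, embed_dim).requires_grad_(False)
class_name_emb = torch.randn(n_classes, embed_dim)        # e.g. "cat", "dog", "car"

ctx = nn.Parameter(torch.randn(n_ctx, embed_dim) * 0.02)  # the only trainable parameters
optimizer = torch.optim.SGD([ctx], lr=0.01)

def class_text_features():
    # Prepend the shared learnable context to each class embedding, then pool.
    prompts = torch.cat([ctx.expand(n_classes, n_ctx, embed_dim),
                         class_name_emb.unsqueeze(1)], dim=1)      # (C, n_ctx+1, D)
    return F.normalize(text_encoder(prompts.mean(dim=1)), dim=-1)  # (C, D)

for step in range(100):
    images = torch.randn(16, 512)                 # a labeled downstream batch (dummy data)
    labels = torch.randint(0, n_classes, (16,))
    img_feat = F.normalize(image_encoder(images), dim=-1)
    logits = 100.0 * img_feat @ class_text_features().t()
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```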
Prof. Eunhyeok Park
(POSTECH)
Biography | |
---|---|
2020-present | Assistant Professor, Graduate School of Artificial Intelligence, POSTECH |
2020 | Postdoctoral Researcher, Inter-university Semiconductor Research Center, Seoul National University |
2018 | Research Intern, Mobile Vision Team, Meta |
2015-2020 | Ph.D. in Computer Science and Engineering, Seoul National University |
2014-2015 | M.S. in Electronic and Electrical Engineering, POSTECH |
LLM Compression and Acceleration Techniques (90 min)
In this talk, we will explore recent studies on LLM quantization aimed at making inference costs affordable in practical environments. The unique characteristics of LLMs have led to quantization algorithms with features distinct from prior art, and several of these works highlight important design considerations. Additionally, I will introduce an advanced weight quantization scheme called outlier-aware weight quantization (OWQ), which aims to minimize the memory footprint of LLMs through low-precision representation.
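As a rough illustration of the outlier-aware idea, the sketch below keeps a few large-magnitude input columns of a weight matrix in FP16 and rounds the rest to INT4 with per-output-channel scales; the column-selection heuristic and storage layout are simplified assumptions, not the OWQ algorithm as published.

```python
# Simplified outlier-aware weight quantization sketch: keep a few "outlier"
# input columns of a linear layer in FP16 and round the rest to INT4 with a
# per-output-channel scale. The column-selection heuristic (plain magnitude)
# is a simplification; OWQ itself uses a sensitivity measure tied to
# activation outliers.
import torch

def quantize_outlier_aware(weight: torch.Tensor, n_outlier_cols: int = 8, n_bits: int = 4):
    # weight: (out_features, in_features)
    col_norm = weight.abs().amax(dim=0)                    # per-input-column magnitude
    outlier_idx = torch.topk(col_norm, n_outlier_cols).indices
    keep_mask = torch.zeros(weight.shape[1], dtype=torch.bool)
    keep_mask[outlier_idx] = True

    dense = weight[:, ~keep_mask]                          # columns to quantize
    qmax = 2 ** (n_bits - 1) - 1
    scale = dense.abs().amax(dim=1, keepdim=True) / qmax   # per-output-channel scale
    q = torch.clamp(torch.round(dense / scale), -qmax - 1, qmax).to(torch.int8)

    outliers_fp16 = weight[:, keep_mask].to(torch.float16) # kept in higher precision
    return q, scale, outliers_fp16, keep_mask

def dequantize(q, scale, outliers_fp16, keep_mask):
    w = torch.empty(q.shape[0], keep_mask.numel())
    w[:, ~keep_mask] = q.float() * scale
    w[:, keep_mask] = outliers_fp16.float()
    return w

w = torch.randn(256, 1024)
parts = quantize_outlier_aware(w)
print((dequantize(*parts) - w).abs().mean())   # reconstruction error of the mixed-precision weight
```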