Mathematics for Machine Learning

Accelerated first-order methods for large-scale optimization

Donghwan Kim (KAIST)

Many modern applications, such as machine learning, require solving large-dimensional optimization problems. First-order methods, such as the gradient method, are widely used, since their computational cost per iteration depends only mildly on the problem dimension. However, they suffer from slow convergence rates compared to second-order methods such as Newton's method. Therefore, accelerating first-order methods has received great interest, leading to the development and extension of the conjugate gradient method, the heavy-ball method, and Nesterov's fast gradient method, which we briefly review in this talk. This talk will then present recent progress on this subject.
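As a minimal numerical sketch of the acceleration the abstract refers to, the snippet below compares plain gradient descent with Nesterov's fast gradient method (the standard momentum sequence t_{k+1} = (1 + sqrt(1 + 4 t_k^2))/2) on a simple ill-conditioned quadratic. The quadratic, step size, and iteration counts are illustrative choices, not taken from the talk.

```python
import numpy as np

def grad_descent(A, b, x0, L, iters):
    """Plain gradient descent with step 1/L on f(x) = 0.5 x'Ax - b'x."""
    x = x0.copy()
    for _ in range(iters):
        x = x - (1.0 / L) * (A @ x - b)
    return x

def nesterov_fgm(A, b, x0, L, iters):
    """Nesterov's fast gradient method with the standard t_k momentum sequence."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = y - (1.0 / L) * (A @ y - b)          # gradient step at the extrapolated point
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```

On a quadratic with condition number 100, the accelerated iterate is markedly closer to the minimizer after the same number of gradient evaluations, reflecting the O(1/k^2) versus O(1/k) convergence rates in function value.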

Adaptive Network Bandwidth Allocation with Gaussian Process Regression

Ganguk Hwang (KAIST)

Network traffic prediction facilitates intelligent network maintenance by enabling efficient network resource allocation. With the development of machine learning algorithms, traffic prediction has attracted increasing attention and has been widely used in resource allocation and traffic management. In this work, we consider input traffic with a quality of service (QoS) requirement and propose an adaptive bandwidth allocation method based on Gaussian Process Regression (GPR) to satisfy the required QoS. We investigate the performance of the proposed method through simulation with real-world traffic as well as computer-generated traffic, and show that the proposed method allocates bandwidth adaptively and efficiently to satisfy the required QoS. This is joint work with Jeongseop Kim at KAIST.
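The abstract does not spell out the allocation rule, but a common GPR-based pattern is to predict the next traffic value and allocate the posterior mean plus a multiple of the posterior standard deviation as headroom, so traffic exceeds the allocation only with small probability. The sketch below implements this idea with a hand-rolled zero-mean GP with an RBF kernel; the kernel hyperparameters, the quantile z, and the `allocate_bandwidth` helper are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def rbf(x1, x2, length=3.0, sigma=1.0):
    """Squared-exponential kernel on 1-D inputs (time indices)."""
    d = x1[:, None] - x2[None, :]
    return sigma**2 * np.exp(-0.5 * (d / length) ** 2)

def gpr_predict(x_train, y_train, x_test, noise=0.1):
    """Posterior mean and standard deviation of a zero-mean GP."""
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(rbf(x_test, x_test)) - np.sum(v * v, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

def allocate_bandwidth(history, z=1.64):
    """Allocate predicted mean + z * std as headroom (z=1.64 ~ 95% one-sided)."""
    y = np.asarray(history, float)
    m = y.mean()                       # center data to match the zero-mean prior
    t = np.arange(len(y), dtype=float)
    mean, std = gpr_predict(t, y - m, np.array([len(y)], float))
    return m + mean[0] + z * std[0]
```

Raising z trades bandwidth efficiency for a stricter QoS guarantee, which is the adaptive knob such a scheme exposes.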

NEAR: Neighborhood Edge AggregatoR for Graph Classification

Hyung Ju Hwang (POSTECH)

Learning graph-structured data with graph neural networks (GNNs) has recently been emerging as an important field because of its wide applicability in bioinformatics, chemoinformatics, social network analysis, and data mining. Recent GNN algorithms are based on neural message passing, which enables GNNs to integrate local structures and node features recursively. However, past GNN algorithms based on 1-hop neighborhood neural message passing risk losing information about local structures and relationships. In this paper, we propose Neighborhood Edge AggregatoR (NEAR), a novel framework that aggregates relations between the nodes in the neighborhood via edges. NEAR, which can be orthogonally combined with previous GNN algorithms, provides integrated information describing which nodes in the neighborhood are connected to each other. Therefore, GNNs combined with NEAR reflect each node's local structure beyond the nodes themselves. Experimental results on multiple graph classification tasks show that our algorithm achieves state-of-the-art results.
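To make the "edges among neighbors" idea concrete: the crudest summary of which 1-hop neighbors of a node are connected to each other is simply the count of edges inside its neighborhood (the node's triangle count), which plain 1-hop message passing cannot distinguish. The sketch below computes that count and feeds it into a simple mean-aggregation message-passing layer. This is an illustrative simplification under assumed names (`near_features`, `gnn_layer`), not the actual NEAR aggregator, which learns the edge aggregation.

```python
import numpy as np

def near_features(A):
    """For each node, count edges among its 1-hop neighbors (its triangle count)."""
    n = A.shape[0]
    feats = np.zeros(n)
    for v in range(n):
        nbrs = np.nonzero(A[v])[0]
        sub = A[np.ix_(nbrs, nbrs)]   # adjacency restricted to v's neighborhood
        feats[v] = sub.sum() / 2.0    # each edge appears twice in a symmetric matrix
    return feats

def gnn_layer(A, H, W, near):
    """One mean-aggregation message-passing layer on features augmented with NEAR-style counts."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    H_aug = np.hstack([H, near[:, None]])     # append the structural feature
    return np.maximum((A_hat / deg) @ H_aug @ W, 0.0)  # mean aggregation + ReLU
```

A triangle and a path of three nodes give every node identical 1-hop degree information, yet the edge count inside the neighborhood separates them (1 per node versus 0), which is exactly the kind of local structure the abstract says 1-hop message passing alone misses.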