"Scheduling Optimization for Training and Inference of Machine Learning Models Including LLMs" by Prof. Jiwon Seo (Thursday, May 22)

May 19, 2025

We are pleased to announce the fourth lecture in the 2025 Spring Colloquium Series hosted by the AI Institute.

Register here:
(Offline attendees will receive a Nine Ounce Burger coupon)

  • Registration deadline: Monday, May 19

    • Selected participants will receive a confirmation email after the registration deadline; only those who receive this email will be allowed to attend the lecture in person.

    • If you have registered but are unable to attend due to unforeseen circumstances, please notify us in advance so that others may participate.

Speaker: Prof. Jiwon Seo (School of Advanced Convergence Studies / Department of Electrical and Computer Engineering, College of Engineering)
Date & Time: Thursday, May 22, 2025, 4:00 PM
Venue: Lecture Room, 1st Floor, Haedong Advanced Engineering Hall (Bldg. 303)
Live streaming:
  • YouTube:
  • Zoom:
    • Meeting ID: 859 2362 8089 / Password: 529164


"Scheduling Optimization for Training and Inference of Machine Learning Models Including LLMs"

As AI/ML technologies are increasingly applied across various domains, the field of machine learning systems—which focuses on efficient execution of these technologies—has become ever more critical.

In this lecture, Prof. Jiwon Seo will present recent research from his lab on scheduling optimization techniques for distributed neural network training and LLM inference.
To optimize distributed training, his team proposed a novel scheduling method called Out-of-Order BackProp, which analyzes the dependencies among backpropagation computations.
Building on this analysis, they designed and implemented scheduling algorithms for single-GPU, data-parallel, and pipeline-parallel training setups, achieving up to a 2x speedup over existing techniques.
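The core idea behind out-of-order scheduling of backpropagation can be illustrated with a minimal, hypothetical Python sketch (this is an assumption-laden simplification, not the lab's actual implementation): each layer's activation-gradient computation lies on the critical path, while its weight-gradient computation has no downstream dependents and can therefore be deferred, for example into otherwise idle slots such as communication stalls.

```python
# Minimal sketch of out-of-order backprop scheduling (illustrative only,
# not the published system). Activation-gradient ops ("dX") form the
# critical path; weight-gradient ops ("dW") are deferrable.

from dataclasses import dataclass


@dataclass
class Op:
    name: str
    kind: str  # "act_grad" (critical path) or "wt_grad" (deferrable)


def naive_schedule(num_layers: int) -> list[Op]:
    """Strictly interleaved backprop: each layer computes its activation
    gradient and then its weight gradient immediately."""
    order = []
    for layer in reversed(range(num_layers)):
        order.append(Op(f"dX{layer}", "act_grad"))
        order.append(Op(f"dW{layer}", "wt_grad"))
    return order


def ooo_schedule(num_layers: int) -> list[Op]:
    """Out-of-order variant: issue all activation-gradient ops first so
    the critical path finishes as early as possible; weight-gradient ops
    are deferred and issued afterwards (in practice, into idle gaps)."""
    critical = [Op(f"dX{layer}", "act_grad") for layer in reversed(range(num_layers))]
    deferred = [Op(f"dW{layer}", "wt_grad") for layer in reversed(range(num_layers))]
    return critical + deferred


if __name__ == "__main__":
    print([op.name for op in naive_schedule(3)])
    print([op.name for op in ooo_schedule(3)])
```

In the naive ordering, every weight-gradient computation delays the next layer's critical-path work; reordering so the critical path runs uninterrupted is what creates room for the speedups described above.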

For LLM inference optimization, they introduced a system called ExeGPT, which significantly improves both latency and throughput.
Under latency constraints, ExeGPT achieved up to a 15x speedup over NVIDIA's FasterTransformer.

Prof. Jiwon Seo leads the Machine Learning Systems Lab at Seoul National University, where he is affiliated with the School of Advanced Convergence Studies, the Department of Electrical and Computer Engineering, and the Interdisciplinary Program in Artificial Intelligence.