Semantic segmentation aims to detect objects of interest and delineate their contours. It is an important research topic in the field of computer vision. As a crucial component of image understanding, semantic segmentation facilitates many high-level vision applications such as autonomous driving, robot navigation, and image synthesis. Semantic segmentation research has made significant progress based on recent advances in deep learning. However, deep learning algorithms are data hungry, and collecting training images for semantic segmentation requires costly pixel-wise annotations. To address this issue, we are developing deep learning algorithms for semantic segmentation with low annotation costs for training data. These algorithms can be divided into a few groups, including 1) unsupervised object and instance co-segmentation, 2) weakly supervised semantic segmentation, and 3) synthesis of training images for segmentation. In this talk, I will present our research results in the aforementioned directions.
Yen-Yu Lin received the B.B.A. degree in Information Management and the M.S. and Ph.D. degrees in Computer Science and Information Engineering from National Taiwan University in 2001, 2003, and 2010, respectively. He is a Professor with the Department of Computer Science, National Chiao Tung University. Prior to that, he worked at the Research Center for Information Technology Innovation, Academia Sinica, from January 2011 to July 2019. His research interests include computer vision, machine learning, and artificial intelligence. He serves as an area chair for several international conferences, including the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), the IEEE International Conference on Computer Vision (ICCV), and the European Conference on Computer Vision (ECCV). He received the Young Scholars' Creativity Award from the Foundation for the Advancement of Outstanding Scholarship, the Exploration Research Award from the Pan Wen Yuan Foundation, the 2018 ACCV Best Student Paper Award Honorable Mention, the 2012 ACM Multimedia Grand Challenge First Prize, and the 2005 ACM Multimedia Best Student Paper Award Runner-up.