Institute of Information Science, Academia Sinica

Events


[IIS PI Lecture Series 1/3] Learning efficient deep features for retrieval and continual lifelong learning of deep models


  • Speaker: Dr. Chu-Song Chen (Institute of Information Science, Academia Sinica)
    Host: Hong-Yuan Mark Liao (廖弘源)
  • Time: 2019-10-01 (Tue.) 14:00 ~ 15:00
  • Location: Auditorium 106, new IIS building
Abstract

In this talk, I will first give a quick review of my research, and then introduce two topics. The first is learning efficient features for retrieval. A simple yet effective supervised deep hashing approach will be presented. We assume that the semantic labels are governed by several latent attributes, each either 'on' or 'off', and that classification relies on these attributes. Our approach, dubbed supervised semantics-preserving deep hashing (SSDH), constructs the hash functions as a latent layer in a deep network, and the binary codes are learned by minimizing an objective function defined over the classification error and other desirable hash-code properties. With this design, SSDH has the nice characteristic that classification and retrieval are unified in a single learning model.

The second topic is continual lifelong learning of deep models. I will introduce an approach that leverages the principles of deep model compression with weight pruning, critical-weight selection, and progressive network expansion. By integrating these steps in an iterative manner, we obtain an incremental learning method that scales to the number of sequential tasks in a continual-learning process. Our approach, dubbed compacting, picking and growing (CPG), has several favorable characteristics. First, it avoids forgetting, i.e., it learns new tasks while remembering all previous tasks. Second, it allows model expansion but maintains model compactness when handling sequential tasks. Moreover, we show that the knowledge accumulated through learning previous tasks helps adapt a better model for new tasks than training a model independently for each task. Experimental results show that our approach can incrementally learn a deep model to tackle multiple tasks without forgetting, while maintaining model compactness and achieving more satisfactory performance than individual-task training.
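The core idea of SSDH — a latent hash layer whose relaxed binary outputs feed the classifier, trained with a classification loss plus binarization and bit-balance terms — can be illustrated with a toy NumPy sketch. This is not the authors' implementation; the dimensions, the stand-in features, and the exact loss weighting are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (illustration only, not the paper's setup).
d_feat, n_bits, n_classes, n = 16, 8, 4, 32

X = rng.normal(size=(n, d_feat))          # stand-in for deep features
y = rng.integers(0, n_classes, size=n)    # semantic labels

W_h = rng.normal(scale=0.1, size=(d_feat, n_bits))    # latent hash layer
W_c = rng.normal(scale=0.1, size=(n_bits, n_classes)) # classifier on codes

def forward(X):
    h = 1.0 / (1.0 + np.exp(-(X @ W_h)))  # relaxed binary codes in (0, 1)
    logits = h @ W_c                       # classification driven by the codes
    return h, logits

def ssdh_style_loss(h, logits, y):
    # Cross-entropy classification term (softmax over logits).
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    ce = -np.log(p[np.arange(len(y)), y] + 1e-12).mean()
    # Minimizing -mean((h - 0.5)^2) pushes activations away from 0.5,
    # i.e., toward binary 0/1 values.
    quant = -np.mean((h - 0.5) ** 2)
    # Encourage each bit to be 'on' about half the time (balanced bits).
    balance = np.mean((h.mean(axis=0) - 0.5) ** 2)
    return ce + quant + balance

h, logits = forward(X)
loss = ssdh_style_loss(h, logits, y)
codes = (h >= 0.5).astype(np.uint8)       # final binary codes by thresholding
print(codes.shape)
```

After training such a network, retrieval reduces to Hamming-distance comparison of the thresholded codes, while the same model performs classification — the unification the abstract refers to.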
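The CPG cycle — compact a trained model by magnitude pruning, let the next task "pick" the frozen old weights read-only while retraining only the released slots, and "grow" extra capacity when needed — can likewise be sketched on a single toy weight matrix. All names and sizes here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def prune_mask(W, keep_ratio):
    """Keep the largest-magnitude fraction of weights (magnitude pruning)."""
    k = int(np.ceil(keep_ratio * W.size))
    thresh = np.sort(np.abs(W), axis=None)[-k]
    return np.abs(W) >= thresh

# Hypothetical single weight matrix standing in for a whole network.
W = rng.normal(size=(8, 8))

# Compact: after task 1, keep 50% of the weights and zero the rest.
mask_t1 = prune_mask(W, keep_ratio=0.5)
W = W * mask_t1                  # preserved (frozen) weights for task 1

# Pick: task 2 reuses the frozen task-1 weights read-only and retrains
# only the released slots (here we just fill them with new values).
released = ~mask_t1
W_task2 = W.copy()
W_task2[released] = rng.normal(size=released.sum())

# Grow: if the released capacity is insufficient, expand with new columns.
grown = np.concatenate([W_task2, rng.normal(size=(8, 2))], axis=1)

# No forgetting: task-1 weights are bit-identical in both models.
assert np.allclose(W[mask_t1], W_task2[mask_t1])
print(grown.shape)
```

Iterating compact → pick → grow per task keeps the model small (old weights are shared, not duplicated) while each task's function is frozen exactly, which is how the approach avoids forgetting by construction.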

Bio

Chu-Song Chen received a B.S. degree in Control Engineering from National Chiao Tung University, Taiwan, in 1989, and M.S. and Ph.D. degrees from the Department of Computer Science and Information Engineering, National Taiwan University, in 1991 and 1996, respectively. He is now a research fellow of the Institute of Information Science, Academia Sinica, Taiwan, and an adjunct professor of the Graduate Institute of Networking and Multimedia, National Taiwan University. From November 2008 to March 2015, Dr. Chen served as a deputy director of the Research Center for Information Technology Innovation, Academia Sinica. His research interests include computer vision, signal/image processing, multimedia analysis, and pattern recognition. In 2007-2008, he served as the Secretary-General of the Image Processing and Pattern Recognition (IPPR) Society, Taiwan, one of the regional societies of the International Association for Pattern Recognition (IAPR), and he is currently a governing board member of IPPR. He served on the editorial boards of The Open Virtual Reality Journal (Bentham Science Publishers) in 2008-2009 and IPSJ Transactions on Computer Vision and Applications (Information Processing Society of Japan) in 2010-2013. He is currently on the editorial boards of Machine Vision and Applications (Springer), Journal of Multimedia (Academy Publisher), and Journal of Information Science and Engineering (IIS, Academia Sinica). He served as program co-chair of ICDAT 2005 and ICDAT 2006, theme chair of PSIVT 2009, area chair of ACCV 2009, ACCV 2010, and NBiS 2010, program co-chair of IMV 2012 and IMV 2013, tutorial chair of ACCV 2014, general chair of IMEV 2014, and workshop chair of ACCV 2016.