
Institute of Information Science, Academia Sinica



System Research Group


The System Research Group currently has six members. Its primary research areas include Embedded Systems, Internet of Things Systems, and Parallel and Distributed Processing.

Members

Embedded Systems Lab

Yuan-Hao Chang

This team is devoted to embedded systems, with a primary focus on memory and storage systems. In the line of flash-memory storage systems (e.g., SSDs), we introduced the design of a snapshot-consistent flash translation layer (SCFTL) for flash drives that guarantees recovery to the state immediately before a crash. This work is the first attempt to leverage formal verification techniques to ensure the correctness of a complex FTL implementation, which in turn enables more efficient designs of the upper layers in the storage stack. The result was published at USENIX OSDI 2020, one of the most prestigious conferences in systems research. In the line of non-volatile memory (NVM), we studied the key technologies for training neural networks (NNs) on NVM-based systems and developed a new technique to make such training practical. By exploiting the approximate nature of neural networks, we proposed using the "lossy write" operation of NVM to resolve the write-energy and training-performance issues of NN training without sacrificing the prediction accuracy of the trained network. This work is the first to combine theory and practicality to enable NN training on NVM-based systems with high performance, low energy consumption, and high prediction accuracy. It was accepted by the top conference CODES+ISSS 2019 and received the Best Paper Award. Follow-up research also received Best Paper Awards at the top conferences CODES+ISSS 2022 and ISLPED 2020.
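The snapshot-consistency idea can be illustrated with a toy model. The sketch below is a hypothetical simplification, not the published SCFTL design: all class and method names are illustrative assumptions. Writes after a snapshot go to fresh physical pages out of place, and the logical-to-physical mapping only becomes durable at commit time, so a crash rolls the drive back to the last committed snapshot.

```python
# Hypothetical sketch of snapshot consistency in a flash translation
# layer (FTL). Illustrative only -- not the published SCFTL design.

class SnapshotFTL:
    def __init__(self, num_pages):
        self.flash = [None] * num_pages  # physical pages
        self.next_free = 0               # simple append-only allocator
        self.committed_map = {}          # durable logical -> physical map
        self.pending_map = {}            # updates since the last snapshot

    def write(self, lpn, data):
        """Write logical page lpn out of place; durable only after commit()."""
        ppn = self.next_free
        self.next_free += 1
        self.flash[ppn] = data
        self.pending_map[lpn] = ppn

    def read(self, lpn):
        ppn = self.pending_map.get(lpn, self.committed_map.get(lpn))
        return None if ppn is None else self.flash[ppn]

    def commit(self):
        """Take a snapshot: atomically fold pending updates into the map."""
        self.committed_map.update(self.pending_map)
        self.pending_map = {}

    def crash_and_recover(self):
        """Simulate a crash: uncommitted mapping updates are lost."""
        self.pending_map = {}


ftl = SnapshotFTL(16)
ftl.write(0, "v1")
ftl.commit()              # snapshot: lpn 0 -> "v1" is now durable
ftl.write(0, "v2")        # not yet committed
ftl.crash_and_recover()
print(ftl.read(0))        # recovers to the committed snapshot: v1
```

The property that reads after recovery always reflect the last commit is exactly the kind of invariant that lends itself to formal verification.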

Ongoing Projects

  • Academia Sinica Deep Cultivation project: Processing-in-Memory: Opportunities in the Post-von-Neumann Era【Yuan-Hao Chang, 2022/01–2026/12】
  • NSTC Mainstream Leap project: Design and Optimization for High-performance and Ultra-scale Storage Systems【Yuan-Hao Chang, 2021/08–2025/07】
  • IIS Cooperation project: Design, Optimization, Simulation/Verification with In/Near Memory and Storage Computing【Yuan-Hao Chang, Yu-Feng Chen, 2023/01–2025/01】

Internet of Things System Lab

Ling-Jyh Chen

The team's main research areas are networked sensing systems and their applications. In the area of Internet of Things (IoT) systems, we developed AirBox, a large-scale PM2.5 monitoring system. The project engages citizens in environmental sensing and enables them to build low-cost PM2.5 sensing devices on their own. It also facilitates PM2.5 monitoring at a finer spatio-temporal granularity and enriches environmental data analysis by making all measurement data freely available. We have deployed more than 20,000 devices in 59 countries and developed algorithms for device ranking, emission-source detection, and anomaly detection. Based on these results, we have collaborated with government agencies on smart governance, smart inspection, and related issues. Ongoing research focuses on spatio-temporal data analysis for IoT systems and their applications. We intend to apply our results to other real-world networked sensing systems, such as participatory sensing for urban profiling, environmental monitoring, and wearable sensing and computing. Moreover, we wish to employ advanced artificial-intelligence techniques to make networked sensing systems smarter, and to integrate our results with emerging social computing systems to facilitate cyber-physical socially networked systems.
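To give a flavor of anomaly detection in a large sensor network, the sketch below is a hypothetical simplification, not the AirBox algorithm: the function name, threshold, and robust-statistics rule are illustrative assumptions. A device is flagged when its reading deviates from the network median by more than a multiple of the median absolute deviation (MAD).

```python
# Hypothetical sketch of sensor anomaly detection for a PM2.5 network.
# Illustrative only -- not the published AirBox algorithm.

from statistics import median

def flag_anomalies(readings, threshold=2.0):
    """readings: dict of device id -> PM2.5 value for one time window.
    Flags devices whose value differs from the network median by more
    than `threshold` times the median absolute deviation (MAD)."""
    values = list(readings.values())
    med = median(values)
    mad = median(abs(v - med) for v in values) or 1.0  # guard against zero MAD
    return sorted(
        dev for dev, v in readings.items()
        if abs(v - med) / mad > threshold
    )

window = {"box-1": 18.2, "box-2": 19.5, "box-3": 17.8,
          "box-4": 95.0,   # likely a faulty or locally affected sensor
          "box-5": 18.9}
print(flag_anomalies(window))   # ['box-4']
```

In a real deployment, a flagged device would then be cross-checked against its spatial neighbors and its own history before being ranked down or reported.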

Ongoing Projects

  • NSTC project: Intelligent Low-cost Air Quality Sensing with Agility on the Edge【Ling-Jyh Chen, 2020/08–2023/07】
  • Academia Sinica Sustainability project: Research of Low-Cost Sensing Systems for Noise Measurement and Sound Classification【Ling-Jyh Chen, Da-Chien Jan (RCHSS, AS), 2020/01–2022/12】

Parallel and Distributed Processing Lab

Ding-Yong Hong, Jan-Jan Wu, and Chien-Min Wang

Recent research focuses on optimizing the execution of deep learning workloads on parallel computing platforms. Modern computing platforms usually contain several levels of heterogeneity, including CPUs, GPUs, AI accelerators, and FPGAs. Meanwhile, deep learning models are becoming increasingly complex, e.g., hybrid models and multi-model workloads. Our goal is to provide efficient solutions for executing such complex models on heterogeneous systems. We achieve this by designing fine-grained scheduling algorithms that map model execution onto the heterogeneous devices, and by devising pipeline scheduling to exploit maximum parallelism during execution. On an edge device with an ARM CPU and an EdgeTPU accelerator, our scheduling solution achieves a 55x speedup over CPU-only execution for a DenseNet/LSTM-based video captioning model. This is the first work to achieve real-time video captioning on an edge device for such a large model. In addition, we tackle the under-utilization problem of running complex CNN models with branch structures on modern GPUs. By exploiting inter-operator parallelism and concurrent kernel execution, we utilize hardware resources more efficiently; our method achieves 3.8x better performance on CNNs than sequential execution. This result won the Best Paper Runner-up Award at IEEE ICPADS 2022. Our next step is to design new hardware-aware pruning and quantization methods for complex models and to develop new compiler optimization techniques for compressed network models.
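The idea of mapping a model's operator graph onto heterogeneous devices can be sketched with a simple earliest-finish-time heuristic. This is a hypothetical illustration, not the group's published algorithm: the operator graph, cost table, and device names are illustrative assumptions. Each operator, once its predecessors finish, is placed on the device that completes it earliest, which lets parallel branches run concurrently on different devices.

```python
# Hypothetical earliest-finish-time (EFT) scheduling of a model's
# operator graph across heterogeneous devices. Illustrative only.

def schedule(ops, deps, cost):
    """ops: operator names in topological order.
    deps: dict op -> list of predecessor ops.
    cost: dict (op, device) -> execution time.
    Returns (device assignment per op, finish time per op)."""
    devices = sorted({d for (_, d) in cost})
    free_at = {d: 0 for d in devices}   # when each device is next free
    finish, assign = {}, {}
    for op in ops:
        ready = max((finish[p] for p in deps.get(op, [])), default=0)
        # pick the device on which this op finishes earliest
        best = min(devices,
                   key=lambda d: max(ready, free_at[d]) + cost[(op, d)])
        start = max(ready, free_at[best])
        finish[op] = start + cost[(op, best)]
        free_at[best] = finish[op]
        assign[op] = best
    return assign, finish

# Two parallel branches (conv_a, conv_b) joined by a concat: with two
# devices available, the branches can execute concurrently.
ops = ["input", "conv_a", "conv_b", "concat"]
deps = {"conv_a": ["input"], "conv_b": ["input"],
        "concat": ["conv_a", "conv_b"]}
cost = {("input", "cpu"): 1, ("input", "tpu"): 1,
        ("conv_a", "cpu"): 8, ("conv_a", "tpu"): 2,
        ("conv_b", "cpu"): 3, ("conv_b", "tpu"): 2,
        ("concat", "cpu"): 1, ("concat", "tpu"): 1}
assign, finish = schedule(ops, deps, cost)
print(assign["conv_a"], assign["conv_b"], finish["concat"])  # tpu cpu 5
```

Running both branches sequentially on the CPU alone would take 1 + 8 + 3 + 1 = 13 time units, whereas the heterogeneous schedule finishes at 5, illustrating the benefit of inter-operator parallelism.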

Ongoing Projects

  • IIS Cooperation project: Heterogeneous Accelerators Architecture for Deep Learning: Virtual Platform and Compiler【Ding-Yong Hong, Jan-Jan Wu, 2021/01–2023/12】
  • NSTC project: Optimizing Performance of Hybrid DNN on Heterogeneous System Architecture【Jan-Jan Wu, 2021/08–2023/07】
  • NSTC project: Virtual Platform Design for Deep Learning on Multi-core Accelerators【Ding-Yong Hong, 2021/08–2023/07】