EigenGS Representation: From Eigenspace to Gaussian Image Space
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2025
Lo-Wei Tai, Ching-En Li, Cheng-Lin Chen, Chih-Jung Tsai, Hwann-Tzong Chen and Tyng-Luh Liu
Abstract
Principal Component Analysis (PCA), a classical dimensionality reduction technique, and Gaussian Splatting, a recent high-quality image synthesis method, represent fundamentally different approaches to image representation. Despite these significant differences, we present EigenGS, a novel method that bridges these two paradigms. By establishing an efficient transformation pipeline between eigenspace and image-space Gaussian representations, our approach enables instant initialization of Gaussian parameters for new images without requiring per-image training from scratch. Our method also introduces a frequency-aware learning mechanism that encourages Gaussians to adapt to different scales in order to better model spatial frequencies, effectively preventing artifacts in high-resolution reconstruction. Extensive experiments demonstrate that EigenGS not only achieves superior reconstruction quality but also dramatically accelerates convergence. The results highlight EigenGS's effectiveness and its ability to generalize across images with varying resolutions and diverse categories. This makes high-quality Gaussian Splatting practically viable for real-time applications.

Not All LLM-Generated Data Are Equal: Rethinking Data Weighting in Text Classification
The Thirteenth International Conference on Learning Representations (ICLR 2025 Spotlight), April 2025
Hsun-Yu Kuo, Yin-Hsiang Liao, Yu-Chieh Chao, Wei-Yun Ma, Pu-Jen Cheng
Abstract
Synthetic data augmentation via Large Language Models (LLMs) allows researchers to leverage additional training data, thus enhancing the performance of downstream tasks, especially when real-world data is scarce. However, the generated data can deviate from the real-world data, and this misalignment can lead to deficient results when the trained model is applied to real applications. Therefore, we propose efficient weighted-loss approaches that align synthetic data with the real-world distribution by emphasizing high-quality and diversified data generated by LLMs, using merely a tiny amount of real-world data. We empirically assessed the effectiveness of our methods on multiple text classification tasks, and the results showed that leveraging our approaches on a BERT-level model robustly outperformed standard cross-entropy and other data-weighting approaches, providing potential solutions for effectively leveraging synthetic data from any suitable data generator.
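To make the weighting idea concrete, here is a minimal sketch of a per-example weighted cross-entropy loss in PyTorch. The fixed down-weighting factor for synthetic examples is a hypothetical placeholder, not the paper's weighting scheme, which derives weights that emphasize high-quality, diversified LLM-generated data using a small amount of real data.

```python
# Minimal sketch (PyTorch): per-example weighted cross-entropy for a mix of
# real and LLM-generated training examples. The constant weight of 0.3 for
# synthetic items is a hypothetical placeholder, not the paper's scheme.
import torch
import torch.nn.functional as F

def weighted_cross_entropy(logits, labels, weights):
    """logits: (N, C); labels: (N,); weights: (N,) per-example weights."""
    per_example = F.cross_entropy(logits, labels, reduction="none")
    return (weights * per_example).sum() / weights.sum()

# Toy batch: real examples keep weight 1.0, synthetic ones get 0.3.
logits = torch.randn(8, 3, requires_grad=True)
labels = torch.randint(0, 3, (8,))
is_synthetic = torch.tensor([0, 0, 1, 1, 1, 1, 1, 1], dtype=torch.bool)
weights = torch.where(is_synthetic, torch.tensor(0.3), torch.tensor(1.0))
loss = weighted_cross_entropy(logits, labels, weights)
loss.backward()
print(loss.item())
```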
Bottom-up computation using trees of sublists
Journal of Functional Programming, December 2024
Shin-Cheng Mu
Abstract
Some top-down problem specifications, if executed, may compute sub-problems repeatedly. Instead, we may want a bottom-up algorithm that stores solutions of sub-problems in a table to be reused. How the table can be represented and efficiently maintained, however, can be tricky. We study a special case: computing a function h taking lists as inputs such that h xs is defined in terms of all immediate sublists of xs. Richard Bird studied this problem in 2008 and presented a concise but cryptic algorithm without much explanation. We give this algorithm a proper derivation and discover a key property that allows it to work. The algorithm builds trees that have certain shapes: the sizes along the left spine form a prefix of a diagonal in Pascal's triangle. The crucial function we derive transforms one diagonal to the next.

GPU Memory Usage Optimization for Backward Propagation in Deep Network Training
Journal of Parallel and Distributed Computing (JPDC), May 2025
Ding-Yong Hong, Tzu-Hsien Tsai, Ning Wang, Pangfeng Liu, Jan-Jan Wu
Abstract
In modern deep learning, it has been a trend to design larger Deep Neural Networks (DNNs) for the execution of more complex tasks and better accuracy. On the other hand, Convolutional Neural Networks (CNNs) have become the standard method for most computer vision tasks. However, the memory allocation for the intermediate data in convolution layers can cause severe memory pressure during model training. Many solutions have been proposed to resolve the problem. Besides hardware-dependent solutions, a general methodology, rematerialization, can reduce GPU memory usage by trading computation for memory efficiently. The idea is to select a set of intermediate results during the forward phase as checkpoints and save only them in memory to reduce memory usage. The backward phase recomputes the intermediate data from the closest checkpoints in memory as needed. This recomputation increases execution time but saves memory by not storing all intermediate results during the forward phase. In this paper, we focus on efficiently finding the optimal checkpoint subset that achieves the least peak memory usage during model training. We first describe the theoretical background of the training of a neural network using mathematical equations, and use these equations to identify all essential data required during both forward and backward phases to compute the gradient of the model's weights. We then formulate the checkpoint selection problem and propose a dynamic programming algorithm to find the optimal checkpoint subset. With extensive experiments, we refine the description of the problem using our theoretical analysis, revise the objective function based on the tracing results, and propose a more efficient algorithm for finding the optimal checkpoint subset.
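For intuition about checkpoint selection, the sketch below scores a candidate checkpoint set under a deliberately simplified peak-memory model (stored checkpoints plus the largest segment that must be recomputed in one go) and finds the optimum by exhaustive search. Both the memory model and the brute-force search are illustrative assumptions; the paper derives its objective from the forward/backward equations and solves it with dynamic programming.

```python
# Illustrative toy model of checkpoint selection for rematerialization.
# Peak memory is modeled (simplistically) as the total size of stored
# checkpoint activations plus the largest activation sum of any segment
# recomputed between consecutive checkpoints. Exhaustive search is used
# only as a reference for this toy objective.
from itertools import combinations

def peak_memory(act_sizes, checkpoints):
    """act_sizes: activation size per layer; checkpoints: sorted indices of layers kept in memory."""
    stored = sum(act_sizes[i] for i in checkpoints)
    bounds = [-1] + list(checkpoints) + [len(act_sizes)]
    largest_segment = max(
        sum(act_sizes[bounds[k] + 1 : bounds[k + 1]]) for k in range(len(bounds) - 1)
    )
    return stored + largest_segment

def best_checkpoints(act_sizes):
    n = len(act_sizes)
    best = (float("inf"), ())
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            best = min(best, (peak_memory(act_sizes, subset), subset))
    return best

print(best_checkpoints([4, 1, 6, 2, 5, 3]))  # (toy peak memory, chosen layer indices)
```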
HistoFS: Non-IID Histopathologic Whole Slide Image Classification via Federated Style Transfer with RoI-Preserving
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2025
Farchan Hakim Raswa, Chun-Shien Lu, and Jia-Ching Wang
Abstract
Federated learning for pathological whole slide image (WSI) classification allows multiple clients to train a global multiple instance learning (MIL) model without sharing their privacy-sensitive WSIs. To accommodate the non-independent and identically distributed (non-i.i.d.) feature shifts, cross-client style transfer has been popularly used, but it is subject to two fundamental issues: (1) WSIs contain multiple morphological structures due to tissue heterogeneity, and (2) regions of interest (RoIs) are not guaranteed to be preserved, particularly after augmenting local WSI data through style transfer. To address these challenges, we propose HistoFS, a federated learning framework for computational pathology on non-i.i.d. feature shifts in WSI classification. Specifically, we introduce pseudo bag styles that capture multiple style variations within a single WSI. In addition, an authenticity module is introduced to ensure that RoIs are preserved, allowing local models to learn WSIs with diverse styles while maintaining essential RoIs. Extensive experiments validate the superiority of HistoFS over state-of-the-art methods on three clinical datasets.

OptionZero: Planning with Learned Options
The Thirteenth International Conference on Learning Representations (ICLR), April 2025
Po-Wei Huang, Pei-Chiun Peng, Hung Guei, Ti-Rong Wu
Abstract
Planning with options -- sequences of primitive actions -- has been shown to be effective in reinforcement learning within complex environments. Previous studies have focused on planning with predefined options or options learned through expert demonstration data. Inspired by MuZero, which learns superhuman heuristics without any human knowledge, we propose a novel approach, named OptionZero. OptionZero incorporates an option network into MuZero, providing autonomous discovery of options through self-play games. Furthermore, we modify the dynamics network to provide environment transitions when using options, allowing searching deeper under the same simulation constraints. Empirical experiments conducted in 26 Atari games demonstrate that OptionZero outperforms MuZero, achieving a 131.58% improvement in mean human-normalized score. Our behavior analysis shows that OptionZero not only learns options but also acquires strategic skills tailored to different game characteristics. Our findings show promising directions for discovering and using options in planning. Our code is available at https://rlg.iis.sinica.edu.tw/papers/optionzero.
Strength Estimation and Human-Like Strength Adjustment in Games
The Thirteenth International Conference on Learning Representations (ICLR), April 2025
Chun Jung Chen, Chung-Chin Shih, Ti-Rong Wu
Abstract
Strength estimation and adjustment are crucial in designing human-AI interactions, particularly in games where AI surpasses human players. This paper introduces a novel strength system, including a strength estimator (SE) and an SE-based Monte Carlo tree search, denoted as SE-MCTS, which predicts strengths from games and offers different playing strengths with human styles. The strength estimator calculates strength scores and predicts ranks from games without direct human interaction. SE-MCTS utilizes the strength scores in a Monte Carlo tree search to adjust playing strength and style. We first conduct experiments in Go, a challenging board game with a wide range of ranks. Our strength estimator achieves over 80% accuracy in predicting ranks by observing only 15 games, whereas the previous method reached only 49% accuracy with 100 games. For strength adjustment, SE-MCTS successfully adjusts to designated ranks while achieving a 51.33% accuracy in aligning to human actions, outperforming a previous state-of-the-art method, which achieved only 42.56% accuracy. To demonstrate the generality of our strength system, we further apply SE and SE-MCTS to chess and obtain consistent results. These results show a promising approach to strength estimation and adjustment, enhancing human-AI interactions in games. Our code is available at https://rlg.iis.sinica.edu.tw/papers/strength-estimator.
Execution Time Optimization for Pipeline Deep Network Training on Multiple GPUs
Euromicro International Conference on Parallel, Distributed, and Network-Based Processing (PDP), March 2025
Bing-Jou Wu, Ding-Yong Hong, Pangfeng Liu, Jan-Jan Wu
Abstract
As neural network models become gigantic, they increasingly demand more time and memory for training. To meet these demands, advanced parallel computing techniques have become essential. Our research focuses on hybrid parallelism, an extension of pipeline parallelism. Pipeline parallelism splits the neural network into sub-networks distributed across a sequence of processing units, enabling simultaneous processing of different data segments on each device. Hybrid parallelism extends this concept by allocating multiple devices to each sub-network. We optimize hybrid parallelism by improving how the model is partitioned and how computational devices are assigned. We address these issues by modeling the neural network as a directed acyclic graph of tensor operators and then demonstrating that optimally partitioning this graph is NP-complete. We then propose a two-step approach. The first step determines a sequence of nodes; the second step applies dynamic programming to partition the sequence while maintaining balance across the assigned devices. In transforming the graph into a sequence, we explore two methods: one employs topological sorting, while the other clusters non-sequential subgraphs. We apply both methods and select the more effective one based on performance outcomes. We implement our algorithm and conduct experiments. The results show substantial improvements in both partitioning speed and training throughput, with speedups of up to 23× in partitioning time and a 1.3-fold increase in training throughput.
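The second step described above, balancing a linearized operator sequence across pipeline stages, has the flavor of the classic linear-partition problem. The sketch below minimizes the heaviest stage load for a sequence of per-operator costs; the single scalar cost per node and the fixed stage count are simplifying assumptions, not the paper's full cost model or device-assignment scheme.

```python
# Sketch: after linearizing the operator graph into a sequence, split it into
# k contiguous stages so the most heavily loaded stage is as light as possible
# (classic linear-partition DP). A single scalar cost per node and a fixed k
# are simplifying assumptions; the paper's cost model is richer.
from functools import lru_cache

def min_bottleneck(weights, k):
    """Minimal achievable load of the heaviest stage over all contiguous k-way splits."""
    n = len(weights)
    prefix = [0]
    for w in weights:
        prefix.append(prefix[-1] + w)

    @lru_cache(maxsize=None)
    def dp(i, stages):
        if stages == 1:
            return prefix[n] - prefix[i]          # one stage takes everything left
        best = float("inf")
        for j in range(i + 1, n - stages + 2):    # first stage covers items i..j-1
            best = min(best, max(prefix[j] - prefix[i], dp(j, stages - 1)))
        return best

    return dp(0, k)

# Toy usage: costs of 8 operators split across 3 pipeline stages.
print(min_bottleneck([2, 3, 7, 1, 4, 6, 2, 5], 3))
```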
A Study on Zero-shot Non-intrusive Speech Assessment using Large Language Models
IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP 2025), April 2025
Ryandhimas E. Zezario, Sabato M. Siniscalchi, Hsin-Min Wang, and Yu Tsao
Abstract
This work investigates two strategies for zero-shot non-intrusive speech assessment leveraging large language models. First, we explore the audio analysis capabilities of GPT-4o. Second, we propose GPT-Whisper, which uses Whisper as an audio-to-text module and evaluates the text's naturalness via targeted prompt engineering. We evaluate the assessment metrics predicted by GPT-4o and GPT-Whisper, examining their correlation with human-based quality and intelligibility assessments and with the character error rate (CER) of automatic speech recognition. Experimental results show that GPT-4o alone is less effective for audio analysis, while GPT-Whisper achieves higher prediction accuracy, has moderate correlation with speech quality and intelligibility, and has higher correlation with CER. Compared to SpeechLMScore, DNSMOS, and VQScore, GPT-Whisper excels in intelligibility metrics but performs slightly worse than SpeechLMScore in quality estimation. Furthermore, GPT-Whisper outperforms the supervised non-intrusive models MOS-SSL and MTI-Net in Spearman's rank correlation for Whisper's CER. These findings validate GPT-Whisper's potential for zero-shot speech assessment without requiring additional training data.

Channel-Aware Domain-Adaptive Generative Adversarial Network for Robust Speech Recognition
IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP 2025), April 2025
Chien-Chun Wang, Li-Wei Chen, Cheng-Kang Chou, Hung-Shin Lee, Berlin Chen, and Hsin-Min Wang
Abstract
While pre-trained automatic speech recognition (ASR) systems demonstrate impressive performance on matched domains, their performance often degrades when confronted with channel mismatch stemming from unseen recording environments and conditions. To mitigate this issue, we propose a novel channel-aware data simulation method for robust ASR training. Our method harnesses the synergistic power of channel-extractive techniques and generative adversarial networks (GANs). We first train a channel encoder capable of extracting embeddings from arbitrary audio. On top of this, channel embeddings are extracted using a minimal amount of target-domain data and used to guide a GAN-based speech synthesizer. This synthesizer generates speech that faithfully preserves the phonetic content of the input while mimicking the channel characteristics of the target domain. We evaluate our method on the challenging Hakka Across Taiwan (HAT) and Taiwanese Across Taiwan (TAT) corpora, achieving relative character error rate (CER) reductions of 20.02% and 9.64%, respectively, compared to the baselines. These results highlight the efficacy of our channel-aware data simulation method for bridging the gap between source- and target-domain acoustics.
Leveraging Joint Spectral and Spatial Learning with MAMBA for Multichannel Speech Enhancement
IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP 2025), April 2025
Wenze Ren, Haibin Wu, Yi-Cheng Lin, Xuanjun Chen, Rong Chao, Kuo-Hsuan Hung, You-Jin Li, Wen-Yuan Ting, Hsin-Min Wang, and Yu Tsao
Abstract
In multichannel speech enhancement, effectively capturing spatial and spectral information across different microphones is crucial for noise reduction. Traditional methods, such as CNNs or LSTMs, attempt to model the temporal dynamics of full-band and sub-band spectral and spatial features. However, these approaches face limitations in fully modeling complex temporal dependencies, especially in dynamic acoustic environments. To overcome these challenges, we modify the advanced model McNet by introducing an improved version of Mamba, a state-space model, and propose MCMamba. MCMamba has been completely re-engineered to integrate full-band and narrow-band spatial information with sub-band and full-band spectral features, providing a more comprehensive approach to modeling spatial and spectral information. Our experimental results demonstrate that MCMamba significantly improves the modeling of spatial and spectral features in multichannel speech enhancement, outperforming McNet and achieving very promising performance on the CHiME-3 dataset. Additionally, we find that Mamba performs exceptionally well in modeling spectral information.

Interpretation of Machine Learning-Based Prediction Models and Functional Metagenomic Approach to Identify Critical Genes in HBCD Degradation
Journal of Hazardous Materials, March 2025
Yu-Jie Lin, Ping-Heng Hsieh, Chun-Chia Mao, Yang-Hsin Shih, Shu-Hwa Chen, Chung-Yen Lin
Abstract
Hexabromocyclododecane (HBCD) poses significant environmental risks, and identifying HBCD-degrading microbes and their enzymatic mechanisms is challenging due to the complexity of microbial interactions and metabolic pathways. This study aimed to identify critical genes involved in HBCD biodegradation through two approaches: functional annotation of metagenomes and the interpretation of machine learning-based prediction models. Our functional analysis revealed a rich metabolic potential in Chiang Chun soil (CCS) metagenomes, particularly in carbohydrate metabolism. Among the machine learning algorithms tested, random forest models outperformed others, especially when trained on datasets reflecting the degradation patterns of species like Dehalococcoides mccartyi and Pseudomonas aeruginosa. These models highlighted enzymes such as EC 1.8.3.2 (thiol oxidase) and EC 4.1.1.43 (phenylpyruvate decarboxylase) as inhibitors of degradation, while EC 2.7.1.83 (pseudouridine kinase) was linked to enhanced degradation. This dual-methodology approach not only deepens our understanding of microbial functions in HBCD degradation but also provides an unbiased view of the microbial and enzymatic interactions involved, offering a more targeted and effective bioremediation strategy.
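As a generic illustration of the model-interpretation step, the sketch below trains a scikit-learn random forest on synthetic enzyme-abundance features and ranks them by impurity-based importance. The data, labels, and the use of a regressor are invented for illustration; the study's actual datasets, preprocessing, and interpretation pipeline are more involved.

```python
# Generic sketch of interpreting a trained random forest: fit on enzyme
# abundance features and inspect which features drive the predicted
# degradation outcome. The data below are synthetic; only the EC numbers
# mentioned in the abstract are reused as example feature names.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = ["EC 1.8.3.2", "EC 4.1.1.43", "EC 2.7.1.83"]
X = rng.random((200, len(features)))                        # synthetic abundance profiles
y = 0.8 * X[:, 2] - 0.5 * X[:, 0] + 0.1 * rng.random(200)   # toy degradation rate

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
for name, importance in sorted(zip(features, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```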
Exploring Conversational Adaptability: Assessing the Proficiency of Large Language Models in Dynamic Alignment with Updated User Intent
The 39th Annual AAAI Conference on Artificial Intelligence (AAAI), February 2025
Yu-Chuan Chen, Hen-Hsen Huang
Abstract
This paper addresses a practical problem in dialogue systems: the capability to adapt to changing user intentions and resolve inconsistencies in conversation histories. This capability is crucial in scenarios like train ticket booking, where travel plans often change dynamically. Notwithstanding the advancements in NLP and large language models (LLMs), these systems struggle with real-time information updates during conversations. We introduce a specialized dataset to evaluate LLM-based chatbots on such conversational adaptability by asking a broad range of open-domain questions, focusing on scenarios where users modify their requests mid-conversation. Additionally, as LLMs are susceptible to generating superfluous sentences, we propose a novel, Chain-of-Thought-free evaluation framework to distill the user intention from their responses. Through extensive investigations on four LLMs, we observe that these contemporary LLMs are not well aligned with the latest user intent in long-term conversations; they often fail to capture the nuances of natural conversations in a zero-shot setting. Interestingly, the results demonstrate that GPT-4, widely recognized as having the most advanced reasoning capabilities to date, is bested by GPT-3.5 in this task. This work aims to improve the practicality of LLM-based chatbots, bridging the gap between the current capabilities of dialogue systems and the fluidity of human interactions.
Multimodal Promptable Token Merging for Diffusion Models
The 39th Annual AAAI Conference on Artificial Intelligence (AAAI), February 2025
Cheng-Yao Hong and Tyng-Luh Liu

SLIP: Spoof-aware One-class Face Anti-Spoofing with Language Image Pretraining
The 39th Annual AAAI Conference on Artificial Intelligence (AAAI), February 2025
Pei-Kai Huang, Jun-Xiong Chong, Cheng-Hsuan Chiang, Tzu-Hsien Chen, Tyng-Luh Liu and Chiou-Ting Hsu

Tracking Everything Everywhere across Multiple Cameras
The 39th Annual AAAI Conference on Artificial Intelligence (AAAI), February 2025
Li-Heng Wang, YuJu Cheng and Tyng-Luh Liu

Optimizing Compute Core Assignment for Dynamic Batch Inference in AI Inference Accelerator
ACM Symposium on Applied Computing (SAC), March 2025
Ze-Wei Liou and Ding-Yong Hong
Abstract
Modern AI inference accelerators offer high-performance and power-efficient computation for machine learning models. Most accelerators employ static inference to enhance performance, which requires models to be compiled with predetermined input batch sizes and intermediate tensor shapes. However, static inference can lead to program failures or inefficient execution when processing batched data of varying sizes, a scenario known as dynamic batch inference. This work addresses this challenge by focusing on the emerging multicore AI inference accelerators that offer flexible compute core assignment. We propose to dynamically partition the input batch data into smaller batches and create multiple model instances to process each partition in parallel. The challenge lies in determining the optimal number of model instances, the proper batch size for each model instance, and the assignment of compute cores among the models so as to minimize the inference time. To solve the problem, we construct an accurate profiling-based cost model and devise a dynamic programming algorithm to determine the best configuration. Experimental results indicate that our method achieves 3.05× higher throughput on average on multi-person pose estimation benchmarks, compared to the EdgeTPU-like inference strategy.
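A toy version of the configuration search can be written as a small dynamic program over a profiled cost table: split the batch among model instances and divide the cores so that the slowest instance is as fast as possible. The analytic cost function below is fabricated purely for illustration; the paper builds its table from real profiling data and uses its own formulation.

```python
# Toy sketch: serve a batch of size B with C cores by splitting work across
# model instances, minimizing the latency of the slowest instance (which
# determines the batch latency). The cost table is fabricated for illustration.
from functools import lru_cache

B, C = 6, 4
# cost[b][c]: assumed latency (ms) of one instance running batch size b on c cores.
cost = {b: {c: 10.0 * b / c + 2.0 * c for c in range(1, C + 1)} for b in range(1, B + 1)}

@lru_cache(maxsize=None)
def best_latency(b, c):
    """Minimal makespan to serve b remaining samples with c remaining cores."""
    if b == 0:
        return 0.0
    if c == 0:
        return float("inf")
    options = []
    for b1 in range(1, b + 1):          # samples given to the next instance
        for c1 in range(1, c + 1):      # cores given to that instance
            options.append(max(cost[b1][c1], best_latency(b - b1, c - c1)))
    return min(options)

print(f"best achievable latency: {best_latency(B, C):.1f} ms")
```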
Multi-objective Non-intrusive Hearing-aid Speech Assessment Model
The Journal of the Acoustical Society of America, November 2024
Hsin-Tien Chiang, Szu-Wei Fu, Hsin-Min Wang, Yu Tsao, and John H. L. Hansen
Abstract
Because a reference signal is often unavailable in real-world scenarios, reference-free speech quality and intelligibility assessment models are important for many speech processing applications. Although a great number of deep-learning models have been applied to build non-intrusive speech assessment approaches and achieve promising performance, studies focusing on hearing-impaired (HI) subjects are limited. This paper presents HASA-Net+, a multi-objective non-intrusive hearing-aid speech assessment model, building upon our previous work, HASA-Net. HASA-Net+ improves HASA-Net in several ways: (1) inclusivity for both normal-hearing and HI listeners, (2) integration with pre-trained speech foundation models and fine-tuning techniques, (3) expansion of predictive capabilities to cover speech quality and intelligibility in diverse conditions, including noisy, denoised, reverberant, dereverberated, and vocoded speech, thereby evaluating its robustness, and (4) validation of the generalization capability using an out-of-domain dataset.

Secretome from estrogen-responding human placenta-derived mesenchymal stem cells rescues ovarian function and circadian rhythm in mice with cyclophosphamide-induced primary ovarian insufficiency
Journal of Biomedical Science, October 2024
Duy-Cuong Le, Mai-Huong T Ngo, Yung-Che Kuo, Shu-Hwa Chen, Chung-Yen Lin, Thai-Yen Ling, Quoc Thao Trang Pham, Heng-Kien Au, Jihwan Myung, Yen-Hua Huang
Abstract
Background
Primary ovarian insufficiency (POI) is an early decline in ovarian function that leads to ovarian failure. Conventional treatments for POI are inadequate, and treatments based on mesenchymal stem cells (MSCs) have emerged as an option. However, the lack of consideration of the estrogen niche in ovarian tissue significantly reduces the therapeutic efficacy, and the mechanism by which MSCs act in POI treatment remains unclear. Furthermore, the disruption of circadian rhythm associated with POI has not been previously addressed.
Methods
Conditioned medium (CM) and estradiol-conditioned medium (E2-CM) were generated from estrogen receptor-positive MSCs (ER+pcMSCs). Chemotherapy-induced POI models were established using C57BL/6 mice (in vivo) and KGN cells (in vitro) treated with cyclophosphamide (CTX) or 4-hydroperoxycyclophosphamide (4-OOH-CP). Gene/protein expression was detected using RT-qPCR, Western blotting, and immunohistochemistry assays. Locomotor activity was monitored for behavioral circadian rhythmicity. Cytokine arrays and miRNA analysis were conducted to analyze potential factors within the CM/E2-CM.
Results
The secretome of ER+pcMSCs (CM and E2-CM) significantly reduced the CTX-induced defects in ovarian folliculogenesis and circadian rhythm. CM/E2-CM also reduced granulosa cell apoptosis and rescued angiogenesis in POI ovarian tissues. E2-CM had a more favorable effect than the CM. Notably, the ER+pcMSC secretome restored CTX-induced circadian rhythm defects, including the gene expressions associated with the ovarian circadian clock (e.g., Rora, E4bp4, Rev-erbα, Per2 and Dbp) and locomotor activity. Additionally, the cytokine array analysis revealed a significant increase in cytokines and growth factors associated with immunomodulation and angiogenesis, including angiogenin. Neutralizing the angiogenin in CM/E2-CM significantly reduced its ability to promote HUVEC tube formation in vitro. Exosomal miRNA analysis revealed miRNAs involved in targeting the genes associated with POI rescue (PTEN and PDCD4), apoptosis (caspase-3, BIM), estrogen synthesis (CYP19A1), ovarian clock regulation (E4BP4, REV-ERBα) and fibrosis (COL1A1).
Conclusion
This study is the first to demonstrate that, by considering the estrogen niche in ovarian tissue, an estrogen-priming ER+pcMSC secretome achieved ovarian regeneration and restored the circadian rhythm in a CTX-induced POI mouse model. The potential factors involved include angiogenin and exosomal miRNAs in the ER+pcMSC secretome. These findings offer insights into potential stem cell therapies for chemotherapy-induced POI and circadian rhythm disruption.
Predicting splicing patterns from the transcription factor binding sites in the promoter with deep learning
BMC Genomics, September 2024
Lin, C.H., Tsai, C.H., Shiau, C.K., Huang, J.H. and Tsai, H.K.*
Abstract
Background
Alternative splicing is a pivotal mechanism of post-transcriptional modification that contributes to transcriptome plasticity and proteome diversity in metazoan cells. Although many splicing regulations around the exon/intron regions are known, the relationship between promoter-bound transcription factors and downstream alternative splicing remains largely unexplored.
Results
In this study, we present computational approaches to unravel the regulatory relationship between promoter-bound transcription factor binding sites (TFBSs) and splicing patterns. We curated a dataset that includes DNase I hypersensitive site sequencing and transcriptomes across fifteen human tissues from ENCODE. Specifically, we proposed different representations of TF binding context and splicing patterns to examine the associations between the promoter and downstream splicing events. While machine learning models demonstrated potential in predicting splicing patterns based on TFBS occupancies, limitations in generalizing to the splicing forms of singleton genes across diverse tissues were observed upon careful examination using different cross-validation methods. We further investigated the association between alterations in individual TFBSs at promoters and shifts in exon splicing efficiency. Our results demonstrate that convolutional neural network (CNN) models, trained on TF binding changes in the promoters, can predict the changes in splicing patterns. Furthermore, a systematic in silico substitution analysis on the CNN models highlighted several potential splicing regulators. Notably, empirical validation using K562 CTCFL shRNA knock-down data showed the significant role of CTCFL in splicing regulation.
Conclusion
In conclusion, our findings highlight the potential role of promoter-bound TFBSs in influencing the regulation of downstream splicing patterns and provide insights for discovering alternative splicing regulation.
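As a rough illustration of the setting described above, the sketch below maps a promoter's TFBS occupancy tracks to a splicing-pattern label with a small 1D convolutional network in PyTorch. The number of TF tracks, promoter bins, and classes are assumed values, and the architecture is not the study's actual model or in silico substitution pipeline.

```python
# Minimal sketch of a 1D CNN that maps a promoter's TFBS occupancy profile to
# a splicing-pattern label. Dimensions and the training snippet are invented
# for illustration; the study's actual architecture and features differ.
import torch
import torch.nn as nn

NUM_TFS, PROMOTER_BINS, NUM_CLASSES = 64, 200, 2  # assumed feature/label sizes

model = nn.Sequential(
    nn.Conv1d(NUM_TFS, 32, kernel_size=7, padding=3),  # scan TFBS tracks along the promoter
    nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),                            # strongest response per filter
    nn.Flatten(),
    nn.Linear(32, NUM_CLASSES),
)

# One toy training step on random data.
x = torch.randn(16, NUM_TFS, PROMOTER_BINS)   # batch of promoters x TF tracks x positions
y = torch.randint(0, NUM_CLASSES, (16,))      # splicing-pattern labels
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
print(f"toy loss: {loss.item():.3f}")
```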