Institute of Information Science
Recent Research Results
Current Research Results
Authors: Sung-Hsien Hsieh, Wei-Jie Liang, Chun-Shien Lu, and Soo-Chang Pei

Abstract:
Distributed compressive sensing (DCS) is a framework that considers joint sparsity within signal ensembles along with multiple measurement vectors (MMVs).
However, current theoretical bounds on the probability of perfect recovery for MMVs are essentially identical to those for a single measurement vector (SMV), because characteristics of the signal ensemble are ignored.
In this paper, we introduce two key ingredients, called "Euclidean distances between signals" and "decay rate of signal ensemble," to conduct a performance analysis of a deterministic signal model under the MMVs framework.
We show that, by taking the size of signal ensembles into consideration, MMVs indeed exhibit better performance than SMV.
Although our extension can be broadly applied to CS algorithms with MMVs, we explore a case study on a greedy solver commonly known as simultaneous orthogonal matching pursuit (SOMP).
We show that, when our concept is incorporated by modifying the support detection and signal estimation steps, the performance of SOMP improves to a meaningful extent, especially for short Euclidean distances between signals.
Performance of the modified SOMP is verified to meet our theoretical prediction.
Moreover, we design a new method based on modified SOMP algorithms for a key application known as cooperative spectrum sensing (CSS).
The simulation results demonstrate that our method can benefit from more than one measurement vector, especially when the length of the measurement vectors is smaller than the sparsity of the signals, which is where traditional CS algorithms fail.
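The greedy solver discussed above can be sketched in a few lines. The following is a minimal NumPy rendering of the standard SOMP baseline (not the modified variant proposed in the paper): at each iteration it selects the atom with the largest total correlation against all residual columns, then jointly re-estimates the coefficients by least squares.

```python
import numpy as np

def somp(A, Y, k):
    """Standard simultaneous OMP: recover a jointly k-sparse ensemble X
    (all columns share one support) from measurements Y = A @ X."""
    m, n = A.shape
    support = []
    R = Y.copy()                      # residual, one column per signal
    for _ in range(k):
        # pick the atom with the largest total correlation across all signals
        scores = np.sum(np.abs(A.T @ R), axis=1)
        scores[support] = -np.inf     # never reselect a chosen atom
        support.append(int(np.argmax(scores)))
        # jointly re-estimate coefficients on the current support
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ X_s
    X = np.zeros((n, Y.shape[1]))
    X[support] = X_s
    return X, sorted(support)
```

With enough measurements relative to the sparsity, the shared support is recovered exactly and the least-squares step then reproduces the signals.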
"CSPNet: A New Backbone that can Enhance Learning Capability of CNN," IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) on "Low Power Computer Vision," June 2020.
Authors: C. Y. Wang, H. Y. Mark Liao, Y. H. Wu, P. Y. Chen, J. W. Hsieh, and I. H. Yeh

Abstract:
Neural networks have enabled state-of-the-art approaches to achieve incredible results on computer vision tasks such as object detection. However, such success greatly relies on costly computation resources, which hinders people with cheap devices from appreciating the advanced technology. In this paper, we propose Cross Stage Partial Network (CSPNet) to mitigate the problem that previous works require heavy inference computations from the network architecture perspective. We attribute the problem to the duplicate gradient information within network optimization. The proposed networks respect the variability of the gradients by integrating feature maps from the beginning and the end of a network stage, which, in our experiments, reduces computations by 20% with equivalent or even superior accuracy on the ImageNet dataset, and significantly outperforms state-of-the-art approaches in terms of AP50 on the MS COCO object detection dataset. CSPNet is easy to implement and general enough to cope with architectures based on ResNet, ResNeXt, and DenseNet.
"Difference-Seeking Generative Adversarial Network--Unseen Sample Generation," International Conference on Learning Representations (ICLR), April 2020.
Authors: Yi-Lin Sung, Sung-Hsien Hsieh, Soo-Chang Pei, and Chun-Shien Lu

Abstract:
Unseen data, which are not samples from the distribution of training data and are difficult to collect, have exhibited importance in numerous applications (e.g., novelty detection, semi-supervised learning, and adversarial training). In this paper, we introduce a general framework called difference-seeking generative adversarial network (DSGAN) to generate various types of unseen data. Its novelty is the consideration of the probability density of the unseen data distribution as the difference between two distributions $p_{\bar{d}}$ and $p_{d}$ whose samples are relatively easy to collect.
The DSGAN can learn the target distribution, $p_{t}$, (or the unseen data distribution) from only the samples from the two distributions, $p_{d}$ and $p_{\bar{d}}$. In our scenario, $p_d$ is the distribution of the seen data, and $p_{\bar{d}}$ can be obtained from $p_{d}$ via simple operations, so that we only need the samples of $p_{d}$ during the training.
Two key applications, semi-supervised learning and novelty detection, are taken as case studies to illustrate that the DSGAN enables the production of various unseen data. We also provide theoretical analyses about the convergence of the DSGAN.
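The distribution-difference idea can be illustrated numerically. Below is a toy sketch (the density arithmetic only, not the adversarial training): with the seen data $p_d$ a standard Gaussian and $p_{\bar{d}}$ a broader Gaussian such as one obtained by adding noise, the target $p_t \propto \max(p_{\bar{d}} - (1-\alpha)p_d, 0)$ concentrates exactly where seen data are scarce. The mixture weight $\alpha = 0.5$ is an assumption of this illustration.

```python
import numpy as np

def gauss(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-6, 6, 1201)
dx = x[1] - x[0]
p_d = gauss(x, 1.0)        # seen data density
p_dbar = gauss(x, 2.0)     # broader density, e.g. seen data plus additive noise

# Unseen target: the clipped, renormalized difference of the two densities,
# assuming the mixture p_dbar = alpha * p_t + (1 - alpha) * p_d with alpha = 0.5.
alpha = 0.5
p_t = np.clip(p_dbar - (1 - alpha) * p_d, 0, None)
p_t /= (p_t * dx).sum()    # renormalize on the grid
```

The resulting $p_t$ places essentially no mass at the mode of the seen data and concentrates in the tails, i.e. on "unseen" regions.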
Authors: Hsin-Nan Lin and Wen-Lian Hsu

Abstract:
Background: Personal genomics and comparative genomics are becoming more important in clinical practice and genome research. Both fields require sequence alignment to discover sequence conservation and variation. Though many methods have been developed, some are designed only for small genome comparison while others are not efficient for large genome comparison. Moreover, the correctness of the sequence alignments produced by most existing genome comparison tools has not been evaluated systematically; a wrong sequence alignment would produce false sequence variants. Results: In this study, we present GSAlign, an efficient sequence alignment tool for intra-species genomes that handles large genome sequence alignment and identifies sequence variants from the alignment result. We estimate performance by measuring the correctness of the predicted sequence variations. The experimental results demonstrate that GSAlign is not only faster than most existing state-of-the-art methods, but also identifies sequence variants with high accuracy. Conclusions: As more genome sequences become available, the demand for genome comparison is increasing; thus an efficient and robust algorithm is desirable. We believe GSAlign can be a useful tool: it exhibits ultra-fast alignment as well as high accuracy and sensitivity in detecting sequence variations.
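The variant-identification step (reading substitutions, insertions, and deletions off a pairwise alignment) can be illustrated with a toy caller. This is a column-by-column sketch of the idea, not GSAlign's algorithm:

```python
def call_variants(ref_aln, alt_aln):
    """Toy variant caller over a gapped pairwise alignment ('-' marks a gap).
    Returns (type, reference position, detail) tuples."""
    variants, ref_pos = [], 0
    for r, a in zip(ref_aln, alt_aln):
        if r == '-':                                   # base present only in alt
            variants.append(('INS', ref_pos, a))
        elif a == '-':                                 # base missing from alt
            variants.append(('DEL', ref_pos, r))
            ref_pos += 1
        elif r != a:                                   # substitution
            variants.append(('SNP', ref_pos, r + '>' + a))
            ref_pos += 1
        else:                                          # match
            ref_pos += 1
    return variants
```

For example, aligning reference `ACGT-A` against `ACCTGA` yields a SNP at reference position 2 and an insertion after position 3.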
Authors: Sheng-Yao Su, I-Hsuan Lu, Wen-Chih Cheng, Wei-Chun Chung, Pao-Yang Chen, Jan-Ming Ho, Shu-Hwa Chen, Chung-Yen Lin

Abstract:
To our knowledge, the EpiMOLAS framework, consisting of DocMethyl and EpiMOLAS_web, is the first approach to include containerization technology and a web-based system for WGBS data analysis from raw data processing to downstream analysis. EpiMOLAS will help users cope with their WGBS data and also conduct reproducible analyses of publicly available data, thereby gaining insights into the mechanisms underlying complex biological phenomena. The Galaxy Docker image DocMethyl is available at https://hub.docker.com/r/lsbnb/docmethyl/. EpiMOLAS_web is publicly accessible at http://symbiosis.iis.sinica.edu.tw/epimolas/.
Authors: Chiang S., Shinohara H., Huang J.H., Tsai H. K., and Okada M.

Abstract:
Eukaryotic transcription factors (TFs) coordinate different upstream signals to regulate their target genes. To unveil this network regulation in B cell receptor signaling, we developed a computational pipeline to systematically analyze the ERK- and IKK-dependent transcriptome response. We combined a linear regression method and kinetic modeling to identify the signal-to-TF and TF-to-gene dynamics, respectively, from time-course experimental data. We show that the combination of TFs differentially controlled by ERK and IKK could contribute to divergent expression dynamics in orchestrating the B cell response. Our finding elucidates the regulatory mechanism of the signal-dependent gene expression responsible for eukaryotic cell development.
"Hardware-Assisted MMU Redirection for In-guest Monitoring and API Profiling," IEEE Transactions on Information Forensics & Security, To Appear.
Authors: Shun-Wen Hsiao, Yeali Sun, Meng Chang Chen

Abstract:
With the advance of hardware, network, and virtualization technologies, cloud computing has prevailed and become the target of security threats such as the cross virtual machine (VM) side channel attack, with which malicious users exploit vulnerabilities to gain information or access to other guest virtual machines. Among the many virtualization technologies, the hypervisor manages the shared resource pool to ensure that the guest VMs can be properly served and isolated from each other. However, while managing the shared hardware resources, due to the presence of the virtualization layer and different CPU modes (root and non-root mode), when a CPU is switched to non-root mode and is occupied by a guest machine, a hypervisor cannot intervene with a guest at runtime. Thus, the execution status of a guest is like a black box to a hypervisor, and the hypervisor cannot mediate possible malicious behavior at runtime. To rectify this, we propose a hardware-assisted VMI (virtual machine introspection) based in-guest process monitoring mechanism which supports monitoring and management applications such as process profiling. The mechanism allows hooks to be placed within a target process (which the security expert selects to monitor and profile) of a guest virtual machine and handles hook invocations via the hypervisor. In order to facilitate the needed monitoring and/or management operations in the guest machine, the mechanism redirects access to in-guest memory space to a controlled, self-defined memory within the hypervisor by modifying the extended page table (EPT) to minimize guest and host machine switches. The advantages of the proposed mechanism include transparency, high performance, and comprehensive semantics. To demonstrate the capability of the proposed mechanism, we develop an API profiling system (APIf) to record the API invocations of the target process.
The experimental results show an average performance degradation of about 2.32%, far better than existing similar systems.
"Declarative pearl: deriving monadic Quicksort," Functional and Logic Programming (FLOPS 2020), 2020.
Authors: Shin-Cheng Mu and Tsung-Ju Chiang

Abstract:
To demonstrate derivation of monadic programs, we present a specification of sorting using the non-determinism monad, and derive pure quicksort on lists and state-monadic quicksort on arrays. In the derivation one may switch between point-free and pointwise styles, and deploy techniques familiar to functional programmers such as pattern matching and induction on structures or on sizes. Derivation of stateful programs resembles reasoning backwards from the postcondition.
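The specification-to-program pattern can be mimicked in Python with the list standing in for the non-determinism monad; this is only a sketch of the shape of the derivation, while the paper itself works with proper monadic equational reasoning:

```python
from itertools import permutations

def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

def spec(xs):
    """Specification: non-deterministically permute the input and keep the
    sorted outcomes. The list models the non-determinism monad."""
    return [list(p) for p in permutations(xs) if is_sorted(list(p))]

def quicksort(xs):
    """A deterministic refinement of the specification: every result it
    produces is one of the outcomes spec allows."""
    if not xs:
        return []
    pivot, rest = xs[0], xs[1:]
    return (quicksort([y for y in rest if y < pivot])
            + [pivot]
            + quicksort([y for y in rest if y >= pivot]))
```

Refinement here means `quicksort(xs)` is always a member of `spec(xs)`.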
"Un-rectifying Non-linear Networks for Signal Representation," IEEE Transactions on Signal Processing, To Appear.
Authors: Wen-Liang Hwang, Andreas Heinecke

Abstract:
We consider deep neural networks with rectifier activations and max-pooling from a signal representation perspective. In this view, such representations mark the transition from using a single linear representation for all signals to utilizing a large collection of affine linear representations that are tailored to particular regions of the signal space. We propose a novel technique to “un-rectify” the nonlinear activations into data-dependent linear equations and constraints, from which we derive explicit expressions for the affine linear operators, their domains and ranges in terms of the network parameters. We show how increasing the depth of the network refines the domain partitioning and derive atomic decompositions for the corresponding affine mappings that process data belonging to the same partitioning region. In each atomic decomposition the connections over all hidden network layers are summarized and interpreted in a single matrix. We apply the decompositions to study the Lipschitz regularity of the networks and give sufficient conditions for network-depth-independent stability of the representation, drawing a connection to compressible weight distributions. Such analyses may facilitate and promote further theoretical insight and exchange from both the signal processing and machine learning communities.
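The core un-rectifying step admits a small numerical check. In this sketch (a minimal two-layer example with assumed random weights, far simpler than the networks analyzed in the paper), the ReLU is replaced by a data-dependent 0/1 diagonal matrix, so on the activation region of a given input the whole network collapses to one affine map:

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((5, 3)), rng.standard_normal(5)
W2, b2 = rng.standard_normal((2, 5)), rng.standard_normal(2)
x = rng.standard_normal(3)

relu = lambda z: np.maximum(z, 0)
y = W2 @ relu(W1 @ x + b1) + b2            # the non-linear network

# "Un-rectify": the activation becomes a data-dependent 0/1 diagonal matrix D,
# so on x's activation region the network is the single affine map y = M x + c.
D = np.diag((W1 @ x + b1 > 0).astype(float))
M = W2 @ D @ W1                            # all layers summarized in one matrix
c = W2 @ D @ b1 + b2
assert np.allclose(y, M @ x + c)
```

Inputs with a different sign pattern of `W1 @ x + b1` land in a different region with a different affine map, which is the domain partitioning the abstract describes.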
"Attractive or Faithful? Popularity-Reinforced Learning for Inspired Headline Generation," the 34th AAAI Conference on Artificial Intelligence (AAAI 2020), February 2020.
Authors: YunZhu Song, Hong-Han Shuai, Sung-Lin Yeh, Yi-Lun Wu, Lun-Wei Ku, Wen-Chih Peng

Abstract:
With the rapid proliferation of online media sources and published news, headlines have become increasingly important for attracting readers to news articles, since users may be overwhelmed with the massive information. In this paper, we generate inspired headlines that preserve the nature of news articles and catch the eye of the reader simultaneously. The task of inspired headline generation can be viewed as a specific form of the Headline Generation (HG) task, with the emphasis on creating an attractive headline from a given news article. To generate inspired headlines, we propose a novel framework called POpularity-Reinforced Learning for inspired Headline Generation (PORL-HG). PORL-HG exploits the extractive-abstractive architecture with 1) Popular Topic Attention (PTA) for guiding the extractor to select the attractive sentence from the article and 2) a popularity predictor for guiding the abstractor to rewrite the attractive sentence. Moreover, since the sentence selection of the extractor is not differentiable, techniques of reinforcement learning (RL) are utilized to bridge the gap with rewards obtained from a popularity score predictor. Through quantitative and qualitative experiments, we show that the proposed PORL-HG significantly outperforms the state-of-the-art headline generation models in terms of attractiveness evaluated by both human (71.03%) and the predictor (at least 27.60%), while the faithfulness of PORL-HG is also comparable to the state-of-the-art generation model.
"Knowledge-Enriched Visual Storytelling," the 34th AAAI Conference on Artificial Intelligence (AAAI 2020), February 2020.
Authors: Chao-Chun Hsu, Zi-Yuan Chen, Chi-Yang Hsu, Chih-Chia Li, Tzu-Yuan Lin, Ting-Hao Huang, Lun-Wei Ku

Abstract:
Stories are diverse and highly personalized, resulting in a large possible output space for story generation. Existing end-to-end approaches produce monotonous stories because they are limited to the vocabulary and knowledge in a single training dataset. This paper introduces KG-Story, a three-stage framework that allows the story generation model to take advantage of external Knowledge Graphs to produce interesting stories. KG-Story distills a set of representative words from the input prompts, enriches the word set by using external knowledge graphs, and finally generates stories based on the enriched word set. This distill-enrich-generate framework allows the use of external resources not only for the enrichment phase, but also for the distillation and generation phases. In this paper, we show the superiority of KG-Story for visual storytelling, where the input prompt is a sequence of five photos and the output is a short story. Per the human ranking evaluation, stories generated by KG-Story are on average ranked better than those of the state-of-the-art systems. Our code and output stories are available at https://github.com/zychen423/KE-VIST.
"A Partial Page Cache Strategy for NVRAM-Based Storage Devices," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), February 2020.
Authors: Shuo-Han Chen, Tseng-Yi Chen, Yuan-Hao Chang, Hsin-Wen Wei, and Wei-Kuan Shih

Abstract:
Non-volatile random access memory (NVRAM) is becoming a popular alternative as the memory and storage medium in battery-powered embedded systems because of its fast read/write performance, byte-addressability, and non-volatility. A well-known example is phase-change memory (PCM), which has much longer life expectancy and faster access performance than NAND flash. When NVRAM serves as both main memory and storage in battery-powered embedded systems, existing page cache mechanisms incur too many unnecessary data movements between main memory and storage. To tackle this issue, we propose the concept of a 'union page cache' to jointly manage data of the page cache in both main memory and storage. To realize this concept, we design a partial page cache strategy that considers both main memory and storage as its management space. This strategy can eliminate unnecessary data movements between main memory and storage without sacrificing the data integrity of file systems. A series of experiments conducted on an embedded platform shows that the proposed strategy can improve file access performance by up to 85.62%, with PCM used as a case study.
"Why Attention? Analyze BiLSTM Deficiency and Its Remedies in the Case of NER," the 34th AAAI Conference on Artificial Intelligence (AAAI 2020), February 2020.
Authors: Peng-Hsuan Li, Tsu-Jui Fu, Wei-Yun Ma

Abstract:
BiLSTM has been prevalently used as a core module for NER in a sequence-labeling setup. State-of-the-art approaches use BiLSTM with additional resources such as gazetteers, language-modeling, or multi-task supervision to further improve NER. This paper instead takes a step back and focuses on analyzing problems of BiLSTM itself and how exactly self-attention can bring improvements. We formally show the limitation of (CRF-)BiLSTM in modeling cross-context patterns for each word, the XOR limitation. Then, we show that two types of simple cross-structures, self-attention and Cross-BiLSTM, can effectively remedy the problem. We test the practical impacts of the deficiency on real-world NER datasets, OntoNotes 5.0 and WNUT 2017, with clear and consistent improvements over the baseline, up to 8.7% on some of the multi-token entity mentions. We give in-depth analyses of the improvements across several aspects of NER, especially the identification of multi-token mentions. This study should lay a sound foundation for future improvements on sequence-labeling NER.
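The XOR limitation is easy to reproduce numerically. In this toy sketch (illustrative only, not the paper's NER experiment), two context bits must be combined by XOR: a purely linear scorer over independent left/right evidence cannot fit the pattern, while adding a single cross term fits it exactly.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)          # XOR of the two context bits

def fit_max_error(F):
    """Least-squares fit of y on features F (plus bias); max absolute error."""
    design = np.c_[F, np.ones(len(F))]
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return np.max(np.abs(design @ coef - y))

linear_err = fit_max_error(X)                      # independent evidence only
cross_err = fit_max_error(np.c_[X, X[:, 0] * X[:, 1]])  # add one cross term
```

The best purely linear fit predicts 0.5 everywhere (max error 0.5), whereas with the cross term the fit is exact, mirroring how cross-structures remedy the deficiency.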
"Relation Extraction Exploiting Full Dependency Forests," the 34th AAAI Conference on Artificial Intelligence (AAAI 2020), February 2020.
Authors: Lifeng Jin, Linfeng Song, Yue Zhang, Kun Xu, Wei-Yun Ma, Dong Yu

Abstract:
Dependency syntax has long been recognized as a crucial source of features for relation extraction. Previous work considers 1-best trees produced by a parser during preprocessing. However, error propagation from the out-of-domain parser may impact the relation extraction performance. We propose to leverage full dependency forests for this task, where a full dependency forest encodes all possible trees. Such representations of full dependency forests provide a differentiable connection between a parser and a relation extraction model, and thus we are also able to study adjusting the parser parameters based on end-task loss. Experiments on three datasets show that full dependency forests and parser adjustment give significant improvements over carefully designed baselines, showing state-of-the-art or competitive performances on biomedical or newswire benchmarks.
"Query-Driven Multi-Instance Learning," the 34th AAAI Conference on Artificial Intelligence (AAAI 2020), February 2020.
Authors: Yen-Chi Hsu, Cheng-Yao Hong, Ming-Sui Lee, and Tyng-Luh Liu

Abstract:
We introduce a query-driven approach (qMIL) to multi-instance learning where the queries aim to uncover the class labels embodied in a given bag of instances. Specifically, it solves a multi-instance multi-label learning (MIML) problem with a more challenging setting than the conventional one. Each MIML bag in our formulation is annotated only with a binary label indicating whether the bag contains an instance of a certain class, and the query is specified by the word2vec embedding of a class label/name. To learn a deep-net model for qMIL, we construct a network component that achieves a generalized compatibility measure for query-visual co-embedding and yields proper instance attentions to the given query. The bag representation is then formed as the attention-weighted sum of the instance features, and passed to the classification layer at the end of the network. In addition, the qMIL formulation is flexible enough to extend the network to classify unseen class labels, leading to a new technique for solving the zero-shot MIML task through an iterative querying process. Experimental results on action classification over video clips and three MIML datasets from MNIST, CIFAR10 and Scene are provided to demonstrate the effectiveness of our method.
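The attention-pooled bag representation described above can be sketched as follows. The dot-product compatibility and the feature dimensions are assumptions for illustration, not the paper's exact co-embedding network:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def bag_representation(instances, query):
    """Attention-weighted sum of instance features for a given query.
    instances: (n, d) features of one bag; query: (d,) query embedding."""
    scores = instances @ query     # compatibility of each instance with the query
    attn = softmax(scores)         # instance attentions, non-negative, sum to 1
    return attn @ instances        # (d,) bag representation
```

Because the attentions form a convex combination, the bag vector always lies within the coordinate-wise range of its instances.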
Authors: Thejkiran Pitti, Ching-Tai Chen, Hsin-Nan Lin, Wai-Kok Choong, Wen-Lian Hsu, and Ting-Yi Sung

Abstract:
N-linked glycosylation is one of the predominant post-translational modifications involved in a number of biological functions. Since experimental characterization of glycosites is challenging, glycosite prediction is crucial. Several predictors have been made available and report high performance. Most of them evaluate their performance at every asparagine in protein sequences, not confined to asparagine in the N-X-S/T sequon. In this paper, we present N-GlyDE, a two-stage prediction tool trained on rigorously-constructed non-redundant datasets to predict N-linked glycosites in the human proteome. The first stage uses a protein similarity voting algorithm trained on both glycoproteins and non-glycoproteins to predict a score for a protein to improve glycosite prediction. The second stage uses a support vector machine to predict N-linked glycosites by utilizing features of gapped dipeptides, pattern-based predicted surface accessibility, and predicted secondary structure. N-GlyDE's final predictions are derived from a weight adjustment of the second-stage prediction results based on the first-stage prediction score. Evaluated on the N-X-S/T sequons of an independent dataset comprised of 53 glycoproteins and 33 non-glycoproteins, N-GlyDE achieves an accuracy and MCC of 0.740 and 0.499, respectively, outperforming the compared tools. The N-GlyDE web server is available at http://bioapp.iis.sinica.edu.tw/N-GlyDE/.
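The N-X-S/T sequon mentioned above (asparagine, then any residue except proline, then serine or threonine) can be located with a short regex scan. This is only the candidate-site enumeration step, not the N-GlyDE predictor itself:

```python
import re

def find_sequons(seq):
    """Return 0-based positions of N in N-X-S/T sequons, where X is any
    amino acid except proline. The lookahead keeps overlapping hits."""
    return [m.start() for m in re.finditer(r'N(?=[^P][ST])', seq)]
```

For example, in `"NASNPTNVT"` the N at position 3 is skipped because its X is proline, so the candidate sites are positions 0 and 6.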
"Achieving Lossless Accuracy with Lossy Programming for Efficient Neural-Network Training on NVM-Based Systems," ACM/IEEE International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), October 2019.
Authors: Wei-Chen Wang, Yuan-Hao Chang, Tei-Wei Kuo, Chien-Chung Ho, Yu-Ming Chang, and Hung-Sheng Chang

Abstract:
Neural networks over conventional computing platforms are heavily restricted by data volume and performance concerns. While non-volatile memory offers potential solutions to data volume issues, challenges must be faced over performance issues, especially with asymmetric read and write performance. Besides that, critical concerns over endurance must also be resolved before non-volatile memory can be used in practice for neural networks. This work addresses the performance and endurance concerns altogether by proposing a data-aware programming scheme. We propose to consider neural network training jointly from the data-flow and data-content points of view. In particular, methodologies producing approximate results via Dual-SET operations are presented. Encouraging results were observed through a series of experiments, where great efficiency and lifetime enhancements are seen without sacrificing result accuracy.
"Enabling Sequential-write-constrained B+-tree Index Scheme to Upgrade Shingled Magnetic Recording Storage Performance," ACM/IEEE International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), October 2019.
Authors: Yu-Pei Liang, Tseng-Yi Chen, Yuan-Hao Chang, Shuo-Han Chen, Kam-Yiu Lam, Wei-Hsin Li, and Wei-Kuan Shih

Abstract:
As shingled magnetic recording (SMR) drives are widely applied in modern computer systems (e.g., archive file systems, big data computing systems, and large-scale database systems), storage system developers should thoroughly review whether current designs (e.g., index schemes and data placements) are appropriate for an SMR drive because of its sequential-write constraint. Though many prior works manage data in an SMR drive well by integrating their proposed solutions into the driver layer, an index scheme over an SMR drive has never been optimized by previous works, because managing an index over an SMR drive needs to jointly consider the properties of the B$^+$-tree and the nature of SMR (e.g., the sequential-write constraint and zone partitions) in a host storage system. Moreover, poor index management results in poor storage performance because an index manager is extensively used in file systems and database applications. To optimize the B$^+$-tree index structure over SMR storage, this work identifies the performance overheads caused by the B$^+$-tree index structure in an SMR drive. Based on this observation, this study proposes a sequential-write-constrained B$^+$-tree index scheme, namely SW-B$^+$tree, which consists of an address redirection data structure, an SMR-aware node allocation mechanism, and a frequency-aware garbage collection strategy. According to our experiments, the SW-B$^+$tree can improve SMR storage performance by 55% on average.
"Achieving Lossless Accuracy with Lossy Programming for Efficient Neural-Network Training on NVM-Based Systems," ACM Transactions on Embedded Computing Systems (TECS), October 2019.
Authors: Wei-Chen Wang, Yuan-Hao Chang, Tei-Wei Kuo, Chien-Chung Ho, Yu-Ming Chang, and Hung-Sheng Chang

Abstract:
Neural networks over conventional computing platforms are heavily restricted by data volume and performance concerns. While non-volatile memory offers potential solutions to data volume issues, challenges must be faced over performance issues, especially with asymmetric read and write performance. Besides that, critical concerns over endurance must also be resolved before non-volatile memory can be used in practice for neural networks. This work addresses the performance and endurance concerns altogether by proposing a data-aware programming scheme. We propose to consider neural network training jointly from the data-flow and data-content points of view. In particular, methodologies producing approximate results via Dual-SET operations are presented. Encouraging results were observed through a series of experiments, where great efficiency and lifetime enhancements are seen without sacrificing result accuracy.
"Enabling Sequential-write-constrained B+-tree Index Scheme to Upgrade Shingled Magnetic Recording Storage Performance," ACM Transactions on Embedded Computing Systems (TECS), October 2019.
Authors: Yu-Pei Liang, Tseng-Yi Chen, Yuan-Hao Chang, Shuo-Han Chen, Kam-Yiu Lam, Wei-Hsin Li, and Wei-Kuan Shih

Abstract:
As shingled magnetic recording (SMR) drives are widely applied in modern computer systems (e.g., archive file systems, big data computing systems, and large-scale database systems), storage system developers should thoroughly review whether current designs (e.g., index schemes and data placements) are appropriate for an SMR drive because of its sequential-write constraint. Though many prior works manage data in an SMR drive well by integrating their proposed solutions into the driver layer, an index scheme over an SMR drive has never been optimized by previous works, because managing an index over an SMR drive needs to jointly consider the properties of the B$^+$-tree and the nature of SMR (e.g., the sequential-write constraint and zone partitions) in a host storage system. Moreover, poor index management results in poor storage performance because an index manager is extensively used in file systems and database applications. To optimize the B$^+$-tree index structure over SMR storage, this work identifies the performance overheads caused by the B$^+$-tree index structure in an SMR drive. Based on this observation, this study proposes a sequential-write-constrained B$^+$-tree index scheme, namely SW-B$^+$tree, which consists of an address redirection data structure, an SMR-aware node allocation mechanism, and a frequency-aware garbage collection strategy. According to our experiments, the SW-B$^+$tree can improve SMR storage performance by 55% on average.
Authors: Tsu-Jui Fu, Peng-Hsuan Li, Wei-Yun Ma

Abstract:
In this paper, we present GraphRel, an end-to-end relation extraction model which uses graph convolutional networks (GCNs) to jointly learn named entities and relations. In contrast to previous baselines, we consider the interaction between named entities and relations via a relation-weighted GCN to better extract relations. Linear and dependency structures are both used to extract sequential and regional features of the text, and a complete word graph is further utilized to extract implicit features among all word pairs of the text. With the graph-based approach, the prediction for overlapping relations is substantially improved over previous sequential approaches. We evaluate GraphRel on two public datasets: NYT and WebNLG. Results show that GraphRel maintains high precision while increasing recall substantially. Also, GraphRel outperforms previous work by 3.2% and 5.8% (F1 score), achieving a new state-of-the-art for relation extraction.
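A single graph-convolution step of the kind such models stack can be sketched as follows. The mean-aggregation normalization and ReLU here are common GCN choices, an assumption for illustration rather than GraphRel's exact relation-weighted formulation:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: average each node's features with its
    neighbors' (self-loop included), then apply a linear map and ReLU.
    A: (n, n) adjacency matrix; H: (n, d_in) node features; W: (d_in, d_out)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # per-node neighborhood size
    return np.maximum((A_hat / deg) @ H @ W, 0)
```

Stacking such layers lets information propagate along word-graph edges, which is how dependency structure feeds the extraction of relational features.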
"On the Robustness of Self-Attentive Models," the 57th Annual Meeting of Association for Computational Linguistics, July 2019.
Authors: Yu-Lun Hsieh, Minhao Cheng, Da-Cheng Juan, Wei Wei, Wen-Lian Hsu, Cho-Jui Hsieh

Abstract:
This work examines the robustness of self-attentive neural networks against adversarial input perturbations. Specifically, we investigate the attention and feature extraction mechanisms of state-of-the-art recurrent neural networks and self-attentive architectures for sentiment analysis, entailment, and machine translation under adversarial attacks. We also propose a novel attack algorithm for generating more natural adversarial examples that could mislead neural models but not humans. Experimental results show that, compared to recurrent neural models, self-attentive models are more robust against adversarial perturbation. In addition, we provide theoretical explanations for their superior robustness to support our claims.
"UHop: An Unrestricted-Hop Relation Extraction Framework for Knowledge-Based Question Answering," Proceedings of 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2019), June 2019.
Authors: Zi-Yuan Chen, Chih-Hung Chang, Yi-Pei Chen, Jijnasa Nayak and Lun-Wei Ku

Abstract:
In relation extraction for knowledge-based question answering, searching from one entity to another entity via a single relation is called "one hop". In related work, an exhaustive search from all one-hop relations, two-hop relations, and so on up to the max-hop relations in the knowledge graph is necessary but expensive. Therefore, the number of hops is generally restricted to two or three. In this paper, we propose UHop, an unrestricted-hop framework which relaxes this restriction by using a transition-based search framework in place of the relation-chain-based one. We conduct experiments on conventional 1- and 2-hop questions as well as lengthy questions, including datasets such as WebQSP, PathQuestion, and Grid World. Results show that the proposed framework enables the ability to halt, works well with state-of-the-art models, achieves competitive performance without exhaustive searches, and opens the performance gap for long relation paths.
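The transition-based idea can be rendered as a toy walk that, at every hop, compares extending the relation path against halting. The graph, the scoring function, and the keyword overlap below are hypothetical stand-ins for the learned model:

```python
def uhop_search(graph, start, score, max_hops=10):
    """Transition-based relation search: at each hop take either the
    best-scoring outgoing relation or HALT, so the path length is not
    fixed in advance and no exhaustive multi-hop enumeration is needed.
    graph: node -> list of (relation, next_node); score: path -> float."""
    path, node = [], start
    for _ in range(max_hops):
        candidates = [(score(path + [rel]), rel, nxt)
                      for rel, nxt in graph.get(node, [])]
        if not candidates:
            break                            # dead end: forced halt
        best_score, best_rel, best_next = max(candidates)
        if best_score <= score(path):
            break                            # HALT beats every extension
        path, node = path + [best_rel], best_next
    return path, node
```

With a toy scorer that rewards question-keyword matches and slightly penalizes length, the walk extends only while each extra hop improves the score, then stops on its own.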