Posts by Collection

portfolio

publications

WaveCRN: An Efficient Convolutional Recurrent Neural Network for End-to-end Speech Enhancement

Published in IEEE Signal Processing Letters, 2020

Due to the simple design pipeline, end-to-end (E2E) neural models for speech enhancement (SE) have attracted great interest. In order to improve the performance of the E2E model, the local and sequential properties of speech should be efficiently taken into account during modeling. However, in most current E2E models for SE, these properties are either not fully considered or are too complex to be realized. In this letter, we propose an efficient E2E SE model, termed WaveCRN. Compared with models based on convolutional neural networks (CNN) or long short-term memory (LSTM), WaveCRN uses a CNN module to capture the speech locality features and a stacked simple recurrent units (SRU) module to model the sequential property of the locality features. Unlike conventional recurrent neural networks and LSTM, SRU can be efficiently parallelized in computation, with even fewer model parameters. In order to more effectively suppress noise components in the noisy speech, we derive a novel restricted feature masking approach, which performs enhancement on the feature maps in the hidden layers; this is different from the approaches that apply the estimated ratio mask to the noisy spectral features, which are commonly used in speech separation methods. Experimental results on speech denoising and compressed speech restoration tasks confirm that with the SRU and the restricted feature map, WaveCRN performs comparably to other state-of-the-art approaches with notably reduced model complexity and inference time.
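The efficiency claim about SRU rests on its recurrence structure: all input projections can be computed in parallel across time, leaving only a cheap elementwise state update as the sequential part. The following is a minimal, simplified pure-Python sketch of that recurrence (a single feature dimension, no reset gate or highway connection), not the paper's actual implementation; the parameter names `w`, `w_f`, and `b_f` are illustrative.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sru_step(x_t, c_prev, w, w_f, b_f):
    """One simplified SRU cell step for a single feature dimension.

    The projections w * x_t and w_f * x_t depend only on the input, so
    across a whole utterance they can be batched in parallel; only this
    cheap elementwise state update must run sequentially.
    """
    f_t = sigmoid(w_f * x_t + b_f)                 # forget gate
    c_t = f_t * c_prev + (1.0 - f_t) * (w * x_t)   # internal state
    return c_t

def sru_forward(xs, w=1.0, w_f=0.5, b_f=0.0, c0=0.0):
    """Run the simplified recurrence over a 1-D input sequence."""
    cs, c = [], c0
    for x_t in xs:
        c = sru_step(x_t, c, w, w_f, b_f)
        cs.append(c)
    return cs
```

Because the gate depends on the input alone (not on the previous hidden state, as in LSTM), the expensive matrix work falls out of the sequential loop, which is what makes SRU fast with few parameters.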

Recommended citation: T. -A. Hsieh, H. -M. Wang, X. Lu and Y. Tsao, "WaveCRN: An Efficient Convolutional Recurrent Neural Network for End-to-End Speech Enhancement," in IEEE Signal Processing Letters, vol. 27, pp. 2149-2153, 2020. https://ieeexplore.ieee.org/iel7/97/8966529/09272838.pdf

Boosting Objective Scores of a Speech Enhancement Model by MetricGAN Post-processing

Published in APSIPA ASC, 2020

The Transformer architecture has demonstrated a superior ability compared to recurrent neural networks in many different natural language processing applications. Therefore, our study applies a modified Transformer in a speech enhancement task. Specifically, positional encoding in the Transformer may not be necessary for speech enhancement, and hence, it is replaced by convolutional layers. To further improve the perceptual evaluation of speech quality (PESQ) scores of enhanced speech, the L1 pre-trained Transformer is fine-tuned using a MetricGAN framework. The proposed MetricGAN can be treated as a general post-processing module to further boost the objective scores of interest. The experiments were conducted using the data sets provided by the organizer of the Deep Noise Suppression (DNS) challenge. Experimental results demonstrated that the proposed system outperformed the challenge baseline by a large margin, in both subjective and objective evaluations.
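The core idea of MetricGAN-style post-processing is that the discriminator learns to predict a normalized objective metric (e.g., PESQ) rather than a real/fake label, so the generator can be pushed directly toward high metric scores. A minimal sketch of the two least-squares objectives, assuming scores normalized to [0, 1] (function names here are illustrative, not from the paper's code):

```python
def discriminator_loss(d_clean, d_enh, q_enh):
    """Train D to mimic the metric: clean speech should map to the
    perfect score 1.0, and enhanced speech to its actual measured
    (normalized) metric score q_enh."""
    return (d_clean - 1.0) ** 2 + (d_enh - q_enh) ** 2

def generator_loss(d_enh, target=1.0):
    """Train G so that D predicts the best possible score for its
    enhanced output; gradients flow through D as a differentiable
    surrogate of the (non-differentiable) metric."""
    return (d_enh - target) ** 2
```

Because the metric itself is not differentiable, the learned discriminator serves as a differentiable surrogate, which is what lets this module act as a general post-processor for any objective score of interest.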

Recommended citation: S.-W. Fu, C.-F. Liao, T.-A. Hsieh, K.-H. Hung, S.-S. Wang, C. Yu, H.-C. Kuo, R. E Zezario, Y.-J. Li, S.-Y. Chuang, Y.-J. Lu, Y.-C. Lin, Y. Tsao, "Boosting Objective Scores of a Speech Enhancement Model by MetricGAN Post-processing," in Proc. APSIPA ASC, 2020. https://arxiv.org/pdf/2006.10296.pdf

Improving Perceptual Quality by Phone-Fortified Perceptual Loss using Wasserstein Distance for Speech Enhancement

Published in Proc. Interspeech, 2021

Speech enhancement (SE) aims to improve speech quality and intelligibility, which are both related to a smooth transition in speech segments that may carry linguistic information, e.g., phones and syllables. In this study, we propose a novel phone-fortified perceptual loss (PFPL) that takes phonetic information into account for training SE models. To effectively incorporate the phonetic information, the PFPL is computed based on latent representations of the wav2vec model, a powerful self-supervised encoder that renders rich phonetic information. To more accurately measure the distribution distances of the latent representations, the PFPL adopts the Wasserstein distance as the distance measure. Our experimental results first reveal that the PFPL correlates better with the perceptual evaluation metrics than signal-level losses. Moreover, the results show that the PFPL can enable a deep complex U-Net SE model to achieve highly competitive performance in terms of standardized quality and intelligibility evaluations on the Voice Bank–DEMAND dataset.
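To illustrate the distance measure underlying the PFPL: in one dimension, the empirical Wasserstein-1 distance between two equal-size sample sets has a closed form, obtained by matching samples in sorted order. This is only a toy sketch of the concept (the paper applies the distance to wav2vec latent representations, not to raw 1-D samples):

```python
def wasserstein1_1d(u, v):
    """Empirical 1-D Wasserstein-1 distance between two equal-size
    sample sets: the optimal transport plan pairs samples in sorted
    order, so the distance is the mean absolute gap between them."""
    assert len(u) == len(v), "this closed form assumes equal sizes"
    return sum(abs(a - b) for a, b in zip(sorted(u), sorted(v))) / len(u)
```

Unlike a pointwise L1 or L2 loss, this distance compares the distributions of features, which is one reason it can track perceptual metrics more closely than signal-level losses.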

Recommended citation: T.-A. Hsieh and C. Yu and S.-W. Fu and X. Lu and Y. Tsao, "Improving Perceptual Quality by Phone-Fortified Perceptual Loss Using Wasserstein Distance for Speech Enhancement," in Proc. Interspeech, 2021. https://www.isca-speech.org/archive/pdfs/interspeech_2021/hsieh21_interspeech.pdf

On the Importance of Neural Wiener Filter for Resource Efficient Multichannel Speech Enhancement

Published in Proc. ICASSP, 2024

We introduce a time-domain framework for efficient multichannel speech enhancement, emphasizing low latency and computational efficiency. This framework incorporates two compact deep neural networks (DNNs) surrounding a multichannel neural Wiener filter (NWF). The first DNN enhances the speech signal to estimate NWF coefficients, while the second DNN refines the output from the NWF. The NWF, while conceptually similar to the traditional frequency-domain Wiener filter, undergoes a training process optimized for low-latency speech enhancement, involving fine-tuning of both analysis and synthesis transforms. Our research results illustrate that the NWF output, having minimal nonlinear distortions, attains performance levels akin to those of the first DNN, deviating from conventional Wiener filter paradigms. Training all components jointly outperforms sequential training, despite its simplicity. Consequently, this framework achieves superior performance with fewer parameters and reduced computational demands, making it a compelling solution for resource-efficient multichannel speech enhancement.
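For intuition on the filter at the center of this framework: in the classical single-channel, single-band case, the Wiener gain is simply the fraction of total power attributed to speech. The neural Wiener filter generalizes this with multichannel statistics and learned analysis/synthesis transforms; the sketch below shows only the classical scalar case, with illustrative function names:

```python
def wiener_gain(speech_power, noise_power):
    """Classical single-band Wiener gain: the ratio of speech power
    to total (speech + noise) power in that band."""
    return speech_power / (speech_power + noise_power)

def apply_wiener(noisy_band, speech_power, noise_power):
    """Attenuate one band of the noisy signal by its Wiener gain.
    With accurate power estimates this suppresses noise-dominated
    bands while passing speech-dominated ones nearly unchanged."""
    return wiener_gain(speech_power, noise_power) * noisy_band
```

Because the gain is a simple scaling, the filter output introduces minimal nonlinear distortion, which is consistent with the paper's observation that the NWF output already performs close to the first DNN.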

Recommended citation: T.-A. Hsieh and J. Donley and D. Wong and B. Xu and A. Pandey, "On the Importance of Neural Wiener Filter for Resource Efficient Multichannel Speech Enhancement," in Proc. ICASSP, 2024. https://arxiv.org/pdf/2401.07882

Inference and Denoise: Causal Inference-based Neural Speech Enhancement

Published in IEEE International Workshop on Machine Learning for Signal Processing (MLSP), 2023

This study addresses the speech enhancement (SE) task within the causal inference paradigm by modeling the noise presence as an intervention. Based on the potential outcome framework, the proposed causal inference-based speech enhancement (CISE) separates clean and noisy frames in intervened noisy speech using a noise detector and assigns both sets of frames to two mask-based enhancement modules (EMs) to perform noise-conditional SE. Specifically, we use the presence of noise as guidance for EM selection during training, and the noise detector selects the enhancement module according to the predicted presence of noise for each frame. Moreover, we derive an SE-specific average treatment effect to adequately quantify the causal effect. Experimental evidence demonstrates that CISE outperforms a non-causal mask-based SE approach in the studied settings and has better performance and efficiency than more complex SE models.
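In the potential outcome framework, the average treatment effect (ATE) is the mean outcome under the intervention minus the mean outcome without it; here the "treatment" is the presence of noise in a frame. A minimal illustrative sketch, assuming per-frame quality scores as the outcome (the paper's SE-specific ATE is a refinement of this basic quantity):

```python
def average_treatment_effect(outcomes_treated, outcomes_control):
    """Basic potential-outcome ATE: E[Y(1)] - E[Y(0)].

    outcomes_treated:  per-frame outcome scores with the intervention
                       (noise present).
    outcomes_control:  per-frame outcome scores without it.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return mean(outcomes_treated) - mean(outcomes_control)
```

Framing noise as a treatment is what lets the noise detector act as the "assignment" mechanism, routing each frame to the enhancement module matching its predicted noise condition.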

Recommended citation: T.-A. Hsieh and C.-H. H Yang and P.-Y. Chen and S. M Siniscalchi and Y. Tsao, "Inference and Denoise: Causal Inference-based Neural Speech Enhancement," in Proc. MLSP, 2023. https://arxiv.org/pdf/2211.01189

talks

teaching
