Authors
Zhizheng Wu, Cassia Valentini-Botinhao, Oliver Watts, Simon King
Publication date
2015
Conference
ICASSP
Description
Deep neural networks (DNNs) use a cascade of hidden representations to enable the learning of complex mappings from input to output features. They are able to learn the complex mapping from text-based linguistic features to speech acoustic features, and so perform text-to-speech synthesis. Recent results suggest that DNNs can produce more natural synthetic speech than conventional HMM-based statistical parametric systems. In this paper, we show that the hidden representation used within a DNN can be improved through the use of multi-task learning, and that stacking multiple frames of hidden-layer activations (stacked bottleneck features) also leads to improvements. Experimental results confirm the effectiveness of the proposed methods, and in listening tests we find that stacked bottleneck features in particular offer a significant improvement over both a baseline DNN and a benchmark HMM system.
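A minimal PyTorch sketch of the two ideas in the abstract may help: a multi-task DNN acoustic model with a narrow bottleneck layer, and a second-stage DNN fed with stacked bottleneck features. All layer sizes, feature dimensions, the context width, the secondary target, and the loss weighting below are illustrative assumptions, not the configuration used in the paper.

```python
# Sketch of multi-task learning + stacked bottleneck features for DNN-based
# TTS. Dimensions and the secondary task are assumptions for illustration.
import torch
import torch.nn as nn

LING_DIM = 425       # input linguistic features per frame (assumed size)
ACOUSTIC_DIM = 187   # primary target: vocoder acoustic features (assumed size)
SECONDARY_DIM = 50   # secondary multi-task target (placeholder)
BN_DIM = 64          # bottleneck width (assumed)
CONTEXT = 4          # frames of bottleneck context on each side (assumed)

class MultiTaskDNN(nn.Module):
    """Shared hidden stack with a bottleneck layer and two output heads.
    The secondary head provides the multi-task learning signal that shapes
    the shared hidden representation."""
    def __init__(self):
        super().__init__()
        self.pre = nn.Sequential(   # layers up to and including the bottleneck
            nn.Linear(LING_DIM, 512), nn.Tanh(),
            nn.Linear(512, BN_DIM), nn.Tanh(),
        )
        self.post = nn.Sequential(nn.Linear(BN_DIM, 512), nn.Tanh())
        self.acoustic_head = nn.Linear(512, ACOUSTIC_DIM)
        self.secondary_head = nn.Linear(512, SECONDARY_DIM)

    def forward(self, x):
        bn = self.pre(x)            # bottleneck activations, one row per frame
        h = self.post(bn)
        return self.acoustic_head(h), self.secondary_head(h), bn

def stack_bottleneck(bn, context=CONTEXT):
    """Concatenate each frame's bottleneck vector with those of its
    +/- `context` neighbours; edge frames are padded by repetition."""
    T, D = bn.shape
    padded = torch.cat([bn[:1].expand(context, D), bn, bn[-1:].expand(context, D)])
    return torch.cat([padded[i:i + T] for i in range(2 * context + 1)], dim=1)

# Second-stage DNN: linguistic input augmented with stacked bottleneck features.
second_stage = nn.Sequential(
    nn.Linear(LING_DIM + (2 * CONTEXT + 1) * BN_DIM, 512), nn.Tanh(),
    nn.Linear(512, 512), nn.Tanh(),
    nn.Linear(512, ACOUSTIC_DIM),
)

if __name__ == "__main__":
    T = 100                                        # frames in one toy utterance
    x = torch.randn(T, LING_DIM)
    y_ac, y_sec = torch.randn(T, ACOUSTIC_DIM), torch.randn(T, SECONDARY_DIM)

    model = MultiTaskDNN()
    ac, sec, bn = model(x)
    # Multi-task loss: primary acoustic regression plus a weighted secondary task.
    loss = nn.functional.mse_loss(ac, y_ac) + 0.5 * nn.functional.mse_loss(sec, y_sec)
    loss.backward()

    # Stage two: append stacked bottleneck features to the original input.
    x2 = torch.cat([x, stack_bottleneck(bn.detach())], dim=1)
    print(second_stage(x2).shape)                  # torch.Size([100, 187])
```

In a real pipeline the first-stage network would presumably be trained to convergence first, with bottleneck activations then extracted over the whole corpus before the second-stage network is trained; the single forward pass above only illustrates the data flow.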
Total citations
[Per-year citation chart, 2015–2024]
Scholar articles
Z Wu, C Valentini-Botinhao, O Watts, S King - 2015 IEEE International Conference on Acoustics …, 2015