Towards Hierarchical Spoken Language Disfluency Modeling

Jiachen Lian, Gopala Anumanchipalli


Abstract
Speech dysfluency modeling is a bottleneck for both speech therapy and language learning, yet there is no AI solution that systematically tackles this problem. We first define the concepts of dysfluent speech and dysfluent speech modeling. We then present the Hierarchical Unconstrained Dysfluency Modeling (H-UDM) approach, which addresses both dysfluency transcription and detection and eliminates the need for extensive manual annotation. Furthermore, we introduce VCTK++, a simulated dysfluent dataset that enhances the capabilities of H-UDM in phonetic transcription. Our experimental results demonstrate the effectiveness and robustness of the proposed methods on both transcription and detection tasks.
Anthology ID:
2024.eacl-long.32
Volume:
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
539–551
URL:
https://aclanthology.org/2024.eacl-long.32
Cite (ACL):
Jiachen Lian and Gopala Anumanchipalli. 2024. Towards Hierarchical Spoken Language Disfluency Modeling. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 539–551, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
Towards Hierarchical Spoken Language Disfluency Modeling (Lian & Anumanchipalli, EACL 2024)
PDF:
https://aclanthology.org/2024.eacl-long.32.pdf
Video:
https://aclanthology.org/2024.eacl-long.32.mp4