February 27, 2019, Kyoto, Japan
In conjunction with IEEE BigComp 2019, the 6th IEEE International Conference on Big Data and Smart Computing
When people look for information or particular services, they typically type queries into a search engine and choose the desired result from a list of candidates. Although this style of human-computer interaction (HCI) makes it possible to find what they want far more efficiently than before, people now want a more convenient way. A dialog system is one such way: it allows people to communicate with computers through natural language text or voice. Thanks to great advances in machine learning techniques, dialog systems have been successfully applied to various applications, such as intelligent speakers (e.g., Amazon Echo, Google Home) and intelligent counseling agents. A dialog system usually consists of several cascaded steps (e.g., speech-to-text, natural language understanding), so it is necessary to find ways of improving each step and of incorporating them effectively. We want to discuss and share knowledge about how to solve these issues; a rough sketch of such a cascade is given below.
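As a rough illustration of the cascaded structure mentioned above (all step names and toy outputs here are hypothetical and not tied to any particular system), a single dialog turn can be viewed as a chain of stages:

# Minimal sketch of a cascaded dialog pipeline (illustrative placeholders only).

def speech_to_text(audio: bytes) -> str:
    # Placeholder ASR step; a real system would call a speech recognizer here.
    return "what is the weather in kyoto"

def understand(utterance: str) -> dict:
    # Placeholder NLU step mapping the utterance to an intent with slots.
    return {"intent": "ask_weather", "slots": {"city": "kyoto"}}

def decide(frame: dict) -> str:
    # Placeholder dialog policy choosing the next system action.
    return "inform_weather"

def respond(action: str) -> str:
    # Placeholder NLG step turning the action into a reply (TTS would follow).
    return "It is sunny in Kyoto today."

def handle_turn(audio: bytes) -> str:
    # The cascade: speech-to-text -> understanding -> policy -> generation.
    return respond(decide(understand(speech_to_text(audio))))

print(handle_turn(b""))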
This workshop aims to create opportunities to discuss state-of-the-art studies and to share ongoing work. We hope that this will enhance collaboration among researchers working on dialog systems. There are many challenging issues, such as out-of-domain detection, distant voice recognition, and end-to-end systems. We want to discuss how to solve such issues and share experiences of applying dialog systems to real-world applications.
We invite submissions on topics that include, but are not limited to, the following:
All papers must be original and not simultaneously submitted to another journal or conference. Prospective authors are invited to submit 4-page papers, written in English, in the IEEE two-column format for conference proceedings. The author list may appear in the paper, but it can be omitted if the authors prefer. The direct link for paper submission is https://easychair.org/conferences/?conf=iwds2019. All submissions will be peer-reviewed by the Program Committee of the workshop. All accepted workshop papers will be published in the IEEE Xplore Digital Library as conference proceedings.
The workshop will be held in the International Science Innovation Building at Kyoto University in conjunction with the IEEE BigComp 2019. You will find more details (e.g., room number) at the conference.
Speech recognition has become unprecedentedly important with the popularization of intelligent agents. The performance of speech recognition is greatly influenced by noise interference, which is unavoidable in practical situations. Humans can concentrate on a speech signal even in noisy environments. This ability is enabled by auditory cues in the human ear, which analyzes acoustic signals continuously in the time and frequency domains. Speech segregation is an algorithm that mimics this human ability, and it helps improve the performance of speech recognition in noisy circumstances. In this talk, recent research on speech segregation will be introduced, including non-negative matrix factorization and deep clustering. In particular, my recent work on speech/music pitch classification based on a bidirectional LSTM with probabilistic attention will be explained in detail.
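As a rough sketch of one of the approaches named in the abstract (not the speaker's actual method), non-negative matrix factorization can decompose a magnitude spectrogram into spectral bases and their activations; the split of bases into "speech" and "noise" groups below is purely illustrative, and the random input stands in for a real STFT:

# Minimal NMF-based source separation sketch (assumed setup, for illustration).
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical magnitude spectrogram (freq_bins x time_frames); in practice
# this would come from the STFT of a noisy speech mixture.
rng = np.random.default_rng(0)
spectrogram = np.abs(rng.standard_normal((257, 200)))

# Factorize |X| ~= W @ H, where W holds spectral basis vectors and
# H holds their activations over time.
model = NMF(n_components=20, init="random", max_iter=300, random_state=0)
W = model.fit_transform(spectrogram)
H = model.components_

# Assume (purely for illustration) the first 10 bases model speech and the
# rest model noise; build a soft mask to keep the speech part of the mixture.
speech = W[:, :10] @ H[:10, :]
noise = W[:, 10:] @ H[10:, :]
mask = speech / (speech + noise + 1e-8)
enhanced = mask * spectrogram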
Han-Gyu Kim received the B.S. degree in electronic engineering from Tsinghua University, Beijing, China, in 2009, and the M.S. and Ph.D. degrees from the School of Computing, KAIST, Daejeon, South Korea, in 2011 and 2018, respectively. He is currently a researcher at NAVER Corp., Gyeonggi-do, South Korea. His research interests include speech recognition, source separation, machine learning, and artificial intelligence.
All questions about submissions should be emailed to the chairs, Jonghwan Hyeon (jonghwanhyeon@kaist.ac.kr) or Young-Seob Jeong (bytecell@sch.ac.kr).