The 2nd International Workshop on Dialog Systems
(IWDS 2019)


February 27, 2019, Kyoto, Japan

In conjunction with the IEEE BigComp 2019 - 6th IEEE International Conference on Big Data and Smart Computing


When people look for information or for a particular service, they traditionally enter queries into a search engine and choose the desired result from a list of candidates. Although this mode of human-computer interaction (HCI) makes it possible to find the desired information far more efficiently than before, users now want a more convenient way. A dialog system is one such way: it allows people to communicate with computers through natural language, whether typed or spoken. Thanks to the great advances in machine learning techniques, dialog systems have been successfully applied to various applications, such as intelligent speakers (e.g., Amazon Echo, Google Home) and intelligent counselors. A dialog system usually consists of several cascaded steps (e.g., speech-to-text, natural language understanding), and it is necessary to find ways of improving each step and of effectively combining them. In this workshop, we want to discuss and share knowledge about how to solve these issues.
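To make the cascaded structure concrete, the steps above can be sketched as a chain of stages, where each stage consumes the previous stage's output. All function names and return values below are illustrative placeholders, not part of any real system; in practice each stage would wrap a trained model.

```python
def speech_to_text(audio: bytes) -> str:
    """Placeholder ASR stage: convert an audio signal into a transcript."""
    return "turn on the living room light"  # stub output for illustration

def understand(utterance: str) -> dict:
    """Placeholder NLU stage: map the transcript to an intent and slots."""
    return {"intent": "light_on", "slots": {"room": "living room"}}

def manage_dialog(frame: dict) -> str:
    """Placeholder dialog-management stage: choose a system response."""
    if frame["intent"] == "light_on":
        return f"Turning on the light in the {frame['slots']['room']}."
    return "Sorry, I did not understand."

def run_pipeline(audio: bytes) -> str:
    # Each stage feeds the next; errors in early stages propagate
    # downstream, which is why improving each step and combining
    # them effectively are both important.
    return manage_dialog(understand(speech_to_text(audio)))

print(run_pipeline(b"..."))
```

The chained call in `run_pipeline` illustrates why the steps cannot be optimized in isolation: a recognition error in the first stage changes the input to every later stage.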

Theme, Purpose, and Scope

This workshop aims to create opportunities to discuss state-of-the-art studies and to share ongoing work. We hope that this will enhance collaboration among researchers working on dialog systems. There are many challenging issues, such as out-of-domain detection, distant voice recognition, and end-to-end systems. We want to discuss how to solve such issues and to share experiences of applying dialog systems to real-world applications.

We invite submissions on topics that include, but are not limited to, the following:

  • Intelligent dialog systems
  • Chatbot systems
  • Speech recognition
  • Speech synthesis
  • Natural language understanding
  • Information extraction
  • Dialog management
  • Language resources and representation scheme for dialog systems


All papers must be original and not simultaneously submitted to another journal or conference. Prospective authors are invited to submit papers of up to 4 pages, written in English, in the IEEE two-column format for conference proceedings. The author list may appear in the paper, but it may be omitted if the authors wish. The direct link for paper submission is . All submissions will be peer-reviewed by the Program Committee of the workshop, and all accepted workshop papers will be published in the IEEE Xplore Digital Library as conference proceedings.


Submission of Workshop Papers
November 30, 2018 → December 14, 2018 (extended, UTC -12)
Notification of Paper Acceptance
December 21, 2018 → December 31, 2018 (extended)
Camera Ready Submission
December 28, 2018 → January 7, 2019 (extended)
Author Registration
December 28, 2018 → January 7, 2019 (extended)
Workshop
February 27, 2019


Program Committee

  • Byungsoo Ko, Researcher, Naver
  • Dongkeon Lee, Researcher, KAIST
  • Hee-Cheol Seo, Researcher, Naver
  • Hyounggyu Lee, Researcher, Naver
  • Joonghwi Shin, Researcher, Naver
  • Kyoung-Soo Han, Researcher, Naver
  • Sa-Kwang Song, Researcher, KISTI
  • Seung-Ho Han, Researcher, KAIST
  • Yoonjae Jeong, Researcher, NCSOFT
  • Zae Myung Kim, Researcher, Naver

Organizing Committee

  • Young-Seob Jeong, Professor, SoonChunHyang Univ.
  • Jonghwan Hyeon, Ph.D Candidate, KAIST
  • Ho-Jin Choi, Professor, KAIST


The workshop will be held in the International Science Innovation Building at Kyoto University in conjunction with the IEEE BigComp 2019. You will find more details (e.g., room number) at the conference.


Monaural Speech Segregation Using Pitch Classification Based on Bidirectional LSTM with Probabilistic Attention


Han-Gyu Kim
  • Researcher, Naver


Speech recognition has become unprecedentedly important with the popularization of intelligent agents. The performance of speech recognition is greatly influenced by noise interference, which is unavoidable in practical situations. Humans can concentrate on a speech signal even in noisy environments. This ability is enabled by auditory cues in the human ear, which analyzes acoustic signals continuously in the time and frequency domains. Speech segregation is an algorithm that mimics this human ability, and it helps improve the performance of speech recognition in noisy circumstances. In this talk, recent research on speech segregation will be introduced, including non-negative matrix factorization and deep clustering. In particular, my recent work on speech/music pitch classification based on a bidirectional LSTM with probabilistic attention will be explained in detail.


Han-Gyu Kim received the B.S. degree in electronic engineering from Tsinghua University, Beijing, China, in 2009, and the M.S. and Ph.D. degrees from the School of Computing, KAIST, Daejeon, South Korea, in 2011 and 2018, respectively. He is currently a researcher at NAVER Corp., Gyeonggi-do, South Korea. His research interests include speech recognition, source separation, machine learning, and artificial intelligence.


February 27 (Wednesday), 2019

Opening (09:30~09:40)

Welcoming address
Young-Seob Jeong (Professor, SoonChunHyang Univ.)

Invited talk (09:40~10:30)

Monaural Speech Segregation Using Pitch Classification Based on Bidirectional LSTM with Probabilistic Attention
Han-Gyu Kim (Researcher, Naver)

Coffee break (10:30~10:50)

Session 1 (10:50~12:10)

[ASC] Detecting Basic Level Categories by Term Weighting and Feature Entropy
Junze Li, Qing Du, Yi Cai and Jialin Wu (South China University of Technology, China)
[IWDS] Augmentative and Alternative Communication System Using Information Priority and Retrieval
Yoonseok Heo and Sangwoo Kang (Gachon University, South Korea)
[IWDS] Korean Time Information Analysis of Hierarchical Annotation Rules from Natural Language Text
Chae-Gyun Lim and Ho-Jin Choi (KAIST, South Korea)
[IWDS] Automatic Speech Recognition Dataset Augmentation with Pre-Trained Model and Script
Minsu Kwon and Ho-Jin Choi (KAIST, South Korea)

Lunch (12:10~14:00)

Session 2 (14:00~15:20)

[ASC] Word Embedding Method of SMS Messages for Spam Message Filtering
Hyun-Young Lee and Seung-Shik Kang (Kookmin University, South Korea)
[IWDS] Word-level Emotion Embedding based on Semi-Supervised Learning for Emotional Classification in Dialogue
Young-Jun Lee, Chan-Yong Park and Ho-Jin Choi (KAIST, South Korea)
[IWDS] Improving Response Quality in a Knowledge-Grounded Chat System Based on a Sequence-to-Sequence Neural Network
Sihyung Kim, Harksoo Kim (Kangwon National University, South Korea), Oh-Woog Kwon and Young-Gil Kim (Electronics and Telecommunications Research Institute (ETRI), South Korea)
[ASC] Scenery-based Fashion Recommendation with Cross-Domain Generative Adversarial Networks
Sang Yeong Jo, Sun-Hye Jang, Hee-Eun Cho and Jin-Woo Jeong (Kumoh National Institute of Technology, South Korea)

Closing remarks (15:20~15:30)


All questions about submissions should be emailed to the chairs, Jonghwan Hyeon ( or Young-Seob Jeong (