The 3rd International Workshop on Dialog Systems
(IWDS 2020)



February 19, 2020, BEXCO, Busan, Korea

In conjunction with the IEEE BigComp 2020 - 7th IEEE International Conference on Big Data and Smart Computing


When people look for information or a particular service, they traditionally type queries into a search engine and choose the desired result from a list of candidates. Although this style of human-computer interaction (HCI) made it possible to find desired information far more efficiently than before, users now want a more convenient way. The dialog system is one such way: it allows people to communicate with computers through natural language text or speech. Thanks to great advances in machine learning techniques, dialog systems have been successfully applied to various applications such as intelligent speakers (e.g., Amazon Echo, Google Home) and intelligent counselors. A dialog system usually consists of several cascaded steps (e.g., speech-to-text, natural language understanding), so it is necessary to find ways of improving each step and of incorporating the steps effectively. We want to discuss and share knowledge about how to solve these issues.
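The cascade of steps mentioned above can be sketched as a simple chain of stages. The stage functions below are illustrative stubs under assumed names (not any particular system's API); in practice each stage would be backed by a trained model:

```python
# Minimal sketch of a cascaded dialog-system pipeline:
# ASR -> NLU -> dialog management -> response generation.
# All stage implementations here are toy placeholders.

def speech_to_text(audio: bytes) -> str:
    """ASR stage: would transcribe audio; stubbed with a fixed utterance."""
    return "what is the weather today"

def understand(utterance: str) -> dict:
    """NLU stage: map text to an intent frame (toy keyword rule)."""
    if "weather" in utterance:
        return {"intent": "get_weather", "slots": {"date": "today"}}
    return {"intent": "unknown", "slots": {}}

def manage_dialog(frame: dict) -> str:
    """Dialog management: choose a system action from the intent frame."""
    if frame["intent"] == "get_weather":
        return "inform_weather"
    return "ask_clarification"

def generate_response(action: str) -> str:
    """Generation stage: render the chosen action as natural language."""
    templates = {
        "inform_weather": "Today will be sunny.",
        "ask_clarification": "Sorry, could you rephrase that?",
    }
    return templates[action]

def run_pipeline(audio: bytes) -> str:
    # The cascade: output of each stage feeds the next.
    return generate_response(manage_dialog(understand(speech_to_text(audio))))

print(run_pipeline(b"\x00"))  # -> Today will be sunny.
```

Because errors propagate down such a cascade (a misrecognized word can derail understanding and management), improving individual stages and their integration, as the paragraph above notes, is a central research problem.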

Theme, Purpose, and Scope

This workshop aims to create opportunities to discuss state-of-the-art studies and to share ongoing work. We hope that this will enhance collaboration among researchers working on dialog systems. There are many challenging issues, such as out-of-domain detection, distant-speech recognition, and end-to-end systems. We want to discuss how to solve such issues and to share experiences of applying dialog systems to real-world applications.

We invite submissions on topics that include, but are not limited to, the following:

  • Intelligent dialog systems
  • Chatbot systems
  • Speech recognition
  • Speech synthesis
  • Natural language understanding
  • Information extraction
  • Dialog management
  • Language resources and representation scheme for dialog systems


Organizing Committee

  • Chae-Gyun Lim, Ph.D. Candidate, KAIST
  • Jonghwan Hyeon, Ph.D. Candidate, KAIST
  • Ho-Jin Choi, Professor, KAIST

Program Committee

  • Young-Seob Jeong, Professor, SoonChunHyang Univ.
  • Byungsoo Ko, Researcher, Naver
  • Dongkeon Lee, Researcher, KAIST
  • Hee-Cheol Seo, Researcher, Naver
  • Hyounggyu Lee, Researcher, Naver
  • Joonghwi Shin, Researcher, Naver
  • Kyoung-Soo Han, Researcher, Naver
  • Sa-Kwang Song, Researcher, KISTI
  • Seung-Ho Han, Researcher, KAIST
  • Yoonjae Jeong, Researcher, NCSOFT
  • Zae Myung Kim, Researcher, Naver


Program

09:00 - 10:20, February 19 (Wednesday), 2020

Location: Room #326

09:00 - 09:05
Opening Remarks
Chae-Gyun Lim
09:05 - 09:20
Prior Art Search Using Multi-Modal Embedding of Patent Documents
Myungchul Kang, Suan Lee, and Wookey Lee
09:20 - 09:35
Implementation of Python-Based Korean Speech Generation Service with Tacotron
Minsu Kwon, Young-Seob Jeong, and Ho-Jin Choi
09:35 - 09:50
Emotional Response Generation using Conditional Variational Autoencoder
Young-Jun Lee and Ho-Jin Choi
09:50 - 10:05
Temporal Relationship Extraction for Natural Language Texts by Using Deep Bidirectional Language Model
Chae-Gyun Lim and Ho-Jin Choi
10:05 - 10:20
Multi-label Patent Classification using Attention-Aware Deep Learning Model
Arousha Haghighian Roudsari, Jafar Afshar, Charles Cheolgi Lee, and Wookey Lee


All questions about submissions should be emailed to the chairs, Chae-Gyun Lim or Jonghwan Hyeon.