Authors
Hongyi Zhang, Mengxin Cao, Ron Chew
Demo
Introduction
UI designers have long relied on sketching to turn the interfaces they have in mind into prototypes. However, much can still be done to make designers' lives easier. Current UI design tools are time-consuming and repetitive: drawing sets of graphics and widgets and then specifying their interactions and behaviors often demands a large amount of code and configuration for small results. The iteration process cannot keep up with the speed at which designers' minds operate.
In this project, we propose “Talk UI”, a rapid multi-modal prototyping tool that takes advantage of natural language to facilitate the design process and accelerate prototype iteration. With Talk UI, designers can create graphical interfaces and specify interactive outcomes through talking and demonstration. No sketching is needed, and the UI design process becomes as natural as having a conversation.
Features
Talk UI takes natural language and voice input from the user and supports the following features, which we would like to highlight:
- Continuous listening to the user's voice input, without requiring the user to mark the start and end of a conversation
- Instantiation of graphical objects and widgets through natural language (see the sketch after this list)
- Attaching interactive behaviors to existing graphical objects through demonstration
- Support for direct manipulation to update graphical object properties through a property sheet
- Display of voice command history and conversation feedback
- Static export of the final interface
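To make the first two features concrete, the following is a minimal, hypothetical sketch of how continuous listening and natural-language widget instantiation could be wired together in a browser. It is not Talk UI's actual implementation: the `Widget` type, the command grammar, the `renderWidget` helper, and the use of the browser Web Speech API are all assumptions made for illustration.

```typescript
// Hypothetical sketch: continuous listening plus a minimal command parser.
// The Widget type, command grammar, and renderWidget helper are illustrative
// assumptions, not Talk UI's actual implementation.

type Widget = { kind: "button" | "textbox" | "image"; label: string };

const widgets: Widget[] = [];

// Map a recognized utterance such as "add a button labeled submit"
// onto a widget instantiation.
function handleCommand(utterance: string): void {
  const match = /add (?:a |an )?(button|textbox|image)(?: labeled (.+))?/i.exec(utterance);
  if (!match) return; // unrecognized commands are ignored
  const widget: Widget = {
    kind: match[1].toLowerCase() as Widget["kind"],
    label: match[2] ?? "",
  };
  widgets.push(widget);
  renderWidget(widget); // hypothetical helper that draws the widget on the canvas
}

function renderWidget(widget: Widget): void {
  console.log(`Rendered ${widget.kind} "${widget.label}"`);
}

// Continuous listening via the browser's Web Speech API (one possible backend).
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
const recognition = new SpeechRecognitionImpl();
recognition.continuous = true;      // keep listening; no explicit start/stop per command
recognition.interimResults = false; // only act on finalized transcripts
recognition.onresult = (event: any) => {
  const transcript = event.results[event.results.length - 1][0].transcript.trim();
  handleCommand(transcript);
};
recognition.start();
```

In a full system, the regular-expression parser above would be replaced by a more robust natural-language understanding step, but the overall flow (continuous recognition, utterance interpretation, widget instantiation) is the same.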