Voice-Based Programming
Programming languages and environments have been designed under the assumption that we program with a keyboard and a computer screen. While this assumption is natural, there are situations where we cannot or do not want to use keyboards and screens. For example, you might want to instruct a vacuum cleaner robot to perform a rather complicated sequence of behaviors while standing in front of it, or you might want to write a program while driving a car or riding a bicycle. A voice-based interface could be useful here, as speech recognition and synthesis technologies have improved dramatically in the last decade. We have already seen their potential through voice assistants such as Amazon Alexa and Apple Siri.
However, there is a huge gap between voice assistance and voice-based programming, and it raises a number of questions. Are traditional programming languages suitable for voice-based coding in terms of their syntax? Are they easy to dictate? Are they easy to comprehend by listening? (How does an expression like “x0 * c_minus1” sound to us, and to a speech recognition system?) Should we operate traditional programming editors by voice? If so, how would we move the cursor or copy and paste code fragments? Or should there be an entirely different syntax and editor paradigm for voice-based coding?
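To get a feel for the dictation problem, consider spelling out a conventional expression character by character, roughly the way a naive screen reader or dictation system would. The following toy “verbalizer” is a hypothetical sketch, not part of the project; its word table is invented purely for illustration.

```python
# Toy verbalizer: spells out a code expression for speech output.
# The mapping below is hypothetical, chosen only to show how awkward
# conventional identifiers become when read aloud.
SPOKEN = {
    "*": "times",
    "+": "plus",
    "_": "underscore",
    "0": "zero",
    "1": "one",
}

def verbalize(expr: str) -> str:
    """Spell out each non-space character of a code expression."""
    words = []
    for ch in expr:
        if ch == " ":
            continue
        words.append(SPOKEN.get(ch, ch))
    return " ".join(words)

print(verbalize("x0 * c_minus1"))
# -> x zero times c underscore m i n u s one
```

The letter-by-letter reading of `minus` in the identifier illustrates the mismatch: what is compact on screen becomes long and ambiguous in speech.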
In this project, we explore programming languages and programming environments that are suitable for voice-based, in particular voice-only, user interfaces. Our approach is incremental: we first design and implement a small language and environment capable of developing small programs for robots, in order to identify the important design elements. We will then extend them to more realistic applications.
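As a flavor of what such a small robot language might look like, here is a minimal sketch of an interpreter for spoken movement commands. The command vocabulary and the action representation are hypothetical, invented only to illustrate the starting point, not the project’s actual design.

```python
# Hypothetical interpreter for a tiny spoken robot-command language.
# Spoken number words that may follow a direction as a repeat count.
NUMBER_WORDS = {"one": 1, "two": 2, "three": 3, "ten": 10}
DIRECTIONS = {"forward", "back", "left", "right"}

def interpret(utterance: str) -> list:
    """Translate a spoken command sequence into a list of robot actions."""
    actions = []
    tokens = utterance.lower().split()
    i = 0
    while i < len(tokens):
        word = tokens[i]
        if word in DIRECTIONS:
            count = 1
            # An optional repetition count may be spoken after the direction.
            if i + 1 < len(tokens) and tokens[i + 1] in NUMBER_WORDS:
                count = NUMBER_WORDS[tokens[i + 1]]
                i += 1
            actions.extend([word] * count)
        i += 1  # filler words such as "then" are simply skipped
    return actions

print(interpret("forward three then left"))
# -> ['forward', 'forward', 'forward', 'left']
```

Even this toy shows the design questions above in miniature: the grammar must tolerate filler words, and numbers arrive as words rather than digits.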
- Presentations on Voice-Based Programming and Distributed Reactive Programming at PX/26
- Omar’s Master Thesis Presentation
- Come and Go