Supporting Direct Human-Computer Communication
October 18th, 10:00
PCRI building (650), Room 465. How to get there?
From the dawn of digital computing, we have striven to communicate with computers to fully leverage their computing power. The development of sensing technologies enables such communication through verbal, gestural, and graphical language. Despite the many different input techniques, conversations with computers are all structured around a fixed set of UI elements that offer little flexibility. As a result, the rich and dynamic thoughts we could have articulated naturally with flexible words, gestures, and visuals must instead be formalized as structured, restrictive, rigid, and repetitive tasks around such elements. I seek to design a new interaction language that enables us to directly and flexibly articulate our creative thoughts. I approach this from two directions. First, I design new graphical representations of digital content to match our dynamic and flexible needs. Second, I invent novel interaction techniques to enable the direct articulation of user intention.
Haijun Xia is a PhD student advised by Prof. Daniel Wigdor at the DGP lab, University of Toronto. He is also a Microsoft PhD Fellow and an Adobe PhD Fellow. His research focuses on creating flexible digital media to augment creativity. He approaches this from two directions: 1) he invents novel representations of abstract content to match our dynamic needs; and 2) he develops novel interaction techniques that allow us to express our thoughts and ideas via the graphical, gestural, and vocal communication that we are all naturally capable of.