For most people, using a computer is limited to typing, clicking, searching and, thanks to Siri and similar software, speaking commands. Consider how humans communicate with one another face to face: smiles, gestures and tone of voice all lend richness to the exchange.
With the aim of transforming everyday interaction between humans and computers, researchers at Colorado State University are developing new technologies to make computers recognize not just conventional commands, but also gestures, body language and facial expressions.
“Current human-computer interfaces are still severely limited,” said Draper, who is joined on the project by CSU researchers from the departments of computer science and mathematics. “First, they support basically one-way communication: users tell the computer what to do. This was fine when computers were simple tools, but increasingly, computers are becoming our partners and assistants in complex tasks. Communication with computers needs to be a two-way dialogue.”
Their objective: making computers smart enough to reliably recognize non-verbal cues from humans in the most natural, intuitive way possible. According to the project plan, the work could one day allow people to communicate more easily with computers in noisy settings, or when a person is deaf or hard of hearing, or speaks another language.
The project, which falls largely under DARPA’s basic research arm, is focused on enabling people to communicate with computers through gestures and expressions in addition to words, not in place of them, the researchers say.
The project also includes co-principal investigators Ross Beveridge, professor of computer science; Jaime Ruiz, assistant professor of computer science; and Michael Kirby and Chris Peterson, both professors of mathematics.