"The beginning of knowledge is the discovery of something we do not understand".
Frank Herbert.

At the Human-Centered Computing Lab we pursue innovative solutions to real-world problems using an interdisciplinary approach to Human-Computer Interaction, bringing together computer engineering, human factors, ergonomics, psychology, user interfaces, multimedia production and graphic design.

My research focuses on designing new forms of communication that integrate people and technology. A great part of my work analyzes the use of consumer electronics such as laptops, smartphones and tablets in high-cognitive-load situations such as automotive environments. I strive to find the best combination of performance, safety and fun in these technologies, and my way of doing that is to study both technology and humans and help them understand each other.

Emotion Adaptive Vehicle User Interfaces

Affective computing is an interdisciplinary field spanning computer science, psychology and cognitive science that studies and develops systems that can recognize, interpret, process and simulate human affects. This research focuses on affective computing for automotive environments. While the ultimate goal is to build in-car interfaces that react to the driver's emotional state, current studies focus on several aspects of this problem.

Research on this field is ongoing and publications will be updated with the results of the studies.

Fields of interest: affective computing, adaptive interfaces, cognitive load, natural user interfaces, user modeling, dialog modeling

 

Integrated Transportation Platform

Update: We are winners of the 2011 US Government Connected Vehicle Technologies Challenge!

Thanks to the US Department of Transportation and the US public for awarding us the Finalist Winner title for Clemson's Integrated Intelligent Transportation Platform.

More than half of the world's population is concentrated in urban areas, and this share is expected to rise to 75% in the next 10 years. In heavily populated areas, transportation needs to be redefined to accommodate these mobility needs. It is therefore necessary to create a transportation platform where connected vehicles communicate with each other, with the driver and with a central system that optimizes resources. Our research group at Clemson University proposes an Integrated Intelligent Transportation Platform (IITP) to build a true connected vehicle ecosystem. The proposed platform will combine high-performance computing and networking capabilities in the infrastructure backbone with advanced computing and communication capabilities in the vehicle, including the integration of mobile devices. A unique aspect of the platform is its DSRC support for mobile commerce (m-commerce) applications, which will allow private entities to participate in the platform. This opens up the opportunity for a successful business model that can also support the platform's non-revenue-generating functions, such as safety applications. The framework will also be capable of supporting future, innovative transportation-related applications.

For more information read our proposal for the US Department of Transportation Connected Vehicle Challenge, or watch the following video:

Fields of Interest: Multimodal Interaction, Intelligent Transportation Systems, Speech Recognition, Cloud Computing

 

Multimodal-multiplatform Information Access

Update: Presenting the iHelp paper at the 2011 Interact workshop User Experience in Cars!

Over the last 10 years the consumer electronics revolution has produced a myriad of devices that we use to retrieve and produce information. Desktop computers, laptops, netbooks, tablets and smartphones run different operating systems on diverse hardware, with specific screen resolutions, different browsers and even different input methods. We can search for information using keyboards, mice, touchscreens, voice input, accelerometer data for gestures, or the device camera for image or gesture recognition.

Users expect information to be available on their device and require an optimized presentation of information, good performance and use of device-specific features. Native applications seemed to be the only solution so far; however, they require extensive resources for development and maintenance. The "write once, run everywhere" paradigm seemed impossible to achieve until recently. Advances in web design and the adoption of the HTML5 and CSS3 standards allow developers to produce multiplatform content from a single source. Furthermore, the user experience can now mimic that of native applications using cross-platform open JavaScript libraries like jQuery or Sencha Touch, or multiplatform solutions like Rhomobile and PhoneGap for so-called "hybrid apps".

In this research we intend to take information access a step further, integrating cloud computing solutions, cross-platform web development, speech recognition, augmented reality and multimodal interaction principles to provide an enhanced information access experience.

Fields of Interest: Multimodal Interaction, Augmented Reality, Speech Recognition, Cloud Computing

 

Voice User Help

Vehicle manuals were designed to provide support and information about the usage and maintenance of the car. However, current manuals do not allow drivers to look up information under driving conditions. In addition, the lack of clarity, efficiency and up-to-date information in today's documentation leads to user dissatisfaction and low manual usage. In automobiles this problem is even more critical, given that driving is, for many people, the most complex and potentially dangerous task they will perform during their lifetime. In many cases the user manual is the driver's only guidance when problems occur.

A voice-interfaced vehicle manual can potentially fix the deficiencies of its alternatives. We integrate natural speech recognition in the vehicle with a driver-centered design to reduce driver distraction and increase satisfaction and manual usability, while also benefiting Original Equipment Manufacturers by streamlining the documentation process and centralizing user manual deployment.
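To make the lookup step concrete, here is a minimal sketch of matching a transcribed voice query against manual sections. The section titles, sample text and word-overlap scoring are illustrative assumptions, not the actual Voice User Help retrieval method, which uses natural language processing and information retrieval techniques.

```python
# Hypothetical sketch: pick the manual section whose text shares the
# most words with the transcribed query. Real retrieval is far richer.

def score(query_words, section_text):
    """Count how many query words appear in a manual section."""
    words = set(section_text.lower().split())
    return sum(1 for w in query_words if w in words)

def lookup(query, manual):
    """Return the title of the best-matching manual section."""
    query_words = set(query.lower().split())
    return max(manual, key=lambda title: score(query_words, manual[title]))

manual = {
    "Tire pressure": "check and adjust the tire pressure when tires are cold",
    "Cruise control": "press the cruise button to set or cancel cruise control",
    "Oil change": "change the engine oil and filter at the scheduled interval",
}

print(lookup("how do I set cruise control", manual))  # prints "Cruise control"
```

In a driving context the query would come from the speech recognizer and the answer would be read back with speech synthesis, keeping the driver's eyes on the road.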

If you want to learn more about the Voice User Help, follow the link below and collaborate by filling out the survey with your opinions after the video demonstration:

Voice User Help Demo Video and Survey

Fields of Interest: Natural Language Processing, Information Retrieval, Machine Learning

Voiceing

Cell phone use while driving has become a prevalent issue worldwide due to the high number of deaths related to texting while driving. Enforcing laws to prevent texting in the car has turned out to be more of a problem than a solution, since drivers try to hide their phones while persisting in this dangerous habit.

Voiceing is a method for instantly sending messages using voice over a phone line. The process begins with the sender connecting to the server using a phone (cellular, landline, or internet, e.g. Voice over IP). Once connected, the sender composes a message using his or her voice, and the message is stored on the server. The voiceTEXT system then places a call to the designated recipient. When the recipient answers the call, the voiceTEXT system plays the recorded message. The recipient can reply to the voiceTEXT message with a new voiceTEXT message using his or her voice; that is, the recipient becomes a sender.
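The store-and-forward flow above can be sketched as follows. The class and method names are hypothetical illustrations: the real voiceTEXT system operates over phone lines, whereas this sketch models the flow with in-process calls.

```python
# Minimal sketch of the voiceTEXT flow: store the sender's recording,
# then "place a call" and play it back when the recipient answers.

class VoiceTextServer:
    def __init__(self):
        self.inbox = {}  # recipient -> list of (sender, recording) pairs

    def send(self, sender, recipient, recording):
        """Sender connects and composes a voice message; the server
        stores it and places a call to the recipient."""
        self.inbox.setdefault(recipient, []).append((sender, recording))
        return self.place_call(recipient)

    def place_call(self, recipient):
        """When the recipient answers, play the oldest stored message."""
        sender, recording = self.inbox[recipient].pop(0)
        return f"playing message from {sender}: {recording}"

server = VoiceTextServer()
print(server.send("alice", "bob", "running late, see you at 6"))
# The recipient replies the same way, becoming a sender in turn:
print(server.send("bob", "alice", "no problem, drive safe"))
```

Because a reply simply stores a new message addressed back to the original sender, the same two operations cover the whole conversation, with roles swapping on each turn.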

Fields of Interest: Natural Language Processing, Driving Safety, Speech Synthesis