Selected projects
Virtual Health Counselors and Coaches

Spoken dialog with a virtual counselor: our virtual health coaches (described further here) are 3-dimensional virtual agents that use speech synthesis, speech understanding, and dynamic, expressive animation of subtle facial expressions and gestures to engage people in spoken dialogs about healthy lifestyles - currently excessive alcohol consumption, although the approach could target other behaviors (e.g. overeating, drug use, unsafe sex).
The dialog system processes the person's spoken answers and controls the flow of the conversation based on effective motivational healthcare interventions.

On-Demand Virtual Counselor (ODVIC): the ODVIC (described further here), accessed over the internet, is a multimodal Embodied Conversational Agent (ECA) that empathically delivers an evidence-based behavior-change intervention. We currently focus our work on excessive alcohol consumption as a target behavior. Our approach is based on the Drinker's Check-Up (DCU), a successful existing patient-centered brief motivational intervention for behavior change. With a simple text-only interface, the DCU has been found effective in reducing alcohol consumption in problem drinkers by 50% at 12-month follow-up.
Our studies show a 31% increase in users' intention to reuse the intervention when the content of the DCU intervention is delivered by our ODVIC, compared with the text-only DCU.

Ontology for behavioral health: we created an ontology for behavioral health (described further here), on top of which we built a named-entity recognizer (NER) augmented with lexical resources. Our NER automatically tags words and phrases in sentences with relevant (lifestyle) domain-specific tags, such as healthy food, unhealthy food, potentially risky activity, healthy activity, drugs, tobacco, and alcoholic beverages. It is the first named-entity recognizer designed for the lifestyle-change domain, and it aims to enable smart health applications to recognize relevant concepts.
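To illustrate the kind of tagging behavior described above, here is a minimal, hypothetical sketch of a lexicon-driven tagger; the lexicon entries and tag names are illustrative only and are not the project's actual resources or tag set.

```python
# Hypothetical lexicon mapping lifestyle-domain terms to tags.
# Entries and tag names are illustrative, not the project's actual ones.
LEXICON = {
    "salad": "HEALTHY_FOOD",
    "fries": "UNHEALTHY_FOOD",
    "beer": "ALCOHOLIC_BEVERAGE",
    "jogging": "HEALTHY_ACTIVITY",
    "cigarette": "TOBACCO",
}

def tag_sentence(sentence):
    """Return (token, tag) pairs; tokens outside the lexicon get 'O'."""
    tokens = sentence.lower().replace(",", "").replace(".", "").split()
    return [(tok, LEXICON.get(tok, "O")) for tok in tokens]

print(tag_sentence("After jogging I had a salad and a beer."))
```

A real lifestyle NER would of course also handle multi-word phrases and context; this sketch only shows the input/output shape of domain-specific tagging.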
3-D Character Animation

HapFACS open source software for facial expression generation: our HapFACS open-source software (described further here) is based on Ekman's Facial Action Coding System (FACS), the current standard for coding expressions of the human face. It enables (1) researchers working with 3-dimensional animated characters - without expertise in computer graphics or in FACS - to design FACS-based facial expressions on virtual-character platforms and to integrate them in their applications; and (2) researchers in facial expression recognition to experiment with realistically generated expressions. Over 20 research labs already use our open-source software.

3D Signing Virtual Agents: in this project funded by the National Science Foundation (described further here), we design 3D signing virtual agents that can dynamically sign in American Sign Language (ASL).
We partner with the Institute for Disabilities Research and Training (IDRT) to generate real-time animations of 3D characters during real-time motion capture of human signers' ASL gestures and facial expressions.
Past projects
Emotion and Sentiment Recognition

In this project (described here), we simulate driving scenes and scenarios in a 3D virtual reality immersive environment to sense and interpret drivers' affective states (e.g. anger, fear, boredom).
Drivers wear non-invasive bio-sensors while operating vehicles with haptic feedback (e.g. brake failure, shaking from a flat tire) and experience emotionally loaded 3D interactive driving events (e.g. driving in frustrating, delayed New York traffic, or arriving at full speed at a blocked accident scene with failed brakes and pedestrians crossing).
Social robots

Social service robot.
Cherry, the Little Red Robot with a Personality (described here), was one of the first fully integrated mobile robotic systems to test human-robot interaction in social contexts.
We built Cherry to provide guided tours of our Computer Science suite to visitors. Cherry had a map of the office floor, could recognize faculty members with machine vision, and talked to visitors about each faculty member's research. She would also get frustrated - and showed it - if she kept trying to find a professor whose door was always closed…
We evaluated the reactions of people before they met Cherry and after they met her: the more people interacted with her, the more they liked her. When the project ended, many people told us they missed her roaming around our Computer Science floor, and asked if we could bring her back.

Cooperating mobile robots with emotion-based architecture.
We won the Nils Nilsson Award for Integrating AI Technologies and the Technical Innovation Award at the "Hors d'Oeuvres, Anyone?" event of the AAAI Robot Competition, held at the National Conference on Artificial Intelligence in 2000, organized by the Association for the Advancement of Artificial Intelligence (AAAI).
We were the first team to introduce a pair of collaborating robots (described further here) - Butler (taller, shown on the left) and its assistant (right). Butler moved around the crowd offering hors d'oeuvres on its tray; a laser sensor let it detect when a treat was taken, and hence determine when the tray was getting low on food and needed refilling. Sonars enabled both robots to avoid obstacles such as guests. The assistant stood by the hors d'oeuvres refill station until Butler called it over to bring a new full tray for a tray exchange. Moreover, the robots were designed with an emotion-based three-layer architecture that simulated some of the roles emotions play in human decision-making: for example, if the assistant was too slow, Butler's frustration would increase (and be expressed) over time, until it decided to get the tray itself. If you're wondering about the armadillo theme, the competition was in Texas...
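The frustration dynamic described above can be sketched as a simple accumulate-and-threshold rule. This is a hypothetical illustration of the idea, not the robots' actual architecture; the class name, rate, and threshold values are all assumptions.

```python
# Hypothetical sketch of Butler's frustration dynamic: frustration grows
# while waiting for the assistant and, past a threshold, triggers a
# fallback action. Parameter values are illustrative assumptions.
class ButlerFrustration:
    def __init__(self, rate=0.2, threshold=1.0):
        self.level = 0.0            # current frustration level
        self.rate = rate            # frustration gained per second of waiting
        self.threshold = threshold  # level at which Butler acts on its own

    def wait(self, seconds):
        """Accumulate frustration while waiting; return the chosen action."""
        self.level += self.rate * seconds
        if self.level >= self.threshold:
            return "fetch tray myself"
        return "keep waiting"

butler = ButlerFrustration()
print(butler.wait(2))  # level 0.4 -> "keep waiting"
print(butler.wait(4))  # level 1.2 -> "fetch tray myself"
```

The point of such an architecture is that the emotional state acts as a slowly varying control signal layered over reactive behavior, biasing the robot toward a different action when cooperation stalls.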