Overview
Welcome to our Affective Social Computing Lab (ASCL) website!
Our long-term research goal is to create engaging, socially intelligent agents that can interact with humans in innovative ways through expressive multi-modal interaction.
We therefore focus on designing intelligent virtual agents that can be expressive, culturally sensitive, and socially appropriate, depending on the context of the interaction.
Our socially intelligent agents sense their interlocutor's social cues and respond to them in real time. The agents also aim to portray different ethnicities, speak various languages, and simulate different personalities.
Recently we've been designing and evaluating our agents specifically in domains such as personal health informatics, social skills training, health education, health promotion, and learning environments. However, we've worked on many other application domains where socially expressive virtual agents are of interest, e.g., car safety, social robotics, and tele-home healthcare.
To carry out our research, we create new knowledge by finding and synthesizing relevant interdisciplinary results into a computational form useful for affective intelligent virtual agents. We also need to research which Artificial Intelligence (AI) techniques are best suited to the various components of an intelligent agent (from sensing, to decision-making, to actuating), to apply Human-Computer Interaction (HCI) principles to the design of engaging interactive media, and to understand emotion and communication theories.
Within a specific application context, we build affective intelligent virtual agents able to (see the sketch after this list):
- sense the affect, preferences, and personality of their interlocutor (bio-sensing, pattern matching, and knowledge elicitation and representation of affective phenomena);
- make decisions (logic-based and probabilistic reasoning) that are socially acceptable based on their dynamic user-model (knowledge representation);
- interact with humans (HCI design principles) within a knowledge domain (e.g. health interventions, tutoring systems);
- display some emotional and social competence (emotion and social communication theory); and
- learn to tailor and adapt (machine learning) their interactive styles to the specific socio-emotional profile (user-modeling) of their human counterpart.
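To make the pipeline concrete, here is a minimal Python sketch of one sense-decide-actuate turn. All names in it (`UserModel`, `sense_social_cues`, `decide`, `actuate`) are hypothetical placeholders, not our actual implementation:

```python
# Minimal sketch of the sense-decide-actuate-adapt loop described above.
# All names here are hypothetical placeholders for illustration only.
from dataclasses import dataclass, field


@dataclass
class UserModel:
    """Dynamic socio-emotional profile of the interlocutor."""
    affect: str = "neutral"  # e.g. "frustrated", "engaged"
    preferences: dict = field(default_factory=dict)

    def update(self, cues: dict) -> None:
        # Fold newly sensed cues into the running profile.
        self.affect = cues.get("affect", self.affect)
        self.preferences.update(cues.get("preferences", {}))


def sense_social_cues(sensor_frame: dict) -> dict:
    """Stand-in for bio-sensing / pattern matching over raw input."""
    return {"affect": sensor_frame.get("facial_expression", "neutral")}


def decide(user: UserModel, domain_state: dict) -> str:
    """Pick a socially appropriate act given the dynamic user model."""
    if user.affect == "frustrated":
        return "empathic_reflection"
    return domain_state.get("next_step", "open_question")


def actuate(act: str) -> None:
    """Render the chosen act as speech, facial expression, gesture."""
    print(f"agent performs: {act}")


# One turn of interaction.
user = UserModel()
user.update(sense_social_cues({"facial_expression": "frustrated"}))
actuate(decide(user, {"next_step": "ask_about_goals"}))
```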
Selected projects in brief
Virtual Health Counselors and Virtual Counseling

Spoken dialog with a virtual counselor
Our virtual health coaches (described further here) are 3-dimensional virtual agents that use speech synthesis, speech understanding, and dynamic, expressive animations of subtle facial expressions and gestures to engage people in spoken dialogs about healthy lifestyles - currently excessive alcohol consumption, although other target behaviors (e.g. overeating, drug use, unsafe sex) could be addressed.
The dialog system processes the person's spoken answers and controls the flow of the conversation based on effective motivational healthcare interventions.
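As a simplified illustration of this kind of conversation-flow control, here is a tiny finite-state dialog manager; its stages, prompts, and transitions are invented for the example and are not the deployed counselor's script:

```python
# Illustrative sketch only: a tiny finite-state dialog manager in the
# spirit of the counselor's flow control. Stages and prompts are invented.
TRANSITIONS = {
    "greeting": lambda answer: "assess_drinking",
    "assess_drinking": lambda answer: (
        "explore_motivation" if "yes" in answer.lower() else "wrap_up"
    ),
    "explore_motivation": lambda answer: "wrap_up",
}

PROMPTS = {
    "greeting": "Hello! Is it OK if we talk about your drinking habits?",
    "assess_drinking": "Have you had more than four drinks in one day recently?",
    "explore_motivation": "What would you like to change, if anything?",
    "wrap_up": "Thanks for talking with me today.",
}


def run_dialog(get_user_answer):
    state = "greeting"
    while state != "wrap_up":
        print("counselor:", PROMPTS[state])
        answer = get_user_answer()          # speech recognition result
        state = TRANSITIONS[state](answer)  # choose the next stage
    print("counselor:", PROMPTS[state])


# Example run with canned answers standing in for recognized speech.
answers = iter(["sure", "yes, a few times", "maybe drink less on weekends"])
run_dialog(lambda: next(answers))
```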

On-Demand Virtual Counselor (ODVIC)
ODVIC is a personalized virtual health agent system (described further here), accessed over the internet and controlled by a multimodal Embodied Conversational Agent (ECA) that empathically delivers an evidence-based behavior change intervention. We currently focus on excessive alcohol consumption as the target behavior.
Our approach is based on the Drinker's Check-Up (DCU), a successful existing patient-centered brief motivational intervention for behavior change. With a simple text-only interface, the DCU has been found effective in reducing alcohol consumption in problem drinkers by 50% at 12-month follow-up.
Our studies show a 31% increase in users' intention to reuse the intervention when the DCU content is delivered by our ODVIC, compared with the text-only DCU.
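A hedged sketch of the delivery idea: scripted intervention content wrapped in an empathic framing chosen from the user's sensed affect. The affect labels and phrasings below are illustrative, not taken from ODVIC itself:

```python
# Hedged sketch: wrapping evidence-based intervention content with an
# empathic ECA framing. Labels and phrasings are invented for the example.
EMPATHIC_PREFIX = {
    "distressed": "I can see this is hard to talk about. ",
    "engaged": "I'm glad you're thinking this through. ",
    "neutral": "",
}


def deliver(intervention_text: str, user_affect: str) -> str:
    """Prefix the scripted DCU-style content with an empathic framing."""
    return EMPATHIC_PREFIX.get(user_affect, "") + intervention_text


print(deliver("On average you reported 12 drinks per week.", "distressed"))
```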

Ontology for Behavioral Health
We created an ontology for behavioral health (described further here), on which we built a named-entity recognizer (NER) augmented with lexical resources. Our NER automatically tags words and phrases in sentences with relevant lifestyle-domain tags.
For example, it can tag healthy foods, unhealthy foods, potentially risky and healthy activities, drugs, tobacco, and alcoholic beverages. We have developed the first named-entity recognizer designed for the lifestyle change domain; it aims to enable smart health applications to recognize relevant concepts.
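A minimal sketch of lexicon-based tagging in this spirit; the tiny lexicon and tag names below are invented for the example, while the actual NER draws on our ontology and richer lexical resources:

```python
# Assumption-laden sketch of lexicon-based tagging like the NER above.
# The lexicon and tag names are invented; the real ontology is far richer.
import re

LEXICON = {
    "salad": "HEALTHY_FOOD",
    "fries": "UNHEALTHY_FOOD",
    "beer": "ALCOHOLIC_BEVERAGE",
    "jogging": "HEALTHY_ACTIVITY",
    "cigarette": "TOBACCO",
}


def tag_sentence(sentence: str) -> list[tuple[str, str]]:
    """Return (token, tag) pairs for tokens found in the lexicon."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    return [(t, LEXICON[t]) for t in tokens if t in LEXICON]


print(tag_sentence("I had a salad and a beer after jogging."))
# [('salad', 'HEALTHY_FOOD'), ('beer', 'ALCOHOLIC_BEVERAGE'),
#  ('jogging', 'HEALTHY_ACTIVITY')]
```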
3-D Character Animation

HapFACS open source software for facial expression generation
Our HapFACS open source software for facial expression generation (described further here) is based on Ekman's (1992) Facial Action Coding System (FACS) - the current standard for coding the human face. It enables (1) researchers working with 3-dimensional animated characters - without expertise in computer graphics or in FACS - to design FACS-based facial expressions on virtual-character platforms and integrate them into their applications; and (2) researchers in facial expression recognition to experiment with realistically generated expressions. Over 20 research labs currently use our open source software.
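To illustrate the FACS-based approach, here is a sketch that composes expressions from Action Units (AUs). The `set_expression` interface is a hypothetical stand-in, not HapFACS's actual API, though the AU recipes (AU6 + AU12 for an enjoyment smile; AU1 + AU4 + AU15 for sadness) follow FACS:

```python
# Sketch under assumptions: the function below is a hypothetical stand-in
# for a FACS-driven animation interface, not HapFACS's actual API.
# AU6 (cheek raiser) + AU12 (lip corner puller) is the classic FACS
# recipe for an enjoyment smile; AU1 + AU4 + AU15 signals sadness.
EXPRESSIONS = {
    "happiness": {6: 0.8, 12: 1.0},
    "sadness": {1: 0.7, 4: 0.6, 15: 0.9},
}


def set_expression(character: dict, expression: str) -> None:
    """Apply each Action Unit at its intensity (0.0-1.0)."""
    for au, intensity in EXPRESSIONS[expression].items():
        character[f"AU{au}"] = intensity  # stand-in for an animation call


avatar: dict = {}
set_expression(avatar, "happiness")
print(avatar)  # {'AU6': 0.8, 'AU12': 1.0}
```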

3D Virtual Avatars Signing in American Sign Language
In this project funded by the National Science Foundation (described further here), we design 3D Signing Virtual Agents who can dynamically sign in American Sign Language (ASL).
We partner with the Institute for Disabilities Research and Training (IDRT) to generate real-time animations of 3D characters from live motion capture of human signers' ASL gestures and facial expressions.
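As a simplified illustration of the capture-to-animation step, here is a sketch that retargets one frame of captured joint rotations onto an avatar skeleton; the joint names and the straight copy-through mapping are assumptions for the example:

```python
# Illustrative sketch only: retargeting one frame of captured signer data
# onto an avatar skeleton. Joint names and the simple copy-through
# mapping are invented for the example.
from typing import Dict, Tuple

Frame = Dict[str, Tuple[float, float, float]]  # joint -> (x, y, z) rotation


def retarget(capture_frame: Frame, joint_map: Dict[str, str]) -> Frame:
    """Map captured joint rotations onto the avatar's joint names."""
    return {
        joint_map[j]: rot
        for j, rot in capture_frame.items()
        if j in joint_map
    }


mocap = {"RightHand": (10.0, 45.0, 0.0), "Brow": (5.0, 0.0, 0.0)}
mapping = {"RightHand": "avatar_r_hand", "Brow": "avatar_brow"}
print(retarget(mocap, mapping))
```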
Emotion Recognition

Sentiment Analysis from Text
We compared the performance of three rule-based approaches and two supervised approaches (Naive Bayes and Maximum Entropy). We trained and tested our systems on the SemEval-2007 affective text dataset, which contains news headlines extracted from news websites.
Our results (described here) show that our systems outperform the systems presented at SemEval-2007.
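For reference, a minimal version of the Naive Bayes baseline can be put together with scikit-learn; the headlines below are invented stand-ins for the SemEval-2007 data:

```python
# Minimal sketch of the Naive Bayes baseline named above, using
# scikit-learn on invented headlines (the real experiments used the
# SemEval-2007 affective text dataset).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

headlines = [
    "Rescued miners reunited with families",
    "Stocks soar after strong jobs report",
    "Flood leaves thousands homeless",
    "Team crushed by last-minute defeat",
]
labels = ["positive", "positive", "negative", "negative"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(headlines)  # bag-of-words counts
clf = MultinomialNB().fit(X, labels)

test = vectorizer.transform(["Storm leaves thousands homeless"])
print(clf.predict(test))  # ['negative']
```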