I am working on two ongoing research projects with the R-House Human-Robot Interaction Lab: one in collaboration
with the Honda Research Institute of Japan, and one under a National Science Foundation grant helping Mississippi State University design a Socially Assistive Robot.
Haru Project: Haru, a prototype desktop social robot being developed by the Honda Research Institute of Japan, is designed to be an "Encouraging Mediator"
between groups of people who are separated by age, culture, language, or physical distance. Our primary focus is developing Haru for use with children, with an emphasis
on engagement and privacy. One of the main studies I am part of builds perception models that help Haru dynamically adapt to children's changing
levels of engagement using non-invasive sensors such as an RGB camera and a thermal camera. The other study I am part of draws on UNICEF's policy guidance on AI for children,
which outlines nine requirements for child-centered AI that protects children's rights during an interaction. We are running user studies with children and their
parents to test how these guidelines, digital privacy protections, and Haru can work effectively and safely in everyday scenarios.
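To illustrate the kind of sensor fusion an engagement model like this involves, here is a minimal sketch. The features (gaze-on-robot fraction from the RGB camera, nasal temperature change from the thermal camera), the weights, and the threshold are all hypothetical placeholders I chose for illustration, not the lab's actual model:

```python
from collections import deque

def engagement_score(gaze_on_robot, thermal_delta):
    """Combine a hypothetical RGB feature (fraction of recent frames in which
    the child's gaze is on the robot, 0-1) with a hypothetical thermal feature
    (nasal temperature drop in degrees C, a rough arousal proxy) into one score.
    The 0.7/0.3 weights and the 0.5 C normalizer are illustrative, not fitted."""
    thermal_norm = min(max(thermal_delta / 0.5, 0.0), 1.0)  # clamp to [0, 1]
    return 0.7 * gaze_on_robot + 0.3 * thermal_norm

class EngagementTracker:
    """Smooth per-frame scores over a sliding window so the robot reacts to
    sustained disengagement rather than to single-frame noise."""
    def __init__(self, window=30, threshold=0.4):
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def update(self, gaze_on_robot, thermal_delta):
        self.scores.append(engagement_score(gaze_on_robot, thermal_delta))
        avg = sum(self.scores) / len(self.scores)
        return "engaged" if avg >= self.threshold else "disengaged"
```

A dialogue manager could poll `update()` each frame and, on a switch to "disengaged", trigger one of Haru's re-engagement behaviors; the sliding window keeps those switches from happening on momentary glances away.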
Therabot™ Project: This project studies how to design a Socially Assistive Robot (SAR) to help treat and manage Major Depressive Disorder (MDD),
using Mississippi State University's prototype SAR, Therabot™. Most of the studies I am involved in with this project have participants with MDD take part in a
participatory design workshop in which they design an aspect of Therabot™, such as its physical appearance, its programming, or the sensors it will use. This
gives us insight not only into how to design Therabot™ but also into which aspects of any SAR would be helpful for treating, monitoring, and managing MDD.
I am also working on a project in the Socioneural Physiology Lab studying ways to detect trust using physiological signals.
TrustingAI: This is a large, multi-university project funded by the DoD studying human trust in Artificial Intelligence. Our lab researches the detection of trust from physiological signals such as micro-expressions and ECG. We do this by having participants play a computer game against another person that forces one player to deceive the other. Using computer vision and ECG sensors, we try to detect changes in trust between the players at the point of deception and beyond.
As part of this project, I am developing the AI counterpart that pretends to be human and plays the game against a participant. This is being done with Reinforcement Learning in a set of environments similar to the one the participant will see. I also assist in analyzing our physiological data using a variety of Deep Learning and traditional Machine Learning models.
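As a rough sketch of the tabular end of this approach (the game, payoffs, and scripted opponent below are toy stand-ins I invented for illustration; they are not the study's actual environment or rules), a minimal Q-learning loop might look like:

```python
import random

# Toy stand-in for a deception game: the agent holds a private hand
# (0 = weak, 1 = strong) and signals weak or strong; a scripted opponent
# folds more often against strong signals. Payoffs are illustrative only.
ACTIONS = [0, 1]  # 0 = signal weak, 1 = signal strong (a bluff when the hand is weak)

def play_round(hand, action, rng):
    """Return the agent's reward for one round against the scripted opponent."""
    opponent_folds = rng.random() < (0.7 if action == 1 else 0.2)
    if opponent_folds:
        return 1.0                      # win the pot uncontested
    return 2.0 if hand == 1 else -2.0   # showdown: only a strong hand wins

def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(h, a): 0.0 for h in (0, 1) for a in ACTIONS}  # tabular Q-values
    for _ in range(episodes):
        hand = rng.randint(0, 1)
        if rng.random() < epsilon:                        # explore
            action = rng.choice(ACTIONS)
        else:                                             # exploit
            action = max(ACTIONS, key=lambda a: q[(hand, a)])
        reward = play_round(hand, action, rng)
        # One-step update; this toy game has no next state (bandit-style).
        q[(hand, action)] += alpha * (reward - q[(hand, action)])
    return q

q = train()
policy = {hand: max(ACTIONS, key=lambda a: q[(hand, a)]) for hand in (0, 1)}
```

Under these toy payoffs the agent learns to bluff with a weak hand, since the expected value of signaling strong (0.1) beats signaling weak (-1.4). The real agent is trained in richer environments mirroring the participant-facing game, but the state-action-reward loop is the same shape.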