Researchers at the Center of Excellence for Mobile Sensor Data-to-Knowledge (MD2K) recently published two papers that showcase new capabilities of wearable sensors to automatically monitor conversation and TV watching. Both of these are frequent daily behaviors that have a major impact on social well-being and physical and mental health.
rConverse: Moment by Moment Conversation Detection Using a Mobile Respiration Sensor (cited below) presented a new way for researchers to passively monitor the speaking and listening states in a conversation by analyzing breathing patterns captured by a wearable respiration sensor. The researchers showed that they can detect conversation with accuracy comparable to that achieved with widely used audio recordings.
Using respiration to monitor conversations offers several advantages over audio, such as reliable identification of the speaker and inference of urges to speak and of unsuccessful attempts at turn-taking. Because stress can also be inferred from respiration, a listener's stress state can be analyzed over the course of a conversation as well.
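To give a flavor of how breathing patterns can reveal speech, here is a minimal sketch, not the authors' actual rConverse pipeline: it segments a respiration waveform into breath cycles and classifies each cycle as speaking or not speaking from simple timing features. The sampling rate, feature set, classifier choice, and synthetic training data are all illustrative assumptions.

import numpy as np
from scipy.signal import find_peaks
from sklearn.ensemble import RandomForestClassifier

FS = 25.0  # assumed sampling rate (Hz) of a chest-band respiration sensor

def cycle_features(signal, fs=FS):
    # Segment the waveform into valley-to-valley breath cycles and compute
    # inhalation duration, exhalation duration, and their ratio per cycle.
    peaks, _ = find_peaks(signal, distance=int(1.0 * fs))     # tops of inhalation
    valleys, _ = find_peaks(-signal, distance=int(1.0 * fs))  # ends of exhalation
    feats = []
    for v0, v1 in zip(valleys[:-1], valleys[1:]):
        mid = peaks[(peaks > v0) & (peaks < v1)]
        if len(mid) != 1:
            continue  # skip malformed cycles
        inhale = (mid[0] - v0) / fs
        exhale = (v1 - mid[0]) / fs
        feats.append([inhale, exhale, inhale / exhale])
    return np.array(feats)

# Synthetic stand-in for labeled cycles: speaking tends to shorten inhalation
# and stretch exhalation relative to quiet breathing.
rng = np.random.default_rng(0)
quiet = np.column_stack([rng.normal(1.8, 0.2, 200), rng.normal(2.2, 0.3, 200)])
speech = np.column_stack([rng.normal(0.6, 0.1, 200), rng.normal(3.5, 0.5, 200)])
X = np.vstack([np.column_stack([f, f[:, 0] / f[:, 1]]) for f in (quiet, speech)])
y = np.r_[np.zeros(200), np.ones(200)]  # 0 = not speaking, 1 = speaking

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# At inference time: clf.predict(cycle_features(raw_respiration_trace))

A real system would additionally smooth the per-cycle predictions over time to recover whole conversation episodes.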
In the second paper, Watching the TV Watchers (cited below), researchers used a wearable, head-mounted camera (e.g., in smart eyeglasses) to capture point-of-view (POV) video. POV video makes it possible to capture a continuous record of a person's visual inputs as they go about their daily life. The researchers presented a machine learning-based analysis system that automatically detects the screens in a participant's field of view and identifies when one or more screens are being watched. The model does not rely on eye tracking, which would require more expensive equipment.
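As a rough illustration of just the screen-localization step, the sketch below flags bright, roughly rectangular regions in a single video frame using classical OpenCV operations. The paper's actual system uses learned detectors and temporal analysis to decide whether a screen is being watched; the brightness threshold, size bound, and input filename here are assumptions.

import cv2

def find_screen_candidates(frame_bgr, min_area_frac=0.01):
    # Candidate screens: bright, roughly rectangular regions in the frame.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, mask = cv2.threshold(blur, 180, 255, cv2.THRESH_BINARY)  # active screens glow
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    h, w = gray.shape
    candidates = []
    for c in contours:
        if cv2.contourArea(c) < min_area_frac * h * w:
            continue  # too small to be a watched screen
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:  # four corners, i.e., a rectangular outline
            candidates.append(approx)
    return candidates

cap = cv2.VideoCapture("pov_video.mp4")  # hypothetical head-camera recording
ok, frame = cap.read()
if ok:
    print(len(find_screen_candidates(frame)), "candidate screen(s) in frame")
cap.release()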
These novel assessments can be used to improve health outcomes in multiple ways. Excessive television watching has been linked to patterns of sedentary behavior, unhealthy eating, and alcohol abuse.
Current research has been based largely on participant self-report, which doesn't allow researchers to map detailed television exposure to determine what patterns of screen viewing and what sources of media content pose the greatest risk to health. These previously missing capabilities are now available for both health research and sensor-triggered mHealth interventions to improve health outcomes.
Both papers were published in the ACM IMWUT journal and presented at the recent ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2018), held Oct. 8-12 in Singapore.
Citations
Rummana Bari, Roy J. Adams, Md. Mahbubur Rahman, Megan Battles Parsons, Eugene H. Buder, and Santosh Kumar. 2018. rConverse: Moment by Moment Conversation Detection Using a Mobile Respiration Sensor. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. (IMWUT) 2, 1, Article 2 (March 2018), 27 pages. DOI: https://doi.org/10.1145/3191734
Yun C. Zhang and James M. Rehg. 2018. Watching the TV Watchers. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. (IMWUT) 2, 2, Article 88 (July 2018), 27 pages. DOI: https://doi.org/10.1145/3214291
About MD2K
The MD2K Center is conducting research and developing software to make it easier to gather, analyze, and interpret health data generated by mobile and wearable sensors. The mobile sensor big data software platforms developed by MD2K are being used in 14 research studies across 11 states, producing hundreds of terabytes of sensor data to study stress, overeating, heart failure, smoking, cocaine use, opioid overuse, oral health, and work performance. The MD2K team comprises scientists in Computer Science, Engineering, Medicine, Behavioral Science, and Statistics, drawn from 13 universities (Cornell Tech, Georgia Tech, Harvard, Northwestern, Ohio State, UCLA, UC San Diego, UC San Francisco, the University of Massachusetts Amherst, the University of Memphis, the University of Michigan, the University of Utah, and West Virginia University).