Ostadabbas Awarded NSF CRII Grant

Assistant Professor Sarah Ostadabbas of the Department of Electrical and Computer Engineering (ECE) has been awarded a grant from the National Science Foundation's Division of Information & Intelligent Systems to develop a “Semi-Supervised Physics-Based Generative Model for Data Augmentation and Cross-Modality Data Reconstruction,” aimed at bridging the gap between state-of-the-art deep learning techniques and the small-data problem common in personalized healthcare and other data-limited domains.

Abstract Source: NSF

Deep learning approaches have been rapidly adopted across a wide range of fields because of their accuracy and flexibility, but they require large labeled training sets. This presents a fundamental problem for applications with limited, expensive, or private data, such as healthcare. One example of an application facing this small-data challenge is human in-bed pose and pressure estimation. In-bed pose estimation can be a critical part of the prevention, prediction, and management of movement-related problems such as pressure ulcers (bedsores), which are costly and painful conditions. In this research, we propose a semi-supervised generative model based on novel data augmentation and cross-modality data reconstruction techniques to extend powerful deep learning approaches to the in-bed pose and pressure estimation problems. This grant will directly fund the education and mentorship of graduate students involved in researching these problems. In addition, middle school and high school students will be engaged through summer school mentorship programs at Northeastern University. The educational outreach funded by this grant will be used to mentor at schools primarily serving minority student populations. This comprehensive mentorship, from middle school to PhD, creates a pipeline of students experienced in this important area. The PI actively maintains a diverse research group, which includes 50% women and other members of under-represented groups.

This proposed research explores the use of semi-supervised physics-based generative models to bridge the gap between state-of-the-art deep learning techniques and the small-data problem common in personalized healthcare and other data-limited domains. The use of a physics-based approach to generate image data from a low-dimensional parameter space is unique and transformative. The proposal organizes the research into two thrusts: (I) data augmentation, which synthesizes the large training set required to train a deep learning model to recognize in-bed pose from an image; and (II) cross-modality data reconstruction, which extracts pose parameters from one image modality to generate data in another image modality. The success of the data augmentation will be measured by using the synthesized image data to train a network, which will be tested against deep and non-deep models trained on publicly available pose datasets. The accuracy of the pressure image reconstruction will be tested by comparing the results to pressure images captured with a high-resolution pressure-sensing mat. Successful completion of this project will enable (1) the use of high-accuracy deep learning techniques for robustly recognizing objects and object poses for which articulated 3D models are available or can be generated; and (2) the generation of highly realistic images of posable figures in one sensory domain using data from another, when one sensory domain is cheaper or easier to gather data in than the others.
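To make the Thrust I idea concrete, the sketch below is a minimal, assumption-laden toy (not the project's actual generator): it samples low-dimensional pose parameters, renders them into synthetic pressure-like images with a simple physics-inspired model, and collects the resulting (image, pose) pairs as augmentation data for a small labeled training set. All function names and the Gaussian "pressure blob" renderer are hypothetical illustrations.

# Illustrative sketch only: physics-inspired data augmentation for pose estimation.
import numpy as np

def render_pressure_image(joints, size=64, sigma=3.0):
    """Toy renderer: each joint deposits a Gaussian pressure blob on a mat image."""
    yy, xx = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size))
    for (jx, jy, weight) in joints:
        img += weight * np.exp(-((xx - jx) ** 2 + (yy - jy) ** 2) / (2 * sigma ** 2))
    return img / img.max()

def sample_pose_parameters(rng, n_joints=6, size=64):
    """Sample a low-dimensional pose: joint locations plus per-joint load fractions."""
    xy = rng.uniform(8, size - 8, size=(n_joints, 2))
    load = rng.dirichlet(np.ones(n_joints))      # body-weight distribution over joints
    return np.hstack([xy, load[:, None]])        # (n_joints, 3): x, y, weight

def make_synthetic_dataset(n_samples=1000, seed=0):
    """Build (image, pose) pairs that could augment a small real training set."""
    rng = np.random.default_rng(seed)
    images, poses = [], []
    for _ in range(n_samples):
        pose = sample_pose_parameters(rng)
        images.append(render_pressure_image(pose))
        poses.append(pose[:, :2].ravel())        # supervision target: joint coordinates
    return np.stack(images), np.stack(poses)

X, y = make_synthetic_dataset()
print(X.shape, y.shape)   # (1000, 64, 64) synthetic images, (1000, 12) joint-coordinate labels

In a setup like this, the synthetic pairs would be used to pre-train or augment a pose-estimation network, which could then be evaluated against models trained only on the limited real data, mirroring the evaluation plan described above.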

Related Faculty: Sarah Ostadabbas

Related Departments: Electrical & Computer Engineering