
ECE PhD Defense: "Human-in-the-loop Assistive Cyber Physical System Control using Physiological Signals," Fernando Quivira

ISEC 432

November 30, 2017, 4:30 pm

Abstract:
People affected by disabilities seek an improved quality of life: the ability to interact with their surroundings, communicate with loved ones, and move about autonomously. Restoring these basic capabilities is a challenging biomedical problem that requires reliably detecting human intent and then enacting that intent in the physical environment. Designs that attempt to improve the quality of life of people with disabilities fall under the umbrella of human-in-the-loop cyber-physical systems (HiLCPS). A HiLCPS operates as a feedback loop: the user's intention is inferred from physiological signals and context information, and the resulting decision is translated into a system action observable to the user in the physical world, thus closing the loop. Developing such a system poses an immense multidisciplinary challenge, as each component requires specialized domain knowledge in robotics, user interface design, biomedical signal processing, and embedded system design, among others.
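
The closed loop described above can be pictured, very roughly, as the sketch below. All names (read_physiological_signals, infer_intent, act, show_feedback) are illustrative placeholders and stubs, not an interface from the dissertation.

```python
import random

def read_physiological_signals():
    """Stub: pretend to sample EMG channels and context (e.g., gaze) features."""
    return {"emg": [random.random() for _ in range(8)], "gaze": (0.5, 0.5)}

def infer_intent(signals):
    """Stub: map signals to a discrete intent, e.g., a grasp type."""
    return "power_grasp" if sum(signals["emg"]) > 4.0 else "rest"

def act(intent):
    """Stub: enact the inferred intent in the physical environment."""
    return f"prosthesis executed: {intent}"

def show_feedback(outcome):
    """Stub: the user observes the action, closing the loop."""
    print(outcome)

for _ in range(3):  # a few iterations of the closed loop
    show_feedback(act(infer_intent(read_physiological_signals())))
```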

In this dissertation, we present a robotic hand prosthesis control application within the HiLCPS framework. The objective of this work is to develop an active hand prosthesis for people with upper-limb amputations.

First, we formulate the intent inference pipeline as a continuous grasp classification problem that can be solved with a probabilistic switched dynamical system formulation. We implement linear and non-linear models of surface EMG and compare their performance against standard processing approaches. Second, we show how context evidence in the form of mobile eye-tracking can improve grasp classification performance, thereby increasing theoretical system reliability. Finally, we address the problem of mapping hand grasp types to low-level joint trajectories on a simulated prosthetic hand prototype using continuous-space deep reinforcement learning. We show that using a standard grasp metric as a scoring mechanism in the reward function can enable the learning of grasp motion paths from a wide range of sensor data, including joint angles, RGB-D images from a palm camera, and contact forces.
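
To give a feel for the switched dynamical system idea, the sketch below runs a recursive Bayesian update over grasp classes, where each class is assumed to have its own linear-Gaussian dynamics over an EMG feature vector. The model parameters (A, SIGMA, TRANSITION) and dimensions are made up for illustration and are not the models from the dissertation; this is a simplified toy, not the full pipeline.

```python
import numpy as np

N_CLASSES, DIM = 3, 8                      # e.g., 3 grasp types, 8 EMG feature channels
rng = np.random.default_rng(0)
A = [np.eye(DIM) * (0.9 + 0.05 * k) for k in range(N_CLASSES)]   # per-class dynamics
SIGMA = [np.eye(DIM) * 0.1 for _ in range(N_CLASSES)]            # per-class noise
TRANSITION = np.full((N_CLASSES, N_CLASSES), 0.05) + np.eye(N_CLASSES) * 0.85
TRANSITION /= TRANSITION.sum(axis=1, keepdims=True)              # row-stochastic switching

def log_gaussian(x, mean, cov):
    """Log density of a multivariate Gaussian."""
    diff = x - mean
    _, logdet = np.linalg.slogdet(cov)
    quad = diff @ np.linalg.solve(cov, diff)
    return -0.5 * (logdet + quad + len(x) * np.log(2 * np.pi))

def update_posterior(prior, x_prev, x_now):
    """One Bayes step: propagate class beliefs through the switching prior,
    then weight by how well each class's dynamics explain the new frame."""
    predicted = TRANSITION.T @ prior
    log_lik = np.array([log_gaussian(x_now, A[k] @ x_prev, SIGMA[k])
                        for k in range(N_CLASSES)])
    post = predicted * np.exp(log_lik - log_lik.max())
    return post / post.sum()

# Toy usage: a short stream of feature frames and the running class posterior.
belief = np.full(N_CLASSES, 1.0 / N_CLASSES)
frames = rng.normal(size=(5, DIM))
for prev, now in zip(frames[:-1], frames[1:]):
    belief = update_posterior(belief, prev, now)
print("grasp class posterior:", belief)
```

In a real system, context evidence such as eye-tracking would enter this update as an additional likelihood term over the same class posterior.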

  • Professor Deniz Erdogmus (Advisor)
  • Professor Gunar Schirner
  • Professor Taskin Padir