Robotic First aid response
| Title | Robotic First aid response |
|---|---|
| Summary | A robot system which assesses a person's state of health as a step toward first aid/EMS |
| Keywords | robot first aid, injury localization, anomalous breathing recognition, bleeding recognition |
| TimeFrame | 2015/1/16-2015/6/30 |
| References | (first aid teleoperated robots) http://www.uasvision.com/2014/10/29/ambulance-drone-with-integrated-defibrillator/ , http://www.technologyreview.com/news/411865/a-robomedic-for-the-battlefield/ ; (fall detection example) Simin Wang, Salim Zabir, Bastian Leibe, "Lying Pose Recognition for Elderly Fall Detection"; (breathing recognition) Phil Corbishley and Esther Rodriguez-Villegas, "Breathing Detection: Towards a Miniaturized, Wearable, Battery-Operated Monitoring System," IEEE Transactions on Biomedical Engineering, vol. 55, no. 1, January 2008 |
| Prerequisites | Some ability to work with software (installing libraries and writing code), and an interest in robots, healthcare, or recognition |
| Author | Tianyi Zhang and Yuwei Zhao |
| Supervisor | Martin Cooney, Anita Sant'Anna |
| Level | Master |
| Status | Ongoing |
Goal: The capability for a robot in a home or facility to recognize a person's health state when an emergency such as a fall has occurred, as a first step toward endowing robots with critical first aid skills.
Motivation: Robots need to be useful, and one of the most useful things a robot can do is look after people's health and safety. A quick and meaningful assessment of a person's state during first response to a possible emergency could help save lives and prevent much anguish.
Challenge: the first thing that should be done is to assess a victim's state, but this is very difficult even for humans; e.g., for a person who has fallen and is unresponsive:
1) where did they hurt themselves? 2) are they breathing normally? 3) are they bleeding?
Focus: this project seeks to show that such recognition is possible for an automatic system, as a proof-of-concept; a simplified in-lab scenario is assumed in which a robot is near the victim and good sensor data can be acquired (visual and audio, without occlusions or noise).
Approach: the students will perform three steps (an illustrative code sketch for each step follows the list):
1) Obtain Kinect data (skeleton and depth) of a human-shaped dummy falling in different ways, then create a recognition system (possibly using LIBSVM) to classify whether the head has been hurt.
2) Record sound samples based on YouTube videos of "agonal" respiration, tachypnea (fast breathing), and regular breathing; calculate MFCC features with HTK; then create a recognition system (possibly using LIBSVM) to classify the kind of breathing.
3) Use a robot (possibly a TurtleBot) to drag a white glove over a dummy (red ink will symbolize blood at some areas) to detect the presence/location of deadly bleeding.
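For step 1, a minimal sketch of the head-injury classifier is shown below, assuming the Kinect skeleton sequences have been exported as per-frame joint positions and that LIBSVM's python interface (svmutil) is installed; the joint name, feature choices, and function names are illustrative assumptions, not the project's final design.

```python
# Minimal sketch: classify "head hurt" vs "not hurt" from Kinect skeleton sequences
# using the LIBSVM python interface (svmutil). Features here are only illustrative.
from svmutil import svm_train, svm_predict  # import path may differ per LIBSVM install

def skeleton_to_features(frames):
    """frames: list of dicts mapping joint name -> (x, y, z) in meters (assumed layout)."""
    head_y = [f["Head"][1] for f in frames]  # vertical position of the head over time
    return [min(head_y),                     # lowest head position during the fall
            head_y[0] - head_y[-1],          # total drop of the head
            max(abs(head_y[i + 1] - head_y[i]) for i in range(len(head_y) - 1))]  # peak per-frame drop

def train_and_test(train_seqs, train_labels, test_seqs, test_labels):
    """Labels: 1 = head hurt, -1 = head not hurt (assumed encoding)."""
    x_train = [skeleton_to_features(seq) for seq in train_seqs]
    x_test = [skeleton_to_features(seq) for seq in test_seqs]
    model = svm_train(train_labels, x_train, "-t 2 -c 1")        # RBF kernel
    predictions, accuracy, _ = svm_predict(test_labels, x_test, model)
    return predictions, accuracy
```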
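For step 2, a minimal sketch of the MFCC extraction with HTK's HCopy tool is shown below; the configuration values, file names, and the simple per-clip averaging are assumptions to be tuned, and the resulting feature vectors would then be fed to LIBSVM as in step 1.

```python
# Minimal sketch: compute MFCCs for a breathing-sound .wav with HTK's HCopy and load
# them as one feature vector per clip. Config values here are common defaults, not
# project-specific settings.
import struct
import subprocess

HTK_CONFIG = """SOURCEFORMAT = WAV
TARGETKIND = MFCC_0
TARGETRATE = 100000.0
WINDOWSIZE = 250000.0
USEHAMMING = T
PREEMCOEF = 0.97
NUMCHANS = 26
NUMCEPS = 12
"""

def wav_to_mfcc(wav_path, mfc_path="out.mfc"):
    with open("hcopy.conf", "w") as f:
        f.write(HTK_CONFIG)
    subprocess.check_call(["HCopy", "-C", "hcopy.conf", wav_path, mfc_path])
    # HTK parameter files: 12-byte header, then big-endian 4-byte floats per frame
    with open(mfc_path, "rb") as f:
        n_samples, samp_period, samp_size, parm_kind = struct.unpack(">iihh", f.read(12))
        values_per_frame = samp_size // 4
        frames = [struct.unpack(">%df" % values_per_frame, f.read(samp_size))
                  for _ in range(n_samples)]
    # One fixed-length vector per clip: the mean of each coefficient over time
    return [sum(col) / len(col) for col in zip(*frames)]
```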
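For step 3, a minimal sketch of detecting red ink on the white glove with OpenCV is shown below; the HSV thresholds and the decision threshold are rough assumptions that would need tuning under the lab's lighting.

```python
# Minimal sketch: check the glove image for red ink (symbolizing blood) using OpenCV.
# The HSV ranges and min_fraction threshold are assumptions to be tuned in the lab.
import cv2

def detect_red(image_bgr, min_fraction=0.01):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges
    mask = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    red_fraction = cv2.countNonZero(mask) / float(mask.size)
    return red_fraction > min_fraction, mask  # mask shows where "bleeding" was picked up

# Example usage (file name is hypothetical):
# flagged, where = detect_red(cv2.imread("glove_after_sweep.jpg"))
```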
Evaluation of system: accuracy or a similar metric for how often the system correctly detects head trauma, abnormal breathing, and bleeding.
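As one hedged illustration of this evaluation, overall accuracy and a simple confusion matrix for any of the three recognizers could be computed as follows (the label names are placeholders):

```python
# Minimal sketch of the evaluation: overall accuracy plus a confusion matrix
# over whatever labels a recognizer predicts.
from collections import Counter

def evaluate(true_labels, predicted_labels):
    correct = sum(t == p for t, p in zip(true_labels, predicted_labels))
    accuracy = correct / float(len(true_labels))
    confusion = Counter(zip(true_labels, predicted_labels))  # (true, predicted) -> count
    return accuracy, confusion

# Example: evaluate(["agonal", "regular"], ["agonal", "tachypnea"]) returns accuracy 0.5
```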
Requirement: some ability to work with software (installing libraries and writing code), and an interest in robots, healthcare, or recognition
Deliverable: an intelligent robot system which can assess a victim's state (thesis/report, code, video)