<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://mw.hh.se/caisr/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Marcoo</id>
	<title>ISLAB/CAISR - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://mw.hh.se/caisr/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Marcoo"/>
	<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Special:Contributions/Marcoo"/>
	<updated>2026-04-18T12:24:02Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.13</generator>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Thermal_Detection_of_Subtle_Human_Cues_for_a_Robot_Magic_Performance&amp;diff=5417</id>
		<title>Thermal Detection of Subtle Human Cues for a Robot Magic Performance</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Thermal_Detection_of_Subtle_Human_Cues_for_a_Robot_Magic_Performance&amp;diff=5417"/>
		<updated>2024-09-10T13:57:55Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Thermal Detection of Subtle Human Cues for a Robot Magic Performance (NOT AVAILABLE HT22/VT23)&lt;br /&gt;
|References=Martin Cooney, &amp;amp; Alexey Vinel. “Magic in Human-Robot Interaction (HRI).” In the 34th annual workshop of the Swedish Artificial Intelligence Society (SAIS 2022), 2022.&lt;br /&gt;
Cho, Y., Bianchi-Berthouze, N., Marquardt, N., &amp;amp; Julier, S. J. (2018, April). Deep thermal imaging: Proximate material type recognition in the wild through deep learning of spatial surface temperature patterns. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-13).&lt;br /&gt;
Xu, Z., Wang, Q., Li, D., Hu, M., Yao, N., &amp;amp; Zhai, G. (2020). Estimating departure time using thermal camera and heat traces tracking technique. Sensors, 20(3), 782.&lt;br /&gt;
Cooney, M., &amp;amp; Bigun, J. (2017). PastVision+: Thermovisual inference of recent Medicine intake by Detecting heated Objects and cooled lips. Frontiers in Robotics and AI, 4, 61.&lt;br /&gt;
|Supervisor=Martin Cooney,&lt;br /&gt;
|Author=(NOT AVAILABLE)&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Finished&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Automated_Inference_regarding_Goals_in_Elite_Football_Data&amp;diff=5172</id>
		<title>Automated Inference regarding Goals in Elite Football Data</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Automated_Inference_regarding_Goals_in_Elite_Football_Data&amp;diff=5172"/>
		<updated>2022-10-26T07:25:57Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Automated Inference regarding Goals in Elite Football Data&lt;br /&gt;
|References=https://www.worldatlas.com/articles/what-are-the-most-popular-sports-in-the-world.html&lt;br /&gt;
https://www.alliedmarketresearch.com/football-market-A11328&lt;br /&gt;
&lt;br /&gt;
Jordet, G., Aksum, K. M., Pedersen, D. N., Walvekar, A., Trivedi, A., McCall, A., ... &amp;amp; Priestley, D. (2020). Scanning, contextual factors, and association with performance in english premier league footballers: an investigation across a season. Frontiers in psychology, 11, 553813.&lt;br /&gt;
&lt;br /&gt;
Decroos, T., &amp;amp; Davis, J. (2019, September). Player vectors: Characterizing soccer players’ playing style from match event streams. In Joint European conference on machine learning and knowledge discovery in databases (pp. 569-584). Springer, Cham.&lt;br /&gt;
|Supervisor=Andreas, Summrina, Kunru, Martin&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Goal: To automatically infer/detect goal-related patterns in elite football data&lt;br /&gt;
&lt;br /&gt;
Motivation: &lt;br /&gt;
Football/soccer is the most popular sport in the world, with approx. 4 billion fans and a estimated market size of $1883.46 million in 2019, and also accordingly the most studied in the AI literature.&lt;br /&gt;
Work is ongoing all over the world to detect football events in association games and leverage insights to provide enhanced performance, but there is a need for more automation (much remains hand-coded) and some uncertainty exists with regard to exactly what can be done with the data.&lt;br /&gt;
&lt;br /&gt;
Challenge:&lt;br /&gt;
Various challenges exist: Recently it&amp;#039;s become possible to get data using computer vision not just for one&amp;#039;s own team, but also for opposing teams, but it&amp;#039;s not clear how to best use these data and tie them to performance, and a single game can result in very many data related to player and ball positions; i.e., various companies provide reports of games, but they are mostly just descriptive, with simple metrics; more analysis should be possible.&lt;br /&gt;
(As well, although there is much data in general, football is a game in which few goals are scored (e.g. compared to tennis, baseball, or basketball), which is a challenge for machine learning algorithms that require many data, and has many players (11 on the pitch), resulting in high complexity.)&lt;br /&gt;
&lt;br /&gt;
Approach:&lt;br /&gt;
Both theory and practice will be explored.&lt;br /&gt;
First, we will identify theoretical gaps in the literature related to how such data could be used.&lt;br /&gt;
Second, we will explore the practical side, starting with using some kind of statistical/machine learning approach to try to reproduce the performance of some current hand-picked heuristic scores regarding goals.&lt;br /&gt;
Third, we will explore more advanced kinds of inference, possibly using LSTMs or other techniques.&lt;br /&gt;
Elite sports data obtained from the Norwegian Women&amp;#039;s team, which was ranked 11th in the world in 2020, will be used.&lt;br /&gt;
&lt;br /&gt;
Expected outcomes: a thesis report, code, video. Ideally the results should be sufficient to form the basis for a paper.&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Automated_Inference_regarding_Goals_in_Elite_Football_Data&amp;diff=5171</id>
		<title>Automated Inference regarding Goals in Elite Football Data</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Automated_Inference_regarding_Goals_in_Elite_Football_Data&amp;diff=5171"/>
		<updated>2022-10-26T07:19:38Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Automated Inference regarding Goals in Elite Football Data&lt;br /&gt;
|References=https://www.worldatlas.com/articles/what-are-the-most-popular-sports-in-the-world.html&lt;br /&gt;
https://www.alliedmarketresearch.com/football-market-A11328&lt;br /&gt;
Jordet, G., Aksum, K. M., Pedersen, D. N., Walvekar, A., Trivedi, A., McCall, A., ... &amp;amp; Priestley, D. (2020). Scanning, contextual factors, and association with performance in english premier league footballers: an investigation across a season. Frontiers in psychology, 11, 553813.&lt;br /&gt;
Decroos, T., &amp;amp; Davis, J. (2019, September). Player vectors: Characterizing soccer players’ playing style from match event streams. In Joint European conference on machine learning and knowledge discovery in databases (pp. 569-584). Springer, Cham.&lt;br /&gt;
|Supervisor=Andreas, Summrina, Kunru, Martin&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Goal: To automatically infer/detect goal-related patterns in elite football data&lt;br /&gt;
Motivation: &lt;br /&gt;
Football/soccer is the most popular sport in the world, with approx. 4 billion fans and a estimated market size of $1883.46 million in 2019, and also accordingly the most studied in the AI literature.&lt;br /&gt;
Work is ongoing all over the world to detect football events in association games and leverage insights to provide enhanced performance, but there is a need for more automation (much remains hand-coded) and some uncertainty exists with regard to exactly what can be done with the data.&lt;br /&gt;
Challenge:&lt;br /&gt;
Various challenges exist: Recently it&amp;#039;s become possible to get data using computer vision not just for one&amp;#039;s own team, but also for opposing teams, but it&amp;#039;s not clear how to best use these data and tie them to performance, and a single game can result in very many data related to player and ball positions; i.e., various companies provide reports of games, but they are mostly just descriptive, with simple metrics; more analysis should be possible.&lt;br /&gt;
(As well, although there is much data in general, football is a game in which few goals are scored (e.g. compared to tennis, baseball, or basketball), which is a challenge for machine learning algorithms that require many data, and has many players (11 on the pitch), resulting in high complexity.)&lt;br /&gt;
Approach:&lt;br /&gt;
Both theory and practice will be explored.&lt;br /&gt;
First, we will identify theoretical gaps in the literature related to how such data could be used.&lt;br /&gt;
Second, we will explore the practical side, starting with using some kind of statistical/machine learning approach to try to reproduce the performance of some current hand-picked heuristic scores regarding goals.&lt;br /&gt;
Third, we will explore more advanced kinds of inference, possibly using LSTMs or other techniques.&lt;br /&gt;
Elite sports data obtained from the Norwegian Women&amp;#039;s team, which was ranked 11th in the world in 2020, will be used.&lt;br /&gt;
&lt;br /&gt;
Expected outcomes: a thesis report, code, video. Ideally the results should be sufficient to form the basis for a paper.&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Thermal_Detection_of_Subtle_Human_Cues_for_a_Robot_Magic_Performance&amp;diff=5159</id>
		<title>Thermal Detection of Subtle Human Cues for a Robot Magic Performance</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Thermal_Detection_of_Subtle_Human_Cues_for_a_Robot_Magic_Performance&amp;diff=5159"/>
		<updated>2022-10-24T07:46:04Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Thermal Detection of Subtle Human Cues for a Robot Magic Performance (NOT AVAILABLE HT22/VT23)&lt;br /&gt;
|References=Martin Cooney, &amp;amp; Alexey Vinel. “Magic in Human-Robot Interaction (HRI).” In the 34th annual workshop of the Swedish Artificial Intelligence Society (SAIS 2022), 2022.&lt;br /&gt;
Cho, Y., Bianchi-Berthouze, N., Marquardt, N., &amp;amp; Julier, S. J. (2018, April). Deep thermal imaging: Proximate material type recognition in the wild through deep learning of spatial surface temperature patterns. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-13).&lt;br /&gt;
Xu, Z., Wang, Q., Li, D., Hu, M., Yao, N., &amp;amp; Zhai, G. (2020). Estimating departure time using thermal camera and heat traces tracking technique. Sensors, 20(3), 782.&lt;br /&gt;
Cooney, M., &amp;amp; Bigun, J. (2017). PastVision+: Thermovisual inference of recent Medicine intake by Detecting heated Objects and cooled lips. Frontiers in Robotics and AI, 4, 61.&lt;br /&gt;
|Supervisor=Martin Cooney,&lt;br /&gt;
|Author=(NOT AVAILABLE HT22/VT23)&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Draft&lt;br /&gt;
}}&lt;br /&gt;
(THIS PROJECT WILL NOT BE AVAILABLE FALL 2022/SPRING 2023 DUE TO PARENTAL LEAVE.)&lt;br /&gt;
Generations have been entertained and inspired by mentalist magicians and fantastical accounts of detectives like Sherlock Holmes, who draw extensive conclusions from minute observations.&lt;br /&gt;
As AI rapidly becomes more advanced, robots will one day become able to leverage superhuman abilities in sensing and calculation to infer in such a way from subtle cues, to better help people, quickly recognize what people want and providing it in a good way.&lt;br /&gt;
But in 2022, most robots are still remarkably &amp;quot;dense&amp;quot;:&lt;br /&gt;
As a simple example, if one goes up to a Pepper robot and taps it on the head, the odds are that it will not react.&lt;br /&gt;
This kind of lack of observation and interactivity acts as a barrier to acceptance of robots, which can be seen in the failures of various robot start up companies.&lt;br /&gt;
&lt;br /&gt;
A speculative prototyping approach will be followed:&lt;br /&gt;
*speculative step: a list of potentially useful subtle cues will be compiled (e.g., reflections, shadows, etc.) and proposals made on how they could be implemented in robots.&lt;br /&gt;
For this, some review of potential recognition (e.g., DL) techniques will be required.&lt;br /&gt;
*prototyping step: the use of current methods to recognize and leverage one subtle cue will be explored.&lt;br /&gt;
This will probably involve Deep Learning (DL) to carry out inference on data from a thermal camera in some challenging situation.&lt;br /&gt;
This could possibly involve detection of light touches, at distance, multiple cameras, etc.&lt;br /&gt;
Furthermore, the recognition capability will be incorporated into the interaction design for a robot, within the application area of robot magic, to carry out a simplified magic performance.&lt;br /&gt;
&lt;br /&gt;
Expected outcomes: a thesis/report, code, video. Ideally the results should be sufficient to form the basis for a paper, and we will hopefully participate in an International Robot Magic Competition.&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Thermal_Detection_of_Subtle_Human_Cues_for_a_Robot_Magic_Performance&amp;diff=5158</id>
		<title>Thermal Detection of Subtle Human Cues for a Robot Magic Performance</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Thermal_Detection_of_Subtle_Human_Cues_for_a_Robot_Magic_Performance&amp;diff=5158"/>
		<updated>2022-10-24T07:45:33Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Thermal Detection of Subtle Human Cues for a Robot Magic Performance&lt;br /&gt;
|References=Martin Cooney, &amp;amp; Alexey Vinel. “Magic in Human-Robot Interaction (HRI).” In the 34th annual workshop of the Swedish Artificial Intelligence Society (SAIS 2022), 2022.&lt;br /&gt;
Cho, Y., Bianchi-Berthouze, N., Marquardt, N., &amp;amp; Julier, S. J. (2018, April). Deep thermal imaging: Proximate material type recognition in the wild through deep learning of spatial surface temperature patterns. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-13).&lt;br /&gt;
Xu, Z., Wang, Q., Li, D., Hu, M., Yao, N., &amp;amp; Zhai, G. (2020). Estimating departure time using thermal camera and heat traces tracking technique. Sensors, 20(3), 782.&lt;br /&gt;
Cooney, M., &amp;amp; Bigun, J. (2017). PastVision+: Thermovisual inference of recent Medicine intake by Detecting heated Objects and cooled lips. Frontiers in Robotics and AI, 4, 61.&lt;br /&gt;
|Supervisor=Martin Cooney,&lt;br /&gt;
|Author=(NOT AVAILABLE HT22/VT23)&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Draft&lt;br /&gt;
}}&lt;br /&gt;
(THIS PROJECT WILL NOT BE AVAILABLE FALL 2022/SPRING 2023 DUE TO PARENTAL LEAVE.)&lt;br /&gt;
Generations have been entertained and inspired by mentalist magicians and fantastical accounts of detectives like Sherlock Holmes, who draw extensive conclusions from minute observations.&lt;br /&gt;
As AI rapidly becomes more advanced, robots will one day become able to leverage superhuman abilities in sensing and calculation to infer in such a way from subtle cues, to better help people, quickly recognize what people want and providing it in a good way.&lt;br /&gt;
But in 2022, most robots are still remarkably &amp;quot;dense&amp;quot;:&lt;br /&gt;
As a simple example, if one goes up to a Pepper robot and taps it on the head, the odds are that it will not react.&lt;br /&gt;
This kind of lack of observation and interactivity acts as a barrier to acceptance of robots, which can be seen in the failures of various robot start up companies.&lt;br /&gt;
&lt;br /&gt;
A speculative prototyping approach will be followed:&lt;br /&gt;
*speculative step: a list of potentially useful subtle cues will be compiled (e.g., reflections, shadows, etc.) and proposals made on how they could be implemented in robots.&lt;br /&gt;
For this, some review of potential recognition (e.g., DL) techniques will be required.&lt;br /&gt;
*prototyping step: the use of current methods to recognize and leverage one subtle cue will be explored.&lt;br /&gt;
This will probably involve Deep Learning (DL) to carry out inference on data from a thermal camera in some challenging situation.&lt;br /&gt;
This could possibly involve detection of light touches, at distance, multiple cameras, etc.&lt;br /&gt;
Furthermore, the recognition capability will be incorporated into the interaction design for a robot, within the application area of robot magic, to carry out a simplified magic performance.&lt;br /&gt;
&lt;br /&gt;
Expected outcomes: a thesis/report, code, video. Ideally the results should be sufficient to form the basis for a paper, and we will hopefully participate in an International Robot Magic Competition.&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Automated_Inference_regarding_Goals_in_Elite_Football_Data&amp;diff=5064</id>
		<title>Automated Inference regarding Goals in Elite Football Data</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Automated_Inference_regarding_Goals_in_Elite_Football_Data&amp;diff=5064"/>
		<updated>2022-09-19T13:55:00Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: Created page with &amp;quot;{{StudentProjectTemplate |Summary=Automated Inference regarding Goals in Elite Football Data |References=https://www.worldatlas.com/articles/what-are-the-most-popular-sports-i...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Automated Inference regarding Goals in Elite Football Data&lt;br /&gt;
|References=https://www.worldatlas.com/articles/what-are-the-most-popular-sports-in-the-world.html&lt;br /&gt;
https://www.alliedmarketresearch.com/football-market-A11328&lt;br /&gt;
Jordet, G., Aksum, K. M., Pedersen, D. N., Walvekar, A., Trivedi, A., McCall, A., ... &amp;amp; Priestley, D. (2020). Scanning, contextual factors, and association with performance in english premier league footballers: an investigation across a season. Frontiers in psychology, 11, 553813.&lt;br /&gt;
Decroos, T., &amp;amp; Davis, J. (2019, September). Player vectors: Characterizing soccer players’ playing style from match event streams. In Joint European conference on machine learning and knowledge discovery in databases (pp. 569-584). Springer, Cham.&lt;br /&gt;
|Supervisor=Andreas, Summrina, Kunru, Martin&lt;br /&gt;
|Author= Hong Khong Tan?&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Draft&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Goal: To automatically infer/detect goal-related patterns in elite football data&lt;br /&gt;
Motivation: &lt;br /&gt;
Football/soccer is the most popular sport in the world, with approx. 4 billion fans and a estimated market size of $1883.46 million in 2019, and also accordingly the most studied in the AI literature.&lt;br /&gt;
Work is ongoing all over the world to detect football events in association games and leverage insights to provide enhanced performance, but there is a need for more automation (much remains hand-coded) and some uncertainty exists with regard to exactly what can be done with the data.&lt;br /&gt;
Challenge:&lt;br /&gt;
Various challenges exist. Recently it&amp;#039;s become possible to get data using computer vision not just for one&amp;#039;s own team, but also for opposing teams, but it&amp;#039;s not clear how to best use these data and tie them to performance, and a single game can result in very many data related to player and ball positions; i.e., various companies provide reports of games, but they are mostly just descriptive, with simple metrics; more analysis should be possible.&lt;br /&gt;
(As well, although there is much data in general, football is a game in which few goals are scored (e.g. compared to tennis, baseball, or basketball), which is a challenge for machine learning algorithms that require many data, and has many players (11 on the pitch), resulting in high complexity.)&lt;br /&gt;
Approach:&lt;br /&gt;
Both theory and practice will be explored.&lt;br /&gt;
First, we will identify theoretical gaps in the literature related to how such data could be used.&lt;br /&gt;
Second, we will explore the practical side, starting with using some kind of statistical/machine learning approach to try to reproduce the performance of some current hand-picked heuristic scores regarding goals.&lt;br /&gt;
Third, we will explore more advanced kinds of inference, possibly using LSTMs or other techniques.&lt;br /&gt;
Elite sports data obtained from the Norwegian Women&amp;#039;s team, which was ranked 11th in the world in 2020, will be used.&lt;br /&gt;
&lt;br /&gt;
Expected outcomes: a thesis report, code, video. Ideally the results should be sufficient to form the basis for a paper.&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Thermal_Detection_of_Subtle_Human_Cues_for_a_Robot_Magic_Performance&amp;diff=5063</id>
		<title>Thermal Detection of Subtle Human Cues for a Robot Magic Performance</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Thermal_Detection_of_Subtle_Human_Cues_for_a_Robot_Magic_Performance&amp;diff=5063"/>
		<updated>2022-09-19T13:09:23Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: Created page with &amp;quot;{{StudentProjectTemplate |Summary=Thermal Detection of Subtle Human Cues for a Robot Magic Performance |References=Martin Cooney, &amp;amp; Alexey Vinel. “Magic in Human-Robot Inter...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Thermal Detection of Subtle Human Cues for a Robot Magic Performance&lt;br /&gt;
|References=Martin Cooney, &amp;amp; Alexey Vinel. “Magic in Human-Robot Interaction (HRI).” In the 34th annual workshop of the Swedish Artificial Intelligence Society (SAIS 2022), 2022.&lt;br /&gt;
Cho, Y., Bianchi-Berthouze, N., Marquardt, N., &amp;amp; Julier, S. J. (2018, April). Deep thermal imaging: Proximate material type recognition in the wild through deep learning of spatial surface temperature patterns. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-13).&lt;br /&gt;
Xu, Z., Wang, Q., Li, D., Hu, M., Yao, N., &amp;amp; Zhai, G. (2020). Estimating departure time using thermal camera and heat traces tracking technique. Sensors, 20(3), 782.&lt;br /&gt;
Cooney, M., &amp;amp; Bigun, J. (2017). PastVision+: Thermovisual inference of recent Medicine intake by Detecting heated Objects and cooled lips. Frontiers in Robotics and AI, 4, 61.&lt;br /&gt;
|Supervisor=Martin Cooney, &lt;br /&gt;
|Author=Yijun Wang&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Draft&lt;br /&gt;
}}&lt;br /&gt;
Generations have been entertained and inspired by mentalist magicians and fantastical accounts of detectives like Sherlock Holmes, who draw extensive conclusions from minute observations.&lt;br /&gt;
As AI rapidly becomes more advanced, robots will one day become able to leverage superhuman abilities in sensing and calculation to infer in such a way from subtle cues, to better help people, quickly recognize what people want and providing it in a good way.&lt;br /&gt;
But in 2022, most robots are still remarkably &amp;quot;dense&amp;quot;:&lt;br /&gt;
As a simple example, if one goes up to a Pepper robot and taps it on the head, the odds are that it will not react.&lt;br /&gt;
This kind of lack of observation and interactivity acts as a barrier to acceptance of robots, which can be seen in the failures of various robot start up companies.&lt;br /&gt;
&lt;br /&gt;
A speculative prototyping approach will be followed:&lt;br /&gt;
*speculative step: a list of potentially useful subtle cues will be compiled (e.g., reflections, shadows, etc.) and proposals made on how they could be implemented in robots.&lt;br /&gt;
For this, some review of potential recognition (e.g., DL) techniques will be required.&lt;br /&gt;
*prototyping step: the use of current methods to recognize and leverage one subtle cue will be explored.&lt;br /&gt;
This will probably involve Deep Learning (DL) to carry out inference on data from a thermal camera in some challenging situation.&lt;br /&gt;
This could possibly involve detection of light touches, at distance, multiple cameras, etc.&lt;br /&gt;
Furthermore, the recognition capability will be incorporated into the interaction design for a robot, within the application area of robot magic, to carry out a simplified magic performance.&lt;br /&gt;
&lt;br /&gt;
Expected outcomes: a thesis/report, code, video. Ideally the results should be sufficient to form the basis for a paper, and we will hopefully participate in an International Robot Magic Competition.&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Social_touch_for_robots&amp;diff=4604</id>
		<title>Social touch for robots</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Social_touch_for_robots&amp;diff=4604"/>
		<updated>2020-06-10T09:47:49Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=something with social robots&lt;br /&gt;
|Keywords=social robots, social touch, recognition, behavior generation, HRI&lt;br /&gt;
|References=You can read some papers by Breazeal and Dautenhahn about social robots.&lt;br /&gt;
|Prerequisites=Willing to write a conference paper&lt;br /&gt;
|Supervisor=Martin Cooney,&lt;br /&gt;
|Author=Prateek&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Ongoing&lt;br /&gt;
}}&lt;br /&gt;
-We will decide the exact question after discussing.&lt;br /&gt;
One possibility is: Touch is a powerful communication modality whose code we are just recently beginning to crack--we want to know how robots can leverage social touch in interactions to help people to feel good.&lt;br /&gt;
Another possibility is to do something about robots/AI in education or for healthcare.&lt;br /&gt;
-Software&lt;br /&gt;
-WPs will include gathering requirements then creating an architecture, a recognition module, and behavior generation module, and getting feedback on the design.&lt;br /&gt;
-The project should lead to a paper. Demos/events (or for education, something which we can actually use with our students) would be nice.&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Martin_Cooney&amp;diff=4495</id>
		<title>Martin Cooney</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Martin_Cooney&amp;diff=4495"/>
		<updated>2020-01-10T15:42:06Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: /* Current Activities */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Cooney&lt;br /&gt;
|Given Name=Martin&lt;br /&gt;
|Title=PhD&lt;br /&gt;
|Phone=+46 35 167623&lt;br /&gt;
|Position=Postdoctoral Researcher&lt;br /&gt;
|Email=martin.cooney@hh.se&lt;br /&gt;
|Image=martin_200.jpg&lt;br /&gt;
|Office=E530&lt;br /&gt;
|url=http://martin-cooney.com&lt;br /&gt;
|Subject=Intelligent Systems for healthcare&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Current Activities&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Current Work&lt;br /&gt;
** Smart environment for healthcare with robots&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Research Interests&lt;br /&gt;
** Human robot interaction&lt;br /&gt;
** Health technology and health innovation&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Current Projects&lt;br /&gt;
** [http://islab.hh.se/mediawiki/index.php/Health_Technology CAISR Health Technology Application Area] &lt;br /&gt;
** [http://islab.hh.se/mediawiki/index.php/CAISR_Intelligent_Environment CAISR Intelligent Environment]&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Teaching&lt;br /&gt;
** [http://www.hh.se/sitevision/proxy/english/education/coursesyllabi.3678.html/svid12_70cf2e49129168da015800070213/752680950/se_proxy/utb_kursplan.asp;jsessionid=42994EA1681258ED2A13A7FE143DDA7E?kurskod=DT8007&amp;amp;revisionsnr=1%2C1&amp;amp;format=pdf&amp;amp;lang=SV Design of Embedded and Intelligent Systems ] (Course responsible from 2016)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Manuals&lt;br /&gt;
** Turtlebot Manual: [http://islab.hh.se/mediawiki/images/0/02/Turtlebot_manual_students_2015_1_30.pdf click here]&lt;br /&gt;
** Nao Manual: [http://islab.hh.se/mediawiki/images/1/14/Nao_manual_6_27.pdf click here]&lt;br /&gt;
** Lego Manual: [http://islab.hh.se/mediawiki/images/6/68/Lego_manual_6_27.pdf click here]&lt;br /&gt;
** Skypebot Manual: [http://islab.hh.se/mediawiki/images/1/1b/Skypebot_manual_10_15.pdf click here]&lt;br /&gt;
** Energy harvesting prototype Manual: [http://islab.hh.se/mediawiki/images/c/ca/Energy_harvester_10_16.pdf click here]&lt;br /&gt;
** Deis Course Description 2017: [http://islab.hh.se/mediawiki/images/7/72/Deis_course_description_2017.zip click here]&lt;br /&gt;
** Deis Course Description 2019: [http://islab.hh.se/mediawiki/images/c/c7/Deis_course_description_2019_7-5_slightly_revised.pdf click here]&lt;br /&gt;
&lt;br /&gt;
{{PublicationsList}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:staff]]&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=File:Deis_course_description_2019_7-5_slightly_revised.pdf&amp;diff=4494</id>
		<title>File:Deis course description 2019 7-5 slightly revised.pdf</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=File:Deis_course_description_2019_7-5_slightly_revised.pdf&amp;diff=4494"/>
		<updated>2020-01-10T15:40:45Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Social_touch_for_robots&amp;diff=4309</id>
		<title>Social touch for robots</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Social_touch_for_robots&amp;diff=4309"/>
		<updated>2019-09-29T17:23:31Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: Created page with &amp;quot;{{StudentProjectTemplate |Summary=something with social robots |Keywords=social robots, social touch, recognition, behavior generation, HRI |References=You can read some paper...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=something with social robots&lt;br /&gt;
|Keywords=social robots, social touch, recognition, behavior generation, HRI&lt;br /&gt;
|References=You can read some papers by Breazeal and Dautenhahn about social robots.&lt;br /&gt;
|Prerequisites=Willing to write a conference paper&lt;br /&gt;
|Supervisor=Martin Cooney, &lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
-We will decide the exact question after discussing.&lt;br /&gt;
One possibility is: Touch is a powerful communication modality whose code we are just recently beginning to crack--we want to know how robots can leverage social touch in interactions to help people to feel good.&lt;br /&gt;
Another possibility is to do something about robots/AI in education or for healthcare.&lt;br /&gt;
-Software&lt;br /&gt;
-WPs will include gathering requirements then creating an architecture, a recognition module, and behavior generation module, and getting feedback on the design.&lt;br /&gt;
-The project should lead to a paper. Demos/events (or for education, something which we can actually use with our students) would be nice.&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Something_to_do_with_social_robots&amp;diff=4084</id>
		<title>Something to do with social robots</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Something_to_do_with_social_robots&amp;diff=4084"/>
		<updated>2018-10-26T09:48:10Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Robot Exercise Coach&lt;br /&gt;
|Programme=Stub&lt;br /&gt;
|Keywords=Social robot, human-robot interaction, recognition, behavior&lt;br /&gt;
|TimeFrame=Jan-July 2019&lt;br /&gt;
|References=Fasola, J., &amp;amp; Matarić, M. J. (2013). A socially assistive robot exercise coach for the elderly. Journal of Human-Robot Interaction, 2(2), 3-32.&lt;br /&gt;
http://delivery.acm.org/10.1145/3110000/3109710/p3-fasola.pdf?ip=194.47.19.128&amp;amp;id=3109710&amp;amp;acc=OA&amp;amp;key=74F7687761D7AE37%2ED64FD9DC22ECC16B%2E4D4702B0C3E38B35%2E6D218144511F3437&amp;amp;__acm__=1540547143_b8ef87a85e37d78837ed72cdc958a5e1&lt;br /&gt;
&lt;br /&gt;
Gold, K., &amp;amp; Scassellati, B. (2006). Learning acceptable windows of contingency. Connection Science, 18(2), 217-228.&lt;br /&gt;
http://www.cs.yale.edu/homes/scaz/papers/Gold-ConnSci-06.pdf&lt;br /&gt;
|Prerequisites=Should be able to code in python, work with Robot Operating System, and be interested in robots, computer vision, human-robot interaction.&lt;br /&gt;
|Supervisor=Martin Cooney, Eren Erdal Aksoy, Fernando Alonso-Fernandez, &lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Details will be provided later (based on discussion between supervisors and students, to find what is most interesting for everyone involved), but this project will have something to do with social robotics.&lt;br /&gt;
And it would be nice to work with a company like Sony Mobile and/or Meta Bytes (I am trying to get their contact info right now).&lt;br /&gt;
Someone else could also be involved like Fernando or Eren.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example: &lt;br /&gt;
*OpenPose for robot exercising.&lt;br /&gt;
Baxter (on Ridgeback) will lead a person or group of people to exercise.&lt;br /&gt;
Poses will be recognized with OpenPose, and we can maybe do something with deep learning.&lt;br /&gt;
An interesting point could be in adapting to interruptions (maybe the people cannot follow along sometimes, so the robot should stop/slow down sometimes and regain control) and/or recognizing exercises for dementia (they can be different than for young adults, like turning a screw).&lt;br /&gt;
This class could be at the university gym if we get permission (I have received some contacts and will check). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Or, OpenPose for robot first aid. &lt;br /&gt;
Baxter will carry out the Glasgow Coma Scale assessment to determine how conscious a patient is.&lt;br /&gt;
This will involve using computer vision to detect where a robot should touch a person to elicit a reaction, then classifying the way a person moves in reaction (e.g., decerebrate, decorticate).&lt;br /&gt;
I think this kind of ability to save lives and give real value will be very important for bringing robots into real homes.)&lt;br /&gt;
&lt;br /&gt;
Focus: software&lt;br /&gt;
Outcomes: a thesis, demonstrator, video, a local conference paper.&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Something_to_do_with_social_robots&amp;diff=4083</id>
		<title>Something to do with social robots</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Something_to_do_with_social_robots&amp;diff=4083"/>
		<updated>2018-10-26T09:46:58Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Robot Exercise Coach&lt;br /&gt;
|Programme=Stub&lt;br /&gt;
|Keywords=Social robot, human-robot interaction, recognition, behavior&lt;br /&gt;
|TimeFrame=Jan-July 2019&lt;br /&gt;
|References=Fasola, J., &amp;amp; Matarić, M. J. (2013). A socially assistive robot exercise coach for the elderly. Journal of Human-Robot Interaction, 2(2), 3-32.&lt;br /&gt;
http://delivery.acm.org/10.1145/3110000/3109710/p3-fasola.pdf?ip=194.47.19.128&amp;amp;id=3109710&amp;amp;acc=OA&amp;amp;key=74F7687761D7AE37%2ED64FD9DC22ECC16B%2E4D4702B0C3E38B35%2E6D218144511F3437&amp;amp;__acm__=1540547143_b8ef87a85e37d78837ed72cdc958a5e1&lt;br /&gt;
&lt;br /&gt;
Gold, K., &amp;amp; Scassellati, B. (2006). Learning acceptable windows of contingency. Connection Science, 18(2), 217-228.&lt;br /&gt;
http://www.cs.yale.edu/homes/scaz/papers/Gold-ConnSci-06.pdf&lt;br /&gt;
|Prerequisites=Should be able to code in python, work with Robot Operating System, and be interested in robots, computer vision, human-robot interaction.&lt;br /&gt;
|Supervisor=Martin Cooney,&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Details will be provided later (based on discussion between supervisors and students, to find what is most interesting for everyone involved), but this project will have something to do with social robotics.&lt;br /&gt;
And it would be nice to work with a company like Sony Mobile and/or Meta Bytes (I am trying to get their contact info right now).&lt;br /&gt;
Some one else could also be involved like Fernando or Eren.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example: &lt;br /&gt;
*OpenPose for robot exercising.&lt;br /&gt;
Baxter (on Ridgeback) will lead a person or group of people to exercise.&lt;br /&gt;
Poses will be recognized with OpenPose, and we can maybe do something with deep learning.&lt;br /&gt;
An interesting point could be in adapting to interruptions (maybe the people cannot follow along sometimes, so the robot should stop/slow down sometimes and regain control) and/or recognizing exercises for dementia (they can be different than for young adults, like turning a screw).&lt;br /&gt;
This class could be at the university gym if we get permission (I have received some contacts and will check). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Or, OpenPose for robot first aid. &lt;br /&gt;
Baxter will carry out the Glasgow Coma Scale assessment to determine how conscious a patient is.&lt;br /&gt;
This will involve using computer vision to detect where a robot should touch a person to elicit a reaction, then classifying the way a person moves in reaction (e.g., decerebrate, decorticate).&lt;br /&gt;
I think this kind of ability to save lives and give real value will be very important for bringing robots into real homes.)&lt;br /&gt;
&lt;br /&gt;
Focus: software&lt;br /&gt;
Outcomes: a thesis, demonstrator, video, a local conference paper.&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Something_to_do_with_social_robots&amp;diff=4073</id>
		<title>Something to do with social robots</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Something_to_do_with_social_robots&amp;diff=4073"/>
		<updated>2018-10-20T10:54:20Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Stub&lt;br /&gt;
|Programme=Stub&lt;br /&gt;
|Keywords=Social robot, human-robot interaction, recognition, behavior&lt;br /&gt;
|TimeFrame=Jan-July 2019&lt;br /&gt;
|References=Stub&lt;br /&gt;
|Prerequisites=Stub&lt;br /&gt;
|Supervisor=Martin Cooney,&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
This is a stub to be completed later.&lt;br /&gt;
The project will have something to do with social robotics.&lt;br /&gt;
And it would be nice to work with a company like Sony Mobile and/or Meta Bytes (I am trying to get their contact info right now).&lt;br /&gt;
Some one else could also be involved like Fernando or Eren.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example: &lt;br /&gt;
*OpenPose for robot exercising.&lt;br /&gt;
Baxter will lead a person or group of people to exercise.&lt;br /&gt;
Poses will be recognized with OpenPose, and we can maybe do something with deep learning.&lt;br /&gt;
An interesting point could be in adapting to interruptions (maybe the people cannot follow along sometimes, so the robot should stop/slow down sometimes and regain control) and/or recognizing exercises for dementia (they can be different than for young adults, like turning a screw).&lt;br /&gt;
This class could be at the university gym if we get permission (I have received some contacts and will check). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Or, OpenPose for robot first aid. &lt;br /&gt;
Baxter will carry out the Glasgow Coma Scale assessment to determine how conscious a patient is.&lt;br /&gt;
This will involve using computer vision to detect where a robot should touch a person to elicit a reaction, then classifying the way a person moves in reaction (e.g., decerebrate, decorticate).&lt;br /&gt;
I think this kind of ability to save lives and give real value will be very important for bringing robots into real homes.)&lt;br /&gt;
&lt;br /&gt;
Focus: software&lt;br /&gt;
Outcomes: a thesis, demonstrator, video, a local conference paper.&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Something_to_do_with_social_robots&amp;diff=4072</id>
		<title>Something to do with social robots</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Something_to_do_with_social_robots&amp;diff=4072"/>
		<updated>2018-10-20T10:50:21Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: Created page with &amp;quot;{{StudentProjectTemplate |Summary=Stub |Programme=Stub |Keywords=Social robot, human-robot interaction, recognition, behavior |TimeFrame=Jan-July 2019 |References=Stub |Prereq...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Stub&lt;br /&gt;
|Programme=Stub&lt;br /&gt;
|Keywords=Social robot, human-robot interaction, recognition, behavior&lt;br /&gt;
|TimeFrame=Jan-July 2019&lt;br /&gt;
|References=Stub&lt;br /&gt;
|Prerequisites=Stub&lt;br /&gt;
|Supervisor=Martin Cooney, &lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
This is a stub to be completed later.&lt;br /&gt;
The project will have something to do with social robotics.&lt;br /&gt;
And it would be nice to work with a company like Sony Mobile and/or Meta Bytes.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example: &lt;br /&gt;
*OpenPose for robot exercising.&lt;br /&gt;
Baxter will lead a person or group of people to exercise.&lt;br /&gt;
Poses will be recognized with OpenPose, and we can maybe do something with deep learning.&lt;br /&gt;
An interesting point could be in adapting to interruptions (maybe the people cannot follow along sometimes, so the robot should stop/slow down sometimes and regain control) and/or recognizing exercises for dementia (they can be different than for young adults, like turning a screw).&lt;br /&gt;
This class could be at the university gym if we get permission (I have received some contacts and will check). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Or, OpenPose for robot first aid. &lt;br /&gt;
Baxter will carry out the Glasgow Coma Scale assessment to determine how conscious a patient is.&lt;br /&gt;
This will involve using computer vision to detect where a robot should touch a person to elicit a reaction, then classifying the way a person moves in reaction (e.g., decerebrate, decorticate).&lt;br /&gt;
I think this kind of ability to save lives and give real value will be very important for bringing robots into real homes.)&lt;br /&gt;
&lt;br /&gt;
Focus: software&lt;br /&gt;
Outcomes: a thesis, demonstrator, video, a local conference paper.&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=File:Seadrone-ssf-references.pdf&amp;diff=3907</id>
		<title>File:Seadrone-ssf-references.pdf</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=File:Seadrone-ssf-references.pdf&amp;diff=3907"/>
		<updated>2018-03-21T17:05:23Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: Marcoo uploaded a new version of &amp;amp;quot;File:Seadrone-ssf-references.pdf&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=File:Seadrone-ssf-references.pdf&amp;diff=3906</id>
		<title>File:Seadrone-ssf-references.pdf</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=File:Seadrone-ssf-references.pdf&amp;diff=3906"/>
		<updated>2018-03-21T09:28:46Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Martin_Cooney&amp;diff=3874</id>
		<title>Martin Cooney</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Martin_Cooney&amp;diff=3874"/>
		<updated>2018-02-22T21:09:31Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Cooney&lt;br /&gt;
|Given Name=Martin&lt;br /&gt;
|Title=PhD&lt;br /&gt;
|Phone=+46 35 167623&lt;br /&gt;
|Position=Postdoctoral Researcher&lt;br /&gt;
|Email=martin.cooney@hh.se&lt;br /&gt;
|Image=martin_200.jpg&lt;br /&gt;
|Office=E530&lt;br /&gt;
|url=http://martin-cooney.com&lt;br /&gt;
|Subject=Intelligent Systems for healthcare&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Current Activities&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Current Work&lt;br /&gt;
** Smart environment for healthcare with robots&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Research Interests&lt;br /&gt;
** Human robot interaction&lt;br /&gt;
** Health technology and health innovation&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Current Projects&lt;br /&gt;
** [http://islab.hh.se/mediawiki/index.php/Health_Technology CAISR Health Technology Application Area] &lt;br /&gt;
** [http://islab.hh.se/mediawiki/index.php/CAISR_Intelligent_Environment CAISR Intelligent Environment]&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Teaching&lt;br /&gt;
** [http://www.hh.se/sitevision/proxy/english/education/coursesyllabi.3678.html/svid12_70cf2e49129168da015800070213/752680950/se_proxy/utb_kursplan.asp;jsessionid=42994EA1681258ED2A13A7FE143DDA7E?kurskod=DT8007&amp;amp;revisionsnr=1%2C1&amp;amp;format=pdf&amp;amp;lang=SV Design of Embedded and Intelligent Systems ] (Course responsible from 2016)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Manuals&lt;br /&gt;
** Turtlebot Manual: [http://islab.hh.se/mediawiki/images/0/02/Turtlebot_manual_students_2015_1_30.pdf click here]&lt;br /&gt;
** Nao Manual: [http://islab.hh.se/mediawiki/images/1/14/Nao_manual_6_27.pdf click here]&lt;br /&gt;
** Lego Manual: [http://islab.hh.se/mediawiki/images/6/68/Lego_manual_6_27.pdf click here]&lt;br /&gt;
** Skypebot Manual: [http://islab.hh.se/mediawiki/images/1/1b/Skypebot_manual_10_15.pdf click here]&lt;br /&gt;
** Energy harvesting prototype Manual: [http://islab.hh.se/mediawiki/images/c/ca/Energy_harvester_10_16.pdf click here]&lt;br /&gt;
** Deis Course Description 2017: [http://islab.hh.se/mediawiki/images/7/72/Deis_course_description_2017.zip click here]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{PublicationsList}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:staff]]&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Martin_Cooney&amp;diff=3873</id>
		<title>Martin Cooney</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Martin_Cooney&amp;diff=3873"/>
		<updated>2018-02-22T21:08:45Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: /* Current Activities */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Cooney&lt;br /&gt;
|Given Name=Martin&lt;br /&gt;
|Title=PhD&lt;br /&gt;
|Phone=+46 35 167623&lt;br /&gt;
|Position=Postdoctoral Researcher&lt;br /&gt;
|Email=martin.cooney@hh.se&lt;br /&gt;
|Image=martin_200.jpg&lt;br /&gt;
|Office=E530&lt;br /&gt;
|url=http://martin-cooney.com&lt;br /&gt;
|Subject=Intelligent Systems for healthcare&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Current Activities&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Current Work&lt;br /&gt;
** Smart environment for healthcare with robots&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Research Interests&lt;br /&gt;
** Human robot interaction&lt;br /&gt;
** Health technology and health innovation&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Current Projects&lt;br /&gt;
** [http://islab.hh.se/mediawiki/index.php/Health_Technology CAISR Health Technology Application Area] &lt;br /&gt;
** [http://islab.hh.se/mediawiki/index.php/CAISR_Intelligent_Environment CAISR Intelligent Environment]&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Teaching&lt;br /&gt;
** [http://www.hh.se/sitevision/proxy/english/education/coursesyllabi.3678.html/svid12_70cf2e49129168da015800070213/752680950/se_proxy/utb_kursplan.asp;jsessionid=42994EA1681258ED2A13A7FE143DDA7E?kurskod=DT8007&amp;amp;revisionsnr=1%2C1&amp;amp;format=pdf&amp;amp;lang=SV Design of Embedded and Intelligent Systems ] (Course responsible from 2016)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Manuals&lt;br /&gt;
** Turtlebot Manual: [http://islab.hh.se/mediawiki/images/0/02/Turtlebot_manual_students_2015_1_30.pdf click here]&lt;br /&gt;
** Nao Manual: [http://islab.hh.se/mediawiki/images/1/14/Nao_manual_6_27.pdf click here]&lt;br /&gt;
** Lego Manual: [http://islab.hh.se/mediawiki/images/6/68/Lego_manual_6_27.pdf click here]&lt;br /&gt;
** Skypebot Manual: [http://islab.hh.se/mediawiki/images/1/1b/Skypebot_manual_10_15.pdf click here]&lt;br /&gt;
** Energy harvesting prototype Manual: [http://islab.hh.se/mediawiki/images/c/ca/Energy_harvester_10_16.pdf click here]&lt;br /&gt;
** Deis Course Description 2017:&lt;br /&gt;
[http://islab.hh.se/mediawiki/images/7/72/Deis_course_description_2017.zip click here]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{PublicationsList}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:staff]]&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=File:Deis_course_description_2017.zip&amp;diff=3872</id>
		<title>File:Deis course description 2017.zip</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=File:Deis_course_description_2017.zip&amp;diff=3872"/>
		<updated>2018-02-22T21:07:23Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Social_robot_assistant&amp;diff=3858</id>
		<title>Social robot assistant</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Social_robot_assistant&amp;diff=3858"/>
		<updated>2018-02-01T15:15:07Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: Marcoo moved page Social robot assistant to Smart sensor: project concept changed&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Smart sensor]]&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Smart_sensor&amp;diff=3857</id>
		<title>Smart sensor</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Smart_sensor&amp;diff=3857"/>
		<updated>2018-02-01T15:15:07Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: Marcoo moved page Social robot assistant to Smart sensor: project concept changed&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Small smart sensors&lt;br /&gt;
|Keywords=Smart sensors, adaptation, sensors&lt;br /&gt;
|TimeFrame=First half of 2018&lt;br /&gt;
|References=Surya G. Nurzaman, Utku Culha, Luzius Brodbeck, Liyu Wang, Fumiya Iida. (2013) Active Sensing System with In Situ Adjustable Sensor Morphology. PLoS ONE 8(12):e84090. doi:10.1371/journal.pone.0084090&lt;br /&gt;
Robin R. Murphy. Dempster–Shafer Theory for Sensor Fusion in Autonomous Mobile Robots. IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 14, NO. 2, APRIL 1998 197.&lt;br /&gt;
|Prerequisites=Projects involving both software, hardware, and electronics require much development/time&lt;br /&gt;
|Supervisor=Martin Cooney, Håkan Petterson&lt;br /&gt;
|Author=Can Yang&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Ongoing&lt;br /&gt;
}}&lt;br /&gt;
Goal: small smart sensors&lt;br /&gt;
&lt;br /&gt;
Motivation:&lt;br /&gt;
Current sensors are &amp;quot;static&amp;quot;, detecting just one thing, such as light, sound, or touch, because of which in complex systems such as vehicles or homes many sensors are typically required.&lt;br /&gt;
A &amp;quot;smart&amp;quot; sensor which could dynamically adapt itself (to detect different signals) could replace multiple static sensors, leading to less space taken, less cost, less work for installations, a greater ability to operate when changes occur, and possibly even easier repairs (self-healing).&lt;br /&gt;
Alternatively, an ensemble of dynamic sensors could acquire more information than the same number of static sensors.&lt;br /&gt;
Furthermore, if such sensors can be made small, various benefits would emerge: e.g., making arbitrary surfaces embedded with sensors for robots such as intelligent skin, and interesting tasks at the micro scale could potentially be facilitated, such as monitoring cells inside of a person.&lt;br /&gt;
&lt;br /&gt;
Challenges: &lt;br /&gt;
It is unknown how a sensor could be automatically changed into a different kind of sensor, a good strategy for changing a sensor or an ensemble of sensors, and how to achieve this at a small scale.&lt;br /&gt;
&lt;br /&gt;
Approach:&lt;br /&gt;
The student will &lt;br /&gt;
&lt;br /&gt;
-Design transitions between some typical sensors&lt;br /&gt;
&lt;br /&gt;
-Design algorithms for calculating information/interestingness for each modality (possibly using change point detection) and for strategically allocating roles (like particles in a particle filter).&lt;br /&gt;
&lt;br /&gt;
-Design a strategy for the micro scale&lt;br /&gt;
&lt;br /&gt;
-Implement a macro-scale proof-of-concept, which can be switched manually or algorithmically, and a smaller proof-of-concept (mini, or micro; nano would be cool but would probably not be possible)&lt;br /&gt;
&lt;br /&gt;
-Evaluate&lt;br /&gt;
&lt;br /&gt;
-Thus, students will develop both software, and hardware/electronics &lt;br /&gt;
&lt;br /&gt;
Expected result: some ideas (new knowledge) of how to adapt different sensors, at different scales. Thesis, prototypes, video, hopefully a paper, possibly other such as code&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Smart_sensor&amp;diff=3799</id>
		<title>Smart sensor</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Smart_sensor&amp;diff=3799"/>
		<updated>2017-11-30T13:46:02Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Small smart sensors&lt;br /&gt;
|Keywords=Smart sensors, adaptation, sensors&lt;br /&gt;
|TimeFrame=First half of 2018&lt;br /&gt;
|References=Surya G. Nurzaman, Utku Culha, Luzius Brodbeck, Liyu Wang, Fumiya Iida. (2013) Active Sensing System with In Situ Adjustable Sensor Morphology. PLoS ONE 8(12):e84090. doi:10.1371/journal.pone.0084090&lt;br /&gt;
Robin R. Murphy. Dempster–Shafer Theory for Sensor Fusion in Autonomous Mobile Robots. IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 14, NO. 2, APRIL 1998 197.&lt;br /&gt;
|Prerequisites=Projects involving both software, hardware, and electronics require much development/time&lt;br /&gt;
|Supervisor=Martin Cooney, Håkan Petterson&lt;br /&gt;
|Author=Can Yang&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Ongoing&lt;br /&gt;
}}&lt;br /&gt;
Goal: small smart sensors&lt;br /&gt;
&lt;br /&gt;
Motivation:&lt;br /&gt;
Current sensors are &amp;quot;static&amp;quot;, detecting just one thing, such as light, sound, or touch, because of which in complex systems such as vehicles or homes many sensors are typically required.&lt;br /&gt;
A &amp;quot;smart&amp;quot; sensor which could dynamically adapt itself (to detect different signals) could replace multiple static sensors, leading to less space taken, less cost, less work for installations, a greater ability to operate when changes occur, and possibly even easier repairs (self-healing).&lt;br /&gt;
Alternatively, an ensemble of dynamic sensors could acquire more information than the same number of static sensors.&lt;br /&gt;
Furthermore, if such sensors can be made small, various benefits would emerge: e.g., making arbitrary surfaces embedded with sensors for robots such as intelligent skin, and interesting tasks at the micro scale could potentially be facilitated, such as monitoring cells inside of a person.&lt;br /&gt;
&lt;br /&gt;
Challenges: &lt;br /&gt;
It is unknown how a sensor could be automatically changed into a different kind of sensor, a good strategy for changing a sensor or an ensemble of sensors, and how to achieve this at a small scale.&lt;br /&gt;
&lt;br /&gt;
Approach:&lt;br /&gt;
The student will &lt;br /&gt;
&lt;br /&gt;
-Design transitions between some typical sensors&lt;br /&gt;
&lt;br /&gt;
-Design algorithms for calculating information/interestingness for each modality (possibly using change point detection) and for strategically allocating roles (like particles in a particle filter).&lt;br /&gt;
&lt;br /&gt;
-Design a strategy for the micro scale&lt;br /&gt;
&lt;br /&gt;
-Implement a macro-scale proof-of-concept, which can be switched manually or algorithmically, and a smaller proof-of-concept (mini, or micro; nano would be cool but would probably not be possible)&lt;br /&gt;
&lt;br /&gt;
-Evaluate&lt;br /&gt;
&lt;br /&gt;
-Thus, students will develop both software, and hardware/electronics &lt;br /&gt;
&lt;br /&gt;
Expected result: some ideas (new knowledge) of how to adapt different sensors, at different scales. Thesis, prototypes, video, hopefully a paper, possibly other such as code&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=RAQUEL_Robot_Assisted_QUiz_Espying_of_Learners&amp;diff=3785</id>
		<title>RAQUEL Robot Assisted QUiz Espying of Learners</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=RAQUEL_Robot_Assisted_QUiz_Espying_of_Learners&amp;diff=3785"/>
		<updated>2017-11-23T16:32:04Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=RAQUEL Robot Assisted QUiz Espying  Learners&lt;br /&gt;
|Prerequisites=Image Analysis&lt;br /&gt;
|Supervisor=Josef Bigun, Martin Cooney, Fernando Alonso Fernandez&lt;br /&gt;
|Author=Sanjana Arunesh, Abhilash Padisiva&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Ongoing&lt;br /&gt;
|Title=RAQUEL Robot Assisted QUiz Espying  Learners&lt;br /&gt;
}}&lt;br /&gt;
Robot Assisted QUiz Labeling  by face recognition.&lt;br /&gt;
We teach baxter to be a quiz assistant.&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Smart_sensor&amp;diff=3784</id>
		<title>Smart sensor</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Smart_sensor&amp;diff=3784"/>
		<updated>2017-11-23T16:23:36Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Small smart sensors&lt;br /&gt;
|Keywords=Smart sensors, adaptation, sensors&lt;br /&gt;
|TimeFrame=First half of 2018&lt;br /&gt;
|References=Surya G. Nurzaman, Utku Culha, Luzius Brodbeck, Liyu Wang, Fumiya Iida. (2013) Active Sensing System with In Situ Adjustable Sensor Morphology. PLoS ONE 8(12):e84090. doi:10.1371/journal.pone.0084090&lt;br /&gt;
Robin R. Murphy. Dempster–Shafer Theory for Sensor Fusion in Autonomous Mobile Robots. IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 14, NO. 2, APRIL 1998 197.&lt;br /&gt;
|Prerequisites=Projects involving both software, hardware, and electronics require much development/time&lt;br /&gt;
|Supervisor=Martin Cooney, (possibly Håkan Petterson)&lt;br /&gt;
|Author=Can Yang&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Ongoing&lt;br /&gt;
}}&lt;br /&gt;
Goal: small smart sensors&lt;br /&gt;
&lt;br /&gt;
Motivation:&lt;br /&gt;
Current sensors are &amp;quot;static&amp;quot;, detecting just one thing, such as light, sound, or touch, because of which in complex systems such as vehicles or homes many sensors are typically required.&lt;br /&gt;
A &amp;quot;smart&amp;quot; sensor which could dynamically adapt itself (to detect different signals) could replace multiple static sensors, leading to less space taken, less cost, less work for installations, a greater ability to operate when changes occur, and possibly even easier repairs (self-healing).&lt;br /&gt;
Alternatively, an ensemble of dynamic sensors could acquire more information than the same number of static sensors.&lt;br /&gt;
Furthermore, if such sensors can be made small, various benefits would emerge: e.g., making arbitrary surfaces embedded with sensors for robots such as intelligent skin, and interesting tasks at the micro scale could potentially be facilitated, such as monitoring cells inside of a person.&lt;br /&gt;
&lt;br /&gt;
Challenges: &lt;br /&gt;
It is unknown how a sensor could be automatically changed into a different kind of sensor, a good strategy for changing a sensor or an ensemble of sensors, and how to achieve this at a small scale.&lt;br /&gt;
&lt;br /&gt;
Approach:&lt;br /&gt;
The student will &lt;br /&gt;
&lt;br /&gt;
-Design transitions between some typical sensors&lt;br /&gt;
&lt;br /&gt;
-Design algorithms for calculating information/interestingness for each modality (possibly using change point detection) and for strategically allocating roles (like particles in a particle filter).&lt;br /&gt;
&lt;br /&gt;
-Design a strategy for the micro scale&lt;br /&gt;
&lt;br /&gt;
-Implement a macro-scale proof-of-concept, which can be switched manually or algorithmically, and a smaller proof-of-concept (mini, or micro; nano would be cool but would probably not be possible)&lt;br /&gt;
&lt;br /&gt;
-Evaluate&lt;br /&gt;
&lt;br /&gt;
-Thus, students will develop both software, and hardware/electronics &lt;br /&gt;
&lt;br /&gt;
Expected result: some ideas (new knowledge) of how to adapt different sensors, at different scales. Thesis, prototypes, video, hopefully a paper, possibly other such as code&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Smart_sensor&amp;diff=3783</id>
		<title>Smart sensor</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Smart_sensor&amp;diff=3783"/>
		<updated>2017-11-23T16:11:14Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Small smart sensors&lt;br /&gt;
|Keywords=Smart sensors, adaptation, sensors&lt;br /&gt;
|TimeFrame=First half of 2018&lt;br /&gt;
|References=Surya G. Nurzaman, Utku Culha, Luzius Brodbeck, Liyu Wang, Fumiya Iida. (2013) Active Sensing System with In Situ Adjustable Sensor Morphology. PLoS ONE 8(12):e84090. doi:10.1371/journal.pone.0084090&lt;br /&gt;
Robin R. Murphy. Dempster–Shafer Theory for Sensor Fusion in Autonomous Mobile Robots. IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 14, NO. 2, APRIL 1998 197.&lt;br /&gt;
|Prerequisites=Projects involving both software, hardware, and electronics require much development/time&lt;br /&gt;
|Supervisor=Martin Cooney, (possibly Håkan Petterson)&lt;br /&gt;
|Author=Can Yang&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Ongoing&lt;br /&gt;
}}&lt;br /&gt;
Goal: small smart sensors&lt;br /&gt;
Motivation:&lt;br /&gt;
Current sensors are &amp;quot;static&amp;quot;, detecting just one thing, such as light, sound, or touch, because of which in complex systems such as vehicles or homes many sensors are typically required.&lt;br /&gt;
A &amp;quot;smart&amp;quot; sensor which could dynamically adapt itself (to detect different signals) could replace multiple static sensors, leading to less space taken, less cost, less work for installations, a greater ability to operate when changes occur, and possibly even easier repairs (self-healing).&lt;br /&gt;
Alternatively, an ensemble of dynamic sensors could acquire more information than the same number of static sensors.&lt;br /&gt;
Furthermore, if such sensors can be made small, various benefits would emerge: e.g., making arbitrary surfaces embedded with sensors for robots such as intelligent skin, and interesting tasks at the micro scale could potentially be facilitated, such as monitoring cells inside of a person.&lt;br /&gt;
Challenges: &lt;br /&gt;
It is unknown how a sensor could be automatically changed into a different kind of sensor, a good strategy for changing a sensor or an ensemble of sensors, and how to achieve this at a small scale.&lt;br /&gt;
Approach:&lt;br /&gt;
The student will &lt;br /&gt;
-Design transitions between some typical sensors&lt;br /&gt;
-Design algorithms for calculating information/interestingness for each modality (possibly using change point detection) and for strategically allocating roles (like particles in a particle filter).&lt;br /&gt;
-Design a strategy for the micro scale&lt;br /&gt;
-Implement a macro-scale proof-of-concept, which can be switched manually or algorithmically, and a smaller proof-of-concept (mini, or micro; nano would be cool but would probably not be possible)&lt;br /&gt;
-Evaluate&lt;br /&gt;
-Thus, students will develop both software, and hardware/electronics &lt;br /&gt;
Expected result: some ideas (new knowledge) of how to adapt different sensors, at different scales. Thesis, prototypes, video, hopefully a paper, possibly other such as code&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Smart_sensor&amp;diff=3782</id>
		<title>Smart sensor</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Smart_sensor&amp;diff=3782"/>
		<updated>2017-11-23T16:10:11Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Small smart sensors&lt;br /&gt;
|Keywords=Smart sensors, adaptation, sensors&lt;br /&gt;
|TimeFrame=First half of 2018&lt;br /&gt;
|References=Surya G. Nurzaman, Utku Culha, Luzius Brodbeck, Liyu Wang, Fumiya Iida. (2013) Active Sensing System with In Situ Adjustable Sensor Morphology. PLoS ONE 8(12):e84090. doi:10.1371/journal.pone.0084090&lt;br /&gt;
Robin R. Murphy. Dempster–Shafer Theory for Sensor Fusion in Autonomous Mobile Robots. IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 14, NO. 2, APRIL 1998 197.&lt;br /&gt;
|Prerequisites=Projects involving both software, hardware, and electronics require much development/time &lt;br /&gt;
|Supervisor=Martin Cooney (possibly Håkan Petterson)&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Goal: small smart sensors&lt;br /&gt;
Motivation:&lt;br /&gt;
Current sensors are &amp;quot;static&amp;quot;, detecting just one thing, such as light, sound, or touch, because of which in complex systems such as vehicles or homes many sensors are typically required.&lt;br /&gt;
A &amp;quot;smart&amp;quot; sensor which could dynamically adapt itself (to detect different signals) could replace multiple static sensors, leading to less space taken, less cost, less work for installations, a greater ability to operate when changes occur, and possibly even easier repairs (self-healing).&lt;br /&gt;
Alternatively, an ensemble of dynamic sensors could acquire more information than the same number of static sensors.&lt;br /&gt;
Furthermore, if such sensors can be made small, various benefits would emerge: e.g., making arbitrary surfaces embedded with sensors for robots such as intelligent skin, and interesting tasks at the micro scale could potentially be facilitated, such as monitoring cells inside of a person.&lt;br /&gt;
Challenges: &lt;br /&gt;
It is unknown how a sensor could be automatically changed into a different kind of sensor, a good strategy for changing a sensor or an ensemble of sensors, and how to achieve this at a small scale.&lt;br /&gt;
Approach:&lt;br /&gt;
The student will &lt;br /&gt;
-Design transitions between some typical sensors&lt;br /&gt;
-Design algorithms for calculating information/interestingness for each modality (possibly using change point detection) and for strategically allocating roles (like particles in a particle filter).&lt;br /&gt;
-Design a strategy for the micro scale&lt;br /&gt;
-Implement a macro-scale proof-of-concept, which can be switched manually or algorithmically, and a smaller proof-of-concept (mini, or micro; nano would be cool but would probably not be possible)&lt;br /&gt;
-Evaluate&lt;br /&gt;
-Thus, students will develop both software, and hardware/electronics &lt;br /&gt;
Expected result: some ideas (new knowledge) of how to adapt different sensors, at different scales. Thesis, prototypes, video, hopefully a paper, possibly other such as code&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Smart_sensor&amp;diff=3530</id>
		<title>Smart sensor</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Smart_sensor&amp;diff=3530"/>
		<updated>2017-09-27T12:44:04Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: Created page with &amp;quot;{{StudentProjectTemplate |Summary=Social robot |Keywords=Social robot |TimeFrame=First half of 2018 |References=Scheeff M., Pinto J., Rahardja K., Snibbe S., Tow R. (2002) Exp...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Social robot&lt;br /&gt;
|Keywords=Social robot&lt;br /&gt;
|TimeFrame=First half of 2018&lt;br /&gt;
|References=Scheeff M., Pinto J., Rahardja K., Snibbe S., Tow R. (2002) Experiences with Sparky, a Social Robot. In: Dautenhahn K., Bond A., Cañamero L., Edmonds B. (eds) Socially Intelligent Agents. Multiagent Systems, Artificial Societies, and Simulated Organizations, vol 3. Springer, Boston, MA &lt;br /&gt;
Cynthia Breazeal. 2003 Emotion and sociable humanoid robots.. International Journal of Human-Computer Studies 59(1-2):119-155&lt;br /&gt;
|Prerequisites=Robotics projects require much development/time &lt;br /&gt;
|Supervisor=Martin, maybe others&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
This is a stub which will be filled out later.&lt;br /&gt;
-The general research area is social robotics, an applied area at the junction between robotics, pattern recognition, image processing, and human science.&lt;br /&gt;
Robots are increasingly being introduced into public and domestic settings to conduct various useful tasks for humans and alongside of humans.&lt;br /&gt;
For such technologies to perform effectively, it is crucial to ensure safety, trust, and acceptance.&lt;br /&gt;
Toward this, researchers are aiming to facilitate mutual recognition of actions and intentions between humans and the autonomous systems.&lt;br /&gt;
Recognition by robots can involve cameras, thermal cameras, and other kinds of sensors, and behavior generation is conducted to facilitate human recognition of robot actions and intentions. &lt;br /&gt;
The challenge is human behaviors and intentions are complex and difficult to model, recognize, and generate.&lt;br /&gt;
-Students will mostly develop software, not hardware. Students will use an existing robot (probably Baxter, which will also be shared with other students and researchers as needed.)&lt;br /&gt;
Expected results: a thesis, code, video, etc (it would also be nice, but not required, if the students would be willing to also write a six page shortened version of the thesis, to be submitted to a conference)&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Robot_Cooking&amp;diff=3385</id>
		<title>Robot Cooking</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Robot_Cooking&amp;diff=3385"/>
		<updated>2016-11-28T12:09:38Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Common sense for a robot to cook healthy food&lt;br /&gt;
|Keywords=Robots, Healthcare, Visual Recognition&lt;br /&gt;
|TimeFrame=2017/1/1-2017/8/30&lt;br /&gt;
|References=-robot cooking &lt;br /&gt;
&lt;br /&gt;
Christian Østergaard Laursen, Søren Pedersen, Timothy Merritt, Ole Caprani. Robot-Supported Food Experiences: Exploring Aesthetic Plating with Design Prototypes. In J.T.K.V. Koh et al. (Eds.): Cultural Robotics 2015, LNAI 9549, pp. 107–130, 2016. DOI: 10.1007/978-3-319-42945-8 10 Springer International Publishing Switzerland 2016.012.241&lt;br /&gt;
&lt;br /&gt;
-common sense acquisition&lt;br /&gt;
&lt;br /&gt;
Rakesh Gupta, Mykel J. Kochenderfer. Common Sense Data Acquisition for Indoor Mobile Robots. ROBOTICS&lt;br /&gt;
|Prerequisites=Strong multidisciplinary interest, strong work ethic, software (also ability to work with libraries), possibly some small work with hardware/electronics&lt;br /&gt;
|Supervisor=Martin Cooney,&lt;br /&gt;
|Examiner=Antanas, Slawomir&lt;br /&gt;
|Author=Chandrashekhar Shankarrao Nasurade, Vamsi Krishna Nathani&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Ongoing&lt;br /&gt;
}}&lt;br /&gt;
This project will be about designing a capability for &amp;quot;common sense&amp;quot; in a robot, within the context of helping an elderly person at home with cooking.&lt;br /&gt;
Our motivation is that robots could be helpful to people and contribute to their well-being, health, and quality of life, but first some major challenges must be overcome. One challenge is that everyday tasks which humans perform can be complex and confusing. This is especially a problem with elderly persons with declined physical and cognitive abilities (e.g. dementia), when the person can no longer function by themselves but requires support, which is sometimes not available from other humans. To support such a person, we believe a robot should have some degree of what we call here &amp;quot;common sense&amp;quot;; related to robustness, this is an ability to function correctly in the presence of some errors which a healthy adult person can typically detect. &lt;br /&gt;
In the case of cooking, to cook in a healthy and good way, this means that a robot should be able to detect and compensate for errors in three main facets of cooking: the recipe, the tools used, and the ingredients.&lt;br /&gt;
For example, a human can determine that if a recipe for one person calls for 10kg of salt, this is probably a mistake, and conclude from experience with similar recipes that it should be 10g of salt.&lt;br /&gt;
If a recipe calls for a spatula but instead a knife has been provided, a human can determine that this is wrong and seek from experience some tool which is more appropriate.&lt;br /&gt;
If instead of salt, a bag of sugar has been provided, a human can determine that this is wrong and seek more appropriate ingredients.&lt;br /&gt;
To address the challenge, in this project the robot will learn a model of common sense regarding these three main facets of cooking by unsupervised learning (forming clusters, detecting anomalies, and inferring how anomalies can be rectified).&lt;br /&gt;
The robot used will be Baxter on a Ridgeback mobile base, which will be shared with other students and researchers.&lt;br /&gt;
The evaluation will measure the degree to which the robot can complete a simple cooking task in the presence of various errors which we introduce into the cooking process.&lt;br /&gt;
&lt;br /&gt;
Timeline:&lt;br /&gt;
&lt;br /&gt;
January-February: Preparation: literature review; setting up basic capability for a robot to do a simple cooking task (formulate instructions from a recipe, detect tools and ingredients, and carry out instructions)&lt;br /&gt;
&lt;br /&gt;
March-April: Main point:learning a common sense model to avoid errors&lt;br /&gt;
&lt;br /&gt;
May: Evaluation, writing/presenting&lt;br /&gt;
&lt;br /&gt;
Expected results: a thesis/report, code, video (we also hope to offer some food cooked by the robot)&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Robot_Artwork&amp;diff=3384</id>
		<title>Robot Artwork</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Robot_Artwork&amp;diff=3384"/>
		<updated>2016-11-28T12:07:26Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Capability for a robot to paint to express human feelings&lt;br /&gt;
|Keywords=Robots, Artwork, Emotion, BMI, Pattern recognition, Computer vision&lt;br /&gt;
|TimeFrame=2017/1/1-2017/8/30&lt;br /&gt;
|References=-robot artwork&lt;br /&gt;
&lt;br /&gt;
Michael Raschke, Katja Mombaur, Alexander Schubert. An optimisation-based robot platform for the generation of action paintings. Int. J. Arts and Technology, Vol. 4, No. 2, 2011 181&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-emotion recognition from eeg&lt;br /&gt;
&lt;br /&gt;
Yuan-Pin Lin, Chi-Hong Wang, Tzyy-Ping Jung, Tien-Lin Wu, Shyh-Kang Jeng, Jeng-Ren Duann, and Jyh-Horng Chen. EEG-Based Emotion Recognition in Music Listening. &lt;br /&gt;
IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 57, NO. 7, JULY 2010&lt;br /&gt;
|Prerequisites=Interest in robots and humans (artwork and emotions);&lt;br /&gt;
Willingness to work to help others, also by competing in a contest with a chance to earn prestige and money for charities and the university;&lt;br /&gt;
Software skills&lt;br /&gt;
|Supervisor=Martin Cooney, Maria Luiza Recena Menezes,&lt;br /&gt;
|Examiner=Slawomir, Antanas&lt;br /&gt;
|Author=Daniel Westerlund, Sowmya Narasimman&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Ongoing&lt;br /&gt;
}}&lt;br /&gt;
There is something profound about art.&lt;br /&gt;
Part of it may be the freedom to visit new &amp;quot;spaces&amp;quot; and feel in a new way.&lt;br /&gt;
Another part may be the way it captures aspects of our personalities, our priorities, our feelings, our desires toward others, our motives for doing art, how we wish to be treated, and our understanding of ourselves and the things around us--in a visceral way which can be communicated to others.&lt;br /&gt;
(For example, young children&amp;#039;s artwork typically proceeds in stages, from scribbles which feel good, to &amp;quot;functional&amp;quot; symbols like cylinders with minimal appendages to represent humans, to &amp;quot;logical&amp;quot; renderings in which all body parts are drawn, to &amp;quot;realistic&amp;quot; shaded renderings; likewise, differences have been found in adults in drawings of nerve cells by students and experts.) &lt;br /&gt;
In this project, the main idea is to have a robot create a painting based on something recognized from a human.&lt;br /&gt;
There are many possibilities to do something useful in this context.&lt;br /&gt;
For example, art therapy can help people to feel better.&lt;br /&gt;
Using sensors, the feelings of people with autism can be conveyed, who may have problems communicating feelings socially.&lt;br /&gt;
Using a robot Parkinson&amp;#039;s disease patients could draw straight lines without tremors.&lt;br /&gt;
Feedback can be incorporated while painting to make changes; a caricature could be made more or less exaggerated, or a sketch more or less abstract, based on if a human looks happy or unhappy with progress. &lt;br /&gt;
Neural networks could be an interesting mechanism to express emotions, like Google&amp;#039;s work with &amp;quot;inceptionism&amp;quot;...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Some notes:&lt;br /&gt;
*One goal of this project is to produce a submission to the RobotArt contest (http://robotart.org/, founded by Andrew Conru), which offers $100,000USD in various prizes (25% university, 75% charities in the US). The deadline for submitting art will be April 15th, with a decision in May.&lt;br /&gt;
*The robot used will (probably) be the Baxter robot, an advanced humanoid robot with two seven degree-of-freedom arms; the student will receive time each week to work with the robot, but time with the robot will also be shared with other students and researchers as needed.&lt;br /&gt;
*The sensor used will (probably) be a brain-machine interface.&lt;br /&gt;
*We will seek to obtain advice from various experts, e.g., in art, psychology, or computer vision.&lt;br /&gt;
*The research focus will be decided after initial discussion.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Preliminary workplan:&lt;br /&gt;
 &lt;br /&gt;
*Learning how to use the robot and sensor.&lt;br /&gt;
*Literature review.&lt;br /&gt;
*Building a system.&lt;br /&gt;
*Evaluating the system&lt;br /&gt;
*Working on thesis and presentation&lt;br /&gt;
&lt;br /&gt;
Focus on software or hardware?: software&lt;br /&gt;
&lt;br /&gt;
Expected results: a thesis, code, video, submission to RobotArt Competition &lt;br /&gt;
(it would also be nice, but not required, if the student is willing to also write a six page shortened version of the thesis, to be submitted to a conference)&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Robot_Cooking&amp;diff=3375</id>
		<title>Robot Cooking</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Robot_Cooking&amp;diff=3375"/>
		<updated>2016-11-15T10:47:04Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Common sense for a robot to cook healthy food&lt;br /&gt;
|Keywords=Robots, Healthcare, Visual Recognition&lt;br /&gt;
|TimeFrame=2017/1/1-2017/8/30&lt;br /&gt;
|References=-robot cooking &lt;br /&gt;
&lt;br /&gt;
Christian Østergaard Laursen, Søren Pedersen, Timothy Merritt, Ole Caprani. Robot-Supported Food Experiences: Exploring Aesthetic Plating with Design Prototypes. In J.T.K.V. Koh et al. (Eds.): Cultural Robotics 2015, LNAI 9549, pp. 107–130, 2016. DOI: 10.1007/978-3-319-42945-8 10 Springer International Publishing Switzerland 2016.012.241&lt;br /&gt;
&lt;br /&gt;
-common sense acquisition&lt;br /&gt;
&lt;br /&gt;
Rakesh Gupta, Mykel J. Kochenderfer. Common Sense Data Acquisition for Indoor Mobile Robots. ROBOTICS&lt;br /&gt;
|Prerequisites=Strong multidisciplinary interest, strong work ethic, software (also ability to work with libraries), possibly some small work with hardware/electronics&lt;br /&gt;
|Supervisor=Martin Cooney,&lt;br /&gt;
|Examiner=Antanas, Slawomir&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
This project will be about designing a capability for &amp;quot;common sense&amp;quot; in a robot, within the context of helping an elderly person at home with cooking.&lt;br /&gt;
Our motivation is that robots could be helpful to people and contribute to their well-being, health, and quality of life, but first some major challenges must be overcome. One challenge is that everyday tasks which humans perform can be complex and confusing. This is especially a problem with elderly persons with declined physical and cognitive abilities (e.g. dementia), when the person can no longer function by themselves but requires support, which is sometimes not available from other humans. To support such a person, we believe a robot should have some degree of what we call here &amp;quot;common sense&amp;quot;; related to robustness, this is an ability to function correctly in the presence of some errors which a healthy adult person can typically detect. &lt;br /&gt;
In the case of cooking, to cook in a healthy and good way, this means that a robot should be able to detect and compensate for errors in three main facets of cooking: the recipe, the tools used, and the ingredients.&lt;br /&gt;
For example, a human can determine that if a recipe for one person calls for 10kg of salt, this is probably a mistake, and conclude from experience with similar recipes that it should be 10g of salt.&lt;br /&gt;
If a recipe calls for a spatula but instead a knife has been provided, a human can determine that this is wrong and seek from experience some tool which is more appropriate.&lt;br /&gt;
If instead of salt, a bag of sugar has been provided, a human can determine that this is wrong and seek more appropriate ingredients.&lt;br /&gt;
To address the challenge, in this project the robot will learn a model of common sense regarding these three main facets of cooking by unsupervised learning (forming clusters, detecting anomalies, and inferring how anomalies can be rectified).&lt;br /&gt;
The robot used will be Baxter on a Ridgeback mobile base, which will be shared with other students and researchers.&lt;br /&gt;
The evaluation will measure the degree to which the robot can complete a simple cooking task in the presence of various errors which we introduce into the cooking process.&lt;br /&gt;
&lt;br /&gt;
Timeline:&lt;br /&gt;
&lt;br /&gt;
January-February: Preparation: literature review; setting up basic capability for a robot to do a simple cooking task (formulate instructions from a recipe, detect tools and ingredients, and carry out instructions)&lt;br /&gt;
&lt;br /&gt;
March-April: Main point:learning a common sense model to avoid errors&lt;br /&gt;
&lt;br /&gt;
May: Evaluation, writing/presenting&lt;br /&gt;
&lt;br /&gt;
Expected results: a thesis/report, code, video (we also hope to offer some food cooked by the robot)&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Robot_Cooking&amp;diff=3374</id>
		<title>Robot Cooking</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Robot_Cooking&amp;diff=3374"/>
		<updated>2016-11-15T10:46:50Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Common sense for a robot to cook healthy food&lt;br /&gt;
|Keywords=Robots, Healthcare, Visual Recognition&lt;br /&gt;
|TimeFrame=2017/1/1-2017/8/30&lt;br /&gt;
|References=-robot cooking &lt;br /&gt;
&lt;br /&gt;
Christian Østergaard Laursen, Søren Pedersen, Timothy Merritt, Ole Caprani. Robot-Supported Food Experiences: Exploring Aesthetic Plating with Design Prototypes. In J.T.K.V. Koh et al. (Eds.): Cultural Robotics 2015, LNAI 9549, pp. 107–130, 2016. DOI: 10.1007/978-3-319-42945-8 10 Springer International Publishing Switzerland 2016.012.241&lt;br /&gt;
&lt;br /&gt;
-common sense acquisition&lt;br /&gt;
&lt;br /&gt;
Rakesh Gupta, Mykel J. Kochenderfer. Common Sense Data Acquisition for Indoor Mobile Robots. ROBOTICS&lt;br /&gt;
|Prerequisites=Strong multidisciplinary interest, strong work ethic, software (also ability to work with libraries), possibly some small work with hardware/electronics&lt;br /&gt;
|Supervisor=Martin Cooney,&lt;br /&gt;
|Examiner=Antanas, Slawomir&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Ongoing&lt;br /&gt;
}}&lt;br /&gt;
This project will be about designing a capability for &amp;quot;common sense&amp;quot; in a robot, within the context of helping an elderly person at home with cooking.&lt;br /&gt;
Our motivation is that robots could be helpful to people and contribute to their well-being, health, and quality of life, but first some major challenges must be overcome. One challenge is that everyday tasks which humans perform can be complex and confusing. This is especially a problem with elderly persons with declined physical and cognitive abilities (e.g. dementia), when the person can no longer function by themselves but requires support, which is sometimes not available from other humans. To support such a person, we believe a robot should have some degree of what we call here &amp;quot;common sense&amp;quot;; related to robustness, this is an ability to function correctly in the presence of some errors which a healthy adult person can typically detect. &lt;br /&gt;
In the case of cooking, to cook in a healthy and good way, this means that a robot should be able to detect and compensate for errors in three main facets of cooking: the recipe, the tools used, and the ingredients.&lt;br /&gt;
For example, a human can determine that if a recipe for one person calls for 10kg of salt, this is probably a mistake, and conclude from experience with similar recipes that it should be 10g of salt.&lt;br /&gt;
If a recipe calls for a spatula but instead a knife has been provided, a human can determine that this is wrong and seek from experience some tool which is more appropriate.&lt;br /&gt;
If instead of salt, a bag of sugar has been provided, a human can determine that this is wrong and seek more appropriate ingredients.&lt;br /&gt;
To address the challenge, in this project the robot will learn a model of common sense regarding these three main facets of cooking by unsupervised learning (forming clusters, detecting anomalies, and inferring how anomalies can be rectified).&lt;br /&gt;
The robot used will be Baxter on a Ridgeback mobile base, which will be shared with other students and researchers.&lt;br /&gt;
The evaluation will measure the degree to which the robot can complete a simple cooking task in the presence of various errors which we introduce into the cooking process.&lt;br /&gt;
&lt;br /&gt;
Timeline:&lt;br /&gt;
&lt;br /&gt;
January-February: Preparation: literature review; setting up basic capability for a robot to do a simple cooking task (formulate instructions from a recipe, detect tools and ingredients, and carry out instructions)&lt;br /&gt;
&lt;br /&gt;
March-April: Main point:learning a common sense model to avoid errors&lt;br /&gt;
&lt;br /&gt;
May: Evaluation, writing/presenting&lt;br /&gt;
&lt;br /&gt;
Expected results: a thesis/report, code, video (we also hope to offer some food cooked by the robot)&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Robot_Artwork&amp;diff=3373</id>
		<title>Robot Artwork</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Robot_Artwork&amp;diff=3373"/>
		<updated>2016-11-15T10:46:20Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Capability for a robot to paint to express human feelings&lt;br /&gt;
|Keywords=Robots, Artwork, Emotion, BMI, Pattern recognition, Computer vision&lt;br /&gt;
|TimeFrame=2017/1/1-2017/8/30&lt;br /&gt;
|References=-robot artwork&lt;br /&gt;
&lt;br /&gt;
Michael Raschke, Katja Mombaur, Alexander Schubert. An optimisation-based robot platform for the generation of action paintings. Int. J. Arts and Technology, Vol. 4, No. 2, 2011 181&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-emotion recognition from eeg&lt;br /&gt;
&lt;br /&gt;
Yuan-Pin Lin, Chi-Hong Wang, Tzyy-Ping Jung, Tien-Lin Wu, Shyh-Kang Jeng, Jeng-Ren Duann, and Jyh-Horng Chen. EEG-Based Emotion Recognition in Music Listening. &lt;br /&gt;
IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 57, NO. 7, JULY 2010&lt;br /&gt;
|Prerequisites=Interest in robots and humans (artwork and emotions);&lt;br /&gt;
Willingness to work to help others, also by competing in a contest with a chance to earn prestige and money for charities and the university;&lt;br /&gt;
Software skills&lt;br /&gt;
|Supervisor=Martin Cooney, Maria Luiza Recena Menezes,&lt;br /&gt;
|Examiner=Slawomir, Antanas&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
There is something profound about art.&lt;br /&gt;
Part of it may be the freedom to visit new &amp;quot;spaces&amp;quot; and feel in a new way.&lt;br /&gt;
Another part may be the way it captures aspects of our personalities, our priorities, our feelings, our desires toward others, our motives for doing art, how we wish to be treated, and our understanding of ourselves and the things around us--in a visceral way which can be communicated to others.&lt;br /&gt;
(For example, young children&amp;#039;s artwork typically proceeds in stages, from scribbles which feel good, to &amp;quot;functional&amp;quot; symbols like cylinders with minimal appendages to represent humans, to &amp;quot;logical&amp;quot; renderings in which all body parts are drawn, to &amp;quot;realistic&amp;quot; shaded renderings; likewise, differences have been found in adults in drawings of nerve cells by students and experts.) &lt;br /&gt;
In this project, the main idea is to have a robot create a painting based on something recognized from a human.&lt;br /&gt;
There are many possibilities to do something useful in this context.&lt;br /&gt;
For example, art therapy can help people to feel better.&lt;br /&gt;
Using sensors, the feelings of people with autism can be conveyed, who may have problems communicating feelings socially.&lt;br /&gt;
Using a robot Parkinson&amp;#039;s disease patients could draw straight lines without tremors.&lt;br /&gt;
Feedback can be incorporated while painting to make changes; a caricature could be made more or less exaggerated, or a sketch more or less abstract, based on if a human looks happy or unhappy with progress. &lt;br /&gt;
Neural networks could be an interesting mechanism to express emotions, like Google&amp;#039;s work with &amp;quot;inceptionism&amp;quot;...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Some notes:&lt;br /&gt;
*One goal of this project is to produce a submission to the RobotArt contest (http://robotart.org/, founded by Andrew Conru), which offers $100,000USD in various prizes (25% university, 75% charities in the US). The deadline for submitting art will be April 15th, with a decision in May.&lt;br /&gt;
*The robot used will (probably) be the Baxter robot, an advanced humanoid robot with two seven degree-of-freedom arms; the student will receive time each week to work with the robot, but time with the robot will also be shared with other students and researchers as needed.&lt;br /&gt;
*The sensor used will (probably) be a brain-machine interface.&lt;br /&gt;
*We will seek to obtain advice from various experts, e.g., in art, psychology, or computer vision.&lt;br /&gt;
*The research focus will be decided after initial discussion.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Preliminary workplan:&lt;br /&gt;
 &lt;br /&gt;
*Learning how to use the robot and sensor.&lt;br /&gt;
*Literature review.&lt;br /&gt;
*Building a system.&lt;br /&gt;
*Evaluating the system&lt;br /&gt;
*Working on thesis and presentation&lt;br /&gt;
&lt;br /&gt;
Focus on software or hardware?: software&lt;br /&gt;
&lt;br /&gt;
Expected results: a thesis, code, video, submission to RobotArt Competition &lt;br /&gt;
(it would also be nice, but not required, if the student is willing to also write a six page shortened version of the thesis, to be submitted to a conference)&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Robot_First_Aid&amp;diff=3342</id>
		<title>Robot First Aid</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Robot_First_Aid&amp;diff=3342"/>
		<updated>2016-11-06T14:57:26Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: Marcoo moved page Robot First Aid to Robot Cooking: students have proposed a different topic&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Robot Cooking]]&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Robot_Cooking&amp;diff=3341</id>
		<title>Robot Cooking</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Robot_Cooking&amp;diff=3341"/>
		<updated>2016-11-06T14:57:26Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: Marcoo moved page Robot First Aid to Robot Cooking: students have proposed a different topic&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Common sense for a robot to cook healthy food&lt;br /&gt;
|Keywords=Robots, Healthcare, Visual Recognition&lt;br /&gt;
|TimeFrame=2017/1/1-2017/8/30&lt;br /&gt;
|References=-robot cooking &lt;br /&gt;
&lt;br /&gt;
Christian Østergaard Laursen, Søren Pedersen, Timothy Merritt, Ole Caprani. Robot-Supported Food Experiences: Exploring Aesthetic Plating with Design Prototypes. In J.T.K.V. Koh et al. (Eds.): Cultural Robotics 2015, LNAI 9549, pp. 107–130, 2016. DOI: 10.1007/978-3-319-42945-8 10 Springer International Publishing Switzerland 2016.012.241&lt;br /&gt;
&lt;br /&gt;
-common sense acquisition&lt;br /&gt;
&lt;br /&gt;
Rakesh Gupta, Mykel J. Kochenderfer. Common Sense Data Acquisition for Indoor Mobile Robots. ROBOTICS&lt;br /&gt;
|Prerequisites=Strong multidisciplinary interest, strong work ethic, software (also ability to work with libraries), possibly some small work with hardware/electronics&lt;br /&gt;
|Supervisor=Martin Cooney,&lt;br /&gt;
|Examiner=Antanas, Slawomir&lt;br /&gt;
|Author=Chandrashekhar Shankarrao Nasurade, Vamsi Krishna Nathani&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Ongoing&lt;br /&gt;
}}&lt;br /&gt;
This project will be about designing a capability for &amp;quot;common sense&amp;quot; in a robot, within the context of helping an elderly person at home with cooking.&lt;br /&gt;
Our motivation is that robots could be helpful to people and contribute to their well-being, health, and quality of life, but first some major challenges must be overcome. One challenge is that everyday tasks which humans perform can be complex and confusing. This is especially a problem with elderly persons with declined physical and cognitive abilities (e.g. dementia), when the person can no longer function by themselves but requires support, which is sometimes not available from other humans. To support such a person, we believe a robot should have some degree of what we call here &amp;quot;common sense&amp;quot;; related to robustness, this is an ability to function correctly in the presence of some errors which a healthy adult person can typically detect. &lt;br /&gt;
In the case of cooking, to cook in a healthy and good way, this means that a robot should be able to detect and compensate for errors in three main facets of cooking: the recipe, the tools used, and the ingredients.&lt;br /&gt;
For example, a human can determine that if a recipe for one person calls for 10kg of salt, this is probably a mistake, and conclude from experience with similar recipes that it should be 10g of salt.&lt;br /&gt;
If a recipe calls for a spatula but instead a knife has been provided, a human can determine that this is wrong and seek from experience some tool which is more appropriate.&lt;br /&gt;
If instead of salt, a bag of sugar has been provided, a human can determine that this is wrong and seek more appropriate ingredients.&lt;br /&gt;
To address the challenge, in this project the robot will learn a model of common sense regarding these three main facets of cooking by unsupervised learning (forming clusters, detecting anomalies, and inferring how anomalies can be rectified).&lt;br /&gt;
The robot used will be Baxter on a Ridgeback mobile base, which will be shared with other students and researchers.&lt;br /&gt;
The evaluation will measure the degree to which the robot can complete a simple cooking task in the presence of various errors which we introduce into the cooking process.&lt;br /&gt;
&lt;br /&gt;
Timeline:&lt;br /&gt;
&lt;br /&gt;
January-February: Preparation: literature review; setting up basic capability for a robot to do a simple cooking task (formulate instructions from a recipe, detect tools and ingredients, and carry out instructions)&lt;br /&gt;
&lt;br /&gt;
March-April: Main point:learning a common sense model to avoid errors&lt;br /&gt;
&lt;br /&gt;
May: Evaluation, writing/presenting&lt;br /&gt;
&lt;br /&gt;
Expected results: a thesis/report, code, video (we also hope to offer some food cooked by the robot)&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Robot_Cooking&amp;diff=3340</id>
		<title>Robot Cooking</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Robot_Cooking&amp;diff=3340"/>
		<updated>2016-11-06T14:56:24Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Common sense for a robot to cook healthy food&lt;br /&gt;
|Keywords=Robots, Healthcare, Visual Recognition&lt;br /&gt;
|TimeFrame=2017/1/1-2017/8/30&lt;br /&gt;
|References=-robot cooking &lt;br /&gt;
&lt;br /&gt;
Christian Østergaard Laursen, Søren Pedersen, Timothy Merritt, Ole Caprani. Robot-Supported Food Experiences: Exploring Aesthetic Plating with Design Prototypes. In J.T.K.V. Koh et al. (Eds.): Cultural Robotics 2015, LNAI 9549, pp. 107–130, 2016. DOI: 10.1007/978-3-319-42945-8 10 Springer International Publishing Switzerland 2016.012.241&lt;br /&gt;
&lt;br /&gt;
-common sense acquisition&lt;br /&gt;
&lt;br /&gt;
Rakesh Gupta, Mykel J. Kochenderfer. Common Sense Data Acquisition for Indoor Mobile Robots. ROBOTICS&lt;br /&gt;
|Prerequisites=Strong multidisciplinary interest, strong work ethic, software (also ability to work with libraries), possibly some small work with hardware/electronics&lt;br /&gt;
|Supervisor=Martin Cooney,&lt;br /&gt;
|Examiner=Antanas, Slawomir&lt;br /&gt;
|Author=Chandrashekhar Shankarrao Nasurade, Vamsi Krishna Nathani&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Ongoing&lt;br /&gt;
}}&lt;br /&gt;
This project will be about designing a capability for &amp;quot;common sense&amp;quot; in a robot, within the context of helping an elderly person at home with cooking.&lt;br /&gt;
Our motivation is that robots could be helpful to people and contribute to their well-being, health, and quality of life, but first some major challenges must be overcome. One challenge is that everyday tasks which humans perform can be complex and confusing. This is especially a problem with elderly persons with declined physical and cognitive abilities (e.g. dementia), when the person can no longer function by themselves but requires support, which is sometimes not available from other humans. To support such a person, we believe a robot should have some degree of what we call here &amp;quot;common sense&amp;quot;; related to robustness, this is an ability to function correctly in the presence of some errors which a healthy adult person can typically detect. &lt;br /&gt;
In the case of cooking, to cook in a healthy and good way, this means that a robot should be able to detect and compensate for errors in three main facets of cooking: the recipe, the tools used, and the ingredients.&lt;br /&gt;
For example, a human can determine that if a recipe for one person calls for 10kg of salt, this is probably a mistake, and conclude from experience with similar recipes that it should be 10g of salt.&lt;br /&gt;
If a recipe calls for a spatula but instead a knife has been provided, a human can determine that this is wrong and seek from experience some tool which is more appropriate.&lt;br /&gt;
If instead of salt, a bag of sugar has been provided, a human can determine that this is wrong and seek more appropriate ingredients.&lt;br /&gt;
To address the challenge, in this project the robot will learn a model of common sense regarding these three main facets of cooking by unsupervised learning (forming clusters, detecting anomalies, and inferring how anomalies can be rectified).&lt;br /&gt;
The robot used will be Baxter on a Ridgeback mobile base, which will be shared with other students and researchers.&lt;br /&gt;
The evaluation will measure the degree to which the robot can complete a simple cooking task in the presence of various errors which we introduce into the cooking process.&lt;br /&gt;
&lt;br /&gt;
Timeline:&lt;br /&gt;
&lt;br /&gt;
January-February: Preparation: literature review; setting up basic capability for a robot to do a simple cooking task (formulate instructions from a recipe, detect tools and ingredients, and carry out instructions)&lt;br /&gt;
&lt;br /&gt;
March-April: Main point:learning a common sense model to avoid errors&lt;br /&gt;
&lt;br /&gt;
May: Evaluation, writing/presenting&lt;br /&gt;
&lt;br /&gt;
Expected results: a thesis/report, code, video (we also hope to offer some food cooked by the robot)&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Robot_Cooking&amp;diff=3339</id>
		<title>Robot Cooking</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Robot_Cooking&amp;diff=3339"/>
		<updated>2016-11-06T14:54:56Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Common sense for a robot to cook healthy food&lt;br /&gt;
|Keywords=Robots, Healthcare, Visual Recognition&lt;br /&gt;
|TimeFrame=2017/1/1-2017/8/30&lt;br /&gt;
|References=-robot cooking &lt;br /&gt;
&lt;br /&gt;
Christian Østergaard Laursen, Søren Pedersen, Timothy Merritt, Ole Caprani. Robot-Supported Food Experiences: Exploring Aesthetic Plating with Design Prototypes. In J.T.K.V. Koh et al. (Eds.): Cultural Robotics 2015, LNAI 9549, pp. 107–130, 2016. DOI: 10.1007/978-3-319-42945-8 10 Springer International Publishing Switzerland 2016.012.241&lt;br /&gt;
&lt;br /&gt;
-common sense acquisition&lt;br /&gt;
&lt;br /&gt;
Rakesh Gupta, Mykel J. Kochenderfer. Common Sense Data Acquisition for Indoor Mobile Robots. ROBOTICS&lt;br /&gt;
|Prerequisites=Strong multidisciplinary interest, strong work ethic, software (also ability to work with libraries), possibly some small work with hardware/electronics&lt;br /&gt;
|Supervisor=Martin Cooney,&lt;br /&gt;
|Examiner=Antanas, Slawomir&lt;br /&gt;
|Author=Chandrashekhar Shankarrao Nasurade, Vamsi Krishna Nathani&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Ongoing&lt;br /&gt;
}}&lt;br /&gt;
This project will be about designing a capability for &amp;quot;common sense&amp;quot; in a robot, within the context of helping an elderly person at home with cooking.&lt;br /&gt;
Our motivation is that robots could be helpful to people and contribute to their well-being, health, and quality of life, but first some major challenges must be overcome. One challenge is that everyday tasks which humans perform can be complex and confusing. This is especially a problem with elderly persons with declined physical and cognitive abilities (e.g. dementia), when the person can no longer function by themselves but requires support, which is sometimes not available from other humans. To support such a person, we believe a robot should have some degree of what we call here &amp;quot;common sense&amp;quot;; related to robustness, this is an ability to function correctly in the presence of some errors which a healthy adult person can typically detect. &lt;br /&gt;
In the case of cooking, to cook in a healthy and good way, this means that a robot should be able to detect and compensate for errors in three main facets of cooking: the recipe, the tools used, and the ingredients.&lt;br /&gt;
For example, a human can determine that if a recipe for one person calls for 10kg of salt, this is probably a mistake, and conclude from experience with similar recipes that it should be 10g of salt.&lt;br /&gt;
If a recipe calls for a spatula but instead a knife has been provided, a human can determine that this is wrong and seek from experience some tool which is more appropriate.&lt;br /&gt;
If instead of salt, a bag of sugar has been provided, a human can determine that this is wrong and seek more appropriate ingredients.&lt;br /&gt;
To address the challenge, in this project the robot will learn a model of common sense regarding these three main facets of cooking by unsupervised learning (forming clusters, detecting anomalies, and inferring how anomalies can be rectified).&lt;br /&gt;
The robot used will be Baxter on a Ridgeback mobile base, which will be shared with other students and researchers.&lt;br /&gt;
The evaluation will measure the degree to which the robot can complete a simple cooking task in the presence of various errors which we introduce into the cooking process.&lt;br /&gt;
&lt;br /&gt;
Timeline:&lt;br /&gt;
January-February: Preparation: literature review; setting up basic capability for a robot to do a simple cooking task (formulate instructions from a recipe, detect tools and ingredients, and carry out instructions)&lt;br /&gt;
March-April: Main point:learning a common sense model to avoid errors&lt;br /&gt;
May: Evaluation, writing/presenting&lt;br /&gt;
&lt;br /&gt;
Expected results: a thesis/report, code, video (we also hope to offer some food cooked by the robot)&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Robot_Cooking&amp;diff=3338</id>
		<title>Robot Cooking</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Robot_Cooking&amp;diff=3338"/>
		<updated>2016-11-06T14:54:22Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Common sense for a robot to cook healthy food&lt;br /&gt;
|Keywords=Robots,Healthcare, Visual Recognition&lt;br /&gt;
|TimeFrame=2017/1/1-2017/8/30&lt;br /&gt;
|References=-robot cooking &lt;br /&gt;
Christian Østergaard Laursen, Søren Pedersen, Timothy Merritt, Ole Caprani. Robot-Supported Food Experiences: Exploring Aesthetic Plating with Design Prototypes. In J.T.K.V. Koh et al. (Eds.): Cultural Robotics 2015, LNAI 9549, pp. 107–130, 2016. DOI: 10.1007/978-3-319-42945-8 10 Springer International Publishing Switzerland 2016.012.241&lt;br /&gt;
&lt;br /&gt;
-common sense acquisition&lt;br /&gt;
Rakesh Gupta and Mykel J. Kochenderfer. Common Sense Data Acquisition for Indoor Mobile Robots. ROBOTICS&lt;br /&gt;
|Prerequisites=Strong multidisciplinary interest, strong work ethic, software (also ability to work with libraries), possibly some small work with hardware/electronics&lt;br /&gt;
|Supervisor=Martin Cooney,&lt;br /&gt;
|Examiner=Antanas, Slawomir&lt;br /&gt;
|Author=Chandrashekhar Shankarrao Nasurade, Vamsi Krishna Nathani&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Ongoing&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
This project will be about designing a capability for &amp;quot;common sense&amp;quot; in a robot, within the context of helping an elderly person at home with cooking.&lt;br /&gt;
Our motivation is that robots could be helpful to people and contribute to their well-being, health, and quality of life, but first some major challenges must be overcome. One challenge is that everyday tasks which humans perform can be complex and confusing. This is especially a problem with elderly persons with declined physical and cognitive abilities (e.g. dementia), when the person can no longer function by themselves but requires support, which is sometimes not available from other humans. To support such a person, we believe a robot should have some degree of what we call here &amp;quot;common sense&amp;quot;; related to robustness, this is an ability to function correctly in the presence of some errors which a healthy adult person can typically detect. &lt;br /&gt;
In the case of cooking, to cook in a healthy and good way, this means that a robot should be able to detect and compensate for errors in three main facets of cooking: the recipe, the tools used, and the ingredients.&lt;br /&gt;
For example, a human can determine that if a recipe for one person calls for 10kg of salt, this is probably a mistake, and conclude from experience with similar recipes that it should be 10g of salt.&lt;br /&gt;
If a recipe calls for a spatula but instead a knife has been provided, a human can determine that this is wrong and seek from experience some tool which is more appropriate.&lt;br /&gt;
If instead of salt, a bag of sugar has been provided, a human can determine that this is wrong and seek more appropriate ingredients.&lt;br /&gt;
To address the challenge, in this project the robot will learn a model of common sense regarding these three main facets of cooking by unsupervised learning (forming clusters, detecting anomalies, and inferring how anomalies can be rectified).&lt;br /&gt;
The robot used will be Baxter on a Ridgeback mobile base, which will be shared with other students and researchers.&lt;br /&gt;
The evaluation will measure the degree to which the robot can complete a simple cooking task in the presence of various errors which we introduce into the cooking process.&lt;br /&gt;
&lt;br /&gt;
Timeline:&lt;br /&gt;
January-February: Preparation: literature review; setting up basic capability for a robot to do a simple cooking task (formulate instructions from a recipe, detect tools and ingredients, and carry out instructions)&lt;br /&gt;
March-April: Main point:learning a common sense model to avoid errors&lt;br /&gt;
May: Evaluation, writing/presenting&lt;br /&gt;
&lt;br /&gt;
Expected results: a thesis/report, code, video (we also hope to offer some food cooked by the robot)&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Robot_Cooking&amp;diff=3330</id>
		<title>Robot Cooking</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Robot_Cooking&amp;diff=3330"/>
		<updated>2016-10-29T18:21:17Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Capability to physically perform first aid for an autonomous (mobile) robot&lt;br /&gt;
|Keywords=Robots, First Aid, Visual Recognition, Healthcare&lt;br /&gt;
|TimeFrame=2017/1/1-2017/8/30&lt;br /&gt;
|References=-first aid &lt;br /&gt;
&lt;br /&gt;
Travers, A. H., Rea, T. D., Bobrow, B. J., et al. (2010). Part 4: CPR overview &lt;br /&gt;
2010 American Heart Association guidelines for cardiopulmonary resuscitation and &lt;br /&gt;
emergency cardiovascular care. Circulation, 122(18 suppl 3), S676-S684. &lt;br /&gt;
&lt;br /&gt;
-pose recognition &lt;br /&gt;
&lt;br /&gt;
Jamie Shotton, Ross Girshick, Andrew Fitzgibbon, Toby Sharp, Mat Cook, Mark Finocchio, Richard Moore, Pushmeet Kohli, Antonio Criminisi, Alex Kipman, Andrew Blake, &amp;quot;Efficient Human Pose Estimation from Single Depth Images&amp;quot;, IEEE Transactions on Pattern Analysis &amp;amp; Machine Intelligence, vol.35, no. 12, pp. 2821-2840, Dec. 2013, doi:10.1109/TPAMI.2012.241&lt;br /&gt;
|Prerequisites=Strong multidisciplinary interest, strong work ethic, software (also ability to work with libraries), possibly some hardware/electronics&lt;br /&gt;
|Supervisor=Martin Cooney,&lt;br /&gt;
|Examiner=Antanas, Slawomir&lt;br /&gt;
|Author=Chandrashekhar Shankarrao Nasurade, Vamsi Krishna Nathani &lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Ongoing&lt;br /&gt;
}}&lt;br /&gt;
The student will be part of a research drive at HH connected to the Intelligent Home to develop the capability for home robots to help save people&amp;#039;s lives in emergencies.&lt;br /&gt;
So far, although smart phones have acquired huge popularity, robots have failed to gain wide acceptance in people&amp;#039;s homes; the cause is that a very good reason for having a versatile (expensive) embodiment in a home has not yet been established.&lt;br /&gt;
We think first aid can be such a reason.&lt;br /&gt;
First, many of us know of someone who has fallen down and not received immediate medical attention; the repercussions can be very serious.&lt;br /&gt;
Second, first aid is a complex problem which requires a complex robotic embodiment; a simple vacuum cleaner-like machine is not enough to help.&lt;br /&gt;
Within this context, the student will use an excellent robot (Baxter from Rodney Brooks, on a Clearpath mobile base) to develop some capability to conduct first aid.&lt;br /&gt;
Some notes: &lt;br /&gt;
*In this exploratory project the robot will touch a mannequin in place of a real human, due to ethics and safety concerns.&lt;br /&gt;
*We will seek to obtain advice from various experts, e.g., in nursing and computer vision.&lt;br /&gt;
*The robot will also be used from time to time by others (researchers and students).&lt;br /&gt;
*Various possibilities exist for contributions to intelligent systems:&lt;br /&gt;
**an approach for carrying out basic first aid steps: e.g., chest compressions, adjusting pose to assist airway, and artificial ventilation. This could involve designing a special gripper and approach for using it. OR&lt;br /&gt;
**an approach for stemming bleeding by detecting bleeding (rate, location of wounds and arteries) and treating (cooling a wound with a peltier, applying pressure to the wound with pads or to arteries, and elevating limbs). OR&lt;br /&gt;
**an approach for assessing a person&amp;#039;s mental state/confusion such as AVPU or the Glascow coma scale, which requires the robot to execute multimodal behavior like touching a person and recognize reactions such as the manner of contraction of an arm.&lt;br /&gt;
&lt;br /&gt;
Goal: the robot will, while recognizing, conduct some basic steps for first aid on a mannequin (involving physical touch) &lt;br /&gt;
&lt;br /&gt;
Relation to some previous work: &lt;br /&gt;
&lt;br /&gt;
Several student projects have focused on preparation for robotic first aid.&lt;br /&gt;
In 2014, Meyr built a mobile robot system which could be commanded to go to the location of an emergency and ask humans if they were okay; Lazaro built a computer vision algorithm to distinguish fallen humans from objects.&lt;br /&gt;
In 2015, Zhang and Zhao built a recognition system using Kinect data to recognize health signs in an unconscious person related to first aid.&lt;br /&gt;
In 2016, Hotze created a mobile robot system to find fallen humans and locate on a map body parts important for first aid, to allow for cpr to be conducted.&lt;br /&gt;
The current project will draw insight from this previous work by students at HH and be the first to result in a system capable of physically conducting some form of first aid (physically touching a human mock-up).&lt;br /&gt;
&lt;br /&gt;
Possible Steps: &lt;br /&gt;
&lt;br /&gt;
•Preparation: (January) Becoming (more) familiar with OpenCV, ROS, (Arduino,) Baxter(/Ridgeback), getting robot/robot arms to move &lt;br /&gt;
&lt;br /&gt;
•(February) Basic literature review; last preparation, such as attaching secondary systems which might be needed such as touch sensors to the robot&lt;br /&gt;
&lt;br /&gt;
•Main part: (March-April) Sensing, Planning, Acting&lt;br /&gt;
For example:&lt;br /&gt;
(sense) the robot will use its sensors to locate important first aid points in 3d&lt;br /&gt;
(plan) the robot will use inverse kinematics to position its end effectors to do first aid&lt;br /&gt;
(act) the robot will move to do first aid, controlling its motions via feedback from sensors&lt;br /&gt;
&lt;br /&gt;
•After-work: (May) Evaluating; writing/presenting &lt;br /&gt;
&lt;br /&gt;
Possible evaluation metrics: e.g., objective measures (time, positioning, force); qualitative assessment by an expert; and/or comparison to a novice human first aider&lt;br /&gt;
&lt;br /&gt;
Expected results: a thesis/report, code, video &lt;br /&gt;
(for this project, the student is expected to be willing to also write a six page shortened version of the thesis, to be submitted to a conference)&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Robot_Artwork&amp;diff=3329</id>
		<title>Robot Artwork</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Robot_Artwork&amp;diff=3329"/>
		<updated>2016-10-29T18:19:26Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Capability for a robot to paint to express human feelings&lt;br /&gt;
|Keywords=Robots, Artwork, Emotion, BMI, Pattern recognition, Computer vision&lt;br /&gt;
|TimeFrame=2017/1/1-2017/8/30&lt;br /&gt;
|References=-robot artwork&lt;br /&gt;
&lt;br /&gt;
Michael Raschke, Katja Mombaur, Alexander Schubert. An optimisation-based robot platform for the generation of action paintings. Int. J. Arts and Technology, Vol. 4, No. 2, 2011 181&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-emotion recognition from eeg&lt;br /&gt;
&lt;br /&gt;
Yuan-Pin Lin, Chi-Hong Wang, Tzyy-Ping Jung, Tien-Lin Wu, Shyh-Kang Jeng, Jeng-Ren Duann, and Jyh-Horng Chen. EEG-Based Emotion Recognition in Music Listening. &lt;br /&gt;
IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 57, NO. 7, JULY 2010&lt;br /&gt;
|Prerequisites=Interest in robots and humans (artwork and emotions);&lt;br /&gt;
Willingness to work to help others, also by competing in a contest with a chance to earn prestige and money for charities and the university;&lt;br /&gt;
Software skills&lt;br /&gt;
|Supervisor=Martin Cooney, Maria Luiza Recena Menezes,&lt;br /&gt;
|Examiner=Slawomir, Antanas&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Ongoing&lt;br /&gt;
}}&lt;br /&gt;
There is something profound about art.&lt;br /&gt;
Part of it may be the freedom to visit new &amp;quot;spaces&amp;quot; and feel in a new way.&lt;br /&gt;
Another part may be the way it captures aspects of our personalities, our priorities, our feelings, our desires toward others, our motives for doing art, how we wish to be treated, and our understanding of ourselves and the things around us--in a visceral way which can be communicated to others.&lt;br /&gt;
(For example, young children&amp;#039;s artwork typically proceeds in stages, from scribbles which feel good, to &amp;quot;functional&amp;quot; symbols like cylinders with minimal appendages to represent humans, to &amp;quot;logical&amp;quot; renderings in which all body parts are drawn, to &amp;quot;realistic&amp;quot; shaded renderings; likewise, differences have been found in adults in drawings of nerve cells by students and experts.) &lt;br /&gt;
In this project, the main idea is to have a robot create a painting based on something recognized from a human.&lt;br /&gt;
There are many possibilities to do something useful in this context.&lt;br /&gt;
For example, art therapy can help people to feel better.&lt;br /&gt;
Using sensors, the feelings of people with autism can be conveyed, who may have problems communicating feelings socially.&lt;br /&gt;
Using a robot Parkinson&amp;#039;s disease patients could draw straight lines without tremors.&lt;br /&gt;
Feedback can be incorporated while painting to make changes; a caricature could be made more or less exaggerated, or a sketch more or less abstract, based on if a human looks happy or unhappy with progress. &lt;br /&gt;
Neural networks could be an interesting mechanism to express emotions, like Google&amp;#039;s work with &amp;quot;inceptionism&amp;quot;...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Some notes:&lt;br /&gt;
*One goal of this project is to produce a submission to the RobotArt contest (http://robotart.org/, founded by Andrew Conru), which offers $100,000USD in various prizes (25% university, 75% charities in the US). The deadline for submitting art will be April 15th, with a decision in May.&lt;br /&gt;
*The robot used will (probably) be the Baxter robot, an advanced humanoid robot with two seven degree-of-freedom arms; the student will receive time each week to work with the robot, but time with the robot will also be shared with other students and researchers as needed.&lt;br /&gt;
*The sensor used will (probably) be a brain-machine interface.&lt;br /&gt;
*We will seek to obtain advice from various experts, e.g., in art, psychology, or computer vision.&lt;br /&gt;
*The research focus will be decided after initial discussion.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Preliminary workplan:&lt;br /&gt;
 &lt;br /&gt;
*Learning how to use the robot and sensor.&lt;br /&gt;
*Literature review.&lt;br /&gt;
*Building a system.&lt;br /&gt;
*Evaluating the system&lt;br /&gt;
*Working on thesis and presentation&lt;br /&gt;
&lt;br /&gt;
Focus on software or hardware?: software&lt;br /&gt;
&lt;br /&gt;
Expected results: a thesis, code, video, submission to RobotArt Competition &lt;br /&gt;
(it would also be nice, but not required, if the student is willing to also write a six page shortened version of the thesis, to be submitted to a conference)&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=CAISR_Intelligent_Environment&amp;diff=2570</id>
		<title>CAISR Intelligent Environment</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=CAISR_Intelligent_Environment&amp;diff=2570"/>
		<updated>2016-06-13T16:25:32Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ResearchProjInfo&lt;br /&gt;
|Title=CAISR Intelligent Environment&lt;br /&gt;
|ContactInformation=Anita Sant&amp;#039;Anna&lt;br /&gt;
|ShortDescription=Supporting research and education&lt;br /&gt;
|Description=The CAISR Intelligent  Environment is platform for demonstrating and showcasing CAISR research; implementing and testing new technologies in a realistic environment;  supporting data collection and validation of research hypotheses; as well as providing a functional infrastructure for student projects.&lt;br /&gt;
|LogotypeFile=Logo IntelligentEnvironment.jpg&lt;br /&gt;
|ProjectResponsible=Anita Sant&amp;#039;Anna&lt;br /&gt;
|ProjectDetailsPDF=CAISRIntelligentEnvironment.pdf&lt;br /&gt;
|FundingMSEK=2&lt;br /&gt;
|ProjectStart=2014/01/01&lt;br /&gt;
|ProjectEnd=2019/12/31&lt;br /&gt;
|ApplicationArea=Health Technology&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=Centre for Health Technology Halland - HCH&lt;br /&gt;
}}&lt;br /&gt;
{{ShowResearchProject}}&lt;br /&gt;
&lt;br /&gt;
The intelligent environment acquires diverse information about a person’s activities and health, using a number of distributed fixed and mobile sensors.  This data can be analyzed with the help of aware intelligent systems in order to understand a situation and its context, assess the person’s health and wellbeing, support decision-making, detect sudden or slow deviations, provide appropriate services in emergency situations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Research projects ==&lt;br /&gt;
* [http://islab.hh.se/mediawiki/index.php/SA3L_-_Situation_Awareness_for_Ambient_Assisted_Living Situation Awareness for Ambient Assisted Living] – [http://islab.hh.se/mediawiki/index.php/Jens_Lundstr%C3%B6m Jens Lundström]&lt;br /&gt;
* A Database-Centric Architecture for Home-Based Health Monitoring – Wagner O. de Morais&lt;br /&gt;
* Impulse Radar in Health Applications – [http://islab.hh.se/mediawiki/index.php/Magnus_H%C3%A5llander Magnus Hållander]&lt;br /&gt;
&lt;br /&gt;
== Student projects ==&lt;br /&gt;
* [http://islab.hh.se/mediawiki/index.php/Mobile_Social_Robot_for_Healthcare Mobile Social Robots for healthcare] – Matthias Mayer (BSc)&lt;br /&gt;
* [http://islab.hh.se/mediawiki/index.php/Activity_monitoring_for_AAL Tracking more than one person in a smart environment using fixed sensors and a mobile social robot] – Jianyuan Ma &amp;amp; Yinan Qiu (MSc)&lt;br /&gt;
* [http://islab.hh.se/mediawiki/index.php/FirstResponse First response to emergency situation in a smart environment using a mobile social robot] – Gloria Lazzaro (MSc)&lt;br /&gt;
* [http://islab.hh.se/mediawiki/%22TROLL%22:_a_regenerating_robot A self-aware robot which can find a mirror in a home, detect anomalies in part of its appearance, and fix them] – Yinrong Ma (MSc)&lt;br /&gt;
* [http://islab.hh.se/mediawiki/Robotic_First_aid_response Awareness of health state of an unconscious fallen person toward enabling robotic first aid] – Tianyi Zhang &amp;amp; Yuwei Zhao (MSc)&lt;br /&gt;
* [http://islab.hh.se/mediawiki/Courteous_robot_guide_for_visitors_to_an_intelligent_home Courteous robot guide for visitors to an intelligent home] – Jiamiao Guo &amp;amp; Yu Zhao (BSc)&lt;br /&gt;
* [http://islab.hh.se/mediawiki/Assistance-seeking_strategy_for_a_flying_robot_during_a_healthcare_emergency_response Assistance-seeking strategy for a flying robot during a healthcare emergency response] – Jérémy Heyne (BSc, visiting from Polytech Clermont, France)&lt;br /&gt;
* [http://islab.hh.se/mediawiki/Detecting_Points_of_Interest_for_Robotic_First_Aid Detecting Points of Interest for Robotic First Aid] – Wolfgang Hotze (MSc, also enrolled at graduate school in Salzburg, Austria)&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Detecting_Points_of_Interest_for_Robotic_First_Aid&amp;diff=2569</id>
		<title>Detecting Points of Interest for Robotic First Aid</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Detecting_Points_of_Interest_for_Robotic_First_Aid&amp;diff=2569"/>
		<updated>2016-06-10T17:11:44Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Detecting Points of Interest for Robotic First Aid&lt;br /&gt;
|Keywords=Robot, First Aid, Visual Recognition&lt;br /&gt;
|TimeFrame=2016/1/1-2016/6/30&lt;br /&gt;
|References=-pose recognition&lt;br /&gt;
&lt;br /&gt;
Jamie Shotton, Ross Girshick, Andrew Fitzgibbon, Toby Sharp, Mat Cook, Mark Finocchio, Richard Moore, Pushmeet Kohli, Antonio Criminisi, Alex Kipman, Andrew Blake, &amp;quot;Efficient Human Pose Estimation from Single Depth Images&amp;quot;, IEEE Transactions on Pattern Analysis &amp;amp; Machine Intelligence, vol.35, no. 12, pp. 2821-2840, Dec. 2013, doi:10.1109/TPAMI.2012.241&lt;br /&gt;
 &lt;br /&gt;
-first aid&lt;br /&gt;
&lt;br /&gt;
Travers, A. H., Rea, T. D., Bobrow, B. J., et al. (2010). Part 4: CPR overview 2010 American Heart Association guidelines for cardiopulmonary resuscitation and emergency cardiovascular care. Circulation, 122(18 suppl 3), S676-S684.&lt;br /&gt;
|Prerequisites=some capability to work with software, and interest in visual recognition and robots&lt;br /&gt;
|Supervisor=Anita Sant&amp;#039;Anna, Martin Cooney,&lt;br /&gt;
|Examiner=Sławomir Nowaczyk, Antanas Verikas&lt;br /&gt;
|Author=Wolfgang Hotze&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Finished&lt;br /&gt;
}}&lt;br /&gt;
Robots will save lives by conducting first aid in homes and public places when medical experts are unavailable.&lt;br /&gt;
This project focuses on the case of an unconscious fall victim in a home.&lt;br /&gt;
First aid, as described by the acronym CABD, can involve chest compressions,&lt;br /&gt;
airway adjustments, artificial respiration, and treatment for bleeding.&lt;br /&gt;
&lt;br /&gt;
Goal: the capability for a robot to locate points of interest for first aid: the sternum for chest compressions, the chin for the airway, the mouth for breathing, and points of possible bleeding.&lt;br /&gt;
&lt;br /&gt;
Relation to some previous work: &lt;br /&gt;
&lt;br /&gt;
A previous project by Tianyi Zhang and Yuwei Zhao involved a robot assessing some simple factors related to a fallen person&amp;#039;s health state assuming depth cameras where placed directly above a person.&lt;br /&gt;
The current project does not require this assumption.&lt;br /&gt;
Another previous project involved a robot sent by an intelligent environment to ask a conscious fall victim if they&lt;br /&gt;
were okay, but could not recognize a person&amp;#039;s pose to conduct first aid.&lt;br /&gt;
The current project will allow both of these works to be brought together.&lt;br /&gt;
&lt;br /&gt;
Basic approach:&lt;br /&gt;
*a robot will move to acquire visual data of a fallen person using a camera located on its arm&lt;br /&gt;
*visual data will be processed to find points of interest for first aid&lt;br /&gt;
*the robot will somehow indicate the points it has found&lt;br /&gt;
&lt;br /&gt;
Steps:&lt;br /&gt;
*Becoming (more) familiar with OpenCV, ROS, Arduino&lt;br /&gt;
*Basic literature review&lt;br /&gt;
*Getting robot and robot arm to move&lt;br /&gt;
*Acquiring some data and labelling&lt;br /&gt;
*Building a recognition system (the focus of the project)&lt;br /&gt;
*Evaluating&lt;br /&gt;
*Writing/presenting&lt;br /&gt;
&lt;br /&gt;
Evaluation metric:&lt;br /&gt;
Distance of output locations of points of interest and ground truth&lt;br /&gt;
(Time until finding points.)&lt;br /&gt;
&lt;br /&gt;
Focus: software (visual recognition)&lt;br /&gt;
&lt;br /&gt;
Expected results: &lt;br /&gt;
a thesis/report, code, video, (if possible a six page shortened version of the thesis, to be submitted to a conference)&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Robot_Artwork&amp;diff=2568</id>
		<title>Robot Artwork</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Robot_Artwork&amp;diff=2568"/>
		<updated>2016-06-10T17:02:41Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: Created page with &amp;quot;{{StudentProjectTemplate |Summary=Capability for a robot to paint to express human feelings |Keywords=Robots, Artwork, Emotion, BMI, Pattern recognition, Computer vision |Time...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Capability for a robot to paint to express human feelings&lt;br /&gt;
|Keywords=Robots, Artwork, Emotion, BMI, Pattern recognition, Computer vision&lt;br /&gt;
|TimeFrame=2017/1/1-2017/8/30&lt;br /&gt;
|References=-robot artwork&lt;br /&gt;
&lt;br /&gt;
Michael Raschke, Katja Mombaur, Alexander Schubert. An optimisation-based robot platform for the generation of action paintings. Int. J. Arts and Technology, Vol. 4, No. 2, 2011 181&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-emotion recognition from eeg&lt;br /&gt;
&lt;br /&gt;
Yuan-Pin Lin, Chi-Hong Wang, Tzyy-Ping Jung, Tien-Lin Wu, Shyh-Kang Jeng, Jeng-Ren Duann, and Jyh-Horng Chen. EEG-Based Emotion Recognition in Music Listening. &lt;br /&gt;
IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 57, NO. 7, JULY 2010&lt;br /&gt;
&lt;br /&gt;
|Prerequisites=Interest in robots and humans (artwork and emotions);&lt;br /&gt;
Willingness to work to help others, also by competing in a contest with a chance to earn prestige and money for charities and the university;&lt;br /&gt;
Software skills&lt;br /&gt;
|Supervisor=Martin Cooney, Maria Luiza Recena Menezes, &lt;br /&gt;
|Examiner=Slawomir, Antanas&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
There is something profound about art.&lt;br /&gt;
Part of it may be the freedom to visit new &amp;quot;spaces&amp;quot; and feel in a new way.&lt;br /&gt;
Another part may be the way it captures aspects of our personalities, our priorities, our feelings, our desires toward others, our motives for doing art, how we wish to be treated, and our understanding of ourselves and the things around us--in a visceral way which can be communicated to others.&lt;br /&gt;
(For example, young children&amp;#039;s artwork typically proceeds in stages, from scribbles which feel good, to &amp;quot;functional&amp;quot; symbols like cylinders with minimal appendages to represent humans, to &amp;quot;logical&amp;quot; renderings in which all body parts are drawn, to &amp;quot;realistic&amp;quot; shaded renderings; likewise, differences have been found in adults in drawings of nerve cells by students and experts.) &lt;br /&gt;
In this project, the main idea is to have a robot create a painting based on something recognized from a human.&lt;br /&gt;
There are many possibilities to do something useful in this context.&lt;br /&gt;
For example, art therapy can help people to feel better.&lt;br /&gt;
Using sensors, the feelings of people with autism can be conveyed, who may have problems communicating feelings socially.&lt;br /&gt;
Using a robot Parkinson&amp;#039;s disease patients could draw straight lines without tremors.&lt;br /&gt;
Feedback can be incorporated while painting to make changes; a caricature could be made more or less exaggerated, or a sketch more or less abstract, based on if a human looks happy or unhappy with progress. &lt;br /&gt;
Neural networks could be an interesting mechanism to express emotions, like Google&amp;#039;s work with &amp;quot;inceptionism&amp;quot;...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Some notes:&lt;br /&gt;
*One goal of this project is to produce a submission to the RobotArt contest (http://robotart.org/, founded by Andrew Conru), which offers $100,000USD in various prizes (25% university, 75% charities in the US). The deadline for submitting art will be April 15th, with a decision in May.&lt;br /&gt;
*The robot used will (probably) be the Baxter robot, an advanced humanoid robot with two seven degree-of-freedom arms; the student will receive time each week to work with the robot, but time with the robot will also be shared with other students and researchers as needed.&lt;br /&gt;
*The sensor used will (probably) be a brain-machine interface.&lt;br /&gt;
*We will seek to obtain advice from various experts, e.g., in art, psychology, or computer vision.&lt;br /&gt;
*The research focus will be decided after initial discussion.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Preliminary workplan:&lt;br /&gt;
 &lt;br /&gt;
*Learning how to use the robot and sensor.&lt;br /&gt;
*Literature review.&lt;br /&gt;
*Building a system.&lt;br /&gt;
*Evaluating the system&lt;br /&gt;
*Working on thesis and presentation&lt;br /&gt;
&lt;br /&gt;
Focus on software or hardware?: software&lt;br /&gt;
&lt;br /&gt;
Expected results: a thesis, code, video, submission to RobotArt Competition &lt;br /&gt;
(it would also be nice, but not required, if the student is willing to also write a six page shortened version of the thesis, to be submitted to a conference)&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Robot_Cooking&amp;diff=2567</id>
		<title>Robot Cooking</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Robot_Cooking&amp;diff=2567"/>
		<updated>2016-06-10T15:16:42Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: Created page with &amp;quot;{{StudentProjectTemplate |Summary=Capability to physically perform first aid for an autonomous (mobile) robot  |Keywords=Robots, First Aid, Visual Recognition, Healthcare |Tim...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Capability to physically perform first aid for an autonomous (mobile) robot &lt;br /&gt;
|Keywords=Robots, First Aid, Visual Recognition, Healthcare&lt;br /&gt;
|TimeFrame=2017/1/1-2017/8/30 &lt;br /&gt;
|References=-first aid &lt;br /&gt;
&lt;br /&gt;
Travers, A. H., Rea, T. D., Bobrow, B. J., et al. (2010). Part 4: CPR overview &lt;br /&gt;
2010 American Heart Association guidelines for cardiopulmonary resuscitation and &lt;br /&gt;
emergency cardiovascular care. Circulation, 122(18 suppl 3), S676-S684. &lt;br /&gt;
&lt;br /&gt;
-pose recognition &lt;br /&gt;
&lt;br /&gt;
Jamie Shotton, Ross Girshick, Andrew Fitzgibbon, Toby Sharp, Mat Cook, Mark Finocchio, Richard Moore, Pushmeet Kohli, Antonio Criminisi, Alex Kipman, Andrew Blake, &amp;quot;Efficient Human Pose Estimation from Single Depth Images&amp;quot;, IEEE Transactions on Pattern Analysis &amp;amp; Machine Intelligence, vol.35, no. 12, pp. 2821-2840, Dec. 2013, doi:10.1109/TPAMI.2012.241 &lt;br /&gt;
&lt;br /&gt;
|Prerequisites=Strong multidisciplinary interest, strong work ethic, software (also ability to work with libraries), possibly some hardware/electronics&lt;br /&gt;
|Supervisor=Martin Cooney, &lt;br /&gt;
|Examiner=Antanas, Slawomir&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
The student will be part of a research drive at HH connected to the Intelligent Home to develop the capability for home robots to help save people&amp;#039;s lives in emergencies.&lt;br /&gt;
So far, although smart phones have acquired huge popularity, robots have failed to gain wide acceptance in people&amp;#039;s homes; the cause is that a very good reason for having a versatile (expensive) embodiment in a home has not yet been established.&lt;br /&gt;
We think first aid can be such a reason.&lt;br /&gt;
First, many of us know of someone who has fallen down and not received immediate medical attention; the repercussions can be very serious.&lt;br /&gt;
Second, first aid is a complex problem which requires a complex robotic embodiment; a simple vacuum cleaner-like machine is not enough to help.&lt;br /&gt;
Within this context, the student will use an excellent robot (Baxter from Rodney Brooks, on a Clearpath mobile base) to develop some capability to conduct first aid.&lt;br /&gt;
Some notes: &lt;br /&gt;
*In this exploratory project the robot will touch a mannequin in place of a real human, due to ethics and safety concerns.&lt;br /&gt;
*We will seek to obtain advice from various experts, e.g., in nursing and computer vision.&lt;br /&gt;
*The robot will also be used from time to time by others (researchers and students).&lt;br /&gt;
*Various possibilities exist for contributions to intelligent systems:&lt;br /&gt;
**an approach for carrying out basic first aid steps: e.g., chest compressions, adjusting pose to assist airway, and artificial ventilation. This could involve designing a special gripper and approach for using it. OR&lt;br /&gt;
**an approach for stemming bleeding by detecting bleeding (rate, location of wounds and arteries) and treating (cooling a wound with a peltier, applying pressure to the wound with pads or to arteries, and elevating limbs). OR&lt;br /&gt;
**an approach for assessing a person&amp;#039;s mental state/confusion such as AVPU or the Glascow coma scale, which requires the robot to execute multimodal behavior like touching a person and recognize reactions such as the manner of contraction of an arm.&lt;br /&gt;
&lt;br /&gt;
Goal: the robot will, while recognizing, conduct some basic steps for first aid on a mannequin (involving physical touch) &lt;br /&gt;
&lt;br /&gt;
Relation to some previous work: &lt;br /&gt;
&lt;br /&gt;
Several student projects have focused on preparation for robotic first aid.&lt;br /&gt;
In 2014, Meyr built a mobile robot system which could be commanded to go to the location of an emergency and ask humans if they were okay; Lazaro built a computer vision algorithm to distinguish fallen humans from objects.&lt;br /&gt;
In 2015, Zhang and Zhao built a recognition system using Kinect data to recognize health signs in an unconscious person related to first aid.&lt;br /&gt;
In 2016, Hotze created a mobile robot system to find fallen humans and locate on a map body parts important for first aid, to allow for cpr to be conducted.&lt;br /&gt;
The current project will draw insight from this previous work by students at HH and be the first to result in a system capable of physically conducting some form of first aid (physically touching a human mock-up).&lt;br /&gt;
&lt;br /&gt;
Possible Steps: &lt;br /&gt;
&lt;br /&gt;
•Preparation: (January) Becoming (more) familiar with OpenCV, ROS, (Arduino,) Baxter(/Ridgeback), getting robot/robot arms to move &lt;br /&gt;
&lt;br /&gt;
•(February) Basic literature review; last preparation, such as attaching secondary systems which might be needed such as touch sensors to the robot&lt;br /&gt;
&lt;br /&gt;
•Main part: (March-April) Sensing, Planning, Acting&lt;br /&gt;
For example:&lt;br /&gt;
(sense) the robot will use its sensors to locate important first aid points in 3d&lt;br /&gt;
(plan) the robot will use inverse kinematics to position its end effectors to do first aid&lt;br /&gt;
(act) the robot will move to do first aid, controlling its motions via feedback from sensors&lt;br /&gt;
&lt;br /&gt;
•After-work: (May) Evaluating; writing/presenting &lt;br /&gt;
&lt;br /&gt;
Possible evaluation metrics: e.g., objective measures (time, positioning, force); qualitative assessment by an expert; and/or comparison to a novice human first aider&lt;br /&gt;
&lt;br /&gt;
Expected results: a thesis/report, code, video &lt;br /&gt;
(for this project, the student is expected to be willing to also write a six page shortened version of the thesis, to be submitted to a conference)&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Detecting_Points_of_Interest_for_Robotic_First_Aid&amp;diff=2393</id>
		<title>Detecting Points of Interest for Robotic First Aid</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Detecting_Points_of_Interest_for_Robotic_First_Aid&amp;diff=2393"/>
		<updated>2015-10-29T15:31:09Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: Created page with &amp;quot;{{StudentProjectTemplate |Summary=Detecting Points of Interest for Robotic First Aid |Keywords=Robot, First Aid, Visual Recognition |TimeFrame=2016/1/1-2016/6/30 |References=-...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Detecting Points of Interest for Robotic First Aid&lt;br /&gt;
|Keywords=Robot, First Aid, Visual Recognition&lt;br /&gt;
|TimeFrame=2016/1/1-2016/6/30&lt;br /&gt;
|References=-pose recognition&lt;br /&gt;
&lt;br /&gt;
Jamie Shotton, Ross Girshick, Andrew Fitzgibbon, Toby Sharp, Mat Cook, Mark Finocchio, Richard Moore, Pushmeet Kohli, Antonio Criminisi, Alex Kipman, Andrew Blake, &amp;quot;Efficient Human Pose Estimation from Single Depth Images&amp;quot;, IEEE Transactions on Pattern Analysis &amp;amp; Machine Intelligence, vol.35, no. 12, pp. 2821-2840, Dec. 2013, doi:10.1109/TPAMI.2012.241&lt;br /&gt;
 &lt;br /&gt;
-first aid&lt;br /&gt;
&lt;br /&gt;
Travers, A. H., Rea, T. D., Bobrow, B. J., et al. (2010). Part 4: CPR overview 2010 American Heart Association guidelines for cardiopulmonary resuscitation and emergency cardiovascular care. Circulation, 122(18 suppl 3), S676-S684.&lt;br /&gt;
|Prerequisites=some capability to work with software, and interest in visual recognition and robots&lt;br /&gt;
|Supervisor=Anita Sant&amp;#039;Anna, Martin Cooney, &lt;br /&gt;
|Examiner=Sławomir Nowaczyk, Antanas Verikas&lt;br /&gt;
|Author=Wolfgang Hotze&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Ongoing&lt;br /&gt;
}}&lt;br /&gt;
Robots will save lives by conducting first aid in homes and public places when medical experts are unavailable.&lt;br /&gt;
This project focuses on the case of an unconscious fall victim in a home.&lt;br /&gt;
First aid, as described by the acronym CABD, can involve chest compressions,&lt;br /&gt;
airway adjustments, artificial respiration, and treatment for bleeding.&lt;br /&gt;
&lt;br /&gt;
Goal: the capability for a robot to locate points of interest for first aid: the sternum for chest compressions, the chin for the airway, the mouth for breathing, and points of possible bleeding.&lt;br /&gt;
&lt;br /&gt;
Relation to some previous work: &lt;br /&gt;
&lt;br /&gt;
A previous project by Tianyi Zhang and Yuwei Zhao involved a robot assessing some simple factors related to a fallen person&amp;#039;s health state assuming depth cameras where placed directly above a person.&lt;br /&gt;
The current project does not require this assumption.&lt;br /&gt;
Another previous project involved a robot sent by an intelligent environment to ask a conscious fall victim if they&lt;br /&gt;
were okay, but could not recognize a person&amp;#039;s pose to conduct first aid.&lt;br /&gt;
The current project will allow both of these works to be brought together.&lt;br /&gt;
&lt;br /&gt;
Basic approach:&lt;br /&gt;
*a robot will move to acquire visual data of a fallen person using a camera located on its arm&lt;br /&gt;
*visual data will be processed to find points of interest for first aid&lt;br /&gt;
*the robot will somehow indicate the points it has found&lt;br /&gt;
&lt;br /&gt;
Steps:&lt;br /&gt;
*Becoming (more) familiar with OpenCV, ROS, Arduino&lt;br /&gt;
*Basic literature review&lt;br /&gt;
*Getting robot and robot arm to move&lt;br /&gt;
*Acquiring some data and labelling&lt;br /&gt;
*Building a recognition system (the focus of the project)&lt;br /&gt;
*Evaluating&lt;br /&gt;
*Writing/presenting&lt;br /&gt;
&lt;br /&gt;
Evaluation metric:&lt;br /&gt;
Distance of output locations of points of interest and ground truth&lt;br /&gt;
(Time until finding points.)&lt;br /&gt;
&lt;br /&gt;
Focus: software (visual recognition)&lt;br /&gt;
&lt;br /&gt;
Expected results: &lt;br /&gt;
a thesis/report, code, video, (if possible a six page shortened version of the thesis, to be submitted to a conference)&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Martin_Cooney&amp;diff=2314</id>
		<title>Martin Cooney</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Martin_Cooney&amp;diff=2314"/>
		<updated>2015-09-25T13:32:47Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: /* Current Activities */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{{Person&lt;br /&gt;
|Family Name=Cooney&lt;br /&gt;
|Given Name=Martin&lt;br /&gt;
|Title=PhD&lt;br /&gt;
|Phone=+46 35 167623&lt;br /&gt;
|Position=Postdoctoral Researcher&lt;br /&gt;
|Email=martin.cooney@hh.se&lt;br /&gt;
|Image=martin_200.jpg&lt;br /&gt;
|Office=E530&lt;br /&gt;
|url=http://martin-cooney.com&lt;br /&gt;
|Subject=Intelligent Systems for healthcare&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Current Activities&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Current Work&lt;br /&gt;
** Smart environment for healthcare with robots&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Research Interests&lt;br /&gt;
** Human robot interaction&lt;br /&gt;
** Health technology and health innovation&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Current Projects&lt;br /&gt;
** [http://islab.hh.se/mediawiki/index.php/Health_Technology CAISR Health Technology Application Area] &lt;br /&gt;
** [http://islab.hh.se/mediawiki/index.php/CAISR_Intelligent_Environment CAISR Intelligent Environment]&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Teaching&lt;br /&gt;
** [http://www.hh.se/sitevision/proxy/english/education/coursesyllabi.3678.html/svid12_70cf2e49129168da015800070213/752680950/se_proxy/utb_kursplan.asp;jsessionid=42994EA1681258ED2A13A7FE143DDA7E?kurskod=DT8007&amp;amp;revisionsnr=1%2C1&amp;amp;format=pdf&amp;amp;lang=SV Design of Embedded and Intelligent Systems ] (Course responsible from 2016)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Manuals&lt;br /&gt;
** Turtlebot Manual: [http://islab.hh.se/mediawiki/images/0/02/Turtlebot_manual_students_2015_1_30.pdf click here]&lt;br /&gt;
** Nao Manual: [http://islab.hh.se/mediawiki/images/1/14/Nao_manual_6_27.pdf click here]&lt;br /&gt;
** Lego Manual: [http://islab.hh.se/mediawiki/images/6/68/Lego_manual_6_27.pdf click here]&lt;br /&gt;
** Skypebot Manual: [http://islab.hh.se/mediawiki/images/1/1b/Skypebot_manual_10_15.pdf click here]&lt;br /&gt;
** Energy harvesting prototype Manual: [http://islab.hh.se/mediawiki/images/c/ca/Energy_harvester_10_16.pdf click here]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{PublicationsList}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:staff]]&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=CAISR_Intelligent_Environment&amp;diff=2312</id>
		<title>CAISR Intelligent Environment</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=CAISR_Intelligent_Environment&amp;diff=2312"/>
		<updated>2015-09-25T13:21:32Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: /* Student projects */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ResearchProjInfo&lt;br /&gt;
|Title=CAISR Intelligent Environment&lt;br /&gt;
|ContactInformation=Anita Sant&amp;#039;Anna&lt;br /&gt;
|ShortDescription=Supporting research and education&lt;br /&gt;
|Description=The CAISR Intelligent  Environment is platform for demonstrating and showcasing CAISR research; implementing and testing new technologies in a realistic environment;  supporting data collection and validation of research hypotheses; as well as providing a functional infrastructure for student projects.&lt;br /&gt;
|LogotypeFile=Logo IntelligentEnvironment.jpg&lt;br /&gt;
|ProjectResponsible=Anita Sant&amp;#039;Anna&lt;br /&gt;
|ProjectDetailsPDF=CAISRIntelligentEnvironment.pdf&lt;br /&gt;
|FundingMSEK=2&lt;br /&gt;
|ProjectStart=2014/01/01&lt;br /&gt;
|ProjectEnd=2019/12/31&lt;br /&gt;
|ApplicationArea=Health Technology&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=Centre for Health Technology Halland - HCH&lt;br /&gt;
}}&lt;br /&gt;
{{ShowResearchProject}}&lt;br /&gt;
&lt;br /&gt;
The intelligent environment acquires diverse information about a person’s activities and health, using a number of distributed fixed and mobile sensors.  This data can be analyzed with the help of aware intelligent systems in order to understand a situation and its context, assess the person’s health and wellbeing, support decision-making, detect sudden or slow deviations, provide appropriate services in emergency situations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Research projects ==&lt;br /&gt;
* [http://islab.hh.se/mediawiki/index.php/SA3L_-_Situation_Awareness_for_Ambient_Assisted_Living Situation Awareness for Ambient Assisted Living] – [http://islab.hh.se/mediawiki/index.php/Jens_Lundstr%C3%B6m Jens Lundström]&lt;br /&gt;
* A Database-Centric Architecture for Home-Based Health Monitoring – Wagner O. de Morais&lt;br /&gt;
* Impulse Radar in Health Applications – [http://islab.hh.se/mediawiki/index.php/Magnus_H%C3%A5llander Magnus Hållander]&lt;br /&gt;
&lt;br /&gt;
== Student projects ==&lt;br /&gt;
* [http://islab.hh.se/mediawiki/index.php/Mobile_Social_Robot_for_Healthcare Mobile Social Robots for healthcare] – Matthias Mayer (BSc)&lt;br /&gt;
* [http://islab.hh.se/mediawiki/index.php/Activity_monitoring_for_AAL Tracking more than one person in a smart environment using fixed sensors and a mobile social robot] – Jianyuan Ma &amp;amp; Yinan Qiu (MSc)&lt;br /&gt;
* [http://islab.hh.se/mediawiki/index.php/FirstResponse First response to emergency situation in a smart environment using a mobile social robot] – Gloria Lazzaro (MSc)&lt;br /&gt;
* [http://islab.hh.se/mediawiki/%22TROLL%22:_a_regenerating_robot A self-aware robot which can find a mirror in a home, detect anomalies in part of its appearance, and fix them] – Yinrong Ma (MSc)&lt;br /&gt;
* [http://islab.hh.se/mediawiki/Robotic_First_aid_response Awareness of health state of an unconscious fallen person toward enabling robotic first aid] – Tianyi Zhang &amp;amp; Yuwei Zhao (MSc)&lt;br /&gt;
* [http://islab.hh.se/mediawiki/Courteous_robot_guide_for_visitors_to_an_intelligent_home Courteous robot guide for visitors to an intelligent home] – Jiamiao Guo &amp;amp; Yu Zhao (BSc)&lt;br /&gt;
* [http://islab.hh.se/mediawiki/Assistance-seeking_strategy_for_a_flying_robot_during_a_healthcare_emergency_response Assistance-seeking strategy for a flying robot during a healthcare emergency response] – Jérémy Heyne (BSc, visiting from Polytech Clermont, France)&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Courteous_robot_guide_for_visitors_to_an_intelligent_home&amp;diff=2311</id>
		<title>Courteous robot guide for visitors to an intelligent home</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Courteous_robot_guide_for_visitors_to_an_intelligent_home&amp;diff=2311"/>
		<updated>2015-09-25T13:17:37Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: Created page with &amp;quot;{{StudentProjectTemplate |Summary=Courteous robot guide for visitors to an intelligent home |Keywords=robot, intelligent home, polite, courteous |TimeFrame=Jan. - July, 2015 |...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Courteous robot guide for visitors to an intelligent home&lt;br /&gt;
|Keywords=robot, intelligent home, polite, courteous&lt;br /&gt;
|TimeFrame=Jan. - July, 2015&lt;br /&gt;
|References=Yusuke Kato, Takayuki Kanda, Hiroshi Ishiguro. May I help you? Design of Human-like Polite Approaching Behavior. HRI 2015: 35-42&lt;br /&gt;
Tomoko Yonezawa, Hirotake Yamazoe, Akira Utsumi, Shinji Abe. Anthropomorphic &lt;br /&gt;
awareness of partner robot to user’s situation based on gaze and speech detection. International Journal of Autonomous and Adaptive Communications Systems. Volume 5, Issue 1. DOI: 10.1504/IJAACS.2012.044782&lt;br /&gt;
&lt;br /&gt;
|Supervisor=Wagner de Morais, Martin Cooney&lt;br /&gt;
|Author=Jiamiao Guo, Yu Zhao &lt;br /&gt;
|Level=Bachelor&lt;br /&gt;
|Status=Finished&lt;br /&gt;
}}&lt;br /&gt;
How can a robot guide a visitor to an intelligent home in a courteous manner?&lt;br /&gt;
Software-related (using ROS, Arduino, Festival, CMU Pocketsphinx)&lt;br /&gt;
Deliverables: report, presentation, video, code&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Assistance-seeking_strategy_for_a_flying_robot_during_a_healthcare_emergency_response&amp;diff=2310</id>
		<title>Assistance-seeking strategy for a flying robot during a healthcare emergency response</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Assistance-seeking_strategy_for_a_flying_robot_during_a_healthcare_emergency_response&amp;diff=2310"/>
		<updated>2015-09-25T13:09:56Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: Created page with &amp;quot;{{StudentProjectTemplate |Summary=Assistance-seeking strategy for a flying robot during a healthcare emergency response |Keywords=robot, healthcare, awareness, helpfulness det...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Assistance-seeking strategy for a flying robot during a healthcare emergency response&lt;br /&gt;
|Keywords=robot, healthcare, awareness, helpfulness detection&lt;br /&gt;
|TimeFrame=May-August, 2015&lt;br /&gt;
|References=Dominique Feillet, Pierre Dejax, Michel Gendreau. Traveling Salesman Problems with Profits. TRANSPORTATION SCIENCE, Vol. 39, No. 2, May 2005, pp. 188–205&lt;br /&gt;
‘Ambulance Drone with Integrated Defibrillator’ (October 29, 2014), [Online]Available: http://www.uasvision.com/2014/10/29/ambulance-drone-with-integrated-defibrillator/ (2015/2/20)&lt;br /&gt;
|Supervisor=Anita Sant&amp;#039;Anna, Yuantao Fan, Martin Cooney&lt;br /&gt;
|Author=Jérémy Heyne&lt;br /&gt;
|Level=Bachelor&lt;br /&gt;
|Status=Finished&lt;br /&gt;
}}&lt;br /&gt;
How can a flying robot detect helpful people in an emergency and approach them?&lt;br /&gt;
Software-related: using OpenCV, ROS&lt;br /&gt;
Deliverable: a report, a presentation, a video, code&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=CAISR_Intelligent_Environment&amp;diff=2308</id>
		<title>CAISR Intelligent Environment</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=CAISR_Intelligent_Environment&amp;diff=2308"/>
		<updated>2015-09-25T12:58:58Z</updated>

		<summary type="html">&lt;p&gt;Marcoo: /* Student projects */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ResearchProjInfo&lt;br /&gt;
|Title=CAISR Intelligent Environment&lt;br /&gt;
|ContactInformation=Anita Sant&amp;#039;Anna&lt;br /&gt;
|ShortDescription=Supporting research and education&lt;br /&gt;
|Description=The CAISR Intelligent  Environment is platform for demonstrating and showcasing CAISR research; implementing and testing new technologies in a realistic environment;  supporting data collection and validation of research hypotheses; as well as providing a functional infrastructure for student projects.&lt;br /&gt;
|LogotypeFile=Logo IntelligentEnvironment.jpg&lt;br /&gt;
|ProjectResponsible=Anita Sant&amp;#039;Anna&lt;br /&gt;
|ProjectDetailsPDF=CAISRIntelligentEnvironment.pdf&lt;br /&gt;
|FundingMSEK=2&lt;br /&gt;
|ProjectStart=2014/01/01&lt;br /&gt;
|ProjectEnd=2019/12/31&lt;br /&gt;
|ApplicationArea=Health Technology&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=Centre for Health Technology Halland - HCH&lt;br /&gt;
}}&lt;br /&gt;
{{ShowResearchProject}}&lt;br /&gt;
&lt;br /&gt;
The intelligent environment acquires diverse information about a person’s activities and health, using a number of distributed fixed and mobile sensors.  This data can be analyzed with the help of aware intelligent systems in order to understand a situation and its context, assess the person’s health and wellbeing, support decision-making, detect sudden or slow deviations, provide appropriate services in emergency situations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Research projects ==&lt;br /&gt;
* [http://islab.hh.se/mediawiki/index.php/SA3L_-_Situation_Awareness_for_Ambient_Assisted_Living Situation Awareness for Ambient Assisted Living] – [http://islab.hh.se/mediawiki/index.php/Jens_Lundstr%C3%B6m Jens Lundström]&lt;br /&gt;
* A Database-Centric Architecture for Home-Based Health Monitoring – Wagner O. de Morais&lt;br /&gt;
* Impulse Radar in Health Applications – [http://islab.hh.se/mediawiki/index.php/Magnus_H%C3%A5llander Magnus Hållander]&lt;br /&gt;
&lt;br /&gt;
== Student projects ==&lt;br /&gt;
* [http://islab.hh.se/mediawiki/index.php/Mobile_Social_Robot_for_Healthcare Mobile Social Robots for healthcare] – Matthias Mayer (BSc)&lt;br /&gt;
* [http://islab.hh.se/mediawiki/index.php/Activity_monitoring_for_AAL Tracking more than one person in a smart environment using fixed sensors and a mobile social robot] – Jianyuan Ma &amp;amp; Yinan Qiu (MSc)&lt;br /&gt;
* [http://islab.hh.se/mediawiki/index.php/FirstResponse First response to emergency situation in a smart environment using a mobile social robot] – Gloria Lazzaro (MSc)&lt;br /&gt;
* [http://islab.hh.se/mediawiki/%22TROLL%22:_a_regenerating_robot A self-aware robot which can find a mirror in a home, detect anomalies in part of its appearance, and fix them] – Yinrong Ma (MSc)&lt;br /&gt;
* [http://islab.hh.se/mediawiki/Robotic_First_aid_response Awareness of health state of an unconscious fallen person toward enabling robotic first aid] – Tianyi Zhang &amp;amp; Yuwei Zhao (MSc)&lt;br /&gt;
* Courteous robot guide for visitors to an intelligent home – Jiamiao Guo &amp;amp; Yu Zhao (BSc)&lt;br /&gt;
* Assistance-seeking strategy for a flying robot during a healthcare emergency response – Jérémy Heyne (BSc, visiting from Polytech Clermont, France)&lt;/div&gt;</summary>
		<author><name>Marcoo</name></author>
	</entry>
</feed>