<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://mw.hh.se/caisr/index.php?action=history&amp;feed=atom&amp;title=Feature-wise_normalization_for_3D_medical_images</id>
	<title>Feature-wise normalization for 3D medical images - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://mw.hh.se/caisr/index.php?action=history&amp;feed=atom&amp;title=Feature-wise_normalization_for_3D_medical_images"/>
	<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Feature-wise_normalization_for_3D_medical_images&amp;action=history"/>
	<updated>2026-04-04T15:57:57Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.35.13</generator>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Feature-wise_normalization_for_3D_medical_images&amp;diff=4638&amp;oldid=prev</id>
		<title>Amira at 13:40, 29 September 2020</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Feature-wise_normalization_for_3D_medical_images&amp;diff=4638&amp;oldid=prev"/>
		<updated>2020-09-29T13:40:00Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left diff-editfont-monospace&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 13:40, 29 September 2020&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l2&quot; &gt;Line 2:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 2:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Summary=Normalization of 3D medical imaging either as a data pre-processing or as feature-wise batch normalization during CNN model training&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Summary=Normalization of 3D medical imaging either as a data pre-processing or as feature-wise batch normalization during CNN model training&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Keywords=CNN, 3D models&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Keywords=CNN, 3D models&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;|TimeFrame=2020 Fall - 2021 Summer&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;|Prerequisites=Excellent Programming Skills&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Excellent knowledge in Machine Learning and Neural Networks&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Supervisor=Amira Soliman, Stefan Byttner,  Kobra Etminani&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Supervisor=Amira Soliman, Stefan Byttner,  Kobra Etminani&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Level=Master&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Level=Master&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Amira</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Feature-wise_normalization_for_3D_medical_images&amp;diff=4636&amp;oldid=prev</id>
		<title>Amira at 13:38, 29 September 2020</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Feature-wise_normalization_for_3D_medical_images&amp;diff=4636&amp;oldid=prev"/>
		<updated>2020-09-29T13:38:10Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left diff-editfont-monospace&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 13:38, 29 September 2020&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l1&quot; &gt;Line 1:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 1:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{StudentProjectTemplate&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{StudentProjectTemplate&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Summary=Normalization of 3D medical imaging either as a data &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;reprocessing &lt;/del&gt;or as feature-wise batch normalization during CNN model training&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Summary=Normalization of 3D medical imaging either as a data &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;pre-processing &lt;/ins&gt;or as feature-wise batch normalization during CNN model training&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Keywords=CNN, 3D models&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Keywords=CNN, 3D models&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|TimeFrame=2020 Fall - 2021 Summer&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|TimeFrame=2020 Fall - 2021 Summer&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Amira</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Feature-wise_normalization_for_3D_medical_images&amp;diff=4633&amp;oldid=prev</id>
		<title>Amira: Amira moved page Name of the new project to Feature-wise normalization for 3D medical images</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Feature-wise_normalization_for_3D_medical_images&amp;diff=4633&amp;oldid=prev"/>
		<updated>2020-09-29T13:35:30Z</updated>

		<summary type="html">&lt;p&gt;Amira moved page &lt;a href=&quot;/caisr/index.php?title=Name_of_the_new_project&quot; class=&quot;mw-redirect&quot; title=&quot;Name of the new project&quot;&gt;Name of the new project&lt;/a&gt; to &lt;a href=&quot;/caisr/index.php?title=Feature-wise_normalization_for_3D_medical_images&quot; title=&quot;Feature-wise normalization for 3D medical images&quot;&gt;Feature-wise normalization for 3D medical images&lt;/a&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left diff-editfont-monospace&quot; data-mw=&quot;interface&quot;&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;1&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;1&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 13:35, 29 September 2020&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-notice&quot; lang=&quot;en&quot;&gt;&lt;div class=&quot;mw-diff-empty&quot;&gt;(No difference)&lt;/div&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
		<author><name>Amira</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Feature-wise_normalization_for_3D_medical_images&amp;diff=4631&amp;oldid=prev</id>
		<title>Amira at 13:29, 29 September 2020</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Feature-wise_normalization_for_3D_medical_images&amp;diff=4631&amp;oldid=prev"/>
		<updated>2020-09-29T13:29:49Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left diff-editfont-monospace&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 13:29, 29 September 2020&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l1&quot; &gt;Line 1:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 1:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{StudentProjectTemplate&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{StudentProjectTemplate&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Summary=&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;The topic focuses on generative models (GAN) for CAN&lt;/del&gt;-&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;bus data and investigating the representation learning capabilities of such techniques	&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Summary=&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Normalization of 3D medical imaging either as a data reprocessing or as feature&lt;/ins&gt;-&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;wise batch normalization during CNN model training&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Keywords=&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;GAN&lt;/del&gt;, &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;CAN data, MAR&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Keywords=&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;CNN&lt;/ins&gt;, &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;3D models&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|TimeFrame=2020 Fall - 2021 Summer&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|TimeFrame=2020 Fall - 2021 Summer&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;|References=https://papers.nips.cc/paper/8789-time-series-generative-adversarial-networks.pdf&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;https://arxiv.org/abs/1706.02633&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;https://openreview.net/pdf?id=rJedV3R5tm&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;https://www.aaai.org/Conferences/AAAI/2017/PreliminaryPapers/12-Yu-L-14344.pdf&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;https://arxiv.org/pdf/1511.06434.pdf&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Prerequisites=Excellent Programming Skills&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Prerequisites=Excellent Programming Skills&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Excellent knowledge in Machine Learning and Neural Networks  &lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Excellent knowledge in Machine Learning and Neural Networks&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Supervisor=&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Kunru Chen&lt;/del&gt;, &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Tiago Cortinhal, Thorsteinn Rögnvaldsson&lt;/del&gt;,  &lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Supervisor=&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Amira Soliman&lt;/ins&gt;, &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Stefan Byttner&lt;/ins&gt;, &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt; Kobra Etminani&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Level=Master&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Level=Master&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Status=&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Internal Draft&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Status=&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Open&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;}}&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;}}&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Control Area Network (CAN) &lt;/del&gt;is a &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;protocol &lt;/del&gt;that &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;is used to manipulate vehicles. It is multidimensional and consists of control and sensor signals to and from different parts of &lt;/del&gt;the &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;equipment. Since this data comes internally from &lt;/del&gt;the &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;machine itself, it is stable and cheap to collect it. Previous work has shown that CAN data can be used to build representations for machine activity recognition (MAR) for forklift trucks&lt;/del&gt;. However, &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;those representations are limited to only describing the existing data &lt;/del&gt;in &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;both realism and diversity. Creating representation by training a vanilla autoencoder has disadvantages when trying to explore the entire space of CAN signals. 
&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Normalization &lt;/ins&gt;is a &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;required preprocessing step, especially for deep learning and convolutional neural networks, such &lt;/ins&gt;that the &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;network becomes unbiased towards &lt;/ins&gt;the &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;different features&lt;/ins&gt;. However, in &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;medical &lt;/ins&gt;images, the &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;whole intensity normalization may lead &lt;/ins&gt;to &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;reduced sensitivity for relatively important features&lt;/ins&gt;. 
&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;The objective &lt;/ins&gt;of this &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;master &lt;/ins&gt;thesis is to &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;study the state-&lt;/ins&gt;of&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;-&lt;/ins&gt;the-&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;art normalization techniques used in 2D images&lt;/ins&gt;, &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;investigate &lt;/ins&gt;the &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;applicability &lt;/ins&gt;of &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;such techniques in 3D medical images&lt;/ins&gt;, and &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;apply them either as &lt;/ins&gt;a &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;preprocessing step or as feature-wise batch normalization during the model training&lt;/ins&gt;.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt; &lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Generative approaches have been used mostly in traditional types of data, like &lt;/del&gt;images, &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;and have shown to have great capabilities to learn &lt;/del&gt;the &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;underlying distribution as well as allowing us &lt;/del&gt;to &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;sample new unseen data points&lt;/del&gt;. &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;This has shown great results as we can see in https://thispersondoesnotexist.com, or even in pictures to picture translations and style transfers. This generative capability also allows us to perform arithmetic operations on the vector and see the underlying structure of each different “class” &lt;/del&gt;of &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;outputs. &lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt; &lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Nevertheless, the work done in other data modalities is still sparse but nevertheless growing in interest. In &lt;/del&gt;this thesis&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, the main interest &lt;/del&gt;is &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;focused on a very specific type of data that might bring all kinds of hardships and obstacles &lt;/del&gt;to &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;overcome. Some &lt;/del&gt;of &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;those hardships might come from &lt;/del&gt;the &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;type of data we are trying to generate. This needs to be investigated and solutions to overcome these types of situations are a key aspect we will be looking for. &lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;The students need to develop a GAN&lt;/del&gt;-&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;based network to generate CAN data&lt;/del&gt;, &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;to evaluate &lt;/del&gt;the &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;quality &lt;/del&gt;of &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;the generated data&lt;/del&gt;, and &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;to use that data in &lt;/del&gt;a &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;MAR task&lt;/del&gt;.&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt; &lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;   Research Questions:&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;       Can GANs generate realistic CAN data?&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;       Can GANs generate/predict the (near) future CAN signals? &lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;       Is the latent space an informative representation about the CAN signals?&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot;&gt; &lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Amira</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Feature-wise_normalization_for_3D_medical_images&amp;diff=4627&amp;oldid=prev</id>
		<title>Tiago: Created page with &quot;{{StudentProjectTemplate |Summary=The topic focuses on generative models (GAN) for CAN-bus data and investigating the representation learning capabilities of such techniques	 ...&quot;</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Feature-wise_normalization_for_3D_medical_images&amp;diff=4627&amp;oldid=prev"/>
		<updated>2020-09-29T07:28:45Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;{{StudentProjectTemplate |Summary=The topic focuses on generative models (GAN) for CAN-bus data and investigating the representation learning capabilities of such techniques	 ...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=The topic focuses on generative models (GAN) for CAN-bus data and investigating the representation learning capabilities of such techniques	&lt;br /&gt;
|Keywords=GAN, CAN data, MAR&lt;br /&gt;
|TimeFrame=2020 Fall - 2021 Summer&lt;br /&gt;
|References=https://papers.nips.cc/paper/8789-time-series-generative-adversarial-networks.pdf&lt;br /&gt;
&lt;br /&gt;
https://arxiv.org/abs/1706.02633&lt;br /&gt;
&lt;br /&gt;
https://openreview.net/pdf?id=rJedV3R5tm&lt;br /&gt;
&lt;br /&gt;
https://www.aaai.org/Conferences/AAAI/2017/PreliminaryPapers/12-Yu-L-14344.pdf&lt;br /&gt;
&lt;br /&gt;
https://arxiv.org/pdf/1511.06434.pdf&lt;br /&gt;
&lt;br /&gt;
|Prerequisites=Excellent Programming Skills&lt;br /&gt;
Excellent knowledge in Machine Learning and Neural Networks &lt;br /&gt;
|Supervisor=Kunru Chen, Tiago Cortinhal, Thorsteinn Rögnvaldsson, &lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Internal Draft&lt;br /&gt;
}}&lt;br /&gt;
Controller Area Network (CAN) is a protocol used to control vehicles. It is multidimensional and consists of control and sensor signals sent to and from different parts of the equipment. Since this data comes internally from the machine itself, it is stable and cheap to collect. Previous work has shown that CAN data can be used to build representations for machine activity recognition (MAR) for forklift trucks. However, those representations are limited to describing the existing data, in both realism and diversity. Creating representations by training a vanilla autoencoder has disadvantages when trying to explore the entire space of CAN signals. &lt;br /&gt;
&lt;br /&gt;
Generative approaches have mostly been applied to traditional types of data, such as images, where they have shown a great capability to learn the underlying distribution and to sample new, unseen data points. This has produced impressive results, as can be seen at https://thispersondoesnotexist.com, or in picture-to-picture translation and style transfer. This generative capability also allows us to perform arithmetic operations on the latent vector and to inspect the underlying structure of each different “class” of outputs. &lt;br /&gt;
&lt;br /&gt;
Nevertheless, the work done on other data modalities is still sparse, although it is attracting growing interest. In this thesis, the main interest is a very specific type of data that may bring all kinds of hardships and obstacles to overcome. Some of those hardships may stem from the type of data we are trying to generate. These need to be investigated, and finding solutions to overcome them is a key aspect of the project. &lt;br /&gt;
The students need to develop a GAN-based network to generate CAN data, evaluate the quality of the generated data, and use that data in a MAR task.&lt;br /&gt;
&lt;br /&gt;
   Research Questions:&lt;br /&gt;
       Can GANs generate realistic CAN data?&lt;br /&gt;
       Can GANs generate/predict the (near) future CAN signals? &lt;br /&gt;
       Is the latent space an informative representation of the CAN signals?&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
</feed>