<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://mw.hh.se/caisr/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Tiago</id>
	<title>ISLAB/CAISR - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://mw.hh.se/caisr/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Tiago"/>
	<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Special:Contributions/Tiago"/>
	<updated>2026-04-04T08:27:40Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.13</generator>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5343</id>
		<title>Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5343"/>
		<updated>2023-10-25T11:18:41Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses&lt;br /&gt;
|Keywords=Semantic Segmentation, Astronomical data, classification&lt;br /&gt;
|TimeFrame=2023-2024&lt;br /&gt;
|References=For general understanding: https://www.youtube.com/watch?v=8PQO4P8pR8o&amp;amp;t=839s&lt;br /&gt;
&lt;br /&gt;
Papers:&lt;br /&gt;
&lt;br /&gt;
1.	The strong gravitational lens finding challenge - Metcalf, R. B., Meneghetti, M., Avestruz, C., et al. 2019, A&amp;amp;A, 625, A119 &lt;br /&gt;
&lt;br /&gt;
2.	Testing convolutional neural networks for finding strong gravitational lenses in KiDS  - Petrillo, C. E., Tortora, C., Chatterjee, S., et al. 2019a, MNRAS, 482, 807 &lt;br /&gt;
&lt;br /&gt;
3.	Finding strong gravitational lenses through self-attention - Study based on the Bologna Lens Challenge - Thuruthipilly, H., Adam Zadrozny, Agnieszka Pollo, and Marek Biesiada.  A&amp;amp;A, 664:A4&lt;br /&gt;
&lt;br /&gt;
4.	The use of convolutional neural networks for modelling large optically-selected strong galaxy-lens samples  - Pearson, J., Li, N., &amp;amp; Dye, S. 2019, MNRAS, 488, 991&lt;br /&gt;
&lt;br /&gt;
5.	Deep convolutional neural networks as strong gravitational lens detectors - Schaefer, C., Geiger, M., Kuntzer, T., &amp;amp; Kneib, J.-P. 2018, A&amp;amp;A, 611, A2&lt;br /&gt;
&lt;br /&gt;
6.	Strong lens systems search in the Dark Energy Survey using Convolutional Neural Networks - K. Rojas, E. Savary, B. Clément, M. Maus, F. Courbin, C. Lemon, J. H. H. Chan, G. Vernardos, R. Joseph, R. Cañameras, A. Galan, DOI: 10.1051/0004-6361/202142119&lt;br /&gt;
|Prerequisites=Good programming skills, previous experience with CNNs and deep learning, and an interest in astronomy data.&lt;br /&gt;
|Supervisor=Tiago Cortinhal, Idriss Gouigah, Eren Erdal Aksoy, Margherita Grespan, Hareesh Thuruthipilly&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Abstract: Strong gravitational lensing (SGL) is a phenomenon in which a massive foreground object (the lens) distorts space-time and bends the light coming from background sources. On the sky, this is visible as arc-like structures or multiple images of the background source.&lt;br /&gt;
Strong gravitational lenses (SGLs) are a powerful tool for addressing the dark matter problem and constraining cosmological parameters, while also serving as natural telescopes that reveal celestial objects otherwise too faint to observe. The upcoming large-scale astronomical surveys, such as the Rubin Observatory Legacy Survey of Space and Time (LSST) and Euclid, are expected to uncover approximately 10^5 SGLs by analyzing datasets of unprecedented scale. However, detecting and accurately modelling these systems is a formidable task, necessitating the development of automated deep learning algorithms.&lt;br /&gt;
This master&amp;#039;s thesis focuses on the development and training of a machine learning model for the detection of SGLs and the segmentation of lensed galaxy images, enabling the differentiation between source (background) galaxies and gravitational lenses and thereby permitting the modelling of lensed systems. The dataset utilized comprises simulated images from the Gravitational Lens Finding Challenge, simulated data based on the Euclid survey, and simulated data from the Legacy Survey of Space and Time (LSST).&lt;br /&gt;
&lt;br /&gt;
Research Objectives:&lt;br /&gt;
&lt;br /&gt;
1.	Dataset Compilation: Preprocess a diverse dataset of lensed galaxy images, encompassing data from the Gravitational Lens Finding Challenge and from the Legacy Survey of Space and Time (LSST).&lt;br /&gt;
&lt;br /&gt;
2.	Model Development: Investigate and implement state-of-the-art machine learning techniques, with a focus on convolutional neural networks (CNNs) and transformers, to design a pipeline capable of classifying and subsequently accurately segmenting lensed galaxy images in order to predict the parameters of the lensing system.&lt;br /&gt;
&lt;br /&gt;
3.	Strong-Lens classification: Develop a state-of-the-art model for identifying SGLs with a focus on reducing false positives (FP). &lt;br /&gt;
&lt;br /&gt;
4.	Source-Lens Separation: Develop a novel model architecture that can effectively differentiate between the source galaxy and the gravitational lens in lensed galaxy images, considering the unique challenges posed by gravitational lensing.&lt;br /&gt;
&lt;br /&gt;
5.	Training and Validation: Train the model on the curated dataset, employing data augmentation and regularization techniques. Validate the model&amp;#039;s performance using cross-validation and a range of appropriate evaluation metrics, such as accuracy, precision, recall, and F1-score. Compare the performance of the model with the existing models from the literature. &lt;br /&gt;
&lt;br /&gt;
6.	Generalization and Application: Assess the model&amp;#039;s generalization abilities by testing it on different datasets, including data from various astronomical surveys. Evaluate its applicability in real-world astronomical observations and discuss potential applications.&lt;br /&gt;
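Objective 5 names accuracy, precision, recall, and F1-score as evaluation metrics. As a minimal sketch (the function name and the tp/fp/fn counts are illustrative, not part of the proposal), they can be computed as:

```python
# Hypothetical helper: standard detection metrics for a strong-lens
# classifier, from true-positive, false-positive and false-negative counts.
def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)  # fraction of flagged lenses that are real
    recall = tp / (tp + fn)     # fraction of real lenses that are found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1
```

Minimizing false positives (objective 3) corresponds to pushing precision up at a given recall.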
&lt;br /&gt;
Methodology:&lt;br /&gt;
&lt;br /&gt;
1.	Data Preparation: Collect lensed galaxy images from the Gravitational Lens Finding Challenge and the other two previously mentioned astronomical surveys, and preprocess the data.&lt;br /&gt;
&lt;br /&gt;
2.	Model Design: Explore various deep learning architectures and techniques to develop a robust model tailored to identify and segment lensed galaxy images and distinguish source galaxies from gravitational lenses.&lt;br /&gt;
&lt;br /&gt;
3.	Training and Validation: Train the model on the prepared dataset, optimizing hyperparameters and monitoring performance throughout the training process. Employ cross-validation to ensure robustness and apply the model to real data.&lt;br /&gt;
&lt;br /&gt;
4.	Generalization Testing: Evaluate the model&amp;#039;s ability to generalize to different datasets, including those from the aforementioned surveys, to assess its practicality for broader astronomical research.&lt;br /&gt;
&lt;br /&gt;
5.	Comparative Analysis: Compare the proposed model&amp;#039;s performance with existing methods, highlighting its strengths, weaknesses, and potential contributions to the field.&lt;br /&gt;
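The preprocessing in step 1 of the methodology is not specified; a hypothetical per-cutout normalization and augmentation sketch (all names and choices here are assumptions, not the proposal's pipeline) might look like:

```python
import numpy as np

def preprocess(img):
    """Hypothetical per-cutout normalization: zero mean, unit variance."""
    img = np.asarray(img, dtype=np.float64)
    return (img - img.mean()) / (img.std() + 1e-8)

def augment(img, rng):
    """Random flips and 90-degree rotations; both preserve lens morphology."""
    if rng.integers(2) == 1:
        img = np.flip(img, axis=0)
    if rng.integers(2) == 1:
        img = np.flip(img, axis=1)
    return np.rot90(img, k=int(rng.integers(4)))
```

Flips and rotations are a common augmentation choice for lens images because the labels are orientation-invariant.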
&lt;br /&gt;
&lt;br /&gt;
Expected Outcomes: This master&amp;#039;s thesis aims to advance the field of astrophysics by providing a novel machine learning approach for identifying strong lenses and accurately segmenting the input image into the lens and source galaxies. The expected outcomes include:&lt;br /&gt;
&lt;br /&gt;
1.	A trained machine learning model capable of identifying SGLs from the upcoming astronomical surveys such as Euclid and LSST.&lt;br /&gt;
&lt;br /&gt;
2.	A trained machine learning model capable of accurately segmenting lensed galaxy images, which can then be used for modelling the lensing system and estimating its physical parameters.&lt;br /&gt;
&lt;br /&gt;
3.	Insights into the effectiveness of different deep learning architectures for this specific task.&lt;br /&gt;
&lt;br /&gt;
4.	An evaluation of the model&amp;#039;s generalization capabilities to diverse astronomical datasets.&lt;br /&gt;
&lt;br /&gt;
5.	Recommendations for the practical application of the model in gravitational lensing studies, enhancing our understanding of the distant universe.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can contact us at tiago.cortinhal@hh.se or idriss.gouigah@hh.se&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5342</id>
		<title>Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5342"/>
		<updated>2023-10-25T11:16:39Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses&lt;br /&gt;
|Keywords=Semantic Segmentation, Astronomical data, classification&lt;br /&gt;
|TimeFrame=2023-2024&lt;br /&gt;
|References=For general understanding: https://www.youtube.com/watch?v=8PQO4P8pR8o&amp;amp;t=839s&lt;br /&gt;
&lt;br /&gt;
Papers:&lt;br /&gt;
&lt;br /&gt;
1.	The strong gravitational lens finding challenge - Metcalf, R. B., Meneghetti, M., Avestruz, C., et al. 2019, A&amp;amp;A, 625, A119 &lt;br /&gt;
&lt;br /&gt;
2.	Testing convolutional neural networks for finding strong gravitational lenses in KiDS  - Petrillo, C. E., Tortora, C., Chatterjee, S., et al. 2019a, MNRAS, 482, 807 &lt;br /&gt;
&lt;br /&gt;
3.	Finding strong gravitational lenses through self-attention - Study based on the Bologna Lens Challenge - Thuruthipilly, H., Adam Zadrozny, Agnieszka Pollo, and Marek Biesiada.  A&amp;amp;A, 664:A4&lt;br /&gt;
&lt;br /&gt;
4.	The use of convolutional neural networks for modelling large optically-selected strong galaxy-lens samples  - Pearson, J., Li, N., &amp;amp; Dye, S. 2019, MNRAS, 488, 991&lt;br /&gt;
&lt;br /&gt;
5.	Deep convolutional neural networks as strong gravitational lens detectors - Schaefer, C., Geiger, M., Kuntzer, T., &amp;amp; Kneib, J.-P. 2018, A&amp;amp;A, 611, A2&lt;br /&gt;
&lt;br /&gt;
6.	Strong lens systems search in the Dark Energy Survey using Convolutional Neural Networks - K. Rojas, E. Savary, B. Clément, M. Maus, F. Courbin, C. Lemon, J. H. H. Chan, G. Vernardos, R. Joseph, R. Cañameras, A. Galan, DOI: 10.1051/0004-6361/202142119&lt;br /&gt;
|Prerequisites=Good programming skills, previous experience with CNNs and deep learning, and an interest in astronomy data.&lt;br /&gt;
|Supervisor=Tiago Cortinhal, Idriss Gouigah, Eren Erdal Aksoy, Margherita Grespan, Hareesh Thuruthipilly&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Abstract: Strong gravitational lensing (SGL) is a phenomenon in which a massive foreground object (the lens) distorts space-time and bends the light coming from background sources. On the sky, this is visible as arc-like structures or multiple images of the background source.&lt;br /&gt;
Strong gravitational lenses (SGLs) are a powerful tool for addressing the dark matter problem and constraining cosmological parameters, while also serving as natural telescopes that reveal celestial objects otherwise too faint to observe. The upcoming large-scale astronomical surveys, such as the Rubin Observatory Legacy Survey of Space and Time (LSST) and Euclid, are expected to uncover approximately 10^5 SGLs by analyzing datasets of unprecedented scale. However, detecting and accurately modelling these systems is a formidable task, necessitating the development of automated deep learning algorithms.&lt;br /&gt;
This master&amp;#039;s thesis focuses on the development and training of a machine learning model for the detection of SGLs and the segmentation of lensed galaxy images, enabling the differentiation between source (background) galaxies and gravitational lenses and thereby permitting the modelling of lensed systems. The dataset utilized comprises simulated images from the Gravitational Lens Finding Challenge, simulated data based on the Euclid survey, and simulated data from the Legacy Survey of Space and Time (LSST).&lt;br /&gt;
&lt;br /&gt;
Research Objectives:&lt;br /&gt;
&lt;br /&gt;
1.	Dataset Compilation: Preprocess a diverse dataset of lensed galaxy images, encompassing data from the Gravitational Lens Finding Challenge and from the Legacy Survey of Space and Time (LSST).&lt;br /&gt;
&lt;br /&gt;
2.	Model Development: Investigate and implement state-of-the-art machine learning techniques, with a focus on convolutional neural networks (CNNs) and transformers, to design a pipeline capable of classifying and subsequently accurately segmenting lensed galaxy images in order to predict the parameters of the lensing system.&lt;br /&gt;
&lt;br /&gt;
3.	Strong-Lens classification: Develop a state-of-the-art model for identifying SGLs with a focus on reducing false positives (FP). &lt;br /&gt;
&lt;br /&gt;
4.	Source-Lens Separation: Develop a novel model architecture that can effectively differentiate between the source galaxy and the gravitational lens in lensed galaxy images, considering the unique challenges posed by gravitational lensing.&lt;br /&gt;
&lt;br /&gt;
5.	Training and Validation: Train the model on the curated dataset, employing data augmentation and regularization techniques. Validate the model&amp;#039;s performance using cross-validation and a range of appropriate evaluation metrics, such as accuracy, precision, recall, and F1-score. Compare the performance of the model with the existing models from the literature. &lt;br /&gt;
&lt;br /&gt;
6.	Generalization and Application: Assess the model&amp;#039;s generalization abilities by testing it on different datasets, including data from various astronomical surveys. Evaluate its applicability in real-world astronomical observations and discuss potential applications.&lt;br /&gt;
&lt;br /&gt;
Methodology:&lt;br /&gt;
&lt;br /&gt;
1.	Data Preparation: Collect lensed galaxy images from the Gravitational Lens Finding Challenge and the other two previously mentioned astronomical surveys, and preprocess the data.&lt;br /&gt;
&lt;br /&gt;
2.	Model Design: Explore various deep learning architectures and techniques to develop a robust model tailored to identify and segment lensed galaxy images and distinguish source galaxies from gravitational lenses.&lt;br /&gt;
&lt;br /&gt;
3.	Training and Validation: Train the model on the prepared dataset, optimizing hyperparameters and monitoring performance throughout the training process. Employ cross-validation to ensure robustness and apply the model to real data.&lt;br /&gt;
&lt;br /&gt;
4.	Generalization Testing: Evaluate the model&amp;#039;s ability to generalize to different datasets, including those from the aforementioned surveys, to assess its practicality for broader astronomical research.&lt;br /&gt;
&lt;br /&gt;
5.	Comparative Analysis: Compare the proposed model&amp;#039;s performance with existing methods, highlighting its strengths, weaknesses, and potential contributions to the field.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Expected Outcomes: This master&amp;#039;s thesis aims to advance the field of astrophysics by providing a novel machine learning approach for identifying strong lenses and accurately segmenting the input image into the lens and source galaxies. The expected outcomes include:&lt;br /&gt;
&lt;br /&gt;
1.	A trained machine learning model capable of identifying SGLs from the upcoming astronomical surveys such as Euclid and LSST.&lt;br /&gt;
&lt;br /&gt;
2.	A trained machine learning model capable of accurately segmenting lensed galaxy images, which can then be used for modelling the lensing system and estimating its physical parameters.&lt;br /&gt;
&lt;br /&gt;
3.	Insights into the effectiveness of different deep learning architectures for this specific task.&lt;br /&gt;
&lt;br /&gt;
4.	An evaluation of the model&amp;#039;s generalization capabilities to diverse astronomical datasets.&lt;br /&gt;
&lt;br /&gt;
5.	Recommendations for the practical application of the model in gravitational lensing studies, enhancing our understanding of the distant universe.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can contact us at tiago.cortinhal@hh.se or idriss.gouigah@hh.se&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5341</id>
		<title>Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5341"/>
		<updated>2023-10-25T10:41:55Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses&lt;br /&gt;
|Keywords=Semantic Segmentation, Astronomical data, classification&lt;br /&gt;
|TimeFrame=2023-2024&lt;br /&gt;
|References=For general understanding: https://www.youtube.com/watch?v=8PQO4P8pR8o&amp;amp;t=839s&lt;br /&gt;
&lt;br /&gt;
Papers:&lt;br /&gt;
&lt;br /&gt;
1.	Metcalf, R. B., Meneghetti, M., Avestruz, C., et al. 2019, A&amp;amp;A, 625, A119 &lt;br /&gt;
&lt;br /&gt;
2.	Petrillo, C. E., Tortora, C., Chatterjee, S., et al. 2019a, MNRAS, 482, 807 &lt;br /&gt;
&lt;br /&gt;
3.	Thuruthipilly, H., Adam Zadrozny, Agnieszka Pollo, and Marek Biesiada.  A&amp;amp;A, 664:A4&lt;br /&gt;
&lt;br /&gt;
4.	Pearson, J., Li, N., &amp;amp; Dye, S. 2019, MNRAS, 488, 991&lt;br /&gt;
&lt;br /&gt;
5.	Schaefer, C., Geiger, M., Kuntzer, T., &amp;amp; Kneib, J.-P. 2018, A&amp;amp;A, 611, A2&lt;br /&gt;
&lt;br /&gt;
6.	K. Rojas, E. Savary, B. Clément, M. Maus, F. Courbin, C. Lemon, J. H. H. Chan, G. Vernardos, R. Joseph, R. Cañameras, A. Galan, DOI: 10.1051/0004-6361/202142119&lt;br /&gt;
&lt;br /&gt;
|Prerequisites=Good programming skills, previous experience with CNNs and deep learning, and an interest in astronomy data.&lt;br /&gt;
|Supervisor=Tiago Cortinhal, Idriss Gouigah, Eren Erdal Aksoy, Margherita Grespan, Hareesh Thuruthipilly&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Abstract: Strong gravitational lensing (SGL) is a phenomenon in which a massive foreground object (the lens) distorts space-time and bends the light coming from background sources. On the sky, this is visible as arc-like structures or multiple images of the background source.&lt;br /&gt;
Strong gravitational lenses (SGLs) are a powerful tool for addressing the dark matter problem and constraining cosmological parameters, while also serving as natural telescopes that reveal celestial objects otherwise too faint to observe. The upcoming large-scale astronomical surveys, such as the Rubin Observatory Legacy Survey of Space and Time (LSST) and Euclid, are expected to uncover approximately 10^5 SGLs by analyzing datasets of unprecedented scale. However, detecting and accurately modelling these systems is a formidable task, necessitating the development of automated deep learning algorithms.&lt;br /&gt;
This master&amp;#039;s thesis focuses on the development and training of a machine learning model for the detection of SGLs and the segmentation of lensed galaxy images, enabling the differentiation between source (background) galaxies and gravitational lenses and thereby permitting the modelling of lensed systems. The dataset utilized comprises simulated images from the Gravitational Lens Finding Challenge, simulated data based on the Euclid survey, and simulated data from the Legacy Survey of Space and Time (LSST).&lt;br /&gt;
&lt;br /&gt;
Research Objectives:&lt;br /&gt;
&lt;br /&gt;
1.	Dataset Compilation: Preprocess a diverse dataset of lensed galaxy images, encompassing data from the Gravitational Lens Finding Challenge and from the Legacy Survey of Space and Time (LSST).&lt;br /&gt;
&lt;br /&gt;
2.	Model Development: Investigate and implement state-of-the-art machine learning techniques, with a focus on convolutional neural networks (CNNs) and transformers, to design a pipeline capable of classifying and subsequently accurately segmenting lensed galaxy images in order to predict the parameters of the lensing system.&lt;br /&gt;
&lt;br /&gt;
3.	Strong-Lens classification: Develop a state-of-the-art model for identifying SGLs with a focus on reducing false positives (FP). &lt;br /&gt;
&lt;br /&gt;
4.	Source-Lens Separation: Develop a novel model architecture that can effectively differentiate between the source galaxy and the gravitational lens in lensed galaxy images, considering the unique challenges posed by gravitational lensing.&lt;br /&gt;
&lt;br /&gt;
5.	Training and Validation: Train the model on the curated dataset, employing data augmentation and regularization techniques. Validate the model&amp;#039;s performance using cross-validation and a range of appropriate evaluation metrics, such as accuracy, precision, recall, and F1-score. Compare the performance of the model with the existing models from the literature. &lt;br /&gt;
&lt;br /&gt;
6.	Generalization and Application: Assess the model&amp;#039;s generalization abilities by testing it on different datasets, including data from various astronomical surveys. Evaluate its applicability in real-world astronomical observations and discuss potential applications.&lt;br /&gt;
&lt;br /&gt;
Methodology:&lt;br /&gt;
&lt;br /&gt;
1.	Data Preparation: Collect lensed galaxy images from the Gravitational Lens Finding Challenge and the other two previously mentioned astronomical surveys, and preprocess the data.&lt;br /&gt;
&lt;br /&gt;
2.	Model Design: Explore various deep learning architectures and techniques to develop a robust model tailored to identify and segment lensed galaxy images and distinguish source galaxies from gravitational lenses.&lt;br /&gt;
&lt;br /&gt;
3.	Training and Validation: Train the model on the prepared dataset, optimizing hyperparameters and monitoring performance throughout the training process. Employ cross-validation to ensure robustness and apply the model to real data.&lt;br /&gt;
&lt;br /&gt;
4.	Generalization Testing: Evaluate the model&amp;#039;s ability to generalize to different datasets, including those from the aforementioned surveys, to assess its practicality for broader astronomical research.&lt;br /&gt;
&lt;br /&gt;
5.	Comparative Analysis: Compare the proposed model&amp;#039;s performance with existing methods, highlighting its strengths, weaknesses, and potential contributions to the field.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Expected Outcomes: This master&amp;#039;s thesis aims to advance the field of astrophysics by providing a novel machine learning approach for identifying strong lenses and accurately segmenting the input image into the lens and source galaxies. The expected outcomes include:&lt;br /&gt;
&lt;br /&gt;
1.	A trained machine learning model capable of identifying SGLs from the upcoming astronomical surveys such as Euclid and LSST.&lt;br /&gt;
&lt;br /&gt;
2.	A trained machine learning model capable of accurately segmenting lensed galaxy images, which can then be used for modelling the lensing system and estimating its physical parameters.&lt;br /&gt;
&lt;br /&gt;
3.	Insights into the effectiveness of different deep learning architectures for this specific task.&lt;br /&gt;
&lt;br /&gt;
4.	An evaluation of the model&amp;#039;s generalization capabilities to diverse astronomical datasets.&lt;br /&gt;
&lt;br /&gt;
5.	Recommendations for the practical application of the model in gravitational lensing studies, enhancing our understanding of the distant universe.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can contact us at tiago.cortinhal@hh.se or idriss.gouigah@hh.se&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5337</id>
		<title>Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5337"/>
		<updated>2023-10-19T11:42:37Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses&lt;br /&gt;
|Keywords=Semantic Segmentation, Astronomical data, classification&lt;br /&gt;
|TimeFrame=2023-2024&lt;br /&gt;
|References=https://drive.google.com/drive/folders/1OHbp3pUHygOWheJSdYQFvOUBZ-MPIVdx&lt;br /&gt;
&lt;br /&gt;
|Prerequisites=Good programming skills, previous experience with CNNs and deep learning, and an interest in astronomy data.&lt;br /&gt;
|Supervisor=Tiago Cortinhal, Idriss Gouigah, Eren Erdal Aksoy, Margherita Grespan, Hareesh Thuruthipilly&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Abstract: Strong gravitational lensing (SGL) is a phenomenon in which a massive foreground object (the lens) distorts space-time and bends the light coming from background sources. On the sky, this is visible as arc-like structures or multiple images of the background source.&lt;br /&gt;
Strong gravitational lenses (SGLs) are a powerful tool for addressing the dark matter problem and constraining cosmological parameters, while also serving as natural telescopes that reveal celestial objects otherwise too faint to observe. The upcoming large-scale astronomical surveys, such as the Rubin Observatory Legacy Survey of Space and Time (LSST) and Euclid, are expected to uncover approximately 10^5 SGLs by analyzing datasets of unprecedented scale. However, detecting and accurately modelling these systems is a formidable task, necessitating the development of automated deep learning algorithms.&lt;br /&gt;
This master&amp;#039;s thesis focuses on the development and training of a machine learning model for the detection of SGLs and the segmentation of lensed galaxy images, enabling the differentiation between source (background) galaxies and gravitational lenses and thereby permitting the modelling of lensed systems. The dataset utilized comprises simulated images from the Gravitational Lens Finding Challenge, simulated data based on the Euclid survey, and simulated data from the Legacy Survey of Space and Time (LSST).&lt;br /&gt;
&lt;br /&gt;
Research Objectives:&lt;br /&gt;
&lt;br /&gt;
1.	Dataset Compilation: Preprocess a diverse dataset of lensed galaxy images, encompassing data from the Gravitational Lens Finding Challenge and from the Legacy Survey of Space and Time (LSST).&lt;br /&gt;
&lt;br /&gt;
2.	Model Development: Investigate and implement state-of-the-art machine learning techniques, with a focus on convolutional neural networks (CNNs) and transformers, to design a pipeline capable of classifying and subsequently accurately segmenting lensed galaxy images in order to predict the parameters of the lensing system.&lt;br /&gt;
&lt;br /&gt;
3.	Strong-Lens classification: Develop a state-of-the-art model for identifying SGLs with a focus on reducing false positives (FP). &lt;br /&gt;
&lt;br /&gt;
4.	Source-Lens Separation: Develop a novel model architecture that can effectively differentiate between the source galaxy and the gravitational lens in lensed galaxy images, considering the unique challenges posed by gravitational lensing.&lt;br /&gt;
&lt;br /&gt;
5.	Training and Validation: Train the model on the curated dataset, employing data augmentation and regularization techniques. Validate the model&amp;#039;s performance using cross-validation and a range of appropriate evaluation metrics, such as accuracy, precision, recall, and F1-score. Compare the performance of the model with the existing models from the literature. &lt;br /&gt;
&lt;br /&gt;
6.	Generalization and Application: Assess the model&amp;#039;s generalization abilities by testing it on different datasets, including data from various astronomical surveys. Evaluate its applicability in real-world astronomical observations and discuss potential applications.&lt;br /&gt;
&lt;br /&gt;
Methodology:&lt;br /&gt;
&lt;br /&gt;
1.	Data Preparation: Collect lensed galaxy images from the Gravitational Lens Finding Challenge and the other two previously mentioned astronomical surveys, and preprocess the data.&lt;br /&gt;
&lt;br /&gt;
2.	Model Design: Explore various deep learning architectures and techniques to develop a robust model tailored to identify and segment lensed galaxy images and distinguish source galaxies from gravitational lenses.&lt;br /&gt;
&lt;br /&gt;
3.	Training and Validation: Train the model on the prepared dataset, optimizing hyperparameters and monitoring performance throughout the training process. Employ cross-validation to ensure robustness and apply the model to real data.&lt;br /&gt;
&lt;br /&gt;
4.	Generalization Testing: Evaluate the model&amp;#039;s ability to generalize to different datasets, including those from the aforementioned surveys, to assess its practicality for broader astronomical research.&lt;br /&gt;
&lt;br /&gt;
5.	Comparative Analysis: Compare the proposed model&amp;#039;s performance with existing methods, highlighting its strengths, weaknesses, and potential contributions to the field.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Expected Outcomes: This master&amp;#039;s thesis aims to advance the field of astrophysics by providing a novel machine learning approach for identifying strong lenses and accurately segmenting the input image into the lens and source galaxies. The expected outcomes include:&lt;br /&gt;
&lt;br /&gt;
1.	A trained machine learning model capable of identifying SGLs from the upcoming astronomical surveys such as Euclid and LSST.&lt;br /&gt;
&lt;br /&gt;
2.	A trained machine learning model capable of accurately segmenting lensed galaxy images, which can then be used for modelling the lensing system and estimating its physical parameters.&lt;br /&gt;
&lt;br /&gt;
3.	Insights into the effectiveness of different deep learning architectures for this specific task.&lt;br /&gt;
&lt;br /&gt;
4.	An evaluation of the model&amp;#039;s generalization capabilities to diverse astronomical datasets.&lt;br /&gt;
&lt;br /&gt;
5.	Recommendations for the practical application of the model in gravitational lensing studies, enhancing our understanding of the distant universe.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can contact us at tiago.cortinhal@hh.se or idriss.gouigah@hh.se&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5335</id>
		<title>Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5335"/>
		<updated>2023-10-19T11:07:58Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses&lt;br /&gt;
|Keywords=Semantic Segmentation, Astronomical data, classification&lt;br /&gt;
|TimeFrame=2023-2024&lt;br /&gt;
|Prerequisites=Good programming skills, previous experience with CNNs and deep learning, interest in astronomy data.&lt;br /&gt;
|Supervisor=Tiago Cortinhal, Idriss Gouigah, Eren Erdal Aksoy, Margherita Grespan, Hareesh Thuruthipilly&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Abstract: Strong gravitational lensing (SGL) is a phenomenon in which a massive foreground object (the lens) distorts space-time and bends the light coming from background sources. On the sky, this is visible as arc-like structures or multiple images of the background source.&lt;br /&gt;
Strong gravitational lenses (SGLs) are a powerful tool for addressing the dark matter problem and constraining cosmological parameters, while also serving as natural telescopes that magnify celestial objects otherwise too faint to observe. The upcoming large-scale astronomical surveys, such as the Rubin Observatory Legacy Survey of Space and Time (LSST) and Euclid, are expected to uncover approximately 10^5 SGLs by analyzing datasets of unprecedented scale. However, detecting and accurately modelling these systems is a formidable task, necessitating the development of automated deep learning algorithms.&lt;br /&gt;
This master&amp;#039;s thesis focuses on the development and training of a machine learning model for the detection of SGLs and the segmentation of lensed galaxy images, differentiating between source (background) galaxies and gravitational lenses and thereby permitting the modelling of lensed systems. The dataset comprises simulated images from the Gravitational Lens Finding Challenge, simulated data based on the Euclid survey, and simulated data from the Legacy Survey of Space and Time (LSST).&lt;br /&gt;
&lt;br /&gt;
Research Objectives:&lt;br /&gt;
&lt;br /&gt;
1.	Dataset Compilation: Preprocess a diverse dataset of lensed galaxy images, encompassing data from the Gravitational Lens Finding Challenge and from the Legacy Survey of Space and Time (LSST).&lt;br /&gt;
&lt;br /&gt;
2.	Model Development: Investigate and implement state-of-the-art machine learning techniques, with a focus on convolutional neural networks (CNNs) and transformers, to design a pipeline capable of accurately classifying and subsequently segmenting lensed galaxy images in order to predict the parameters of the lensing system.&lt;br /&gt;
&lt;br /&gt;
3.	Strong-Lens Classification: Develop a state-of-the-art model for identifying SGLs, with a focus on reducing false positives (FP).&lt;br /&gt;
&lt;br /&gt;
4.	Source-Lens Separation: Develop a novel model architecture that can effectively differentiate between the source galaxy and the gravitational lens in lensed galaxy images, considering the unique challenges posed by gravitational lensing.&lt;br /&gt;
&lt;br /&gt;
5.	Training and Validation: Train the model on the curated dataset, employing data augmentation and regularization techniques. Validate the model&amp;#039;s performance using cross-validation and a range of appropriate evaluation metrics, such as accuracy, precision, recall, and F1-score. Compare the performance of the model with existing models from the literature.&lt;br /&gt;
&lt;br /&gt;
6.	Generalization and Application: Assess the model&amp;#039;s generalization abilities by testing it on different datasets, including data from various astronomical surveys. Evaluate its applicability in real-world astronomical observations and discuss potential applications.&lt;br /&gt;
&lt;br /&gt;
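Objective 5 above names accuracy, precision, recall, and F1-score as evaluation metrics. As a minimal, self-contained sketch of how these could be computed for a binary lens / non-lens classifier (the labels below are illustrative toy values, not survey data):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels (1 = lens)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1

# Toy example: 4 candidates, one false negative and one false positive.
acc, prec, rec, f1 = binary_metrics([1, 1, 0, 0], [1, 0, 0, 1])
```

In practice a library implementation (e.g. scikit-learn's metrics module) would be used instead of hand-rolled counts; the sketch only makes the definitions concrete.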
Methodology:&lt;br /&gt;
&lt;br /&gt;
1.	Data Preparation: Collect lensed galaxy images from the Gravitational Lens Finding Challenge and the other previously mentioned astronomical surveys, and preprocess the data.&lt;br /&gt;
&lt;br /&gt;
2.	Model Design: Explore various deep learning architectures and techniques to develop a robust model tailored to identify and segment lensed galaxy images and distinguish source galaxies from gravitational lenses.&lt;br /&gt;
&lt;br /&gt;
3.	Training and Validation: Train the model on the prepared dataset, optimizing hyperparameters and monitoring performance throughout the training process. Employ cross-validation to ensure robustness and apply the model to real data.&lt;br /&gt;
&lt;br /&gt;
4.	Generalization Testing: Evaluate the model&amp;#039;s ability to generalize to different datasets, including those from other astronomical surveys, to assess its practicality for broader astronomical research.&lt;br /&gt;
&lt;br /&gt;
5.	Comparative Analysis: Compare the proposed model&amp;#039;s performance with existing methods, highlighting its strengths, weaknesses, and potential contributions to the field.&lt;br /&gt;
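The methodology above does not name a mask-overlap score for the segmentation step; a common choice (an assumption here, not something the proposal specifies) is intersection-over-union (IoU), sketched below on toy 0/1 masks:

```python
def iou(pred, truth):
    """Intersection-over-union between two binary masks (lists of 0/1 rows)."""
    pairs = [(p, t) for rp, rt in zip(pred, truth) for p, t in zip(rp, rt)]
    inter = sum(1 for p, t in pairs if p and t)   # pixels both masks mark
    union = sum(1 for p, t in pairs if p or t)    # pixels either mask marks
    return inter / union if union else 1.0        # empty masks agree trivially

# Toy 2x2 masks: predicted source region overlaps truth in one pixel.
score = iou([[1, 1], [0, 0]], [[1, 0], [0, 0]])
```

Per-class IoU (one mask for the source galaxy, one for the lens) would complement the pixel-wise classification metrics when comparing architectures.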
&lt;br /&gt;
&lt;br /&gt;
Expected Outcomes: This master&amp;#039;s thesis aims to advance the field of astrophysics by providing a novel machine learning approach for identifying strong lenses and accurately segmenting the input image into the lens and source galaxies. The expected outcomes include:&lt;br /&gt;
&lt;br /&gt;
1.	A trained machine learning model capable of identifying SGLs from the upcoming astronomical surveys such as Euclid and LSST.&lt;br /&gt;
&lt;br /&gt;
2.	A trained machine learning model capable of accurately segmenting lensed galaxy images, which can then be used to model the lensing system and estimate its physical parameters.&lt;br /&gt;
&lt;br /&gt;
3.	Insights into the effectiveness of different deep learning architectures for this specific task.&lt;br /&gt;
&lt;br /&gt;
4.	An evaluation of the model&amp;#039;s generalization capabilities to diverse astronomical datasets.&lt;br /&gt;
&lt;br /&gt;
5.	Recommendations for the practical application of the model in gravitational lensing studies, enhancing our understanding of the distant universe.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can contact us at tiago.cortinhal@hh.se or idriss.gouigah@hh.se&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5333</id>
		<title>Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5333"/>
		<updated>2023-10-18T12:10:53Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses&lt;br /&gt;
|Keywords=Semantic Segmentation, Astronomical data, classification&lt;br /&gt;
|TimeFrame=2023-2024&lt;br /&gt;
|Prerequisites=Good programming skills, previous experience with CNNs and deep learning, interest in astronomy data.&lt;br /&gt;
|Supervisor=Tiago Cortinhal, Idriss Gouigah, Eren Erdal Aksoy, Margherita Grespan, Hareesh Thuruthipilly&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Abstract: Lensed galaxies serve as cosmic magnifying glasses, offering a unique window into the distant universe. Accurate segmentation of lensed galaxy images, separating the source galaxy from the gravitational lens, is crucial for extracting meaningful scientific insights. This master&amp;#039;s thesis aims to develop and train a machine learning model capable of segmenting lensed galaxy images, differentiating between the source galaxy and the gravitational lens. The dataset used for this research will comprise images from the Gravitational Lens Finding Challenge and data obtained from an astronomical survey. This thesis will be supervised in partnership with NCBJ (Poland). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Research Objectives:&lt;br /&gt;
&lt;br /&gt;
1.	Dataset Compilation: Preprocess a diverse dataset of lensed galaxy images, encompassing data from the Gravitational Lens Finding Challenge, an astronomical survey, and synthetic data.&lt;br /&gt;
&lt;br /&gt;
2.	Model Development: Investigate and implement state-of-the-art machine learning techniques, with a focus on convolutional neural networks (CNNs) and deep learning architectures, to design a model capable of accurately segmenting lensed galaxy images.&lt;br /&gt;
&lt;br /&gt;
3.	Source-Lens Separation: Develop a novel model architecture that can effectively differentiate between the source galaxy and the gravitational lens in lensed galaxy images, considering the unique challenges posed by gravitational lensing.&lt;br /&gt;
&lt;br /&gt;
4.	Training and Validation: Train the model on the curated dataset, employing data augmentation and regularization techniques. Validate the model&amp;#039;s performance using cross-validation and a range of appropriate evaluation metrics, such as accuracy, precision, recall, and F1-score.&lt;br /&gt;
&lt;br /&gt;
5.	Generalization and Application: Assess the model&amp;#039;s generalization abilities by testing it on different datasets, including data from various astronomical surveys. Evaluate its applicability in real-world astronomical observations and discuss potential applications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Methodology:&lt;br /&gt;
&lt;br /&gt;
1.	Data Preparation: Collect lensed galaxy images from the Gravitational Lens Finding Challenge and the astronomical survey, and preprocess the data.&lt;br /&gt;
&lt;br /&gt;
2.	Model Design: Explore various deep learning architectures and techniques to develop a robust model tailored to segment lensed galaxy images and distinguish source galaxies from gravitational lenses.&lt;br /&gt;
&lt;br /&gt;
3.	Training and Validation: Train the model on the prepared dataset, optimizing hyperparameters and monitoring performance throughout the training process. Employ cross-validation to ensure robustness and apply the model to real data.&lt;br /&gt;
&lt;br /&gt;
4.	Generalization Testing: Evaluate the model&amp;#039;s ability to generalize to different datasets, including those from the survey, to assess its practicality for broader astronomical research.&lt;br /&gt;
&lt;br /&gt;
5.	Comparative Analysis: Compare the proposed model&amp;#039;s performance with existing methods, highlighting its strengths, weaknesses, and potential contributions to the field.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Expected Outcomes: This master&amp;#039;s thesis aims to advance the field of astrophysics by providing a novel machine learning approach for accurate segmentation of lensed galaxy images, specifically focusing on separating source galaxies from gravitational lenses. The expected outcomes include:&lt;br /&gt;
&lt;br /&gt;
1.	A trained machine learning model capable of accurately segmenting lensed galaxy images.&lt;br /&gt;
&lt;br /&gt;
2.	Insights into the effectiveness of different deep learning architectures for this specific task.&lt;br /&gt;
&lt;br /&gt;
3.	An evaluation of the model&amp;#039;s generalization capabilities to diverse astronomical datasets.&lt;br /&gt;
&lt;br /&gt;
4.	Recommendations for the practical application of the model in gravitational lensing studies, enhancing our understanding of the distant universe.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Resources:&lt;br /&gt;
&lt;br /&gt;
https://github.com/LSST-strong-lensing/DESC-Lamp/tree/main&lt;br /&gt;
&lt;br /&gt;
https://www.lsst.org/sites/default/files/docs/sciencebook/SB_12.pdf&lt;br /&gt;
&lt;br /&gt;
https://sites.google.com/view/lsst-stronglensing/home&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Generative_Approach_for_Multivariate_Signals&amp;diff=5312</id>
		<title>Generative Approach for Multivariate Signals</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Generative_Approach_for_Multivariate_Signals&amp;diff=5312"/>
		<updated>2023-10-12T15:27:46Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=The topic focuses on generative models (VAE) for CAN-bus data and investigating the representation learning capabilities of such techniques&lt;br /&gt;
|Keywords=VAE, Time-series data, Streaming data, MAR&lt;br /&gt;
|TimeFrame=2021 Fall - 2022 Summer&lt;br /&gt;
|References=https://papers.nips.cc/paper/8789-time-series-generative-adversarial-networks.pdf&lt;br /&gt;
&lt;br /&gt;
https://openreview.net/pdf?id=Sy2fzU9gl&lt;br /&gt;
&lt;br /&gt;
https://www.sciencedirect.com/science/article/pii/S092658051930367X&lt;br /&gt;
|Supervisor=Kunru Chen, Abdallah Alabdallah, Thorsteinn Rögnvaldsson&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5311</id>
		<title>Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5311"/>
		<updated>2023-10-12T14:17:26Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses&lt;br /&gt;
|Keywords=Semantic Segmentation, Astronomical data, classification&lt;br /&gt;
|TimeFrame=2023-2024&lt;br /&gt;
|Prerequisites=Good programming skills, previous experience with CNNs and deep learning, interest in astronomy data.&lt;br /&gt;
|Supervisor=Tiago Cortinhal, Idriss Gouigah, Eren Erdal Aksoy, Margherita Grespan, Hareesh Thuruthipilly&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Draft&lt;br /&gt;
}}&lt;br /&gt;
Abstract: Lensed galaxies serve as cosmic magnifying glasses, offering a unique window into the distant universe. Accurate segmentation of lensed galaxy images, separating the source galaxy from the gravitational lens, is crucial for extracting meaningful scientific insights. This master&amp;#039;s thesis aims to develop and train a machine learning model capable of segmenting lensed galaxy images, differentiating between the source galaxy and the gravitational lens. The dataset used for this research will comprise images from the Gravitational Lens Finding Challenge and data obtained from an astronomical survey. This thesis will be supervised in partnership with NCBJ (Poland). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Research Objectives:&lt;br /&gt;
&lt;br /&gt;
1.	Dataset Compilation: Preprocess a diverse dataset of lensed galaxy images, encompassing data from the Gravitational Lens Finding Challenge, an astronomical survey, and synthetic data.&lt;br /&gt;
&lt;br /&gt;
2.	Model Development: Investigate and implement state-of-the-art machine learning techniques, with a focus on convolutional neural networks (CNNs) and deep learning architectures, to design a model capable of accurately segmenting lensed galaxy images.&lt;br /&gt;
&lt;br /&gt;
3.	Source-Lens Separation: Develop a novel model architecture that can effectively differentiate between the source galaxy and the gravitational lens in lensed galaxy images, considering the unique challenges posed by gravitational lensing.&lt;br /&gt;
&lt;br /&gt;
4.	Training and Validation: Train the model on the curated dataset, employing data augmentation and regularization techniques. Validate the model&amp;#039;s performance using cross-validation and a range of appropriate evaluation metrics, such as accuracy, precision, recall, and F1-score.&lt;br /&gt;
&lt;br /&gt;
5.	Generalization and Application: Assess the model&amp;#039;s generalization abilities by testing it on different datasets, including data from various astronomical surveys. Evaluate its applicability in real-world astronomical observations and discuss potential applications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Methodology:&lt;br /&gt;
&lt;br /&gt;
1.	Data Preparation: Collect lensed galaxy images from the Gravitational Lens Finding Challenge and the astronomical survey, and preprocess the data.&lt;br /&gt;
&lt;br /&gt;
2.	Model Design: Explore various deep learning architectures and techniques to develop a robust model tailored to segment lensed galaxy images and distinguish source galaxies from gravitational lenses.&lt;br /&gt;
&lt;br /&gt;
3.	Training and Validation: Train the model on the prepared dataset, optimizing hyperparameters and monitoring performance throughout the training process. Employ cross-validation to ensure robustness and apply the model to real data.&lt;br /&gt;
&lt;br /&gt;
4.	Generalization Testing: Evaluate the model&amp;#039;s ability to generalize to different datasets, including those from the survey, to assess its practicality for broader astronomical research.&lt;br /&gt;
&lt;br /&gt;
5.	Comparative Analysis: Compare the proposed model&amp;#039;s performance with existing methods, highlighting its strengths, weaknesses, and potential contributions to the field.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Expected Outcomes: This master&amp;#039;s thesis aims to advance the field of astrophysics by providing a novel machine learning approach for accurate segmentation of lensed galaxy images, specifically focusing on separating source galaxies from gravitational lenses. The expected outcomes include:&lt;br /&gt;
&lt;br /&gt;
1.	A trained machine learning model capable of accurately segmenting lensed galaxy images.&lt;br /&gt;
&lt;br /&gt;
2.	Insights into the effectiveness of different deep learning architectures for this specific task.&lt;br /&gt;
&lt;br /&gt;
3.	An evaluation of the model&amp;#039;s generalization capabilities to diverse astronomical datasets.&lt;br /&gt;
&lt;br /&gt;
4.	Recommendations for the practical application of the model in gravitational lensing studies, enhancing our understanding of the distant universe.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Resources:&lt;br /&gt;
&lt;br /&gt;
https://github.com/LSST-strong-lensing/DESC-Lamp/tree/main&lt;br /&gt;
&lt;br /&gt;
https://www.lsst.org/sites/default/files/docs/sciencebook/SB_12.pdf&lt;br /&gt;
&lt;br /&gt;
https://sites.google.com/view/lsst-stronglensing/home&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5307</id>
		<title>Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5307"/>
		<updated>2023-10-12T14:09:40Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses&lt;br /&gt;
|Keywords=Semantic Segmentation, Astronomical data, classification&lt;br /&gt;
|TimeFrame=2023-2024&lt;br /&gt;
|Prerequisites=Good programming skills, previous experience with CNNs and deep learning, interest in astronomy data.&lt;br /&gt;
|Supervisor=Tiago Cortinhal, Idriss Gouigah, Eren Erdal Aksoy, Margherita Grespan, Hareesh Thuruthipilly&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Draft&lt;br /&gt;
}}&lt;br /&gt;
Abstract: Lensed galaxies serve as cosmic magnifying glasses, offering a unique window into the distant universe. Accurate segmentation of lensed galaxy images, separating the source galaxy from the gravitational lens, is crucial for extracting meaningful scientific insights. This master&amp;#039;s thesis aims to develop and train a machine learning model capable of segmenting lensed galaxy images, differentiating between the source galaxy and the gravitational lens. The dataset used for this research will comprise images from the Gravitational Lens Finding Challenge and data obtained from an astronomical survey. This thesis will be supervised in partnership with NCBJ (Poland). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Research Objectives:&lt;br /&gt;
&lt;br /&gt;
1.	Dataset Compilation: Preprocess a diverse dataset of lensed galaxy images, encompassing data from the Gravitational Lens Finding Challenge, an astronomical survey, and synthetic data.&lt;br /&gt;
&lt;br /&gt;
2.	Model Development: Investigate and implement state-of-the-art machine learning techniques, with a focus on convolutional neural networks (CNNs) and deep learning architectures, to design a model capable of accurately segmenting lensed galaxy images.&lt;br /&gt;
&lt;br /&gt;
3.	Source-Lens Separation: Develop a novel model architecture that can effectively differentiate between the source galaxy and the gravitational lens in lensed galaxy images, considering the unique challenges posed by gravitational lensing.&lt;br /&gt;
&lt;br /&gt;
4.	Training and Validation: Train the model on the curated dataset, employing data augmentation and regularization techniques. Validate the model&amp;#039;s performance using cross-validation and a range of appropriate evaluation metrics, such as accuracy, precision, recall, and F1-score.&lt;br /&gt;
&lt;br /&gt;
5.	Generalization and Application: Assess the model&amp;#039;s generalization abilities by testing it on different datasets, including data from various astronomical surveys. Evaluate its applicability in real-world astronomical observations and discuss potential applications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Methodology:&lt;br /&gt;
&lt;br /&gt;
1.	Data Preparation: Collect lensed galaxy images from the Gravitational Lens Finding Challenge and the astronomical survey, and preprocess the data.&lt;br /&gt;
&lt;br /&gt;
2.	Model Design: Explore various deep learning architectures and techniques to develop a robust model tailored to segment lensed galaxy images and distinguish source galaxies from gravitational lenses.&lt;br /&gt;
&lt;br /&gt;
3.	Training and Validation: Train the model on the prepared dataset, optimizing hyperparameters and monitoring performance throughout the training process. Employ cross-validation to ensure robustness and apply the model to real data.&lt;br /&gt;
&lt;br /&gt;
4.	Generalization Testing: Evaluate the model&amp;#039;s ability to generalize to different datasets, including those from the survey, to assess its practicality for broader astronomical research.&lt;br /&gt;
&lt;br /&gt;
5.	Comparative Analysis: Compare the proposed model&amp;#039;s performance with existing methods, highlighting its strengths, weaknesses, and potential contributions to the field.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Expected Outcomes: This master&amp;#039;s thesis aims to advance the field of astrophysics by providing a novel machine learning approach for accurate segmentation of lensed galaxy images, specifically focusing on separating source galaxies from gravitational lenses. The expected outcomes include:&lt;br /&gt;
&lt;br /&gt;
1.	A trained machine learning model capable of accurately segmenting lensed galaxy images.&lt;br /&gt;
&lt;br /&gt;
2.	Insights into the effectiveness of different deep learning architectures for this specific task.&lt;br /&gt;
&lt;br /&gt;
3.	An evaluation of the model&amp;#039;s generalization capabilities to diverse astronomical datasets.&lt;br /&gt;
&lt;br /&gt;
4.	Recommendations for the practical application of the model in gravitational lensing studies, enhancing our understanding of the distant universe.&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5306</id>
		<title>Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5306"/>
		<updated>2023-10-12T14:09:27Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses&lt;br /&gt;
|Keywords=Semantic Segmentation, Astronomical data, classification&lt;br /&gt;
|TimeFrame=2023-2024&lt;br /&gt;
|Prerequisites=Good programming skills, previous experience with CNNs and deep learning, interest in astronomy data.&lt;br /&gt;
|Supervisor=Tiago Cortinhal, Idriss Gouigah, Eren Erdal Aksoy, Margherita Grespan, Hareesh Thuruthipilly&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Draft&lt;br /&gt;
}}&lt;br /&gt;
Abstract: Lensed galaxies serve as cosmic magnifying glasses, offering a unique window into the distant universe. Accurate segmentation of lensed galaxy images, separating the source galaxy from the gravitational lens, is crucial for extracting meaningful scientific insights. This master&amp;#039;s thesis aims to develop and train a machine learning model capable of segmenting lensed galaxy images, differentiating between the source galaxy and the gravitational lens. The dataset used for this research will comprise images from the Gravitational Lens Finding Challenge and data obtained from an astronomical survey. This thesis will be supervised in partnership with NCBJ (Poland). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Research Objectives:&lt;br /&gt;
&lt;br /&gt;
1.	Dataset Compilation: Preprocess a diverse dataset of lensed galaxy images, encompassing data from the Gravitational Lens Finding Challenge, an astronomical survey, and synthetic data.&lt;br /&gt;
&lt;br /&gt;
2.	Model Development: Investigate and implement state-of-the-art machine learning techniques, with a focus on convolutional neural networks (CNNs) and deep learning architectures, to design a model capable of accurately segmenting lensed galaxy images.&lt;br /&gt;
&lt;br /&gt;
3.	Source-Lens Separation: Develop a novel model architecture that can effectively differentiate between the source galaxy and the gravitational lens in lensed galaxy images, considering the unique challenges posed by gravitational lensing.&lt;br /&gt;
&lt;br /&gt;
4.	Training and Validation: Train the model on the curated dataset, employing data augmentation and regularization techniques. Validate the model&amp;#039;s performance using cross-validation and a range of appropriate evaluation metrics, such as accuracy, precision, recall, and F1-score.&lt;br /&gt;
&lt;br /&gt;
5.	Generalization and Application: Assess the model&amp;#039;s generalization abilities by testing it on different datasets, including data from various astronomical surveys. Evaluate its applicability in real-world astronomical observations and discuss potential applications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Methodology:&lt;br /&gt;
&lt;br /&gt;
1.	Data Preparation: Collect lensed galaxy images from the Gravitational Lens Finding Challenge and the astronomical survey, and preprocess the data.&lt;br /&gt;
&lt;br /&gt;
2.	Model Design: Explore various deep learning architectures and techniques to develop a robust model tailored to segment lensed galaxy images and distinguish source galaxies from gravitational lenses.&lt;br /&gt;
&lt;br /&gt;
3.	Training and Validation: Train the model on the prepared dataset, optimizing hyperparameters and monitoring performance throughout the training process. Employ cross-validation to ensure robustness and apply the model to real data.&lt;br /&gt;
&lt;br /&gt;
4.	Generalization Testing: Evaluate the model&amp;#039;s ability to generalize to different datasets, including those from the survey, to assess its practicality for broader astronomical research.&lt;br /&gt;
&lt;br /&gt;
5.	Comparative Analysis: Compare the proposed model&amp;#039;s performance with existing methods, highlighting its strengths, weaknesses, and potential contributions to the field.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Expected Outcomes: This master&amp;#039;s thesis aims to advance the field of astrophysics by providing a novel machine learning approach for accurate segmentation of lensed galaxy images, specifically focusing on separating source galaxies from gravitational lenses. The expected outcomes include:&lt;br /&gt;
&lt;br /&gt;
1.	A trained machine learning model capable of accurately segmenting lensed galaxy images.&lt;br /&gt;
&lt;br /&gt;
2.	Insights into the effectiveness of different deep learning architectures for this specific task.&lt;br /&gt;
&lt;br /&gt;
3.	An evaluation of the model&amp;#039;s generalization capabilities to diverse astronomical datasets.&lt;br /&gt;
&lt;br /&gt;
4.	Recommendations for the practical application of the model in gravitational lensing studies, enhancing our understanding of the distant universe.&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5305</id>
		<title>Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5305"/>
		<updated>2023-10-12T13:56:32Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses&lt;br /&gt;
|Keywords=Semantic Segmentation, Astronomical data, classification&lt;br /&gt;
|Supervisor=Tiago Cortinhal, Idriss Gouigah, Eren Erdal Aksoy, Margherita Grespan, Hareesh Thuruthipilly&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Draft&lt;br /&gt;
}}&lt;br /&gt;
Abstract: Lensed galaxies serve as cosmic magnifying glasses, offering a unique window into the distant universe. Accurate segmentation of lensed galaxy images, separating the source galaxy from the gravitational lens, is crucial for extracting meaningful scientific insights. This master&amp;#039;s thesis aims to develop and train a machine learning model capable of segmenting lensed galaxy images, differentiating between the source galaxy and the gravitational lens. The dataset used for this research will comprise images from the Gravitational Lens Finding Challenge and data obtained from an astronomical survey. This thesis will be supervised in partnership with NCBJ (Poland). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Research Objectives:&lt;br /&gt;
&lt;br /&gt;
1.      Dataset Compilation: Preprocess a diverse dataset of lensed galaxy images, encompassing data from the Gravitational Lens Finding Challenge, an astronomical survey, and synthetic data.&lt;br /&gt;
&lt;br /&gt;
2.	Model Development: Investigate and implement state-of-the-art machine learning techniques, with a focus on convolutional neural networks (CNNs) and deep learning architectures, to design a model capable of accurately segmenting lensed galaxy images.&lt;br /&gt;
&lt;br /&gt;
3.	Source-Lens Separation: Develop a novel model architecture that can effectively differentiate between the source galaxy and the gravitational lens in lensed galaxy images, considering the unique challenges posed by gravitational lensing.&lt;br /&gt;
&lt;br /&gt;
4.	Training and Validation: Train the model on the curated dataset, employing data augmentation and regularization techniques. Validate the model&amp;#039;s performance using cross-validation and a range of appropriate evaluation metrics, such as accuracy, precision, recall, and F1-score.&lt;br /&gt;
&lt;br /&gt;
5.	Generalization and Application: Assess the model&amp;#039;s generalization abilities by testing it on different datasets, including data from various astronomical surveys. Evaluate its applicability in real-world astronomical observations and discuss potential applications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Methodology:&lt;br /&gt;
&lt;br /&gt;
1.	Data Preparation: Collect lensed galaxy images from the Gravitational Lens Finding Challenge and the astronomical survey, and preprocess the data.&lt;br /&gt;
&lt;br /&gt;
2.	Model Design: Explore various deep learning architectures and techniques to develop a robust model tailored to segment lensed galaxy images and distinguish source galaxies from gravitational lenses.&lt;br /&gt;
&lt;br /&gt;
3.	Training and Validation: Train the model on the prepared dataset, optimizing hyperparameters and monitoring performance throughout the training process. Employ cross-validation to ensure robustness and apply the model to real data.&lt;br /&gt;
&lt;br /&gt;
4.	Generalization Testing: Evaluate the model&amp;#039;s ability to generalize to different datasets, including those from the survey, to assess its practicality for broader astronomical research.&lt;br /&gt;
&lt;br /&gt;
5.	Comparative Analysis: Compare the proposed model&amp;#039;s performance with existing methods, highlighting its strengths, weaknesses, and potential contributions to the field.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Expected Outcomes: This master&amp;#039;s thesis aims to advance the field of astrophysics by providing a novel machine learning approach for accurate segmentation of lensed galaxy images, specifically focusing on separating source galaxies from gravitational lenses. The expected outcomes include:&lt;br /&gt;
&lt;br /&gt;
1.	A trained machine learning model capable of accurately segmenting lensed galaxy images.&lt;br /&gt;
&lt;br /&gt;
2.	Insights into the effectiveness of different deep learning architectures for this specific task.&lt;br /&gt;
&lt;br /&gt;
3.	An evaluation of the model&amp;#039;s generalization capabilities to diverse astronomical datasets.&lt;br /&gt;
&lt;br /&gt;
4.	Recommendations for the practical application of the model in gravitational lensing studies, enhancing our understanding of the distant universe.&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5304</id>
		<title>Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5304"/>
		<updated>2023-10-12T13:56:04Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses&lt;br /&gt;
|Keywords=Semantic Segmentation, Astronomical data, classification&lt;br /&gt;
|Supervisor=Tiago Cortinhal, Idriss Gouigah, Eren Erdal Aksoy, Margherita Grespan, Hareesh Thuruthipilly&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Draft&lt;br /&gt;
}}&lt;br /&gt;
Abstract: Lensed galaxies serve as cosmic magnifying glasses, offering a unique window into the distant universe. Accurate segmentation of lensed galaxy images, separating the source galaxy from the gravitational lens, is crucial for extracting meaningful scientific insights. This master&amp;#039;s thesis aims to develop and train a machine learning model capable of segmenting lensed galaxy images, differentiating between the source galaxy and the gravitational lens. The dataset used for this research will comprise images from the Gravitational Lens Finding Challenge and data obtained from an astronomical survey. This thesis will be supervised in partnership with NCBJ (Poland). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Research Objectives:&lt;br /&gt;
&lt;br /&gt;
1.      Dataset Compilation: Preprocess a diverse dataset of lensed galaxy images, encompassing data from the Gravitational Lens Finding Challenge, an astronomical survey, and synthetic data.&lt;br /&gt;
&lt;br /&gt;
2.	Model Development: Investigate and implement state-of-the-art machine learning techniques, with a focus on convolutional neural networks (CNNs) and deep learning architectures, to design a model capable of accurately segmenting lensed galaxy images.&lt;br /&gt;
&lt;br /&gt;
3.	Source-Lens Separation: Develop a novel model architecture that can effectively differentiate between the source galaxy and the gravitational lens in lensed galaxy images, considering the unique challenges posed by gravitational lensing.&lt;br /&gt;
&lt;br /&gt;
4.	Training and Validation: Train the model on the curated dataset, employing data augmentation and regularization techniques. Validate the model&amp;#039;s performance using cross-validation and a range of appropriate evaluation metrics, such as accuracy, precision, recall, and F1-score.&lt;br /&gt;
&lt;br /&gt;
5.	Generalization and Application: Assess the model&amp;#039;s generalization abilities by testing it on different datasets, including data from various astronomical surveys. Evaluate its applicability in real-world astronomical observations and discuss potential applications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Methodology:&lt;br /&gt;
&lt;br /&gt;
1.	Data Preparation: Collect lensed galaxy images from the Gravitational Lens Finding Challenge and the astronomical survey, and preprocess the data.&lt;br /&gt;
&lt;br /&gt;
2.	Model Design: Explore various deep learning architectures and techniques to develop a robust model tailored to segment lensed galaxy images and distinguish source galaxies from gravitational lenses.&lt;br /&gt;
&lt;br /&gt;
3.	Training and Validation: Train the model on the prepared dataset, optimizing hyperparameters and monitoring performance throughout the training process. Employ cross-validation to ensure robustness and apply the model to real data.&lt;br /&gt;
&lt;br /&gt;
4.	Generalization Testing: Evaluate the model&amp;#039;s ability to generalize to different datasets, including those from the survey, to assess its practicality for broader astronomical research.&lt;br /&gt;
&lt;br /&gt;
5.	Comparative Analysis: Compare the proposed model&amp;#039;s performance with existing methods, highlighting its strengths, weaknesses, and potential contributions to the field.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Expected Outcomes: This master&amp;#039;s thesis aims to advance the field of astrophysics by providing a novel machine learning approach for accurate segmentation of lensed galaxy images, specifically focusing on separating source galaxies from gravitational lenses. The expected outcomes include:&lt;br /&gt;
&lt;br /&gt;
1.	A trained machine learning model capable of accurately segmenting lensed galaxy images.&lt;br /&gt;
&lt;br /&gt;
2.	Insights into the effectiveness of different deep learning architectures for this specific task.&lt;br /&gt;
&lt;br /&gt;
3.	An evaluation of the model&amp;#039;s generalization capabilities to diverse astronomical datasets.&lt;br /&gt;
&lt;br /&gt;
4.	Recommendations for the practical application of the model in gravitational lensing studies, enhancing our understanding of the distant universe.&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5303</id>
		<title>Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5303"/>
		<updated>2023-10-12T13:55:27Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses&lt;br /&gt;
|Keywords=Semantic Segmentation, Astronomical data, classification&lt;br /&gt;
|Supervisor=Tiago Cortinhal, Idriss Gouigah, Eren Erdal Aksoy, Margherita Grespan, Hareesh Thuruthipilly&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Draft&lt;br /&gt;
}}&lt;br /&gt;
Abstract: Lensed galaxies serve as cosmic magnifying glasses, offering a unique window into the distant universe. Accurate segmentation of lensed galaxy images, separating the source galaxy from the gravitational lens, is crucial for extracting meaningful scientific insights. This master&amp;#039;s thesis aims to develop and train a machine learning model capable of segmenting lensed galaxy images, differentiating between the source galaxy and the gravitational lens. The dataset used for this research will comprise images from the Gravitational Lens Finding Challenge and data obtained from an astronomical survey. This thesis will be supervised in partnership with NCBJ (Poland). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Research Objectives:&lt;br /&gt;
1.      Dataset Compilation: Preprocess a diverse dataset of lensed galaxy images, encompassing data from the Gravitational Lens Finding Challenge, an astronomical survey, and synthetic data.&lt;br /&gt;
&lt;br /&gt;
2.	Model Development: Investigate and implement state-of-the-art machine learning techniques, with a focus on convolutional neural networks (CNNs) and deep learning architectures, to design a model capable of accurately segmenting lensed galaxy images.&lt;br /&gt;
&lt;br /&gt;
3.	Source-Lens Separation: Develop a novel model architecture that can effectively differentiate between the source galaxy and the gravitational lens in lensed galaxy images, considering the unique challenges posed by gravitational lensing.&lt;br /&gt;
&lt;br /&gt;
4.	Training and Validation: Train the model on the curated dataset, employing data augmentation and regularization techniques. Validate the model&amp;#039;s performance using cross-validation and a range of appropriate evaluation metrics, such as accuracy, precision, recall, and F1-score.&lt;br /&gt;
&lt;br /&gt;
5.	Generalization and Application: Assess the model&amp;#039;s generalization abilities by testing it on different datasets, including data from various astronomical surveys. Evaluate its applicability in real-world astronomical observations and discuss potential applications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Methodology:&lt;br /&gt;
1.	Data Preparation: Collect lensed galaxy images from the Gravitational Lens Finding Challenge and the astronomical survey, and preprocess the data.&lt;br /&gt;
&lt;br /&gt;
2.	Model Design: Explore various deep learning architectures and techniques to develop a robust model tailored to segment lensed galaxy images and distinguish source galaxies from gravitational lenses.&lt;br /&gt;
&lt;br /&gt;
3.	Training and Validation: Train the model on the prepared dataset, optimizing hyperparameters and monitoring performance throughout the training process. Employ cross-validation to ensure robustness and apply the model to real data.&lt;br /&gt;
&lt;br /&gt;
4.	Generalization Testing: Evaluate the model&amp;#039;s ability to generalize to different datasets, including those from the survey, to assess its practicality for broader astronomical research.&lt;br /&gt;
&lt;br /&gt;
5.	Comparative Analysis: Compare the proposed model&amp;#039;s performance with existing methods, highlighting its strengths, weaknesses, and potential contributions to the field.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Expected Outcomes: This master&amp;#039;s thesis aims to advance the field of astrophysics by providing a novel machine learning approach for accurate segmentation of lensed galaxy images, specifically focusing on separating source galaxies from gravitational lenses. The expected outcomes include:&lt;br /&gt;
&lt;br /&gt;
1.	A trained machine learning model capable of accurately segmenting lensed galaxy images.&lt;br /&gt;
&lt;br /&gt;
2.	Insights into the effectiveness of different deep learning architectures for this specific task.&lt;br /&gt;
&lt;br /&gt;
3.	An evaluation of the model&amp;#039;s generalization capabilities to diverse astronomical datasets.&lt;br /&gt;
&lt;br /&gt;
4.	Recommendations for the practical application of the model in gravitational lensing studies, enhancing our understanding of the distant universe.&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5302</id>
		<title>Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Machine_Learning_for_Segmentation_of_Lensed_Galaxies:_Distinguishing_Source_Galaxies_from_Gravitational_Lenses&amp;diff=5302"/>
		<updated>2023-10-12T13:48:53Z</updated>

		<summary type="html">&lt;p&gt;Tiago: Created page with &amp;quot;{{StudentProjectTemplate |Summary=Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses |Keywords=Semanitc Segmentatio...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Machine Learning for Segmentation of Lensed Galaxies: Distinguishing Source Galaxies from Gravitational Lenses&lt;br /&gt;
|Keywords=Semantic Segmentation, Astronomical data, classification&lt;br /&gt;
|Supervisor=Tiago Cortinhal, Idriss Gouigah, Eren Erdal Aksoy, &lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Draft&lt;br /&gt;
}}&lt;br /&gt;
Abstract: Lensed galaxies serve as cosmic magnifying glasses, offering a unique window into the distant universe. Accurate segmentation of lensed galaxy images, separating the source galaxy from the gravitational lens, is crucial for extracting meaningful scientific insights. This master&amp;#039;s thesis aims to develop and train a machine learning model capable of segmenting lensed galaxy images, differentiating between the source galaxy and the gravitational lens. The dataset used for this research will comprise images from the Gravitational Lens Finding Challenge and data obtained from an astronomical survey.&lt;br /&gt;
Research Objectives:&lt;br /&gt;
1.	Dataset Compilation: Preprocess a diverse dataset of lensed galaxy images, encompassing data from the Gravitational Lens Finding Challenge, an astronomical survey, and synthetic data.&lt;br /&gt;
2.	Model Development: Investigate and implement state-of-the-art machine learning techniques, with a focus on convolutional neural networks (CNNs) and deep learning architectures, to design a model capable of accurately segmenting lensed galaxy images.&lt;br /&gt;
3.	Source-Lens Separation: Develop a novel model architecture that can effectively differentiate between the source galaxy and the gravitational lens in lensed galaxy images, considering the unique challenges posed by gravitational lensing.&lt;br /&gt;
4.	Training and Validation: Train the model on the curated dataset, employing data augmentation and regularization techniques. Validate the model&amp;#039;s performance using cross-validation and a range of appropriate evaluation metrics, such as accuracy, precision, recall, and F1-score.&lt;br /&gt;
5.	Generalization and Application: Assess the model&amp;#039;s generalization abilities by testing it on different datasets, including data from various astronomical surveys. Evaluate its applicability in real-world astronomical observations and discuss potential applications.&lt;br /&gt;
Methodology:&lt;br /&gt;
1.	Data Preparation: Collect lensed galaxy images from the Gravitational Lens Finding Challenge and the astronomical survey, and preprocess the data.&lt;br /&gt;
2.	Model Design: Explore various deep learning architectures and techniques to develop a robust model tailored to segment lensed galaxy images and distinguish source galaxies from gravitational lenses.&lt;br /&gt;
3.	Training and Validation: Train the model on the prepared dataset, optimizing hyperparameters and monitoring performance throughout the training process. Employ cross-validation to ensure robustness and apply the model to real data.&lt;br /&gt;
4.	Generalization Testing: Evaluate the model&amp;#039;s ability to generalize to different datasets, including those from the survey, to assess its practicality for broader astronomical research.&lt;br /&gt;
5.	Comparative Analysis: Compare the proposed model&amp;#039;s performance with existing methods, highlighting its strengths, weaknesses, and potential contributions to the field.&lt;br /&gt;
Expected Outcomes: This master&amp;#039;s thesis aims to advance the field of astrophysics by providing a novel machine learning approach for accurate segmentation of lensed galaxy images, specifically focusing on separating source galaxies from gravitational lenses. The expected outcomes include:&lt;br /&gt;
1.	A trained machine learning model capable of accurately segmenting lensed galaxy images.&lt;br /&gt;
2.	Insights into the effectiveness of different deep learning architectures for this specific task.&lt;br /&gt;
3.	An evaluation of the model&amp;#039;s generalization capabilities to diverse astronomical datasets.&lt;br /&gt;
4.	Recommendations for the practical application of the model in gravitational lensing studies, enhancing our understanding of the distant universe.&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Tiago_Cortinhal&amp;diff=4961</id>
		<title>Tiago Cortinhal</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Tiago_Cortinhal&amp;diff=4961"/>
		<updated>2021-10-11T10:35:48Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Cortinhal&lt;br /&gt;
|Given Name=Tiago&lt;br /&gt;
|Title=Msc&lt;br /&gt;
|Cell Phone=+46729773776&lt;br /&gt;
|Position=PhD Stud&lt;br /&gt;
|Email=tiago.cortinhal@hh.se&lt;br /&gt;
|Image=Tiago.jpg&lt;br /&gt;
|Office=E522&lt;br /&gt;
|url=https://github.com/TiagoCortinhal/&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjects&lt;br /&gt;
|project=Sharpen&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Computer Vision&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Generative Models&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Autonomous Vehicles&lt;br /&gt;
}}&lt;br /&gt;
[[Category:Staff]]&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
&amp;lt;!-- __NOTOC__ --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
{{InsertProjects}}&lt;br /&gt;
&amp;lt;!-- {{PublicationsList}} --&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Tiago_Cortinhal&amp;diff=4960</id>
		<title>Tiago Cortinhal</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Tiago_Cortinhal&amp;diff=4960"/>
		<updated>2021-10-11T10:34:55Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Cortinhal&lt;br /&gt;
|Given Name=Tiago&lt;br /&gt;
|Title=Msc&lt;br /&gt;
|Cell Phone=+46729773776&lt;br /&gt;
|Position=PhD Stud&lt;br /&gt;
|Email=tiago.cortinhal@hh.se&lt;br /&gt;
|Image=Tiago.jpg&lt;br /&gt;
|url=https://github.com/TiagoCortinhal/&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjects&lt;br /&gt;
|project=Sharpen&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Computer Vision&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Generative Models&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Autonomous Vehicles&lt;br /&gt;
}}&lt;br /&gt;
[[Category:Staff]]&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
&amp;lt;!-- __NOTOC__ --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
{{InsertProjects}}&lt;br /&gt;
&amp;lt;!-- {{PublicationsList}} --&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Tiago_Cortinhal&amp;diff=4914</id>
		<title>Tiago Cortinhal</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Tiago_Cortinhal&amp;diff=4914"/>
		<updated>2021-09-29T09:17:10Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Cortinhal&lt;br /&gt;
|Given Name=Tiago&lt;br /&gt;
|Title=Msc&lt;br /&gt;
|Cell Phone=+46729773776&lt;br /&gt;
|Position=PhD. Candidate&lt;br /&gt;
|Email=tiago.cortinhal@hh.se&lt;br /&gt;
|Image=Tiago.jpg&lt;br /&gt;
|url=https://github.com/TiagoCortinhal/&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjects&lt;br /&gt;
|project=Sharpen&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Computer Vision&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Generative Models&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Autonomous Vehicles&lt;br /&gt;
}}&lt;br /&gt;
[[Category:Staff]]&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
&amp;lt;!-- __NOTOC__ --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
{{InsertProjects}}&lt;br /&gt;
&amp;lt;!-- {{PublicationsList}} --&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=File:Tiago.jpg&amp;diff=4913</id>
		<title>File:Tiago.jpg</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=File:Tiago.jpg&amp;diff=4913"/>
		<updated>2021-09-29T09:16:19Z</updated>

		<summary type="html">&lt;p&gt;Tiago: Tiago uploaded a new version of &amp;amp;quot;File:Tiago.jpg&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Tiago_Cortinhal&amp;diff=4912</id>
		<title>Tiago Cortinhal</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Tiago_Cortinhal&amp;diff=4912"/>
		<updated>2021-09-29T09:13:37Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Cortinhal&lt;br /&gt;
|Given Name=Tiago&lt;br /&gt;
|Title=PhD. Candidate, Msc&lt;br /&gt;
|Cell Phone=+46729773776&lt;br /&gt;
|Email=tiago.cortinhal@hh.se&lt;br /&gt;
|Image=Tiago.jpg&lt;br /&gt;
|url=https://github.com/TiagoCortinhal/&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjects&lt;br /&gt;
|project=Sharpen&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Computer Vision&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Generative Models&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Autonomous Vehicles&lt;br /&gt;
}}&lt;br /&gt;
[[Category:Staff]]&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
&amp;lt;!-- __NOTOC__ --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
{{InsertProjects}}&lt;br /&gt;
&amp;lt;!-- {{PublicationsList}} --&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=File:Tiago.jpg&amp;diff=4911</id>
		<title>File:Tiago.jpg</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=File:Tiago.jpg&amp;diff=4911"/>
		<updated>2021-09-29T09:12:48Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Tiago_Cortinhal&amp;diff=4910</id>
		<title>Tiago Cortinhal</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Tiago_Cortinhal&amp;diff=4910"/>
		<updated>2021-09-29T09:12:28Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Cortinhal&lt;br /&gt;
|Given Name=Tiago&lt;br /&gt;
|Title=PhD. Candidate, Msc&lt;br /&gt;
|Cell Phone=+46729773776&lt;br /&gt;
|Email=tiago.cortinhal@hh.se&lt;br /&gt;
|Image=Tiago&lt;br /&gt;
|url=https://github.com/TiagoCortinhal/&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjects&lt;br /&gt;
|project=Sharpen&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Computer Vision&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Generative Models&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Autonomous Vehicles&lt;br /&gt;
}}&lt;br /&gt;
[[Category:Staff]]&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
&amp;lt;!-- __NOTOC__ --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
{{InsertProjects}}&lt;br /&gt;
&amp;lt;!-- {{PublicationsList}} --&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Tiago_Cortinhal&amp;diff=4909</id>
		<title>Tiago Cortinhal</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Tiago_Cortinhal&amp;diff=4909"/>
		<updated>2021-09-29T09:08:58Z</updated>

		<summary type="html">&lt;p&gt;Tiago: Created page with &amp;quot;{{Person |Family Name=Cortinhal |Given Name=Tiago |Title=PhD. Candidate, Msc |Phone=+4672-977 37 76 }} Category:Staff  &amp;lt;!--Remove or add comments --&amp;gt;  &amp;lt;!-- __NOTOC__ --&amp;gt;  ...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Cortinhal&lt;br /&gt;
|Given Name=Tiago&lt;br /&gt;
|Title=PhD. Candidate, Msc&lt;br /&gt;
|Phone=+4672-977 37 76&lt;br /&gt;
}}&lt;br /&gt;
[[Category:Staff]]&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
&amp;lt;!-- __NOTOC__ --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
{{InsertProjects}}&lt;br /&gt;
&amp;lt;!-- {{PublicationsList}} --&amp;gt;&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Zero-Shot_Learning_for_Semantic_Segmentation&amp;diff=4681</id>
		<title>Zero-Shot Learning for Semantic Segmentation</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Zero-Shot_Learning_for_Semantic_Segmentation&amp;diff=4681"/>
		<updated>2020-10-09T08:40:01Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Zero-Shot Learning for Semantic Segmentation&lt;br /&gt;
|Keywords=Deep Learning, Computer Vision, Semantic Segmentation&lt;br /&gt;
|References=https://arxiv.org/pdf/1707.00600.pdf&lt;br /&gt;
&lt;br /&gt;
https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41473.pdf&lt;br /&gt;
&lt;br /&gt;
https://arxiv.org/pdf/1906.00817.pdf&lt;br /&gt;
&lt;br /&gt;
https://openaccess.thecvf.com/content_ICCVW_2019/papers/MDALC/Kato_Zero-Shot_Semantic_Segmentation_via_Variational_Mapping_ICCVW_2019_paper.pdf&lt;br /&gt;
&lt;br /&gt;
https://github.com/daooshee/Few-Shot-Learning&lt;br /&gt;
|Prerequisites=Excellent Programming Skills; Excellent Grasp of Deep Learning and Pytorch&lt;br /&gt;
|Supervisor=Tiago Cortinhal, Eren Erdal Aksoy,&lt;br /&gt;
|Author=Tiago Cortinhal&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Currently we are working on a GAN approach to generate segmentation maps from one sensor modality to another (from/to LiDAR and RGB).&lt;br /&gt;
&lt;br /&gt;
The currently available datasets (SemanticKITTI and Cityscapes) are not perfectly aligned; to overcome this problem, techniques like Zero-Shot Learning could be applied.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zero-Shot Learning can be seen as a type of Domain Adaptation, where a given set of classes is used for training and we wish to segment another set of unseen classes as well.&lt;br /&gt;
&lt;br /&gt;
To achieve this, the classes need to be embedded, so that the pre-trained model can be adapted to unseen classes by combining the embeddings of the learnt classes.&lt;br /&gt;
&lt;br /&gt;
The project consists of taking the pre-trained GAN generator model and applying Zero-Shot Learning. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  Research Questions:&lt;br /&gt;
    Besides learning unseen classes, can Zero-Shot Learning also improve the overall IoU (Intersection over Union)?&lt;br /&gt;
    Should Zero-Shot or N-Shot be applied in this scenario (N-Shot: N examples are shown during training)?&lt;br /&gt;
    Compared with domain transfer/fine-tuning techniques, which could provide faster/better results?&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Zero-Shot_Learning_for_Semantic_Segmentation&amp;diff=4680</id>
		<title>Zero-Shot Learning for Semantic Segmentation</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Zero-Shot_Learning_for_Semantic_Segmentation&amp;diff=4680"/>
		<updated>2020-10-08T18:09:10Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Zero-Shot Learning for Semantic Segmentation&lt;br /&gt;
|Keywords=Deep Learning, Computer Vision, Semantic Segmentation&lt;br /&gt;
|References=https://arxiv.org/pdf/1707.00600.pdf&lt;br /&gt;
&lt;br /&gt;
https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41473.pdf&lt;br /&gt;
&lt;br /&gt;
https://arxiv.org/pdf/1906.00817.pdf&lt;br /&gt;
&lt;br /&gt;
https://openaccess.thecvf.com/content_ICCVW_2019/papers/MDALC/Kato_Zero-Shot_Semantic_Segmentation_via_Variational_Mapping_ICCVW_2019_paper.pdf&lt;br /&gt;
&lt;br /&gt;
https://github.com/daooshee/Few-Shot-Learning&lt;br /&gt;
|Prerequisites=Excellent Programming Skills; Excellent Grasp of Deep Learning and Pytorch&lt;br /&gt;
|Supervisor=Tiago Cortinhal, Eren Erdal Aksoy,&lt;br /&gt;
|Author=Tiago Cortinhal&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Internal Draft&lt;br /&gt;
}}&lt;br /&gt;
Currently we are working on a GAN approach to generate segmentation maps from one sensor modality to another (from/to LiDAR and RGB).&lt;br /&gt;
&lt;br /&gt;
The currently available datasets (SemanticKITTI and Cityscapes) are not perfectly aligned; to overcome this problem, techniques like Zero-Shot Learning could be applied.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zero-Shot Learning can be seen as a type of Domain Adaptation: one set of classes is used for training, and we wish to obtain segmentations for another set of unseen classes as well.&lt;br /&gt;
&lt;br /&gt;
To perform this, some form of class embedding is needed, so that the pre-trained model can be adapted to unseen classes by combining the embeddings of the learnt classes.&lt;br /&gt;
&lt;br /&gt;
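As a minimal, hypothetical sketch of the embedding idea above: pixel features are projected into a shared semantic embedding space, and an unseen class is assigned by similarity to its class embedding. The class names, the 300-d dimension, and the random stand-in embeddings below are illustrative assumptions, not part of the project.&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed 300-d class embeddings; "truck" plays the role of an unseen class.
class_embeddings = {
    "car": rng.normal(size=300),
    "road": rng.normal(size=300),
    "truck": rng.normal(size=300),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_class(pixel_feature, embeddings):
    # Assign the class whose embedding is most similar to the projected
    # pixel feature (the learned projection network is omitted here).
    scores = {name: cosine(pixel_feature, emb) for name, emb in embeddings.items()}
    return max(scores, key=scores.get)

# A feature landing near the unseen "truck" embedding is mapped to "truck".
feat = class_embeddings["truck"] + 0.1 * rng.normal(size=300)
print(predict_class(feat, class_embeddings))  # prints "truck"
```

In the actual project, the projection from pixel features into this space would be learned on the seen classes; here it is omitted entirely.&lt;br /&gt;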
The project consists of taking the pre-trained GAN generator model and applying Zero-Shot Learning.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  Research Questions:&lt;br /&gt;
    Can Zero-Shot Learning, besides handling unseen classes, also improve the overall IoU (Intersection over Union)?&lt;br /&gt;
    Should Zero-Shot or N-Shot learning be applied in this scenario (N-Shot: N examples are shown during training)?&lt;br /&gt;
    Compared with domain transfer/fine-tuning techniques, which could provide faster/better results?&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Zero-Shot_Learning_for_Semantic_Segmentation&amp;diff=4679</id>
		<title>Zero-Shot Learning for Semantic Segmentation</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Zero-Shot_Learning_for_Semantic_Segmentation&amp;diff=4679"/>
		<updated>2020-10-08T18:07:31Z</updated>

		<summary type="html">&lt;p&gt;Tiago: Created page with &amp;quot;{{StudentProjectTemplate |Summary=Zero-Shot Learning for Semantic Segmentation |Keywords=Deep Learning, Computer Vision, Semantic Segmentation |References=https://arxiv.org/pd...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Zero-Shot Learning for Semantic Segmentation&lt;br /&gt;
|Keywords=Deep Learning, Computer Vision, Semantic Segmentation&lt;br /&gt;
|References=https://arxiv.org/pdf/1707.00600.pdf&lt;br /&gt;
https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41473.pdf&lt;br /&gt;
https://arxiv.org/pdf/1906.00817.pdf&lt;br /&gt;
https://openaccess.thecvf.com/content_ICCVW_2019/papers/MDALC/Kato_Zero-Shot_Semantic_Segmentation_via_Variational_Mapping_ICCVW_2019_paper.pdf&lt;br /&gt;
https://github.com/daooshee/Few-Shot-Learning&lt;br /&gt;
&lt;br /&gt;
|Prerequisites=Excellent Programming Skills; Excellent Grasp of Deep Learning and Pytorch&lt;br /&gt;
|Supervisor=Tiago Cortinhal, Eren Erdal Aksoy, &lt;br /&gt;
|Author=Tiago Cortinhal&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Internal Draft&lt;br /&gt;
}}&lt;br /&gt;
Currently we are working on a GAN approach to generate segmentation maps from one sensor modality to another (from/to LiDAR and RGB).&lt;br /&gt;
&lt;br /&gt;
The currently available datasets (SemanticKITTI and Cityscapes) are not perfectly aligned; to overcome this problem, techniques like Zero-Shot Learning could be applied.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zero-Shot Learning can be seen as a type of Domain Adaptation: one set of classes is used for training, and we wish to obtain segmentations for another set of unseen classes as well.&lt;br /&gt;
&lt;br /&gt;
To perform this, some form of class embedding is needed, so that the pre-trained model can be adapted to unseen classes by combining the embeddings of the learnt classes.&lt;br /&gt;
&lt;br /&gt;
The project consists of taking the pre-trained GAN generator model and applying Zero-Shot Learning.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  Research Questions:&lt;br /&gt;
    Can Zero-Shot Learning, besides handling unseen classes, also improve the overall IoU (Intersection over Union)?&lt;br /&gt;
    Should Zero-Shot or N-Shot learning be applied in this scenario (N-Shot: N examples are shown during training)?&lt;br /&gt;
    Compared with domain transfer/fine-tuning techniques, which could provide faster/better results?&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Generative_Approach_for_Multivariate_Signals&amp;diff=4663</id>
		<title>Generative Approach for Multivariate Signals</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Generative_Approach_for_Multivariate_Signals&amp;diff=4663"/>
		<updated>2020-10-07T10:48:07Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=The topic focuses on generative models (GAN) for CAN-bus data and investigating the representation learning capabilities of such techniques&lt;br /&gt;
|Keywords=GAN, CAN data, MAR&lt;br /&gt;
|TimeFrame=2020 Fall - 2021 Summer&lt;br /&gt;
|References=https://papers.nips.cc/paper/8789-time-series-generative-adversarial-networks.pdf&lt;br /&gt;
&lt;br /&gt;
https://arxiv.org/abs/1706.02633&lt;br /&gt;
&lt;br /&gt;
https://openreview.net/pdf?id=rJedV3R5tm&lt;br /&gt;
&lt;br /&gt;
https://www.aaai.org/Conferences/AAAI/2017/PreliminaryPapers/12-Yu-L-14344.pdf&lt;br /&gt;
&lt;br /&gt;
https://arxiv.org/pdf/1511.06434.pdf&lt;br /&gt;
|Prerequisites=Excellent Programming Skills&lt;br /&gt;
Excellent knowledge in Machine Learning and Neural Networks&lt;br /&gt;
|Supervisor=Kunru Chen, Tiago Cortinhal, Thorsteinn Rögnvaldsson,&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Controller Area Network (CAN) is a protocol used to control vehicles. The data is multidimensional and consists of control and sensor signals to and from different parts of the equipment. Since this data comes internally from the machine itself, it is stable and cheap to collect. Previous work has shown that CAN data can be used to build representations for machine activity recognition (MAR) for forklift trucks. However, those representations are limited to describing only the existing data, in both realism and diversity. Creating representations by training a vanilla autoencoder has disadvantages when trying to explore the entire space of CAN signals.&lt;br /&gt;
&lt;br /&gt;
Generative approaches have mostly been used on traditional types of data, like images, and have shown great capabilities to learn the underlying distribution as well as allowing us to sample new, unseen data points. This has produced impressive results, as we can see at https://thispersondoesnotexist.com, or in picture-to-picture translation and style transfer. This generative capability also allows us to perform arithmetic operations on the latent vector and see the underlying structure of each different “class” of outputs.&lt;br /&gt;
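As an illustration of the latent-vector arithmetic mentioned above, here is a minimal sketch with a stand-in linear "generator"; the dimensions, the idle/driving labels, and the function names are assumptions for illustration, not a trained model.&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 16))  # stand-in "generator" weights, not a trained GAN

def G(z):
    # Stand-in generator: maps a 16-d latent code to a 64-d signal window.
    return np.tanh(W @ z)

# Two latent codes, imagined to correspond to "idle" and "driving" activity.
z_idle = rng.normal(size=16)
z_drive = rng.normal(size=16)

# Linear interpolation in latent space yields a smooth path of generated
# samples between the two activities; addition/subtraction works similarly.
samples = [G((1 - t) * z_idle + t * z_drive) for t in np.linspace(0.0, 1.0, 5)]
```

With a real GAN trained on CAN windows, the same interpolation would trace plausible intermediate signal patterns, which is one way to probe the structure of the learned latent space.&lt;br /&gt;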
&lt;br /&gt;
The work done on other data modalities is still sparse, but interest in it is growing. In this thesis, the main focus is a very specific type of data that might bring all kinds of hardships and obstacles to overcome. Some of those hardships might come from the type of data we are trying to generate; this needs to be investigated, and finding solutions to these situations is a key aspect we will be looking for.&lt;br /&gt;
The students need to develop a GAN-based network to generate CAN data, to evaluate the quality of the generated data, and to use that data in a MAR task.&lt;br /&gt;
&lt;br /&gt;
   Research Questions:&lt;br /&gt;
       Can GANs generate realistic CAN data?&lt;br /&gt;
       Can GANs generate/predict the (near) future CAN signals? &lt;br /&gt;
       Is the latent space an informative representation about the CAN signals?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you want more information about this topic you can contact us at kunru.cheh@hh.se and tiago.cortinhal@hh.se or pass by our office at E522!&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Generative_Approach_for_Multivariate_Signals&amp;diff=4659</id>
		<title>Generative Approach for Multivariate Signals</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Generative_Approach_for_Multivariate_Signals&amp;diff=4659"/>
		<updated>2020-10-06T08:09:49Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=The topic focuses on generative models (GAN) for CAN-bus data and investigating the representation learning capabilities of such techniques&lt;br /&gt;
|Keywords=GAN, CAN data, MAR&lt;br /&gt;
|TimeFrame=2020 Fall - 2021 Summer&lt;br /&gt;
|References=https://papers.nips.cc/paper/8789-time-series-generative-adversarial-networks.pdf&lt;br /&gt;
&lt;br /&gt;
https://arxiv.org/abs/1706.02633&lt;br /&gt;
&lt;br /&gt;
https://openreview.net/pdf?id=rJedV3R5tm&lt;br /&gt;
&lt;br /&gt;
https://www.aaai.org/Conferences/AAAI/2017/PreliminaryPapers/12-Yu-L-14344.pdf&lt;br /&gt;
&lt;br /&gt;
https://arxiv.org/pdf/1511.06434.pdf&lt;br /&gt;
|Prerequisites=Excellent Programming Skills&lt;br /&gt;
Excellent knowledge in Machine Learning and Neural Networks&lt;br /&gt;
|Supervisor=Kunru Chen, Tiago Cortinhal, Thorsteinn Rögnvaldsson,&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Controller Area Network (CAN) is a protocol used to control vehicles. The data is multidimensional and consists of control and sensor signals to and from different parts of the equipment. Since this data comes internally from the machine itself, it is stable and cheap to collect. Previous work has shown that CAN data can be used to build representations for machine activity recognition (MAR) for forklift trucks. However, those representations are limited to describing only the existing data, in both realism and diversity. Creating representations by training a vanilla autoencoder has disadvantages when trying to explore the entire space of CAN signals.&lt;br /&gt;
&lt;br /&gt;
Generative approaches have mostly been used on traditional types of data, like images, and have shown great capabilities to learn the underlying distribution as well as allowing us to sample new, unseen data points. This has produced impressive results, as we can see at https://thispersondoesnotexist.com, or in picture-to-picture translation and style transfer. This generative capability also allows us to perform arithmetic operations on the latent vector and see the underlying structure of each different “class” of outputs.&lt;br /&gt;
&lt;br /&gt;
The work done on other data modalities is still sparse, but interest in it is growing. In this thesis, the main focus is a very specific type of data that might bring all kinds of hardships and obstacles to overcome. Some of those hardships might come from the type of data we are trying to generate; this needs to be investigated, and finding solutions to these situations is a key aspect we will be looking for.&lt;br /&gt;
The students need to develop a GAN-based network to generate CAN data, to evaluate the quality of the generated data, and to use that data in a MAR task.&lt;br /&gt;
&lt;br /&gt;
   Research Questions:&lt;br /&gt;
       Can GANs generate realistic CAN data?&lt;br /&gt;
       Can GANs generate/predict the (near) future CAN signals? &lt;br /&gt;
       Is the latent space an informative representation about the CAN signals?&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Generative_Approach_for_Multivariate_Signals&amp;diff=4629</id>
		<title>Generative Approach for Multivariate Signals</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Generative_Approach_for_Multivariate_Signals&amp;diff=4629"/>
		<updated>2020-09-29T07:34:00Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=The topic focuses on generative models (GAN) for CAN-bus data and investigating the representation learning capabilities of such techniques	&lt;br /&gt;
|Keywords=GAN, CAN data, MAR&lt;br /&gt;
|TimeFrame=2020 Fall - 2021 Summer&lt;br /&gt;
|References=https://papers.nips.cc/paper/8789-time-series-generative-adversarial-networks.pdf&lt;br /&gt;
&lt;br /&gt;
https://arxiv.org/abs/1706.02633&lt;br /&gt;
&lt;br /&gt;
https://openreview.net/pdf?id=rJedV3R5tm&lt;br /&gt;
&lt;br /&gt;
https://www.aaai.org/Conferences/AAAI/2017/PreliminaryPapers/12-Yu-L-14344.pdf&lt;br /&gt;
&lt;br /&gt;
https://arxiv.org/pdf/1511.06434.pdf&lt;br /&gt;
&lt;br /&gt;
|Prerequisites=Excellent Programming Skills&lt;br /&gt;
Excellent knowledge in Machine Learning and Neural Networks &lt;br /&gt;
|Supervisor=Kunru Chen, Tiago Cortinhal, Thorsteinn Rögnvaldsson, &lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Internal Draft&lt;br /&gt;
}}&lt;br /&gt;
Controller Area Network (CAN) is a protocol used to control vehicles. The data is multidimensional and consists of control and sensor signals to and from different parts of the equipment. Since this data comes internally from the machine itself, it is stable and cheap to collect. Previous work has shown that CAN data can be used to build representations for machine activity recognition (MAR) for forklift trucks. However, those representations are limited to describing only the existing data, in both realism and diversity. Creating representations by training a vanilla autoencoder has disadvantages when trying to explore the entire space of CAN signals.&lt;br /&gt;
&lt;br /&gt;
Generative approaches have mostly been used on traditional types of data, like images, and have shown great capabilities to learn the underlying distribution as well as allowing us to sample new, unseen data points. This has produced impressive results, as we can see at https://thispersondoesnotexist.com, or in picture-to-picture translation and style transfer. This generative capability also allows us to perform arithmetic operations on the latent vector and see the underlying structure of each different “class” of outputs.&lt;br /&gt;
&lt;br /&gt;
The work done on other data modalities is still sparse, but interest in it is growing. In this thesis, the main focus is a very specific type of data that might bring all kinds of hardships and obstacles to overcome. Some of those hardships might come from the type of data we are trying to generate; this needs to be investigated, and finding solutions to these situations is a key aspect we will be looking for.&lt;br /&gt;
The students need to develop a GAN-based network to generate CAN data, to evaluate the quality of the generated data, and to use that data in a MAR task.&lt;br /&gt;
&lt;br /&gt;
   Research Questions:&lt;br /&gt;
       Can GANs generate realistic CAN data?&lt;br /&gt;
       Can GANs generate/predict the (near) future CAN signals? &lt;br /&gt;
       Is the latent space an informative representation about the CAN signals?&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Generative_Approach_for_Multivariate_Signals&amp;diff=4628</id>
		<title>Generative Approach for Multivariate Signals</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Generative_Approach_for_Multivariate_Signals&amp;diff=4628"/>
		<updated>2020-09-29T07:33:36Z</updated>

		<summary type="html">&lt;p&gt;Tiago: Created page with &amp;quot;{{StudentProjectTemplate |Summary=. |Supervisor=. |Status=Internal Draft }}&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=.&lt;br /&gt;
|Supervisor=.&lt;br /&gt;
|Status=Internal Draft&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Feature-wise_normalization_for_3D_medical_images&amp;diff=4627</id>
		<title>Feature-wise normalization for 3D medical images</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Feature-wise_normalization_for_3D_medical_images&amp;diff=4627"/>
		<updated>2020-09-29T07:28:45Z</updated>

		<summary type="html">&lt;p&gt;Tiago: Created page with &amp;quot;{{StudentProjectTemplate |Summary=The topic focuses on generative models (GAN) for CAN-bus data and investigating the representation learning capabilities of such techniques	 ...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=The topic focuses on generative models (GAN) for CAN-bus data and investigating the representation learning capabilities of such techniques	&lt;br /&gt;
|Keywords=GAN, CAN data, MAR&lt;br /&gt;
|TimeFrame=2020 Fall - 2021 Summer&lt;br /&gt;
|References=https://papers.nips.cc/paper/8789-time-series-generative-adversarial-networks.pdf&lt;br /&gt;
&lt;br /&gt;
https://arxiv.org/abs/1706.02633&lt;br /&gt;
&lt;br /&gt;
https://openreview.net/pdf?id=rJedV3R5tm&lt;br /&gt;
&lt;br /&gt;
https://www.aaai.org/Conferences/AAAI/2017/PreliminaryPapers/12-Yu-L-14344.pdf&lt;br /&gt;
&lt;br /&gt;
https://arxiv.org/pdf/1511.06434.pdf&lt;br /&gt;
&lt;br /&gt;
|Prerequisites=Excellent Programming Skills&lt;br /&gt;
Excellent knowledge in Machine Learning and Neural Networks &lt;br /&gt;
|Supervisor=Kunru Chen, Tiago Cortinhal, Thorsteinn Rögnvaldsson, &lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Internal Draft&lt;br /&gt;
}}&lt;br /&gt;
Controller Area Network (CAN) is a protocol used to control vehicles. The data is multidimensional and consists of control and sensor signals to and from different parts of the equipment. Since this data comes internally from the machine itself, it is stable and cheap to collect. Previous work has shown that CAN data can be used to build representations for machine activity recognition (MAR) for forklift trucks. However, those representations are limited to describing only the existing data, in both realism and diversity. Creating representations by training a vanilla autoencoder has disadvantages when trying to explore the entire space of CAN signals.&lt;br /&gt;
&lt;br /&gt;
Generative approaches have mostly been used on traditional types of data, like images, and have shown great capabilities to learn the underlying distribution as well as allowing us to sample new, unseen data points. This has produced impressive results, as we can see at https://thispersondoesnotexist.com, or in picture-to-picture translation and style transfer. This generative capability also allows us to perform arithmetic operations on the latent vector and see the underlying structure of each different “class” of outputs.&lt;br /&gt;
&lt;br /&gt;
The work done on other data modalities is still sparse, but interest in it is growing. In this thesis, the main focus is a very specific type of data that might bring all kinds of hardships and obstacles to overcome. Some of those hardships might come from the type of data we are trying to generate; this needs to be investigated, and finding solutions to these situations is a key aspect we will be looking for.&lt;br /&gt;
The students need to develop a GAN-based network to generate CAN data, to evaluate the quality of the generated data, and to use that data in a MAR task.&lt;br /&gt;
&lt;br /&gt;
   Research Questions:&lt;br /&gt;
       Can GANs generate realistic CAN data?&lt;br /&gt;
       Can GANs generate/predict the (near) future CAN signals? &lt;br /&gt;
       Is the latent space an informative representation about the CAN signals?&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Object_Movement_Prediction_for_Autonomous_Cars&amp;diff=4376</id>
		<title>Object Movement Prediction for Autonomous Cars</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Object_Movement_Prediction_for_Autonomous_Cars&amp;diff=4376"/>
		<updated>2019-10-03T14:59:36Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Predicting the movement of objects in the context of autonomous cars&lt;br /&gt;
|References=https://motchallenge.net&lt;br /&gt;
&lt;br /&gt;
https://github.com/abhineet123/Deep-Learning-for-Tracking-and-Detection&lt;br /&gt;
&lt;br /&gt;
https://arxiv.org/pdf/1909.07707.pdf&lt;br /&gt;
|Supervisor=Tiago Cortinhal&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Nowadays, we have several powerful architectures, e.g. YOLO, that allow us to find bounding boxes on the fly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Single-object tracking focuses on processing sequences of RGB images to identify and track a given object, which can be costly in terms of memory/computation. The main idea behind this project is to use the bounding boxes themselves and try to predict their movement based on the n previous frames. By using this higher-level abstraction of the scene we might reduce the complexity and training time required for traditional single-object tracking.&lt;br /&gt;
&lt;br /&gt;
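As a hypothetical starting baseline for this idea, the next box can be extrapolated from the n previous boxes with a constant-velocity model before any learned predictor is attempted; the [x, y, w, h] box format and the helper name below are assumptions.&lt;br /&gt;

```python
import numpy as np

def predict_next_box(boxes):
    # boxes: array of shape (n, 4), one [x, y, w, h] row per previous frame.
    boxes = np.asarray(boxes, dtype=float)
    velocity = np.mean(np.diff(boxes, axis=0), axis=0)  # mean frame-to-frame change
    return boxes[-1] + velocity

# Three past frames of one track: the box drifts right and down, size fixed.
track = [[10, 20, 30, 40], [12, 21, 30, 40], [14, 22, 30, 40]]
print(predict_next_box(track))  # prints [16. 23. 30. 40.]
```

A learned model (e.g. a recurrent network over the same box sequences) would then be evaluated against this simple extrapolation.&lt;br /&gt;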
To start, we can use the KITTI dataset to create such a prediction system and explore other possible datasets/settings as soon as we have a working prototype.&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Object_Movement_Prediction_for_Autonomous_Cars&amp;diff=4375</id>
		<title>Object Movement Prediction for Autonomous Cars</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Object_Movement_Prediction_for_Autonomous_Cars&amp;diff=4375"/>
		<updated>2019-10-03T14:50:37Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Predicting the movement of objects in the context of autonomous cars&lt;br /&gt;
|References=https://motchallenge.net&lt;br /&gt;
https://github.com/abhineet123/Deep-Learning-for-Tracking-and-Detection&lt;br /&gt;
https://arxiv.org/pdf/1909.07707.pdf&lt;br /&gt;
|Supervisor=Tiago Cortinhal&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Nowadays, we have several powerful architectures, e.g. YOLO, that allow us to find bounding boxes on the fly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Single-object tracking focuses on processing sequences of RGB images to identify and track a given object, which can be costly in terms of memory/computation. The main idea behind this project is to use the bounding boxes themselves and try to predict their movement based on the n previous frames. By using this higher-level abstraction of the scene we might reduce the complexity and training time required for traditional single-object tracking.&lt;br /&gt;
&lt;br /&gt;
To start, we can use the KITTI dataset to create such a prediction system and explore other possible datasets/settings as soon as we have a working prototype.&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Object_Movement_Prediction_for_Autonomous_Cars&amp;diff=4372</id>
		<title>Object Movement Prediction for Autonomous Cars</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Object_Movement_Prediction_for_Autonomous_Cars&amp;diff=4372"/>
		<updated>2019-10-03T13:38:00Z</updated>

		<summary type="html">&lt;p&gt;Tiago: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Predicting the movement of objects in the context of autonomous cars&lt;br /&gt;
|References=https://motchallenge.net&lt;br /&gt;
https://github.com/abhineet123/Deep-Learning-for-Tracking-and-Detection&lt;br /&gt;
|Supervisor=Tiago Cortinhal&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Nowadays, we have several powerful architectures, e.g. YOLO, that allow us to find bounding boxes on the fly.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Single-object tracking focuses on processing sequences of RGB images to identify and track a given object, which can be costly in terms of memory/computation. The main idea behind this project is to use the bounding boxes themselves and try to predict their movement based on the n previous frames. By using this higher-level abstraction of the scene we might reduce the complexity and training time required for traditional single-object tracking.&lt;br /&gt;
&lt;br /&gt;
To start, we can use the KITTI dataset to create such a prediction system and explore other possible datasets/settings as soon as we have a working prototype.&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Object_Movement_Prediction_for_Autonomous_Cars&amp;diff=4352</id>
		<title>Object Movement Prediction for Autonomous Cars</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Object_Movement_Prediction_for_Autonomous_Cars&amp;diff=4352"/>
		<updated>2019-10-02T07:38:15Z</updated>

		<summary type="html">&lt;p&gt;Tiago: Created page with &amp;quot;{{StudentProjectTemplate |Summary=Predicting the movement of objects in the context of autonomous cars |References=https://motchallenge.net  https://github.com/abhineet123/Dee...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Predicting the movement of objects in the context of autonomous cars&lt;br /&gt;
|References=https://motchallenge.net&lt;br /&gt;
https://github.com/abhineet123/Deep-Learning-for-Tracking-and-Detection&lt;br /&gt;
|Supervisor=Tiago Cortinhal&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Right now we have architectures like YOLO that are very good at predicting bounding boxes for objects.&lt;br /&gt;
But what if we wanted to use that abstraction to predict the next time-step position of the objects in it? The idea behind this project would be to create a model that could, given the n previous frames, predict the following one.&lt;/div&gt;</summary>
		<author><name>Tiago</name></author>
	</entry>
</feed>