<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://mw.hh.se/caisr/index.php?action=history&amp;feed=atom&amp;title=Music_style_transfer</id>
	<title>Music style transfer - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://mw.hh.se/caisr/index.php?action=history&amp;feed=atom&amp;title=Music_style_transfer"/>
	<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Music_style_transfer&amp;action=history"/>
	<updated>2026-04-04T14:07:16Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.35.13</generator>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Music_style_transfer&amp;diff=4927&amp;oldid=prev</id>
		<title>Islab at 15:34, 3 October 2021</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Music_style_transfer&amp;diff=4927&amp;oldid=prev"/>
		<updated>2021-10-03T15:34:06Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left diff-editfont-monospace&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 15:34, 3 October 2021&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l2&quot; &gt;Line 2:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 2:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Summary=Develop a system that receives a piece of music in one genre and changes/transfers its style into another genre, using machine learning algorithms.&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Summary=Develop a system that receives a piece of music in one genre and changes/transfers its style into another genre, using machine learning algorithms.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Keywords=Deep Learning, Neural Networks, music style, genre, domain, transfer&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Keywords=Deep Learning, Neural Networks, music style, genre, domain, transfer&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|TimeFrame=Fall &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;2020&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|TimeFrame=Fall &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;2021&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|References=[1] Dai, Shuqi, Zheng Zhang, and Gus G. Xia. &amp;quot;Music style transfer: A position paper.&amp;quot; arXiv preprint arXiv:1803.06841 (2018).&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|References=[1] Dai, Shuqi, Zheng Zhang, and Gus G. Xia. &amp;quot;Music style transfer: A position paper.&amp;quot; arXiv preprint arXiv:1803.06841 (2018).&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Islab</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Music_style_transfer&amp;diff=4660&amp;oldid=prev</id>
		<title>YuantaoFan at 18:29, 6 October 2020</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Music_style_transfer&amp;diff=4660&amp;oldid=prev"/>
		<updated>2020-10-06T18:29:34Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left diff-editfont-monospace&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 18:29, 6 October 2020&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l21&quot; &gt;Line 21:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 21:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Status=Open&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Status=Open&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;}}&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;}}&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Music style transfer [1, 5, 6, 7] can be considered the counterpart of image style transfer [2]. The aim of this thesis project is to develop a system that, given a piece of music in one genre, changes its style into another genre. For example, the transition can be from classical to jazz, e.g. Alla Turca Jazz by Fazıl Say [3] or jazz arrangements of Bach such as BWV 1043 by Taro Hakase [4]. The specific type of music style transfer in this work is Composition Style Transfer [1, 5], i.e. preserving the identifiable melody contour of the input piece while altering some other score features in a meaningful way, i.e. an interpretation in another music style/genre. One of the challenges of studying music from a scientific perspective is that music is, by nature, very subjective, which makes it difficult to evaluate results objectively. In this work, the genre of the music pieces will be evaluated&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;, objectively, &lt;/del&gt;using a trained genre classifier, which discriminates between genres.&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Music style transfer [1, 5, 6, 7] can be considered the counterpart of image style transfer [2]. The aim of this thesis project is to develop a system that, given a piece of music in one genre, changes its style into another genre. For example, the transition can be from classical to jazz, e.g. Alla Turca Jazz by Fazıl Say [3] or jazz arrangements of Bach such as BWV 1043 by Taro Hakase [4]. The specific type of music style transfer in this work is Composition Style Transfer [1, 5], i.e. preserving the identifiable melody contour of the input piece while altering some other score features in a meaningful way, i.e. an interpretation in another music style/genre. One of the challenges of studying music from a scientific perspective is that music is, by nature, very subjective, which makes it difficult to evaluate results objectively. In this work, the genre of the music pieces will be evaluated using a trained genre classifier, which discriminates between genres.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;One approach to music style transfer is to use adversarial deep networks [6]. A generator takes a piece of music in a specific genre as input and tries to generate a transferred version of the same piece in another genre. A discriminator then tries to discern generated music from real music. In this way, through adversarial training, the generator will hopefully end up generating genre-transferred versions of its inputs. The generated genre-transferred music can then be evaluated using a genre classifier. This architecture is one possible way of performing style transfer.&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;One approach to music style transfer is to use adversarial deep networks [6]. A generator takes a piece of music in a specific genre as input and tries to generate a transferred version of the same piece in another genre. A discriminator then tries to discern generated music from real music. In this way, through adversarial training, the generator will hopefully end up generating genre-transferred versions of its inputs. The generated genre-transferred music can then be evaluated using a genre classifier. This architecture is one possible way of performing style transfer.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>YuantaoFan</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Music_style_transfer&amp;diff=4658&amp;oldid=prev</id>
		<title>YuantaoFan at 14:20, 5 October 2020</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Music_style_transfer&amp;diff=4658&amp;oldid=prev"/>
		<updated>2020-10-05T14:20:44Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left diff-editfont-monospace&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 14:20, 5 October 2020&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l1&quot; &gt;Line 1:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 1:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{StudentProjectTemplate&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{StudentProjectTemplate&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Summary=Develop a system that receives a piece of music in one genre and changes/transfers its style into another genre, using machine learning algorithms.&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Summary=Develop a system that receives a piece of music in one genre and changes/transfers its style into another genre, using machine learning algorithms.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Keywords=Deep Learning, Neural Networks, music style, genre, domain, transfer  &lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|Keywords=Deep Learning, Neural Networks, music style, genre, domain, transfer&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|TimeFrame=Fall 2020&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|TimeFrame=Fall 2020&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|References=[1] Dai, Shuqi, Zheng Zhang, and Gus G. Xia. &amp;quot;Music style transfer: A position paper.&amp;quot; arXiv preprint arXiv:1803.06841 (2018).&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|References=[1] Dai, Shuqi, Zheng Zhang, and Gus G. Xia. &amp;quot;Music style transfer: A position paper.&amp;quot; arXiv preprint arXiv:1803.06841 (2018).&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l9&quot; &gt;Line 9:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 9:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[3] Fazıl Say, Alla Turca Jazz. https://youtu.be/WWftABQV4Wk&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[3] Fazıl Say, Alla Turca Jazz. https://youtu.be/WWftABQV4Wk&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt;−&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[4] Taro Hakase et al. https://youtu.be/2OiBX07ImA0&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt;+&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[4] Taro Hakase et al&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;., BWV 1043 Jazz&lt;/ins&gt;. https://youtu.be/2OiBX07ImA0&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[5] Hung, Yun-Ning, et al. &amp;quot;Musical composition style transfer via disentangled timbre representations.&amp;quot; arXiv preprint arXiv:1905.13567 (2019).&lt;/div&gt;&lt;/td&gt;&lt;td class=&#039;diff-marker&#039;&gt; &lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[5] Hung, Yun-Ning, et al. &amp;quot;Musical composition style transfer via disentangled timbre representations.&amp;quot; arXiv preprint arXiv:1905.13567 (2019).&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>YuantaoFan</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Music_style_transfer&amp;diff=4657&amp;oldid=prev</id>
		<title>YuantaoFan: Created page with &quot;{{StudentProjectTemplate |Summary=Develop a system that receives a piece of music in one genre and changes/transfers its style into another genre, using machine learning algor...&quot;</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Music_style_transfer&amp;diff=4657&amp;oldid=prev"/>
		<updated>2020-10-05T14:17:51Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;{{StudentProjectTemplate |Summary=Develop a system that receives a piece of music in one genre and changes/transfers its style into another genre, using machine learning algor...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Develop a system that receives a piece of music in one genre and changes/transfers its style into another genre, using machine learning algorithms.&lt;br /&gt;
|Keywords=Deep Learning, Neural Networks, music style, genre, domain, transfer &lt;br /&gt;
|TimeFrame=Fall 2020&lt;br /&gt;
|References=[1] Dai, Shuqi, Zheng Zhang, and Gus G. Xia. &amp;quot;Music style transfer: A position paper.&amp;quot; arXiv preprint arXiv:1803.06841 (2018).&lt;br /&gt;
&lt;br /&gt;
[2] Gatys, Leon A., Alexander S. Ecker, and Matthias Bethge. &amp;quot;Image style transfer using convolutional neural networks.&amp;quot; Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.&lt;br /&gt;
&lt;br /&gt;
[3] Fazıl Say, Alla Turca Jazz. https://youtu.be/WWftABQV4Wk&lt;br /&gt;
&lt;br /&gt;
[4] Taro Hakase et al. https://youtu.be/2OiBX07ImA0&lt;br /&gt;
&lt;br /&gt;
[5] Hung, Yun-Ning, et al. &amp;quot;Musical composition style transfer via disentangled timbre representations.&amp;quot; arXiv preprint arXiv:1905.13567 (2019).&lt;br /&gt;
&lt;br /&gt;
[6] Brunner, Gino, et al. &amp;quot;Symbolic music genre transfer with CycleGAN.&amp;quot; 2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2018.&lt;br /&gt;
&lt;br /&gt;
[7] Brunner, Gino, et al. &amp;quot;MIDI-VAE: Modeling dynamics and instrumentation of music with applications to style transfer.&amp;quot; arXiv preprint arXiv:1809.07600 (2018).&lt;br /&gt;
|Prerequisites=Artificial Intelligence, Data Mining, and Learning Systems courses; good knowledge of machine learning and neural networks; programming skills for implementing machine learning algorithms; an interest in music (of many genres)&lt;br /&gt;
|Supervisor=Peyman Mashhadi, Yuantao Fan&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
Music style transfer [1, 5, 6, 7] can be considered the counterpart of image style transfer [2]. The aim of this thesis project is to develop a system that, given a piece of music in one genre, changes its style into another genre. For example, the transition can be from classical to jazz, e.g. Alla Turca Jazz by Fazıl Say [3] or jazz arrangements of Bach such as BWV 1043 by Taro Hakase [4]. The specific type of music style transfer in this work is Composition Style Transfer [1, 5], i.e. preserving the identifiable melody contour of the input piece while altering some other score features in a meaningful way, i.e. an interpretation in another music style/genre. One of the challenges of studying music from a scientific perspective is that music is, by nature, very subjective, which makes it difficult to evaluate results objectively. In this work, the genre of the music pieces will be evaluated, objectively, using a trained genre classifier, which discriminates between genres.&lt;br /&gt;
&lt;br /&gt;
One approach to music style transfer is to use adversarial deep networks [6]. A generator takes a piece of music in a specific genre as input and tries to generate a transferred version of the same piece in another genre. A discriminator then tries to discern generated music from real music. In this way, through adversarial training, the generator will hopefully end up generating genre-transferred versions of its inputs. The generated genre-transferred music can then be evaluated using a genre classifier. This architecture is one possible way of performing style transfer.&lt;/div&gt;</summary>
		<author><name>YuantaoFan</name></author>
	</entry>
</feed>