Theory and Practice of Deep Learning Project
a. 조정빈 / 2018142125 / [email protected] / Background research
b. 장원준 / 2018142121 / [email protected] / Background research
c. 김건하 / 2019147517 / [email protected] / Code implementation & training
d. 김태헌 / 2019147502 / [email protected] / Code implementation & training
Style transfer is typically performed by a style transfer network, and many approaches exist: from the original feed-forward method [Johnson et al., 2016], which trains a single transfer network per style, to more recent multi-style networks and arbitrary-style networks. However, all of these methods take a single reference image for the desired style. Here, we instead take one reference image for color and another reference image for texture, and transfer an input image toward the desired color and texture respectively.
$$ f(C,S_t,S_c) \rightarrow I $$
Content image: $C$
Texture image: $S_t$
Color image: $S_c$
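One way to realize the mapping $f(C, S_t, S_c) \rightarrow I$ is to handle texture in feature space and color in pixel space, both via channel-wise statistic matching (AdaIN). The sketch below is only an illustration of this two-reference design, not our final method; the names `f`, `encoder`, and `decoder` are placeholders, and the choice to transfer color by pixel-space statistic matching is an assumption.

```python
import torch

def adain(x, y, eps=1e-5):
    # Channel-wise statistic matching: align the mean/std of x
    # to those of y. Shapes are (N, C, H, W); statistics are
    # computed over the spatial dimensions.
    x_mean = x.mean(dim=(2, 3), keepdim=True)
    x_std = x.std(dim=(2, 3), keepdim=True) + eps
    y_mean = y.mean(dim=(2, 3), keepdim=True)
    y_std = y.std(dim=(2, 3), keepdim=True) + eps
    return y_std * (x - x_mean) / x_std + y_mean

def f(C, S_t, S_c, encoder, decoder):
    # Hypothetical two-reference pipeline:
    # 1) transfer texture from S_t via AdaIN in feature space,
    # 2) transfer color from S_c via statistic matching in pixel space.
    t = adain(encoder(C), encoder(S_t))   # texture statistics from S_t
    I = decoder(t)
    return adain(I, S_c)                  # color statistics from S_c
```

With identity `encoder`/`decoder` functions, the output's per-channel means match those of $S_c$ exactly, which is the intended separation of the color reference from the texture reference.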




Experimental result: when AdaIN is applied and iq is generated through the decoder, the content image's color also carries over into the output, which is a problem.
→ First change: remove the content loss when training the iq decoder.
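After this first change, the decoder would be trained on a style-statistics loss alone. A minimal sketch of such a loss, assuming the mean/std matching formulation of Huang & Belongie (2017) over a list of feature maps (the names `style_loss`, `feat_out`, `feat_style` are hypothetical):

```python
import torch

def style_loss(feat_out, feat_style):
    # Style loss only: match the channel-wise mean and std of the
    # output features to those of the style features at each layer.
    # The content (feature reconstruction) term is intentionally dropped.
    loss = 0.0
    for fo, fs in zip(feat_out, feat_style):
        loss = loss + torch.nn.functional.mse_loss(
            fo.mean(dim=(2, 3)), fs.mean(dim=(2, 3)))
        loss = loss + torch.nn.functional.mse_loss(
            fo.std(dim=(2, 3)), fs.std(dim=(2, 3)))
    return loss
```

The loss is zero when output and style features share the same channel statistics, so nothing constrains the decoder to preserve content structure; whether that trade-off is acceptable is exactly what the experiment above probes.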