Theory and Practice of Deep Learning Project

Members

a.   조정빈 / 2018142125 / [email protected] / Background research

b.   장원준 / 2018142121 / [email protected] / Background research

c.   김건하 / 2019147517 / [email protected] / Code implementation & training

d.   김태헌 / 2019147502 / [email protected] / Code implementation & training

Meeting Minutes

[11/25] In-person meeting

[11/30] 1 pm in-person meeting

Problem Statement

Style transfer is typically performed by a style transfer network, and many approaches exist, ranging from the original work [Johnson et al., 2016], which trains a separate transfer network for each style, to more recent multi-style and arbitrary-style networks. However, all of these methods take a single reference image for the desired style. Here, we instead take one reference image for color and another reference image for texture, and transform an input image to match the desired color and texture respectively.

Task Description

$$ f(C,S_t,S_c) \rightarrow I $$

Notation

Content image: $C$

Texture image: $S_t$

Color image: $S_c$
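The mapping $f(C, S_t, S_c) \rightarrow I$ above can be sketched in terms of AdaIN-style statistic matching: the texture reference sets the channel-wise mean and standard deviation of deep features, while the color reference sets per-channel pixel statistics. The sketch below is a minimal NumPy stand-in (arrays in place of encoder features; the function name `adain` and the shapes are our assumptions), not the project's actual implementation.

```python
import numpy as np

def adain(content_feat: np.ndarray, style_feat: np.ndarray,
          eps: float = 1e-5) -> np.ndarray:
    """Adaptive Instance Normalization on (C, H, W) arrays:
    re-normalize the content features so that each channel takes on
    the style features' mean and standard deviation."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True)
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True)
    return s_std * (content_feat - c_mean) / (c_std + eps) + s_mean

# Conceptual usage for this project (two separate statistic sources):
#   texture: adain(encoder(C), encoder(S_t))  -- applied in feature space
#   color:   adain(I, S_c)                    -- applied to raw pixel channels
```

After `adain`, the output's channel-wise mean matches the style exactly and its standard deviation matches up to the `eps` stabilizer, which is why the same operation can serve both the texture step (on features) and the color step (on pixels).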

Background Research

Report Format

1. Abstract

2. Introduction

3. Background


4. Method

1. One decoder, with color loss

2. Two decoders, no color loss

3. Two separate networks

Experimental result: when iq is generated through the decoder after applying AdaIN, the content image's color also bleeds into the output, which is a problem.

→ First change: remove the content loss when training the iq decoder.
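With the content loss removed, the remaining decoder objective would, under our assumptions, consist only of statistic-matching terms: a texture loss comparing feature means/stds against $S_t$ and a color loss comparing per-channel pixel statistics against $S_c$. The function names, weights, and plain mean/std formulation below are a hypothetical sketch of such an objective, not the project's exact training code.

```python
import numpy as np

def stat_loss(x: np.ndarray, y: np.ndarray) -> float:
    """Squared distance between the channel-wise means and standard
    deviations of two (C, H, W) arrays -- the statistic-matching loss
    used in AdaIN-style training."""
    mean_d = x.mean(axis=(1, 2)) - y.mean(axis=(1, 2))
    std_d = x.std(axis=(1, 2)) - y.std(axis=(1, 2))
    return float((mean_d ** 2).sum() + (std_d ** 2).sum())

def total_loss(out_feat: np.ndarray, St_feat: np.ndarray,
               out_pix: np.ndarray, Sc_pix: np.ndarray,
               w_tex: float = 1.0, w_col: float = 1.0) -> float:
    """Objective with no content term: texture loss on (hypothetical)
    encoder features vs. S_t, plus color loss on raw pixels vs. S_c."""
    return (w_tex * stat_loss(out_feat, St_feat)
            + w_col * stat_loss(out_pix, Sc_pix))
```

Since neither term compares the output against $C$, the decoder is no longer pulled toward reproducing the content image's colors, which is the intent of the first change above.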