Huang et al. (2018)
Huang, Qiang, Jackson, Philip, Plumbley, Mark D. and Wang, Wenwu (2018) Synthesis of images by two-stage generative adversarial networks. In: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing, 15–20 April 2018, Calgary, Alberta, Canada.
Abstract
In this paper, we propose a divide-and-conquer approach using two generative adversarial networks (GANs) to explore how a machine can draw colorful pictures (of birds) using a small amount of training data. Our approach simulates the procedure of an artist drawing a picture, who begins by sketching the object's contours and edges and then paints the regions in different colors. We adopt two GAN models to handle basic visual features including shape, texture, and color: the first GAN model generates the object shape as a black-and-white image, and the second GAN model paints that image using the color knowledge it has learned. We run our experiments on 600 color images. The experimental results show that our approach can generate good-quality synthetic images, comparable to real ones.
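The paper's two-stage pipeline (shape generation followed by colorization) can be illustrated with a minimal, hedged sketch. This is not the authors' implementation: the PyTorch architectures, layer sizes, and the names `ShapeGenerator` and `ColorGenerator` are illustrative assumptions, and the discriminators and training loops of each GAN are omitted.

```python
# Minimal sketch (not the authors' code) of a two-stage GAN generator pipeline:
# stage 1 maps noise to a 1-channel (black-and-white) shape image,
# stage 2 colorizes that image. Sizes and layer choices are illustrative.
import torch
import torch.nn as nn


class ShapeGenerator(nn.Module):
    """Stage 1: noise vector -> 1-channel 64x64 shape image."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),      # 8x8
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),       # 16x16
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.BatchNorm2d(16), nn.ReLU(True),       # 32x32
            nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Tanh(),                                 # 64x64
        )

    def forward(self, z):
        # Reshape the noise to (batch, z_dim, 1, 1) before upsampling.
        return self.net(z.view(z.size(0), -1, 1, 1))


class ColorGenerator(nn.Module):
    """Stage 2: 1-channel shape image -> 3-channel colorized image (image-to-image)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, 1, 1), nn.ReLU(True),
            nn.Conv2d(32, 64, 3, 1, 1), nn.ReLU(True),
            nn.Conv2d(64, 3, 3, 1, 1), nn.Tanh(),
        )

    def forward(self, gray):
        return self.net(gray)


# Inference: sample shapes, then paint them. In training, each stage would be
# paired with its own discriminator and optimized adversarially.
z = torch.randn(8, 100)
shapes = ShapeGenerator()(z)        # (8, 1, 64, 64) grayscale contours
colored = ColorGenerator()(shapes)  # (8, 3, 64, 64) colorized output
print(shapes.shape, colored.shape)
```

Splitting the task this way lets each network specialize: the first only has to learn plausible contours, and the second only has to learn a mapping from grayscale structure to color, which is one way to make better use of a small training set.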
Link to full paper