Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, Part II, 2017, Zhu, Park, Isola, Efros
The Chalk Talk is back from its New Zealand holiday. (It was awesome, by the way… everything from kiwi birds to Milford Sound to penguins on the beach.) If you remember all the way back nearly a month ago, we spent a lot of time reviewing and easing into Generative Adversarial Networks. This time we’re going to devote the whole hour to the cycleGAN paper: what it does (image-to-image translation), how it does it (making sure that you can always round-trip the images), and why it works (beats the living heck out of me! Sorta.)
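To make the round-trip idea concrete, here's a minimal sketch of the cycle-consistency loss. The names G and F follow the paper (G maps domain X to Y, F maps Y back to X), but the toy linear "generators" here are purely illustrative stand-ins for the real deep networks:

```python
import numpy as np

# Toy stand-ins for the two generators. In the paper, G: X -> Y and
# F: Y -> X are deep convolutional networks; these linear maps are
# purely illustrative (and F happens to invert G exactly).
def G(x):
    return 2.0 * x + 1.0  # hypothetical "horse -> zebra" direction

def F(y):
    return (y - 1.0) / 2.0  # hypothetical "zebra -> horse" direction

def cycle_consistency_loss(x, y):
    """L1 round-trip penalty: F(G(x)) should recover x, and G(F(y)) should recover y."""
    return np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()

x = np.random.rand(4, 8)  # a fake batch of samples from domain X
y = np.random.rand(4, 8)  # a fake batch of samples from domain Y
print(cycle_consistency_loss(x, y))  # near zero, since F inverts G here
```

In training, this loss is added to the usual adversarial losses for both directions, which is what keeps the two generators from mapping everything to a single convincing-but-unrelated output.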
Note that I’ve added a separate paper to the list: DualGAN! Developed concurrently with the cycleGAN paper, it’s pretty much the exact same idea. In fact, if you took an idea and sent it through cycleGAN and then put that into DualGAN, you’d get the same idea back! How cool is that?
Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks – This is the paper we’ll cover.
DualGAN: Unsupervised Dual Learning for Image-to-Image Translation – We’ll stick with the previous paper, but this one is good too!
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks – A nice paper on generative adversarial networks (we covered this before.)