Progressive Growing of GANs for Improved Quality, Stability, and Variation

  • Category: Article
  • Created: February 14, 2022 10:01 AM
  • Status: Open
  • URL: https://arxiv.org/pdf/1710.10196.pdf
  • Updated: February 15, 2022 5:07 PM

Highlights

We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses.

Intuition

  1. The generation of high-resolution images is difficult because higher resolution makes it easier to tell the generated images apart from training images, thus drastically amplifying the gradient problem.
  2. Our key insight is that we can grow both the generator and discriminator progressively, starting from easier low-resolution images, and add new layers that introduce higher-resolution details as the training progresses. This greatly speeds up training and improves stability in high resolutions.
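A minimal sketch of the fade-in idea in PyTorch, assuming a toy two-resolution generator and illustrative module names (not the authors' implementation): a new, higher-resolution block is blended in with a factor alpha that ramps from 0 to 1 as training progresses.

    # Illustrative sketch: progressively growing a generator by fading in a new
    # higher-resolution block, blended with the old output by a factor alpha.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyProgressiveGenerator(nn.Module):
        def __init__(self, latent_dim=128, channels=64):
            super().__init__()
            # 4x4 base block, plus an 8x8 block that is grown in later.
            self.base = nn.Sequential(
                nn.ConvTranspose2d(latent_dim, channels, 4),  # 1x1 -> 4x4
                nn.LeakyReLU(0.2),
            )
            self.to_rgb_low = nn.Conv2d(channels, 3, 1)       # 4x4 RGB head
            self.grow_block = nn.Sequential(                  # adds 8x8 detail
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.LeakyReLU(0.2),
            )
            self.to_rgb_high = nn.Conv2d(channels, 3, 1)      # 8x8 RGB head

        def forward(self, z, alpha):
            # alpha in [0, 1]: 0 = old network only, 1 = new block fully active.
            x = self.base(z.view(z.size(0), -1, 1, 1))
            low = F.interpolate(self.to_rgb_low(x), scale_factor=2)
            high = self.to_rgb_high(self.grow_block(F.interpolate(x, scale_factor=2)))
            return (1 - alpha) * low + alpha * high

    gen = ToyProgressiveGenerator()
    imgs = gen(torch.randn(4, 128), alpha=0.3)  # early in the fade-in phase
    print(imgs.shape)                           # torch.Size([4, 3, 8, 8])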
Read more »

Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

  • Category: Article
  • Created: February 12, 2022 3:57 PM
  • Status: Open
  • URL: https://openaccess.thecvf.com/content_ICCV_2017/papers/Zhu_Unpaired_Image-To-Image_Translation_ICCV_2017_paper.pdf
  • Updated: February 15, 2022 3:03 PM

Highlights

  1. We present an approach for learning to translate an image from a source domain \(X\) to a target domain \(Y\) in the absence of paired examples.
  2. Our goal is to learn a mapping \(G: X \rightarrow Y\) such that the distribution of images from \(G(X)\) is indistinguishable from the distribution \(Y\) using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping \(F: Y \rightarrow X\) and introduce a cycle consistency loss to enforce \(F(G(X)) \approx X\) (and vice versa).
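As a rough illustration of the cycle-consistency term (placeholder stand-in networks and an assumed weight lam, not the authors' released code): the loss pushes \(F(G(x))\) back toward \(x\) and \(G(F(y))\) back toward \(y\) with an L1 penalty, on top of whatever adversarial losses the generators already carry.

    # Illustrative sketch of the cycle-consistency loss for G: X -> Y, F: Y -> X.
    import torch
    import torch.nn as nn

    l1 = nn.L1Loss()

    def cycle_consistency_loss(G, F, x, y, lam=10.0):
        """x, y are image batches from domains X and Y."""
        forward_cycle = l1(F(G(x)), x)   # x -> G(x) -> F(G(x)) should recover x
        backward_cycle = l1(G(F(y)), y)  # y -> F(y) -> G(F(y)) should recover y
        return lam * (forward_cycle + backward_cycle)

    # Tiny stand-in "generators", just to show the call pattern.
    G = nn.Conv2d(3, 3, 3, padding=1)
    F_net = nn.Conv2d(3, 3, 3, padding=1)
    x = torch.randn(2, 3, 64, 64)
    y = torch.randn(2, 3, 64, 64)
    print(cycle_consistency_loss(G, F_net, x, y).item())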
Read more »

Masked Autoencoders Are Scalable Vision Learners

  • Category: Article
  • Created: February 15, 2022 12:16 PM
  • Status: Open
  • URL: https://arxiv.org/pdf/2111.06377.pdf
  • Updated: February 15, 2022 2:03 PM

Highlights

  1. This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision.
  2. The authors develop an asymmetric encoder-decoder architecture, with an encoder that operates only on the visible subset of patches (without mask tokens), along with a lightweight decoder that reconstructs the original image from the latent representation and mask tokens.
  3. The authors find that masking a high proportion of the input image, e.g., 75%, yields a nontrivial and meaningful self-supervisory task.
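A small sketch of the random patch-masking step (the tensor layout and function name are assumptions for illustration, not the paper's released code): with a 75% mask ratio, the encoder only ever sees the remaining 25% of patch tokens, and the decoder later uses the mask and restore indices to rebuild the full image.

    # Illustrative sketch: randomly mask 75% of patch tokens so the encoder
    # operates only on the visible subset.
    import torch

    def random_masking(patches, mask_ratio=0.75):
        """patches: (batch, num_patches, dim). Returns visible tokens, mask, restore ids."""
        n, L, d = patches.shape
        len_keep = int(L * (1 - mask_ratio))

        noise = torch.rand(n, L)                      # per-patch random scores
        ids_shuffle = torch.argsort(noise, dim=1)     # ascending: smallest are kept
        ids_restore = torch.argsort(ids_shuffle, dim=1)

        ids_keep = ids_shuffle[:, :len_keep]
        visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))

        # Binary mask (1 = masked) in original patch order, for the decoder / loss.
        mask = torch.ones(n, L)
        mask[:, :len_keep] = 0
        mask = torch.gather(mask, 1, ids_restore)
        return visible, mask, ids_restore

    tokens = torch.randn(2, 196, 768)                 # e.g. 14x14 patches from a ViT
    visible, mask, ids_restore = random_masking(tokens)
    print(visible.shape)                              # torch.Size([2, 49, 768])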
Read more »

Google’s Hybrid Approach to Research

  • Category: Article
  • Created: January 17, 2022 4:50 PM
  • Status: Open
  • URL: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/38149.pdf
  • Updated: January 18, 2022 5:03 PM

Background

  1. CS is an increasingly broad and diverse field. It combines aspects of mathematical reasoning, engineering methodology, and the empirical approaches of the scientific method.
  2. CS is an expanding sphere, where the core of the field (Theory, Operating Systems, etc.) continues to grow in depth, while the field keeps expanding into neighboring application areas.

Methods

Research in Computer Science at Google

  1. The goal of research at Google is to bring significant, practical benefits to our users, and to do so rapidly, within a few years at most.
  2. Sometimes, research at Google operates in entirely new spaces, but most frequently, the goals are major advances in areas where the bar is already high, but there is still potential for new methods.
  3. Because of the time-frame and effort involved, Google’s approach to research is iterative and usually involves writing production, or near-production, code from day one. Elaborate research prototypes are rarely created, since their development delays the launch of improved end-user services. Typically, a single team iteratively explores fundamental research ideas, develops and maintains the software, and helps operate the resulting Google services – all driven by real-world experience and concrete data.
Read more »

Hidden Technical Debt in Machine Learning Systems

  • Category: Article
  • Created: January 17, 2022 3:28 PM
  • Status: Open
  • URL: https://proceedings.neurips.cc/paper/2015/file/86df7dcfd896fcaf2674f757a2463eba-Paper.pdf
  • Updated: January 17, 2022 5:02 PM

New Metaphors

Technical debt: the long-term costs incurred by moving quickly in software engineering.

Smell: in software engineering, a design smell may indicate an underlying problem in a component or system.

Background

  1. This paper argues it is dangerous to think of machine learning’s quick wins as coming for free. Using the software engineering framework of technical debt, the authors find it is common to incur massive ongoing maintenance costs in real-world ML systems.
  2. Developing and deploying ML systems is relatively fast and cheap, but maintaining them over time is difficult and expensive.
  3. The authors argue that ML systems have a special capacity for incurring technical debt, because they have all of the maintenance problems of traditional code plus an additional set of ML-specific issues. This debt may be difficult to detect because it exists at the system level rather than the code level.

Highlights

  1. This paper does not offer novel ML algorithms, but instead seeks to increase the community’s awareness of the difficult tradeoffs that must be considered in practice over the long term.
Read more »