Distill Details "micro-image parameterization": Neural network visualization and style migration weapon!

Recently, the journal Distill published an article by Google researchers introducing a powerful tool for neural network visualization and style transfer: differentiable image parameterizations. This article describes the tool from several angles.

Neural networks trained for image classification have a remarkable capacity to generate images. Techniques such as DeepDream [1], style transfer [2], and feature visualization [3] use this capacity as a powerful tool for exploring the inner workings of neural networks and for nudging forward neural-network-based artistic creation.

All of these techniques work in roughly the same way. Neural networks used in computer vision maintain rich internal representations of the images they process. We can use these representations to describe the properties we want an image to have (such as its style), and then optimize the image to acquire those properties. This kind of optimization is possible because the networks are differentiable with respect to their inputs: we can slightly adjust the image to better fit the desired properties, and then iteratively apply such adjustments via gradient descent.
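As a concrete illustration of this loop, here is a minimal PyTorch sketch. The original article's code builds on TensorFlow and the Lucid library; everything below, including the tiny frozen random network standing in for a pretrained classifier, is an illustrative assumption rather than the authors' implementation:

```python
import torch

torch.manual_seed(0)
# Frozen random conv stack standing in for a pretrained vision network
# (a real use would load e.g. an ImageNet classifier instead).
net = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 16, 3, padding=1), torch.nn.ReLU(),
).requires_grad_(False)

# The default parameterization: one RGB value per pixel.
img = torch.rand(1, 3, 64, 64, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    opt.zero_grad()
    # Feature-visualization-style objective: raise the mean activation of
    # channel 0, so we descend on its negative.
    loss = -net(img)[0, 0].mean()
    loss.backward()                # gradients flow back to the pixels
    opt.step()
    img.data.clamp_(0.0, 1.0)      # keep the image in a valid range
```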

In general, we parameterize the input image as the RGB values of its pixels, but that is not the only way. As long as the mapping from parameters to the image is differentiable, we can still use gradient descent to optimize alternative parameterizations.
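One classic alternative, and the one Lucid's feature visualizations build on, is to parameterize the image in a Fourier basis. A minimal sketch, with a placeholder objective standing in for a real network activation, shows the general pattern: optimize the parameters, and let gradients flow back through the differentiable parameters-to-image mapping:

```python
import torch

H, W = 64, 64
# Parameters live in frequency space; the trailing dimension of size 2
# holds the real and imaginary parts of each Fourier coefficient.
spectrum = (0.01 * torch.randn(1, 3, H, W // 2 + 1, 2)).requires_grad_(True)

def to_image(spec):
    freq = torch.view_as_complex(spec)      # (re, im) pairs -> complex
    img = torch.fft.irfft2(freq, s=(H, W))  # differentiable inverse FFT
    return torch.sigmoid(img)               # squash into [0, 1]

opt = torch.optim.Adam([spectrum], lr=0.05)
for step in range(200):
    opt.zero_grad()
    img = to_image(spectrum)
    # Placeholder objective (a real one would be a network activation):
    loss = -img.std()
    loss.backward()   # gradients flow through the iFFT to the spectrum
    opt.step()
```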

Figure 1: As long as the image parameterization is differentiable, we can optimize it via backpropagation (orange arrows).

Why is parameterization important?

It may seem surprising that merely reparameterizing an optimization problem can change the results so dramatically, even though the objective function being optimized remains the same. Why does the choice of parameterization have such a significant effect? The reasons are as follows:

(1) Improved optimization: transforming the input can make an optimization problem easier, a technique known as "preconditioning" that is a staple of the optimization literature. We have found that simple changes of parameterization can make image optimization considerably easier; the Fourier parameterization sketched above is one example.

(2) Basins of attraction: when we optimize the input to a neural network, there are often many different solutions corresponding to different local minima. Which local minimum the optimization process falls into is controlled by that minimum's basin of attraction (the region of the optimization landscape under its influence). Changing the parameterization of the optimization problem changes the sizes of the different basins of attraction, and thereby influences the likely outcomes.

(3) Additional constraints: some parameterizations cover only a subset of possible inputs rather than the entire space. An optimizer working under such a parameterization still searches for inputs that minimize or maximize the objective function, but it is forced to obey the constraints the parameterization imposes. By choosing the right parameterization, we can apply a wide variety of constraints, from simple ones (for example, that the image boundary must be black) to complex and subtle ones. (A minimal sketch of the black-border example appears after this list.)

(4) Implicitly optimizing other objects: a parameterization may internally use a different kind of object than the image it outputs, and thereby implicitly optimize that object. For example, while the input to a vision network is an RGB image, we can parameterize that image as the rendering of a 3D object and, by backpropagating through the rendering process, optimize the 3D object itself. Because a 3D object has more degrees of freedom than a single image, we typically use a stochastic parameterization that produces images rendered from random viewpoints.
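To make point (3) concrete, here is a minimal sketch of the black-border example (the objective is again a placeholder standing in for a real network activation). Because the border pixels are simply not parameters, every image the optimizer can produce satisfies the constraint by construction:

```python
import torch
import torch.nn.functional as F

H, W = 64, 64
# Only the interior pixels are learnable; the 4-pixel border is not part of
# the parameterization at all, so it stays black by construction.
interior = torch.rand(1, 3, H - 8, W - 8, requires_grad=True)

def to_image(inner):
    return F.pad(inner, (4, 4, 4, 4), value=0.0)  # black border on all sides

opt = torch.optim.Adam([interior], lr=0.05)
for step in range(100):
    opt.zero_grad()
    img = to_image(interior)
    loss = -img.mean()  # placeholder objective standing in for an activation
    loss.backward()
    opt.step()
    interior.data.clamp_(0, 1)
```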

In the sections that follow, we give several examples demonstrating the effectiveness of these approaches, all of which yield surprising and interesting visual results.

Aligned feature visualization interpolation

Related Colab page: https://colab.research.google.com/github/tensorflow/lucid/blob/master/notebooks/differentiable-parameterizations/aligned_interpolation.ipynb

Feature visualization is most commonly used to visualize individual neurons, but it can also be used to visualize combinations of neurons, in order to study how they interact [3]. Instead of optimizing an image to activate a single neuron, one optimizes it to activate multiple neurons.

When we want to really understand the interaction between two neurons, we can go a step further and create multiple visualizations, gradually shifting the objective from optimizing one neuron toward giving more weight to the other neuron's activation. This is somewhat similar to latent-space interpolation in generative models (such as GANs).
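A minimal sketch of such an interpolation, using a frozen random network as a hypothetical stand-in for a pretrained one, and optimizing each frame independently:

```python
import torch

torch.manual_seed(0)
# Frozen random conv stack standing in for a pretrained network (hypothetical).
net = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
).requires_grad_(False)

def neuron(img, ch):
    return net(img)[0, ch].mean()  # mean activation of one channel

frames = []
for alpha in torch.linspace(0, 1, 5):
    # Each frame is parameterized and optimized independently.
    img = torch.rand(1, 3, 64, 64, requires_grad=True)
    opt = torch.optim.Adam([img], lr=0.05)
    for _ in range(100):
        opt.zero_grad()
        # Shift the objective's weight from neuron 0 toward neuron 1.
        loss = -((1 - alpha) * neuron(img, 0) + alpha * neuron(img, 1))
        loss.backward()
        opt.step()
        img.data.clamp_(0, 1)
    frames.append(img.detach())
```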

However, there is a small problem: feature visualization is stochastic. Even if you optimize toward exactly the same objective, the visualization comes out differently each time. In general this is not a problem, but it does undermine interpolation visualizations. Done naively, the resulting visualizations will be unaligned: visual landmarks (such as eyes) appear at different locations in each image. With slightly different objectives, this lack of alignment makes the differences harder to see, because they are swamped by the much more obvious differences in layout.

If we look at an animation of the interpolated frames, the problem with independent optimization becomes apparent:

Figure 2: (rows 1 and 3) unaligned interpolation: the position of a visual landmark (such as an eye) changes from one frame to the next. (rows 2 and 4) aligned interpolation: the frames are easier to compare because the visual landmarks stay in the same position.

Figure 3: (top row) we start from independently parameterized frames; (middle row) each frame's parameterization is then combined with a single shared parameterization; (bottom row) producing a visually aligned neuron interpolation.

By partially sharing a parameterization between frames, we encourage the visualizations to align naturally. Intuitively, the shared parameterization provides a common reference point for the placement of visual landmarks, while the individual parameterizations give each frame its own visual appearance based on its interpolation weights. This parameterization does not change the objective function, but it does enlarge the basins of attraction in which the visualizations are aligned.
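A minimal sketch of this idea, under the same stand-in-network assumption as before: each frame's image is the sum of a shared tensor and a per-frame residual, and all of them are optimized jointly:

```python
import torch

torch.manual_seed(0)
# Frozen random conv stack standing in for a pretrained network (hypothetical).
net = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
).requires_grad_(False)

def neuron(img, ch):
    return net(img)[0, ch].mean()

n_frames, H, W = 5, 64, 64
shared = torch.zeros(1, 3, H, W, requires_grad=True)             # common to every frame
individual = torch.zeros(n_frames, 3, H, W, requires_grad=True)  # one residual per frame

def frame(i):
    # Each frame is the shared canvas plus its own residual, squashed to [0, 1].
    return torch.sigmoid(shared + individual[i:i + 1])

opt = torch.optim.Adam([shared, individual], lr=0.05)
alphas = torch.linspace(0, 1, n_frames)
for _ in range(200):
    opt.zero_grad()
    # Sum the interpolated objectives over all frames and optimize jointly.
    loss = sum(-((1 - a) * neuron(frame(i), 0) + a * neuron(frame(i), 1))
               for i, a in enumerate(alphas))
    loss.backward()
    opt.step()
```

Jointly optimizing the shared tensor pulls the landmarks of all frames toward a common layout, which is exactly the enlarged basin of attraction described above.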

This is a first example of differentiable parameterizations serving as a useful auxiliary tool for visualizing neural networks.

Style transfer for textures through 3D rendering

Related Colab page: https://colab.research.google.com/github/tensorflow/lucid/blob/master/notebooks/differentiable-parameterizations/style_transfer_3d.ipynb

Having built a framework for efficiently backpropagating onto UV-mapped textures, we can adapt existing style transfer techniques to 3D objects. As in the 2D case, our goal is to repaint the texture of the original object in the style of a user-supplied image. The method works as follows:

The algorithm begins with a randomly initialized texture. In each iteration, we sample a random viewpoint oriented toward the center of the object's bounding box and render two images of the object: a content image with the original texture, and a learned image with the texture currently being optimized.

After rendering the content image and the learned image, we optimize the style transfer objective of Gatys et al. [2], backpropagating into the UV-mapped texture parameterization. The process is repeated until the target texture fuses the desired content and style.
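A minimal end-to-end sketch of this loop in PyTorch. Since a real differentiable 3D renderer is out of scope here, a random affine warp via grid_sample stands in for rendering from a sampled viewpoint, and a frozen random conv stack stands in for the pretrained (e.g. VGG) features used by Gatys et al.; both are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
# Frozen random conv stack standing in for pretrained feature extraction.
feats = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
).requires_grad_(False)

def render(texture, theta):
    # Stand-in for a differentiable renderer: sample an affine "view" of the
    # UV texture. Gradients flow back through grid_sample into the texture.
    grid = F.affine_grid(theta, size=(1, 3, 64, 64), align_corners=False)
    return F.grid_sample(texture, grid, align_corners=False)

def gram(x):
    # Gram matrix of feature maps: channel-to-channel correlations.
    _, c, h, w = x.shape
    f = x.reshape(c, h * w)
    return f @ f.t() / (h * w)

content_tex = torch.rand(1, 3, 128, 128)                 # original texture
style_img = torch.rand(1, 3, 64, 64)                     # user-supplied style image
learned_tex = content_tex.clone().requires_grad_(True)   # texture being optimized
opt = torch.optim.Adam([learned_tex], lr=0.02)
style_gram = gram(feats(style_img))

for step in range(200):
    # Sample a random viewpoint (here: a random small affine transform).
    theta = torch.eye(2, 3).unsqueeze(0) + 0.1 * torch.randn(1, 2, 3)
    content_view = render(content_tex, theta)
    learned_view = render(learned_tex, theta)
    opt.zero_grad()
    content_loss = F.mse_loss(feats(learned_view), feats(content_view))
    style_loss = F.mse_loss(gram(feats(learned_view)), style_gram)
    (content_loss + style_loss).backward()  # backprop into the UV texture
    opt.step()
    learned_tex.data.clamp_(0, 1)
```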

Figure 17: Style transfer onto various 3D models. Note how visual landmarks in the content texture, such as the eyes, appear correctly in the resulting texture.

Because each view is optimized independently, the optimization is pushed to cram every element of the style into every single view. For example, if we choose van Gogh's "The Starry Night" as the style image, stars will be added to each individual view. We found that introducing a "memory" of the style from previous views yields better results. To this end, we maintain a moving average of the Gram matrices that represent the style over recently sampled viewpoints. On each optimization iteration, we compute the style loss against the averaged matrices rather than against those of one particular view.
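One plausible reading of this "style memory", continuing the previous sketch (it reuses render, gram, feats, learned_tex, style_gram, and opt from there): keep an exponential moving average of the Gram matrices over recent views, detach the history so only the current view receives gradient, and match the average against the style:

```python
import torch
import torch.nn.functional as F

# Exponential moving average of Gram matrices over recently sampled views.
ema, decay = None, 0.9
for step in range(200):
    theta = torch.eye(2, 3).unsqueeze(0) + 0.1 * torch.randn(1, 2, 3)
    g = gram(feats(render(learned_tex, theta)))  # this view's style statistics
    # Detach the history so gradient flows only through the current view.
    ema = g if ema is None else decay * ema.detach() + (1 - decay) * g
    style_loss = F.mse_loss(ema, style_gram)     # loss on the *averaged* matrix
    opt.zero_grad()
    style_loss.backward()
    opt.step()
```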

The final texture combines elements of the desired style while preserving the character of the original texture. For example, with van Gogh's "The Starry Night" as the style image, the final texture contains van Gogh's light, forceful brush strokes. But although the style image is cool-toned, the fur in the final texture retains the warm orange tones of the original. Even more interesting is how the rabbit's eyes are treated during style transfer: when the style comes from van Gogh's painting, the eyes swirl like stars; when it comes from Kandinsky's work, the eyes become abstract patterns that still resemble the original eyes.

Figure 18: 3D print of the result of transferring the style of Fernand Léger's The Large Parade on Red Background (1953) onto the Stanford Bunny (Greg Turk & Marc Levoy).

Conclusion

For creative artists and researchers alike, there is a large space of image parameterizations left to explore. They yield not only dramatically different image results, but also animations and 3D objects. We think the possibilities in this article only scratch the surface. For example, the approach of optimizing 3D object textures could be extended to optimizing materials or reflectance, or could even go further and optimize mesh vertex positions, along the lines of Kato et al. [15].

This article has focused on differentiable image parameterizations, because they are easy to optimize and cover a wide range of applications. But it is also possible to optimize non-differentiable or partially differentiable image parameterizations, using reinforcement learning or evolution strategies [17, 18]. Using non-differentiable parameterizations for image or scene generation is an exciting direction as well.

Original link: https://distill.pub/2018/differentiable-parameterizations/

Distill Details "micro-image parameterization": Neural network visualization and style migration weapon!

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.