Depixelizing Pixel Art


Authors: Johannes Kopf (Microsoft Research) and Dani Lischinski (The Hebrew University)

Summary

We propose a novel algorithm for extracting a smooth, resolution-independent vector representation from pixel art, one that can be scaled up without distortion. Our algorithm fully preserves the feature information of the original input, resolves the image into regions with smoothly varying shading, and separates those regions with piecewise-smooth contour curves. In the original image, pixels form a square grid, and pixel blocks that touch only at diagonally opposite corners are connected through a single point. After naive magnification such weak features become invisible, and it is no longer clear whether diagonally adjacent pixels should be connected at all. The key to our algorithm is resolving these diagonal relationships: we reshape the pixel cells so that pixels which should be connected across a corner still share an edge after reshaping. In this way, the connectivity of diagonal pixel pairs is preserved even under magnification. We reduce aliasing and improve smoothness by fitting spline curves to the image contours and optimizing the control points of those splines.

Keywords
Pixel art, vectorization, upscaling

1 Introduction

Pixel art is a form of digital art in which the image is specified exactly, down to the individual pixel. Such images were widely used in computer and video games before the mid-1990s, and they also appear as icons in older desktop environments and on small display devices such as mobile phones. Due to the hardware limitations of the time, artists were forced to work on tiny canvases with only a handful of colors, placing every pixel by hand rather than downscaling from a higher-resolution original. Classic pixel art is therefore often dismissed as a primitive relic that has long since vanished from modern computer graphics. Yet in the golden age of video games, pixel art was everywhere, and many images have become cultural icons for an entire generation, such as Space Invaders and the original Super Mario. Thanks to a variety of emulators, these classic games continue to bring joy, running directly on modern computers without relying on hardware that disappeared long ago.

In this paper, we face a very interesting challenge: can we take a sprite from a frame of an old video game or emulator and vectorize it? The manual placement of every pixel means that each pixel carries as much meaning as possible. This provides enough information for us to extract a corresponding vector representation that can be magnified without resampling artifacts. Of course, the quantized nature of pixel art has a certain aesthetic of its own, but we believe the vector images produced by our method still retain much of the charm of the originals (see Figure 1).

Result of the nearest neighbor processing (source image: 40x16 pixels)

Figure 1: naive upscaling of the source image gives poor results, while our algorithm extracts a smooth, resolution-independent vector image suitable for high-resolution display devices. (Source image copyright Nintendo Co., Ltd.)

Previous vectorization techniques were designed for natural images and rely on segmentation and edge detection filters, which are not suited to the tiny features of pixel art. These methods typically cluster pixels into regions and convert the region boundaries into smooth curves. In pixel art, however, every single pixel matters, so applying these earlier vectorization algorithms to pixel art loses fine detail (see Figure 2).

PhotoZoom 4 (general image upsampling). hq4x (pixel-art-specific upscaling). Adobe Live Trace (vectorization).

Figure 2: results obtained with the methods listed above. Compare with our result in Figure 1.

Over the past decade, many pixel-art-specific upscaling methods have been developed; we review them in the next section. These techniques often perform well, but because they operate locally, their results still show jaggies, and they do not resolve the connectivity of diagonally adjacent pixels. Moreover, all of these methods support only fixed scaling factors of 2x, 3x, or 4x.

In this paper, we introduce a novel approach that is well suited to upscaling pixel art whose features exist at the scale of a single pixel. We first resolve the connectivity of diagonally adjacent pixels, then fit spline curves to the image contours and optimize their control points to obtain smooth, anti-aliased results. The resulting vector image can be rendered on a display of any resolution.

We have successfully applied our algorithm to a large number of pixel images taken from vintage video games and desktop icons, and also to frames captured from a Super Nintendo emulator. We compare extensively against other upscaling approaches, from vectorization tools to general image upsampling and pixel-art-specific methods. Additional examples are provided in the supplementary material.

2 Previous work

The previous work related to this paper falls into three categories. In Figures 2 and 9, and in the supplementary material, we compare our algorithm against a representative method from each category.

Conventional image upsampling

"Classic" image upsampling applies a linear filter derived either from analytical interpolation or from signal processing theory. Examples include the nearest-neighbor, bicubic, and Lanczos filters [Wolberg 1990]. These filters make no assumptions about the underlying data other than band-limitedness. As a result, sharp features and clear boundaries become blurred in the upscaled image.

In the past decade, many more sophisticated algorithms have been introduced that make stronger assumptions about the input image, for example natural image statistics [Fattal 2007] or self-similarity [Glasner et al. 2009]. A detailed discussion of these methods is beyond the scope of this article. In any case, tiny, color-quantized pixel art does not match the natural-image assumptions these methods rely on, so they tend to perform poorly on such input.

Pixel art upscaling techniques

In recent years, a number of pixel-art-specific upscaling algorithms have appeared [Wikipedia 2011]. Most of them originated in emulation communities and were never published in scientific venues; however, open-source implementations are widely available. All of these algorithms are pixel-based and support only fixed integer scaling factors.

The first such algorithm we are aware of is EPX, developed by Eric Johnston in 1992 when porting LucasArts games to early Macintosh computers, whose resolution was roughly twice that of the original platform [Wikipedia 2011]. The algorithm doubles the resolution with a simple rule: each pixel is first replaced by a 2x2 block of the same color; then, for each corner of the new block, if the two source-image neighbors adjacent to that corner (for example, the left and top neighbors for the top-left corner) have the same color, that corner pixel takes their color, and analogously for the other corners.
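As an illustration of this rule, here is a minimal sketch (my own illustration under stated assumptions, not Johnston's code): the input is assumed to be a list of rows of hashable color values, and border pixels are handled by clamping, which the description above leaves unspecified.

    # Minimal sketch of the EPX / Scale2x-style doubling rule described above.
    def epx_scale2x(img):
        h, w = len(img), len(img[0])

        def px(y, x):
            # Clamp at the border so edge pixels reuse themselves as neighbors.
            y = min(max(y, 0), h - 1)
            x = min(max(x, 0), w - 1)
            return img[y][x]

        out = [[None] * (2 * w) for _ in range(2 * h)]
        for y in range(h):
            for x in range(w):
                p = px(y, x)
                a, b = px(y - 1, x), px(y, x + 1)   # above, right
                c, d = px(y, x - 1), px(y + 1, x)   # left, below
                tl = a if c == a else p             # top-left corner
                tr = b if a == b else p             # top-right corner
                bl = c if d == c else p             # bottom-left corner
                br = d if b == d else p             # bottom-right corner
                # If three or more neighbors share one color, keep a flat block.
                if max([a, b, c, d].count(v) for v in (a, b, c, d)) >= 3:
                    tl = tr = bl = br = p
                out[2 * y][2 * x], out[2 * y][2 * x + 1] = tl, tr
                out[2 * y + 1][2 * x], out[2 * y + 1][2 * x + 1] = bl, br
        return out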

This algorithm is simple enough for real-time use and often performs well. However, because edge slopes are quantized to only 12 distinct directions, the results can still show jaggies. Another limitation is the strict locality of the algorithm, which prevents it from resolving the connectivity of diagonally adjacent pixels. These limitations are illustrated in Figure 9 (bottom right).

Later algorithms are based on the same idea but use more complex logic to determine the colors of the 2x2 blocks. The best known are Eagle (by Dirk Stevens), 2xSaI [Liauw Kie Fa 2001], and Scale2x [Mazzoleni 2001], all of which examine larger neighborhoods or blend colors. Several derivative implementations circulate under names such as SuperEagle and Super2xSaI. An inherent limitation of this family of algorithms is that they only double the resolution; larger magnifications require applying the algorithm repeatedly. This strategy, however, degrades image quality significantly as the scaling factor grows, because the methods assume non-anti-aliased input while producing anti-aliased output.

The latest and most sophisticated member of this family is the hqx family of algorithms [Stepin 2003]. The algorithm examines a 3x3 block at a time and compares the center pixel with its eight nearest neighbors. Each neighbor is classified as having a similar or dissimilar color, yielding 256 possible combinations, and a lookup table provides an interpolation pattern for each combination. This allows a variety of effects, such as sharp boundaries, and the results are indeed of high quality. However, due to its strict locality, the algorithm still cannot resolve certain connectivity patterns and may still produce jaggies, and lookup tables exist only for scaling factors of 2x, 3x, and 4x.
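To make the 256-pattern idea concrete, here is a small sketch (an illustration only, not Stepin's implementation) that computes the lookup index for one pixel; the similarity predicate is assumed to be supplied by the caller, for example the YUV test described in Section 3.2.

    # Sketch of hqx-style lookup indexing: each similar/dissimilar decision
    # for the eight neighbors contributes one bit, giving 0..255.
    def neighborhood_pattern(img, y, x, similar):
        """img: 2D list of colors; similar: callable(c1, c2) -> bool."""
        offsets = [(-1, -1), (-1, 0), (-1, 1),
                   (0, -1),           (0, 1),
                   (1, -1),  (1, 0),  (1, 1)]
        h, w = len(img), len(img[0])
        center = img[y][x]
        pattern = 0
        for bit, (dy, dx) in enumerate(offsets):
            ny = min(max(y + dy, 0), h - 1)   # clamp at the border
            nx = min(max(x + dx, 0), w - 1)
            if not similar(center, img[ny][nx]):
                pattern |= 1 << bit            # dissimilar neighbors set the bit
        return pattern                         # used as the table index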

Image vectorization

There is a large body of work on automatically extracting vector representations from images, and these methods are the closest to ours. However, most of them are designed for larger natural images rather than pixel art. At their core, these vectorization methods rely on segmentation and edge detection algorithms to cluster pixels into larger regions whose boundaries are then fitted with vector curves. Such clustering does not work well on pixel art: its features are tiny, all boundaries are step edges, and there are no continuous gradients, so these algorithms are very likely to lose the small-scale features of the image during clustering.

Another challenge for these algorithms is handling the 8-connectivity of pixels. Many pixel images express connectivity only through the corner points of pixel blocks, which conventional vectorization tools do not handle well; they are very likely to break such connected features apart. Below we mention only a few representative vectorization methods.

Selinger [2003] describes an algorithm called Potrace for tracing black-and-white images, with nice results even on small inputs. However, the method cannot handle color images directly: a color image must first be quantized and decomposed into separate black-and-white channels, which are then traced independently. As a result, the region boundaries in the output may penetrate each other.

Figure 3: overview of our algorithm. (a) Input image (16x16 pixels). (b) Initial similarity graph with crossing diagonal connections. The blue crossings lie in flat shaded regions and can be safely removed; removing either of the red crossings changes the result. (c) Similarity graph with the crossings resolved. (d) Pixel cells reshaped to reflect the resolved connectivity. (e) Splines fitted to the visible boundaries. (f) Final result with optimized splines to reduce staircasing. (Source image copyright Nintendo Co., Ltd.)

Lecot and Lévy [2006] propose a system ("ARDECO") for vectorizing raster images. Their algorithm approximates the input with a set of vector primitives and first- or second-order gradients. The decomposition relies on a segmentation algorithm and cannot achieve satisfactory results on pixel art.

Lai et al. [2009] propose a method for automatically extracting gradient meshes from raster images. This method also relies on segmentation and cannot handle pixel art.

Orzan et al. [2008] describe diffusion curves, a representation based on curves that diffuse color to both sides, and provide an algorithm for automatically extracting such a description from an image. Their formulation, however, relies on edge detection filters, which do not perform well on pixel art, most likely because the images are so small that the edge detector has only limited support. Xia et al. [2009] describe a technique that represents a raster image with a pixel-level triangle mesh, but it also relies on edge detection.

A number of commercial tools, such as Adobe Live Trace [Adobe 2010] and Vector Magic [Vector Magic 2010], perform automatic vectorization of raster images. The exact algorithms behind these tools are not public, but they do not perform well on pixel art; see the comparisons in this article and in the supplementary material.

3 Algorithm

Our goal is to convert pixel art into a resolution-independent vector representation in which regions with smoothly varying shading are clearly separated by piecewise-smooth contour curves. Although this is also the goal of general image vectorization algorithms, the unique characteristics of pixel art pose unusual challenges:

1. Every pixel matters. For example, a single pixel whose color differs strongly from its surroundings is typically an important feature that must be preserved (for example, a character's eyes).

2. Lines and curves are drawn at single-pixel width and connected through 8-connectivity, for example the black outline of the ghost in Figure 3(a). These pixels are perceived as connected, but after magnification they no longer appear so.

3. Locally ambiguous configurations: for example, in a 2x2 checkerboard pattern of two different colors, it is impossible to tell locally which of the two diagonal pairs should be connected to form a continuous feature (see the ghost's mouth and ears in Figure 3(a)). For binary images, where pixels are divided into foreground and background, simple solutions have been proposed [Kong and Rosenfeld 1996]; the problem here is how to handle the more complex multi-color case.

4. It is difficult to distinguish features that should remain sharp from staircasing artifacts that should be smoothed away. For example, in Figure 3(a), how do we know that the mouth should stay wavy while the ghost's outline should become smooth?

3.1 Overview

The basis of our vector description is quadratic B-spline curves that define the piecewise-smooth contours between regions. Once these curves are computed, the image can be rendered using standard tools [Nehab and Hoppe 2008; Jeschke et al. 2009]. Our main computational task is therefore to determine the precise geometry of these contours. As in other vectorization algorithms, this amounts to detecting boundaries and fitting curves to them; for the reasons given above, however, the process is considerably more involved here.

Because pixel art is small and uses a very limited palette, locating the boundaries is simple: any two adjacent pixels with sufficiently different colors should be separated by a contour. The difficulty lies in how these edge segments connect to one another, since with 8-connected pixels the connectivity is not always well defined.

Consider the square lattice of (w+1) x (h+1) grid points representing a w x h image, where each pixel corresponds to one grid cell. Horizontally and vertically adjacent cells share an edge, whereas diagonally adjacent cells share only a single corner point. When the image is magnified, diagonally adjacent cells therefore appear disconnected, while cells sharing an edge remain connected. The first step of our method is thus to reshape the originally square pixel cells so that cells which should be connected across a corner come to share an edge. This process is described in Section 3.2, where we also introduce several carefully designed heuristics for deciding whether diagonally adjacent pixels should be connected.

After reshaping, we determine the visible edges by comparing the colors of adjacent cells for significant differences. We call these edges visible because they form the visible contours of our final vector description; the remaining edges lie inside smoothly shaded regions. To obtain smooth contours, we fit quadratic B-splines to the visible edges, as described in Section 3.3. However, because the spline control points are still quantized to the coordinates of the low-resolution pixel grid, the result still exhibits staircasing. We therefore optimize the shape of the curves to reduce these artifacts, while preserving contour segments whose high curvature is intentional, as described in Section 3.4.

(a) Sparse pixels heuristic. (b) Islands heuristic.

Figure 4: resolving crossing diagonals in the similarity graph. Curves heuristic: not illustrated here; see Figure 3(b). Sparse pixels heuristic: the red pixels are sparser than the green ones, so this heuristic votes to keep the red connection. Islands heuristic: the connection is kept because cutting it would create an isolated single-pixel island.

Finally, we render the image by interpolating colors with radial basis functions. This step is boundary-aware: the influence of each pixel does not propagate across contour boundaries.


3.2 Reshaping the pixel cells

The goal of the first step is to reshape the pixel cells so that neighboring pixels with similar colors share an edge. To decide which pixels should share an edge, we build a similarity graph containing one node per pixel. Initially, every node is connected to all eight of its neighbors. We then remove all edges between nodes whose colors are dissimilar. Following the criterion used in the hqx algorithm [Stepin 2003], we compare the YUV channels of neighboring nodes and consider them dissimilar if the differences exceed 48/255, 7/255, and 6/255 in Y, U, and V, respectively.
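As a concrete illustration of this test (a sketch, not the paper's code), the following assumes 8-bit RGB input and a standard BT.601-style RGB-to-YUV conversion; the exact conversion matrix used by hqx may differ slightly, but the thresholds of 48, 7, and 6 (out of 255) are the ones quoted above.

    # Sketch of the dissimilarity test used to prune similarity-graph edges.
    def rgb_to_yuv(r, g, b):
        # Assumed BT.601-style conversion for 8-bit RGB values.
        y = 0.299 * r + 0.587 * g + 0.114 * b
        u = -0.169 * r - 0.331 * g + 0.500 * b
        v = 0.500 * r - 0.419 * g - 0.081 * b
        return y, u, v

    def dissimilar(c1, c2):
        """Return True if two RGB pixels should be disconnected in the graph."""
        y1, u1, v1 = rgb_to_yuv(*c1)
        y2, u2, v2 = rgb_to_yuv(*c2)
        return (abs(y1 - y2) > 48 or
                abs(u1 - u2) > 7 or
                abs(v1 - v2) > 6)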

Figure 3(b) shows the resulting similarity graph for an example image. It contains many crossing diagonal connections. Our goal is to eliminate all of these crossings so as to obtain a planar graph, as in Figure 3(c). Further processing of this planarized graph then yields the reshaped cell graph with the desired property that connected neighbors share an edge, as shown in Figure 3(d).

Two situations need to be distinguished:

1. If all four pixels of a 2x2 block are connected to one another, the block is part of a continuously shaded region. In this case the two diagonal connections can be safely removed without affecting the final result. These are the blue crossings in Figure 3(b).

2. If a 2x2 block contains only the two diagonal connections and no horizontal or vertical ones, removing either diagonal changes the final result. In this case we must choose carefully which connection to remove. These are the red crossings in Figure 3(b).

It is impossible to decide which connection to remove by looking at the 2x2 block alone: from the red blocks in Figure 3(b) we cannot tell whether the dark or the light pixels should stay connected. The decision can, however, be made by examining a larger neighborhood. It then becomes clear that the dark pixels should remain connected, since they form a long linear feature, while the light pixels are part of the background.

Deciding which connections to keep relates to Gestalt laws and to reproducing the intent of the artist, which is a difficult task in general. We have, however, found three simple heuristics that together resolve these connection problems very well; evidence for this can be found in the supplementary material. Each heuristic votes with a certain weight for keeping one of the two diagonals, and we keep the connection that accumulates the larger total weight. If the two diagonals receive equal weight, both are removed. The heuristics are described below (a small sketch of how their votes can be combined follows the three descriptions):

Curves heuristic: if two pixels are part of a long curve feature, they should be connected. A curve here is a sequence of edges in the similarity graph connecting nodes of valence 2. We compute the length of the curves passing through each of the two diagonals; if a diagonal's endpoints do not have valence 2, the curve has the minimum length of 1. This heuristic votes for the diagonal lying on the longer curve, with a weight equal to the difference between the two curve lengths, so that the longer feature is kept and the other connection is removed. Figure 3(b) shows two such crossings; suppose the one on the right has already been resolved and consider the two red diagonals of the crossing on the left. The diagonal connecting the black pixels is part of a curve of length 7, while the diagonal connecting the white pixels is not part of a curve (length 1), so this heuristic votes for connecting the black pixels with weight 6 (the resolved connection is visible in Figure 3(c)).

Sparse pixels heuristic: in two-colored drawings (not only black and white), people tend to perceive the sparser color as the foreground and the other as the background, and foreground pixels are perceived as connected (imagine a dotted line). This heuristic therefore votes for connecting the diagonal whose color covers fewer pixels. We count the connected pixels of each of the two colors within a local window, for example 8x8, centered on the crossing, as shown in Figure 4(a). In this example the heuristic votes for the red diagonal, because the red component contains fewer pixels than the green one; the vote is weighted by the difference between the two counts.

Islands heuristic: we try to avoid fragmenting the image into many small pieces, and in particular we avoid creating isolated single pixels. If one of the two diagonals connects a node of valence 1, cutting that connection would create a single-pixel island. To avoid this, the heuristic votes for keeping that connection with a fixed weight of 5. See Figure 4(b).
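The following sketch (my own condensation, not the paper's code) shows how the three votes could be combined for one crossing. The caller is assumed to have measured the curve length through each diagonal, the size of each diagonal's connected color component within a local window (8x8 in the example above), and whether cutting each diagonal would create a single-pixel island.

    # Sketch: resolve one crossing pair of diagonals by summing heuristic votes.
    def resolve_crossing(curve_len_1, curve_len_2,
                         component_1, component_2,
                         creates_island_1, creates_island_2):
        # Curves heuristic: weighted by the difference in curve lengths.
        w1 = curve_len_1 - curve_len_2
        w2 = curve_len_2 - curve_len_1
        # Sparse pixels heuristic: favor the diagonal with the smaller component.
        w1 += component_2 - component_1
        w2 += component_1 - component_2
        # Islands heuristic: fixed bonus of 5 if cutting would isolate a pixel.
        w1 += 5 if creates_island_1 else 0
        w2 += 5 if creates_island_2 else 0
        if w1 > w2:
            return "keep diagonal 1"
        if w2 > w1:
            return "keep diagonal 2"
        return "remove both"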

Having resolved these crossings, the similarity graph is planar, and we can reshape the pixel cells. The reshaped cell graph can be viewed as a generalized Voronoi diagram: each cell consists of the points that are closer to its similarity-graph node (and the edges incident to it) than to any other node. We simplify this diagram by collapsing all vertices shared by only two cells, straightening their incident edges into a single line. The inline illustration shows the exact generalized Voronoi diagram and its simplified version, and Figure 3(d) shows the simplified cell graph for the full example. Note that the vertex coordinates of the cells are quantized to multiples of half a pixel in both dimensions; we will exploit this later for pattern matching on the graph.

Illustration: on the left, the exact generalized Voronoi diagram; on the right, the simplified version obtained as described above.

The shape of each Voronoi cell is entirely determined by the connectivity of its node's neighborhood in the similarity graph. Since only a limited number of distinct cell shapes can occur, a fairly efficient algorithm is possible: we traverse the similarity graph in scanline order, match a 3x3 block of nodes at a time, and paste the corresponding cell template into the output. In this way we construct the simplified diagram directly, without ever computing the exact one.

3.3 Extracting the splines

The reshaped cell graph resolves all connectivity issues and already yields a reasonable result, but its boundaries contain many corners and are not smooth; because of the quantization described above, the image still looks blocky. We address this by identifying the visible edges, i.e. edges shared by two cells with clearly different colors. Sequences of connected visible edges form piecewise straight contours, which we replace with quadratic B-spline curves [de Boor 1978]. The knots are spaced uniformly, determined only by the number of visible edges, and the control points of each B-spline are initialized to the endpoints of those edges.
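As a minimal illustration of the curve representation (not the paper's implementation), the following sketch evaluates a closed uniform quadratic B-spline from its control points; knot details, open curves, and the T-junction endpoint adjustment described later in this section are omitted.

    # Sketch: evaluate a closed uniform quadratic B-spline.
    def quadratic_bspline_point(ctrl, i, t):
        """Point on the segment influenced by ctrl[i-1], ctrl[i], ctrl[i+1],
        with local parameter t in [0, 1]; indices wrap for a closed curve."""
        n = len(ctrl)
        p0, p1, p2 = ctrl[(i - 1) % n], ctrl[i % n], ctrl[(i + 1) % n]
        # Uniform quadratic B-spline basis functions.
        b0 = 0.5 * (1 - t) ** 2
        b1 = 0.5 + t * (1 - t)
        b2 = 0.5 * t ** 2
        x = b0 * p0[0] + b1 * p1[0] + b2 * p2[0]
        y = b0 * p0[1] + b1 * p1[1] + b2 * p2[1]
        return (x, y)

    def sample_closed_spline(ctrl, samples_per_segment=8):
        pts = []
        for i in range(len(ctrl)):
            for k in range(samples_per_segment):
                pts.append(quadratic_bspline_point(ctrl, i, k / samples_per_segment))
        return pts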

When three splines meet at a common vertex, we choose two of them to continue smoothly through the point, forming a T-junction. The advantage is a simpler and smoother figure. The question is which two splines should be joined smoothly.

We first classify each visible edge incident to the shared vertex as either a shading edge or a contour edge. A shading edge separates two cells with similar colors; here similarity is judged with a stricter criterion than the one used to build the similarity graph, which is not sufficient for this purpose. A contour edge separates cells whose colors are clearly distinct. Specifically, in our implementation, an edge is a shading edge if the YUV distance between the two adjacent cells does not exceed 100/255, and a contour edge otherwise. If one of the three edges at the vertex is a shading edge and the other two are contour edges, we join the two contour edges smoothly; see Figure 5(a). If this rule does not resolve the situation, we simply determine which two of the three edges meet at an angle closest to 180° and join those two smoothly; see Figure 5(b).
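A small sketch of this rule (illustrative only, with assumed data structures): each of the three edges carries the colors of the two cells it separates and its outgoing unit direction at the shared vertex, and yuv_distance is assumed to return distances normalized to [0, 1] so that the 100/255 threshold applies directly.

    # Sketch: choose the two edges that continue smoothly at a T-junction.
    def pick_smooth_pair(edges, yuv_distance):
        """edges: list of 3 dicts {'dir': (dx, dy), 'c1': color, 'c2': color}."""
        contour = [e for e in edges
                   if yuv_distance(e['c1'], e['c2']) > 100.0 / 255.0]
        if len(contour) == 2:
            return contour                    # the single shading edge terminates here
        # Fall back to the pair of edges meeting at the angle closest to 180 degrees.
        best, best_cos = None, 2.0
        for i in range(3):
            for j in range(i + 1, 3):
                (ax, ay), (bx, by) = edges[i]['dir'], edges[j]['dir']
                cos_angle = ax * bx + ay * by     # both directions are unit vectors
                if cos_angle < best_cos:          # closest to 180 deg: cos closest to -1
                    best, best_cos = [edges[i], edges[j]], cos_angle
        return best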

Figure 5: resolving T-junctions at the marked node. Splines are shown in yellow. (a) The left edge of the Y-junction is a shading edge, so the other two edges are joined into a single spline. (b) Three clearly different color regions meet at one point; the two edges whose angle is closest to 180° are joined into a spline.

One small detail remains: B-spline curves only approximate their control points rather than interpolating them. At a T-junction we therefore adjust the endpoint of the terminating spline so that it lies exactly on the spline that passes smoothly through the junction.

3.4 Curve optimization

Fitting B-splines significantly improves the smoothness of the result, but staircasing artifacts remain (see Figure 6(b)).

(a) Input (13x15 pixels). (b) Initialized splines. (c) Optimized splines.

Figure 6: staircasing is removed by minimizing the curvature of the splines.

We therefore further improve smoothness by optimizing the positions of the spline control points. The optimization minimizes the total energy over all nodes,

E = Σ_i E_i,

where p_i denotes the position of node i. The energy of a node is the sum of a smoothness term and a positional term, which contribute equally to the total:

E_i = E_i^(s) + E_i^(p).

Smoothness is measured through curvature, so we define the smoothness energy as

E_i^(s) = ∫_{s ∈ r(i)} |κ(s)| ds,

where r(i) is the region of the curve influenced by p_i and κ(s) is the curvature at the point s. We approximate this integral numerically by sampling the curve at fixed intervals.

To prevent excessive deformation, we further constrain the positions of the control points by defining the positional energy as

E_i^(p) = ||p_i − p̂_i||^4,

where p̂_i is the initial position of node i. Raising the right-hand side to the fourth power gives each node a small region within which it can move relatively freely, while large deviations incur a rapidly growing penalty and are effectively ruled out.
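As an illustration only (not the authors' code), the following sketch evaluates these two terms numerically under stated assumptions: the curve region r(i) is given as a list of points sampled at equal parameter spacing, curvature is estimated by finite differences, and the integral is approximated by summing |κ| times the local arc-length step.

    import math

    # Sketch of the node energy: integrated |curvature| plus a positional term.
    def smoothness_energy(samples):
        """samples: (x, y) points along r(i), taken at equal parameter spacing."""
        e = 0.0
        for k in range(1, len(samples) - 1):
            (x0, y0), (x1, y1), (x2, y2) = samples[k - 1], samples[k], samples[k + 1]
            dx, dy = (x2 - x0) / 2.0, (y2 - y0) / 2.0        # first derivative estimate
            ddx, ddy = x2 - 2 * x1 + x0, y2 - 2 * y1 + y0    # second derivative estimate
            den = dx * dx + dy * dy
            if den > 1e-12:
                # |kappa| * ds with ds ~ sqrt(dx^2 + dy^2) simplifies to this ratio.
                e += abs(dx * ddy - dy * ddx) / den
        return e

    def positional_energy(p, p_hat):
        return math.dist(p, p_hat) ** 4                      # ||p_i - p_hat_i||^4

    def node_energy(samples, p, p_hat):
        return smoothness_energy(samples) + positional_energy(p, p_hat)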

Note that this energy function does not distinguish between staircasing artifacts and sharp features that should be preserved, such as corners: the former must be smoothed away, while the latter must be kept. We therefore detect such sharp features and exclude the curve regions around them from the smoothness integral. Due to the quantized nature of the reshaped cell graph, sharp features can only appear in a limited number of specific patterns, shown in Figure 7.

Figure 7: corner patterns detected by our algorithm. The original square pixel grid is shown in gray. Because of the construction of the reshaped cell graph, node positions are quantized to multiples of half a pixel in both the horizontal and vertical directions, which makes detecting these patterns simple and direct.

We therefore simply match these patterns (including their rotations and reflections) against the reshaped cell graph. Whenever a pattern is detected, the portions of the splines between the pattern's nodes are excluded from the smoothness integral. Figure 8 shows the patterns detected in an example sprite and highlights the curve segments that are excluded.


Figure 8: corner detection. Nodes belonging to a detected corner pattern are marked in red; the curve segments containing these nodes are excluded from smoothing and are highlighted in black.

Since the energy function is non-linear but defines a fairly smooth potential surface, we optimize it with a simple relaxation procedure. In each iteration we visit the nodes in random order and optimize them one at a time: for each node we try several random positions within a small radius around its current location and keep the position that minimizes the node's energy.
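A minimal sketch of this relaxation loop, with assumed parameters (number of iterations, trials per node, and trial radius) and an externally supplied energy function of the kind outlined above:

    import random

    # Sketch: random relaxation of node positions.
    def relax(nodes, energy, iterations=20, trials=8, radius=0.25):
        """nodes: dict node_id -> [x, y]; energy: callable(node_id, (x, y)) -> float."""
        for _ in range(iterations):
            order = list(nodes)
            random.shuffle(order)                 # visit nodes in random order
            for nid in order:
                best_pos = tuple(nodes[nid])
                best_e = energy(nid, best_pos)
                for _ in range(trials):
                    cand = (best_pos[0] + random.uniform(-radius, radius),
                            best_pos[1] + random.uniform(-radius, radius))
                    e = energy(nid, cand)
                    if e < best_e:                # keep the lowest-energy position
                        best_e, best_pos = e, cand
                nodes[nid][:] = best_pos
        return nodes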

After the spline node positions have been optimized, the shape of the pixel cells around the splines may have changed significantly. We therefore recompute all cell vertices that were not part of the optimization (and that are not constrained to the image boundary) using harmonic maps [Eck et al. 1995]. This minimizes the distortion of the cells and reduces the problem to a simple sparse linear system (see Hormann [2001] for details).

3.5 Rendering

Our vector representation can be rendered with standard vector rendering techniques, for example the system described by Nehab and Hoppe [2008]. Diffusion solvers can also be used: a color source is placed at the center of each cell, and diffusion is prevented from crossing the splines. Jeschke et al. [2009] describe such a diffusion system that renders in real time. In this paper we use a slower but simpler implementation: we attach to each cell center a truncated Gaussian influence function (σ = 1, radius 2 pixels) and set the influence of cells that are not visible from a given point (i.e. separated from it by a contour) to zero. The final color of each point is then computed as the average of the cell colors weighted by their influence.
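This rendering step can be sketched as follows (an illustration under stated assumptions, not the paper's implementation): each cell contributes a truncated Gaussian weight with σ = 1 and cutoff radius 2 pixels, contributions blocked by a contour are dropped via a caller-supplied visibility test, and the sample color is the weighted average.

    import math

    # Sketch: boundary-aware color interpolation with truncated Gaussians.
    def render_point(p, cells, visible):
        """p: (x, y) sample position in input-pixel coordinates.
        cells: list of (center, color), center=(x, y), color=(r, g, b).
        visible: callable(p, center) -> bool, False if a contour separates them."""
        total_w, acc = 0.0, [0.0, 0.0, 0.0]
        for center, color in cells:
            d = math.dist(p, center)
            if d > 2.0 or not visible(p, center):
                continue                              # truncated or occluded by a contour
            w = math.exp(-0.5 * d * d)                # Gaussian with sigma = 1
            total_w += w
            for k in range(3):
                acc[k] += w * color[k]
        return tuple(c / total_w for c in acc) if total_w > 0 else None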

4 Results

We have applied our algorithm to a large number of sprites from video games and other software. Figure 9 shows representative results and compares them with various other upscaling techniques. More results and comparisons are provided in the supplementary material.

Figure 9: sample results comparing our algorithm with various alternatives. Please zoom into the PDF to see the details; see the supplementary material for more comparisons and results. (Source images: keyboard and 386 copyright Microsoft; Help, Yoshi, and Toad copyright Nintendo Co., Ltd.; Bomberman copyright Hudson Soft Co., Ltd.; Axe Battler copyright Sega; Invaders copyright Taito.)

The performance of our algorithm depends on the size of the input and on the number of curves extracted from it. Even though we have not invested much effort in optimization, the running times are quite reasonable. The following table lists the times measured for the 54 examples in the supplementary material on a single core of a 2.4 GHz CPU:

Of course, real-time performance is not our present focus; rather, we are interested in how our algorithm behaves on animated input such as video games. For this experiment we captured frame sequences from a video game emulator and processed every frame with our algorithm. The two steps critical for temporal coherence are resolving the connectivity in the similarity graph and optimizing the node positions. In our tests the heuristics proved robust, handling features as expected even under small changes between animation frames. During optimization the node positions are strongly constrained, because the positional energy uses the large exponent of 4; as a result, our output stays close to the input and remains temporally consistent. Figure 10 shows frames upscaled with our method. In the supplementary material we provide the corresponding high-resolution video sequences and compare the results with those of other techniques.

Nearest-neighbor result. Our result. hq4x result.

Figure 10: pixel art upscaling in a dynamic setting. The raw emulator output was upscaled by a factor of 4. Please zoom into the PDF to see the details. (Source image copyright Nintendo Co., Ltd.)

4.1 Limitations

Our algorithm is designed specifically for hand-crafted pixel art. Since the mid-1990s, video game consoles and computers have supported far more colors than hand-drawn pixel art uses. On these systems, designers typically create colorful artwork, or even photographs, at high resolution and then downsample it to the game resolution. Such images are closer in character to natural images than to the input our algorithm targets, and the crisp boundaries our algorithm produces are not always appropriate for them. Figure 11 shows an example of applying the algorithm to such an image.

Input (24x29 pixels). Our result.

Figure 11: an unsuccessful example. Anti-aliased input is difficult to handle with our algorithm. "Alas, doomed..." (Source image copyright id Software.)

Another limitation is that our splines are sometimes a bit too smooth, for example at the corners of the "386" chip in Figure 9. Our corner detection is based on heuristic patterns and cannot always match human visual perception. A possible future extension is to locally increase the knot multiplicity of the B-splines to create sharp features in the vector description, for example where a long straight line meets a corner.

Although many of the inputs in our experiments use some form of manual edge anti-aliasing, we have not tested pixel art with strong dithering patterns, such as checkerboard patterns used to suggest intermediate shades. Our algorithm would need to be extended to handle such input.

5 Conclusion

We have presented an algorithm for extracting a resolution-independent vector representation from pixel art. Our algorithm resolves the segmentation and connectivity ambiguities of the pixel grid and reshapes the pixel cells so that connected cells share an edge. We extract regions with smoothly varying shading and separate them cleanly with piecewise-smooth contour curves. We have shown that conventional image upscaling and vectorization algorithms do not handle pixel art well, while our algorithm produces good results on a wide variety of pixel art inputs.

There are many directions for future work. An obvious one is to optimize the algorithm for real-time performance so that it can be used inside emulators. Some of the ideas presented here may also prove useful for general image vectorization. Another interesting direction is to improve the handling of anti-aliased input images, perhaps by rendering soft boundary curves instead of sharp contours. A further interesting topic is temporally coherent upsampling of pixel art animations: when magnifying such tiny inputs for output on a high-resolution device, the positions in each frame are heavily quantized, which makes the motion appear jerky. In addition, many modern display devices run at higher refresh rates than the original material, so intermediate frames would have to be generated to improve the animation.

Acknowledgments

We would like to thank Holger Winnemöller for his helpful comments and for helping us create some of the comparison results. This work was supported in part by the Israel Science Foundation, founded by the Israel Academy of Sciences and Humanities.

References (omitted)
