
Four tricks for fast blurring in software and hardware

With the ever-increasing resolutions made possible by modern 3D graphics cards, computer games and real-time graphics are acquiring a uniform "look," imposed largely by technical limitations and, to some extent, unimaginative programming. I'm talking about that sharp-edged, polygonal look. Great advances in texture resolution and per-pixel lighting have helped to fill in the previously fuzzy polygon interiors, but the edges and feel of the image often remain frustratingly synthetic and "clean."

Imaginative lighting, such as moving shadows, bump maps, and specular highlights, can help add variety to the rendered scene. These topics have been dealt with extensively elsewhere. However, one option that is often overlooked is blurring elements of the scene; for example, depth-of-field and focusing effects bring certain elements of an image into the foreground, hide unnecessary clutter, and reduce the need to resort to over-used fogging tricks. Blurring is a costly process, but a fundamentally simple and useful one.

This article presents a few tricks which can help make real-time blurring possible, and hopefully will provide enough material to inspire you to invent your own hybrid techniques.

The basics of blurring

There are many ways to blur an image, but at a basic level it always comes down to low-pass filtering of the image -- this can be achieved in various ways, most often by convolving the image with a filter kernel. It's instructive to think about the basics of this blurring process so you can appreciate how the tricks work, and what their shortcomings are.

Figure 1 shows a source image, and the results of blurring it with two simple filters: the first is a box filter, and the second is a Gaussian-type filter equivalent to a Photoshop Gaussian blur of 1.0 pixels. The Gaussian filter gives a "softer," more aesthetically pleasing look, but the box filter has computational advantages that I'll discuss later.

Figure 1. The effect of blurring an image with (from left to right) a box filter and a Photoshop-style Gaussian filter. The kernels of each of these filters are given below.

Algorithmically, the filtering process is achieved by calculating a weighted average of a cluster of source pixels for every destination pixel. The kernels shown in Figure 1 dictate the weights of source pixels used to perform this average. Informally, you can think of this as "sliding" the small kernel image over the source image. At each position of the kernel, the average of the product of all the kernel pixels with the source pixels they overlap is computed, yielding a single destination pixel. The code to do this in the most obvious fashion is given in Listing 1, and illustrated in Figure 2. Note that all the examples given here assume a monochrome source image; colored images can be handled simply by dealing with each channel (red, green, and blue) independently.

Figure 2. An example of how a single destination pixel in a blurred image results from the weighted average of several source pixels.
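Listing 1 itself is not reproduced in this copy of the article, so the following is a minimal C sketch of that obvious approach, assuming an 8-bit monochrome image stored row-major and a kernel whose weights sum to 1; the function and parameter names are illustrative, not the original author's.

#include <stdint.h>

/* Naive convolution blur: for every destination pixel, take the
   weighted average of the (2k+1)x(2k+1) source pixels under the kernel.
   src and dst are w*h monochrome images; kernel holds (2k+1)*(2k+1)
   weights that sum to 1.  Source coordinates outside the image are
   clamped to the nearest edge pixel. */
void blur_naive(const uint8_t *src, uint8_t *dst, int w, int h,
                const float *kernel, int k)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            float sum = 0.0f;
            for (int j = -k; j <= k; j++) {
                for (int i = -k; i <= k; i++) {
                    /* clamp the source coordinate at the image border */
                    int sx = x + i;
                    int sy = y + j;
                    if (sx < 0) sx = 0;
                    if (sx >= w) sx = w - 1;
                    if (sy < 0) sy = 0;
                    if (sy >= h) sy = h - 1;
                    sum += kernel[(j + k) * (2 * k + 1) + (i + k)]
                         * src[sy * w + sx];
                }
            }
            dst[y * w + x] = (uint8_t)(sum + 0.5f);
        }
    }
}

A 3x3 box kernel, for instance, is simply nine weights of 1/9 each.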

Doing it quickly

The blurring algorithm described thus far is simple, but slow. For large images, and for large kernels, the number of operations rapidly becomes prohibitively large for real-time operation. The problem is particularly acute when extreme blurring is required; either a small-kernel filter must be applied many times iteratively, or a filter with a kernel of a size comparable with the image must be used. That's approximately an N^4 operation using the code in Listing 1 -- clearly no good.

The rest of this article describes a few tricks -- the first two entirely software-related, the latter two using the power of 3D graphics cards -- which help make real-time blurring possible.

Trick one: Box filter to the rescue

Look again at the box-filtered image in Figure 1, and at the kernel. It's a constant value everywhere. This can be used to great advantage. The general filtering operation used by Listing 1 is described by this mathematical expression (you can skip to the end of this section for the payoff if you don't like math):


D_{x,y} = \frac{1}{(2k+1)^2} \sum_{i=-k}^{k} \sum_{j=-k}^{k} \mathrm{Ker}_{i,j}\, S_{x+i,\,y+j}

Equation 1. Here x, y specifies the coordinate of the destination pixel, S is the source image, D is the destination image, Ker is the kernel, and 2k + 1 is the size (in pixels) of the kernel.

Allowing for a constant kernel value, this becomes:


D_{x,y} = \frac{c}{(2k+1)^2} \sum_{i=-k}^{k} \sum_{j=-k}^{k} S_{x+i,\,y+j}

Equation 2. Equation 1 rewritten for a kernel with constant value c. Values of c other than 1 allow the brightness of the destination image to be changed.

However, the costly double sum per destination pixel still remains. This is where some nifty precomputation comes in handy. The key is to represent the source image in a different way. Rather than storing the brightness of each pixel in the image, we precompute a version of the image in which each pixel location holds the total of all the pixels above and to the left of it (see Figure 3). Mathematically, this is described by Equation 3:


P_{x,y} = \sum_{i=0}^{x} \sum_{j=0}^{y} S_{i,j}

Equation 3. The precomputed image P at a point x, y contains the sum of all the source pixels from 0, 0 to x, y.

Note that this means you need to store more than the usual 8 bits per pixel per channel -- the summed brightnesses toward the bottom right of the image can get very large.
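As a concrete illustration of Equation 3 (and of the extra precision it needs), here is a minimal C sketch of the precomputation step, assuming an 8-bit monochrome source and a 32-bit table, which is enough for images up to roughly 4096x4096 pixels at full brightness; the names are illustrative only.

#include <stdint.h>

/* Build the precomputed image of Equation 3: sat[y*w + x] holds the sum
   of all source pixels from (0,0) to (x,y) inclusive.  Built in a single
   pass using the recurrence
       P(x,y) = S(x,y) + P(x-1,y) + P(x,y-1) - P(x-1,y-1). */
void build_sat(const uint8_t *src, uint32_t *sat, int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            uint32_t p = src[y * w + x];
            if (x > 0)          p += sat[y * w + (x - 1)];
            if (y > 0)          p += sat[(y - 1) * w + x];
            if (x > 0 && y > 0) p -= sat[(y - 1) * w + (x - 1)];
            sat[y * w + x] = p;
        }
    }
}

The single-pass recurrence means the table costs only a few additions per pixel to build.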

Once this precomputation has been completed, the expression for the box-filtering process can be rewritten entirely in terms of sums starting at 0:


D_{x,y} = \frac{c}{(2k+1)^2}\left[\sum_{i=0}^{x+k}\sum_{j=0}^{y+k} S_{i,j} - \sum_{i=0}^{x-k-1}\sum_{j=0}^{y+k} S_{i,j} - \sum_{i=0}^{x+k}\sum_{j=0}^{y-k-1} S_{i,j} + \sum_{i=0}^{x-k-1}\sum_{j=0}^{y-k-1} S_{i,j}\right]
        = \frac{c}{(2k+1)^2}\left[P_{x+k,\,y+k} - P_{x-k-1,\,y+k} - P_{x+k,\,y-k-1} + P_{x-k-1,\,y-k-1}\right]

Equation 4. Equation 2 rewritten with sums from 0, where P is the precomputed image from Equation 3.

Figure 3. The values in the table on the left represent a source image. Each entry in the table on the right contains the sum of all the source pixels above and to the left of that position.

This equation gives exactly the same result as the basic box-filtering algorithm in Equation 2; the trick is that each of the double sums in Equation 4 is just a single look-up into the precomputed image P. This means that the blurring operation for each destination pixel is reduced to four image look-ups, a few additions and subtractions, and a divide by a constant value (which can also be turned into a look-up with a simple multiplication table). Even more significantly, this algorithm's speed is independent of kernel size, meaning that it takes the same time to blur the image no matter how much blurring is required. Code which implements this algorithm is given in Listing 2. It's slightly complicated by having to deal with edge cases, but the core of the algorithm is still simple. Some impressive focusing and defocusing effects can be achieved with this code alone. It is especially suited to static images (because you only have to perform the precomputation step once), such as front-end elements and text/fonts.
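Listing 2 is likewise not reproduced in this copy, so the sketch below shows one way to implement Equation 4 on top of the table built by build_sat above; it handles the edge cases simply by clipping the box to the image and dividing by the number of pixels actually covered, and it leaves the divide as an ordinary division rather than a look-up table. Again, the names are illustrative, not the author's.

#include <stdint.h>

/* Box blur via Equation 4: each destination pixel is the sum of the
   (2k+1)x(2k+1) box around it, obtained from four table look-ups into
   the summed-area table, divided by the box area.  Runtime is
   independent of k. */
void blur_box_sat(const uint32_t *sat, uint8_t *dst, int w, int h, int k)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            /* clip the box to the image so edge pixels average only
               over the pixels that actually exist (the "edge cases") */
            int x0 = (x - k < 0) ? 0 : x - k;
            int y0 = (y - k < 0) ? 0 : y - k;
            int x1 = (x + k >= w) ? w - 1 : x + k;
            int y1 = (y + k >= h) ? h - 1 : y + k;

            /* four look-ups, with P treated as 0 outside the image */
            uint32_t sum = sat[y1 * w + x1]
                - (x0 > 0 ? sat[y1 * w + (x0 - 1)] : 0)
                - (y0 > 0 ? sat[(y0 - 1) * w + x1] : 0)
                + (x0 > 0 && y0 > 0 ? sat[(y0 - 1) * w + (x0 - 1)] : 0);

            uint32_t area = (uint32_t)((x1 - x0 + 1) * (y1 - y0 + 1));
            dst[y * w + x] = (uint8_t)(sum / area);
        }
    }
}

A typical use is to call build_sat once for a static image (a font page or front-end element, say) and then call blur_box_sat each frame with a varying k to animate a focus pull.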

http://www.gamasutra.com/features/20010209/evans_pfv.htm

Another link: Real-time "Tron 2.0" glow in Half-Life (using Cg)

http://www.jester-depot.net/websvn/filedetails.php?repname=blendersvn&path=%2Ftrunk%2Frelease%2Fplugins%2Fsequence%2Fblur.c&rev=4&sc=1
