An Easy-to-Understand OpenGL ES Article


There are many ways to process images on a computer or mobile phone, but by far the most efficient is to make good use of the graphics processing unit, or GPU. Your phone contains two different processing units: the CPU and the GPU. The CPU is a generalist that has to deal with everything, while the GPU can concentrate on doing one thing well: performing floating-point operations in parallel. In fact, image processing and rendering are little more than a huge number of floating-point operations on the pixels that will be rendered to the screen.

By using the GPU effectively, image rendering performance on a mobile phone can be improved by a factor of hundreds or even thousands. Without GPU-based processing, real-time HD video filters on mobile phones would be impractical or even impossible.

Shaders are the tool we use for this. A shader is a small, C-like program written in a shading language. There are many shading languages, but if you are developing for OS X or iOS, you should focus on the OpenGL Shading Language, GLSL. You can apply the concepts of GLSL to other, more specialized languages (such as Metal). The concepts we introduce here also correspond well to custom kernels in Core Image, although the syntax differs somewhat.

This process can be daunting, especially for newcomers. The purpose of this article is to give you the basic information you need to write image processing shaders, and to set you on the road to writing your own.

What is a shader?

In OpenGL ES, you must create two kinds of shaders: vertex shaders and fragment shaders. The two are the two halves of a complete program: you cannot create just one of them. Both must exist to form a complete shader program.

The vertex shader defines how geometry is handled in a 2D or 3D scene. A vertex is a point in 2D or 3D space. In image processing there are four vertices, each representing one corner of the image. The vertex shader sets the vertex position and passes parameters such as position and texture coordinates on to the fragment shader.

The GPU then uses the fragment shader to calculate the final color of each pixel of an object or image. An image, after all, is just a collection of data: the image document contains the color components and an alpha value for each pixel. Because the formula is the same for every pixel, the GPU can pipeline the work and process it far more efficiently. With a correctly optimized shader, processing on the GPU can be a hundred times more efficient than the same processing on the CPU.

  

Our first shader example: the vertex shader

Okay, we have talked enough about shaders in the abstract. Let's look at a real shader in practice. Here is a basic vertex shader from GPUImage:

NSString *const kGPUImageVertexShaderString = SHADER_STRING(
    attribute vec4 position;
    attribute vec4 inputTextureCoordinate;

    varying vec2 textureCoordinate;

    void main()
    {
        gl_Position = position;
        textureCoordinate = inputTextureCoordinate.xy;
    }
);

Let's take a look:

attribute vec4 position;

Like all languages, the designers of the shading language created special data types for common needs, such as 2D and 3D coordinates. These types are vectors, which we will go into more deeply later. Back in our application code, we create a list of vertices, and one of the parameters we provide for each vertex is its position within the canvas. We then have to tell our vertex shader that it needs to receive this parameter, which we will use for something later.

 

attribute vec4 inputTextureCoordinate;

Now you may be wondering why we need a texture coordinate. Didn't we just get our vertex position? Aren't they the same?

In fact, they are not necessarily the same thing. Texture coordinates are part of texture mapping, which means you use them when you want to perform some kind of filter operation on your texture. The upper-left corner has coordinates (0, 0), and the upper-right corner has coordinates (1, 0). If we needed to pick a texture coordinate inside the image rather than at its edge, the texture coordinates we set in our application would be different: something like (0.25, 0.25) is a quarter of the image's width to the right of, and a quarter of its height down from, the upper-left corner. In our current image processing application, we want the texture coordinates to line up with the vertex positions, because we want to cover the entire length and width of the image. Sometimes, though, you will want these coordinates to differ, so remember that they are not necessarily the same. Also note that in this example the vertex coordinate space runs from -1.0 to 1.0, while texture coordinates run from 0.0 to 1.0.
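To make the two coordinate spaces concrete, here is a small Python sketch (the function name is illustrative, not part of any OpenGL API) that maps a clip-space position in [-1, 1] to a texture coordinate in [0, 1]:

```python
# Sketch: converting a clip-space vertex position (range -1.0..1.0)
# to a texture coordinate (range 0.0..1.0).
def clip_to_texture(x, y):
    """Map a point from [-1, 1] clip space to [0, 1] texture space."""
    return ((x + 1.0) / 2.0, (y + 1.0) / 2.0)

# The four corners of the quad map onto the corners of the texture:
corners = [(-1.0, -1.0), (1.0, -1.0), (-1.0, 1.0), (1.0, 1.0)]
tex_coords = [clip_to_texture(x, y) for x, y in corners]
```

Note how a corner such as (-1, -1) lands on (0, 0), which is exactly why the two coordinate systems can stay "in sync" while still being different spaces.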

 

varying vec2 textureCoordinate;

Because the vertex shader is responsible for communicating with the fragment shader, we need to create a variable to share the relevant information with it. In image processing, the only piece of information the fragment shader needs from us is which pixel the vertex shader is currently working on.

 

gl_Position = position;

gl_Position is a built-in variable. GLSL has several built-in variables, and we will see another one in the fragment shader example. These special variables are part of the programmable pipeline; the API looks for them and knows how to wire them up. In this case, we specify the vertex position and feed it from our program back into the rendering pipeline.

 

textureCoordinate = inputTextureCoordinate.xy;

Finally, we pull out the X and Y positions of the texture coordinate for this vertex. We only care about the first two components of inputTextureCoordinate, X and Y. The coordinate originally arrives in the vertex shader as an attribute with four components, but we only need two of them. Rather than passing all four components on to the fragment shader, we take just the ones we want and assign them to the varying that communicates with the fragment shader.

In most image processing programs, the vertex shader looks much the same, so the rest of this article will focus on fragment shaders.

 

Fragment shader

Having seen our simple vertex shader, let's look at the simplest fragment shader there is: a pass-through filter:

varying highp vec2 textureCoordinate;

uniform sampler2D inputImageTexture;

void main()
{
    gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
}

This shader does not actually change anything in the image. It is a pass-through shader: every pixel we take in, we output exactly as it was. Let's go through it line by line:

varying highp vec2 textureCoordinate;

Because the fragment shader runs on every pixel, we need a way to determine which pixel/fragment we are currently analyzing; this variable stores its X and Y coordinates. What we receive here is the texture coordinate that was set in the vertex shader.

uniform sampler2D inputImageTexture;

 

To process the image, we receive a reference to the image from the application and treat it as a 2D texture. The data type is called sampler2D because we will be sampling points from this 2D texture for processing.

gl_FragColor = texture2D(inputImageTexture, textureCoordinate);

This is the first GLSL-specific function we have encountered: texture2D. As the name suggests, it samples from a 2D texture. It takes the variables we declared above as parameters and uses them to determine the color of the pixel being processed. That color is then assigned to another built-in variable, gl_FragColor. Because the sole purpose of a fragment shader is to determine the color of one pixel, gl_FragColor essentially acts as the return statement of the shader. Once the fragment's color is set, there is nothing more for the fragment shader to do, so this is the last statement you write.

As you can see, a big part of writing shaders is understanding the shading language. Even though the shading language is based on C, there are plenty of quirks and subtle differences that set it apart from ordinary C.

 

Inputs, Outputs, and Precision Qualifiers

Looking at our pass-through shader again, you will notice that one variable is marked as "varying" and another as "uniform".

These qualifiers mark a variable's role as an input or output in GLSL. They allow communication from the input side of our application, as well as between the vertex shader and the fragment shader.

In GLSL, there are actually three tags that can be assigned to our variables:

  • Uniforms
  • Attributes
  • Varyings

Uniforms are one way to communicate with your shaders. Uniforms are designed for input values that stay constant over a render pass. If you are applying a sepia filter and have specified the filter's intensity, that is something that does not need to change during rendering, so you pass it in as a uniform. Uniforms can be accessed from both the vertex and the fragment shader.

Attributes can only be accessed in the vertex shader. An attribute is an input value that varies per vertex, such as the vertex position or texture coordinate. The vertex shader takes these variables, uses them to calculate positions or other values, and then passes those values on to the fragment shader as varyings.

Last, but equally important, are varyings. A varying appears in both the vertex and the fragment shader. It is used to pass information from the vertex shader to the fragment shader, and it must have a matching name in both. A value is written to the varying in the vertex shader and then read in the fragment shader, where the values written at the vertices are interpolated across every pixel in between.
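The interpolation a varying undergoes is plain linear interpolation. A minimal Python sketch of what the hardware does between two vertices:

```python
def lerp(a, b, t):
    """Linear interpolation: a at t=0.0, b at t=1.0."""
    return a + (b - a) * t

# A varying written as 0.0 at one vertex and 1.0 at the other is
# interpolated for every fragment in between:
values = [lerp(0.0, 1.0, t / 4.0) for t in range(5)]
```

Each fragment between the two vertices receives one of these intermediate values, which is exactly how our textureCoordinate ends up different for every pixel.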

Looking back at the simple shaders we wrote earlier: textureCoordinate is declared as a varying in both the vertex and the fragment shader. We write its value in the vertex shader; it is then passed into the fragment shader, where it is read and processed.

One last thing to notice before we move on. Look at the variables we created: you will see that the texture coordinate has a qualifier called highp. This qualifier sets the precision you need for the variable. Because OpenGL ES was designed for systems with limited processing power, precision qualifiers were added for the sake of efficiency.

If high precision is not necessary, you can say so, which may allow more values to be processed in a single clock cycle. For texture coordinates, on the other hand, we want to preserve as much accuracy as possible, so we explicitly ask for the extra precision.

Precision qualifiers exist in OpenGL ES because it was designed for mobile devices; they do not exist in older versions of desktop OpenGL. Because OpenGL ES is effectively a subset of OpenGL, you can almost always port an OpenGL ES project directly to OpenGL. If you do, remember to strip the precision qualifiers from your desktop shaders. This is important to keep in mind, especially if you plan to move a project between iOS and OS X.

Vector

In GLSL you will work with many vector types. Vectors are a tricky topic: they seem intuitive on the surface, but because they serve so many purposes, it is easy to get confused when using them.

In the context of GLSL, a vector is a special data type similar to an array. Each type holds a fixed number of elements. Digging deeper, you can even get variants that constrain the exact type of value the vector stores, but in most cases the generic float vector types are all you need.

There are three vector types you will often see:

  • vec2
  • vec3
  • vec4

These vector types contain a specific number of floating point numbers: vec2 contains two floating point numbers, vec3 contains three floating point numbers, and vec4 contains four floating point numbers.

These types are used for all sorts of data you might manipulate or hold onto in a shader. In the fragment shader, for example, the X and Y texture coordinates are information you want to keep around, and an (X, Y) pair fits naturally in a vec2.

The other thing you will commonly want to track in image processing is the R, G, B, and A values of each pixel. Those fit in a vec4.
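In Python terms, you can think of a vec4 as a fixed-length tuple; GLSL's swizzling (like the .xy we used in the vertex shader) then corresponds to taking a slice of its components. A sketch:

```python
# Modeling a vec4 as a tuple; GLSL swizzles map to slicing.
pixel = (1.0, 0.5, 0.25, 1.0)   # an RGBA color as a vec4
rgb = pixel[:3]                 # GLSL: pixel.rgb
xy = pixel[:2]                  # GLSL: pixel.xy (for a position vec4)
```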

Matrix

Now that we know about vectors, let's move on to matrices. Matrices are similar to vectors, but they add an extra layer of complexity: a matrix is an array of floating-point arrays, rather than a single flat array of floats.

Like a vector, you will often process the following matrix objects:

  • mat2
  • mat3
  • mat4

Just as a vec2 stores two floating-point numbers, a mat2 stores the equivalent of two vec2 objects. You do not have to pass vector objects into a matrix; you only need enough floating-point numbers to fill it. For a mat2, that means either two vec2s or four floats. Because you can name your vectors, and keeping track of two objects is easier than keeping track of four loose floats, it is recommended to package your numbers into vectors rather than passing floats directly. That goes double for a mat4, where you are dealing with 16 numbers rather than 4.

In our mat2 example we have two vec2 objects. In GLSL, matrices are column-major: each vec2 passed to the mat2 constructor fills one column, and the elements of each vec2 become the rows within that column. When building your matrices, it is important to place every value into the correct row and column; otherwise the calculations you perform with them will not produce the right results.
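A small Python sketch of GLSL's column-major layout, with helper names that are purely illustrative:

```python
# GLSL's mat2(a, b, c, d) places a, b in the first COLUMN and c, d in
# the second. This sketch reproduces that layout as a row-major Python
# list-of-lists, plus a matrix * vector product.
def mat2(a, b, c, d):
    # columns are [a, b] and [c, d]
    return [[a, c],   # row 0: first element of each column
            [b, d]]   # row 1: second element of each column

def mul_mat2_vec2(m, v):
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

identity = mat2(1.0, 0.0, 0.0, 1.0)
```

Multiplying by the identity matrix leaves a vector unchanged, which is a handy sanity check that the rows and columns are in the right place.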

Now that we have matrices and vectors to fill them with, the question is: "What do we do with them?" We can store vertices, colors, and other information in them, but what if we want to do something cool by modifying them?

Vector and matrix operations, that is, elementary Linear Algebra

The best resource I have found for how linear algebra and matrices work is the website Better Explained. A quote, stolen from that site, is below:

The survivors of linear algebra became physicists, graphics programmers, or other masochists.

Matrix operations in general are not "hard"; they are just usually presented without any context, which makes it difficult to see why anyone would want to work with them. I hope that by giving them some context in terms of graphics programming, I can show how they help us do some incredible things.

Linear algebra lets you operate on many values at once. Suppose you have a set of numbers and you want to multiply each of them by 2. Normally you would compute the values one by one; with linear algebra, you can perform the same operation on every number in parallel.
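The "multiply every number by 2" example can be sketched in Python. On the CPU this is a loop, but each multiplication is independent of the others, which is exactly the property that lets a GPU run them all at once:

```python
# Every element undergoes the same, independent operation.
values = [1.0, 2.0, 3.0, 4.0]
doubled = [v * 2.0 for v in values]
```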

A great example of this in action is CGAffineTransform. An affine transform is a simple operation that can change the size, position, or rotation of a shape with parallel sides, such as a square or rectangle.

You could sit down with pen and paper and work out these transforms yourself, but there is little point. GLSL has many built-in functions that do the heavy lifting of these complex calculations and transforms for you. What matters most is understanding the ideas behind them.
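As a taste of what such a transform does, here is a minimal Python sketch of scaling a unit square with a 2x2 matrix, the same idea behind an affine scale transform (the helper names are illustrative):

```python
# Scaling a shape with a 2x2 matrix: each corner point is multiplied
# by the same matrix, independently of the others.
def scale_matrix(sx, sy):
    return [[sx, 0.0],
            [0.0, sy]]

def apply(m, p):
    return (m[0][0] * p[0] + m[0][1] * p[1],
            m[1][0] * p[0] + m[1][1] * p[1])

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
scaled = [apply(scale_matrix(2.0, 3.0), p) for p in square]
```

Every corner goes through the identical computation, which again is why this kind of work maps so well onto the GPU.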

GLSL Special Functions

We will not go through every GLSL built-in function in this article; you can find a good reference for them on Shaderific. Many GLSL functions are derived from basic math operations in the C math library, so there is no point in explaining what sin() does. We will instead focus on some of the less obvious functions, in keeping with the goal of this article: explaining how to get the most out of the GPU.

step(): One limitation of the GPU is that it does not handle conditional logic well. What the GPU likes best is to take a series of operations and apply them to everything. Branching can cause significant performance drops in fragment shaders, especially on mobile devices. step() can ease this limitation somewhat by letting you implement conditional logic without branching. If the value passed to step() is below the threshold, step() returns 0.0; if it is at or above the threshold, it returns 1.0. By multiplying this result by a value in your shader, you can use or ignore that value without ever writing an if() statement.
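The branch-free trick can be sketched in Python by reimplementing step() and multiplying by its result:

```python
def step(edge, x):
    """GLSL-style step(): 0.0 below the edge, 1.0 at or above it."""
    return 0.0 if x < edge else 1.0

# Branch-free conditional: keep `value` only when x crosses the threshold.
value = 0.8
kept = value * step(0.5, 0.7)      # 0.7 >= 0.5, so value passes through
dropped = value * step(0.5, 0.3)   # 0.3 < 0.5, so value is zeroed
```

The Python version of course still contains a comparison; the point is that in GLSL the multiply replaces an if() in the shader, so the GPU never has to branch.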

mix(): The mix function blends two values (such as colors) into one. If we have red and green as our two colors, mix() can linearly interpolate between them. This is commonly used in image processing, for example to control the strength of an effect via a setting exposed in the application.
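A Python sketch of mix() blending red and green channel by channel:

```python
def mix(a, b, t):
    """GLSL-style mix(): linear interpolation, a at t=0.0, b at t=1.0."""
    return a * (1.0 - t) + b * t

red, green = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
blended = tuple(mix(r, g, 0.5) for r, g in zip(red, green))
```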

clamp(): One of the more consistent aspects of GLSL is its preference for normalized coordinates: it expects color components and texture coordinates to fall between 0.0 and 1.0. To make sure our values do not stray outside that narrow range, we can use the clamp() function. clamp() checks that your value sits between 0.0 and 1.0; if it is below 0.0 the value becomes 0.0, and if it is above 1.0 it becomes 1.0. This guards against common mistakes, such as accidentally passing in a negative number or some other value that falls completely outside the expected range.
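clamp() is easy to reproduce in Python (GLSL's version takes explicit min and max bounds, so the sketch does too):

```python
def clamp(x, min_val, max_val):
    """GLSL-style clamp(): constrain x to [min_val, max_val]."""
    return max(min_val, min(x, max_val))
```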

 

Example of a more complex shader

I know the flood of math can be overwhelming. If you have kept up with me so far, though, I would like to walk through a couple of examples of beautiful shaders, which should make all of this more concrete and give you a chance to get your feet wet in GLSL.

Saturation adjustment

    

This is a fragment shader for saturation adjustment. It comes from Graphics Shaders: Theory and Practice, a book I highly recommend in its entirety to anyone interested in shaders.

Saturation is the term for the brightness and intensity of a color. A bright red sweater is far more saturated than the gray sky over Beijing on a smoggy day.

There are some optimizations we can apply here based on how humans perceive color and brightness. Generally speaking, humans are far more sensitive to brightness than to color. One optimization used over the years to shrink compressed file sizes is to reduce the amount of memory spent storing color.

Not only are humans more sensitive to brightness than to color, but within brightness we are more sensitive to certain colors, particularly green. This means that when you look for ways to compress images, or to alter their brightness or color, it pays to weight the green part of the spectrum more heavily, because it is the part we are most sensitive to.

varying highp vec2 textureCoordinate;

uniform sampler2D inputImageTexture;
uniform lowp float saturation;

// Values from "Graphics Shaders: Theory and Practice" by Bailey and Cunningham
const mediump vec3 luminanceWeighting = vec3(0.2125, 0.7154, 0.0721);

void main()
{
    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    lowp float luminance = dot(textureColor.rgb, luminanceWeighting);
    lowp vec3 greyScaleColor = vec3(luminance);

    gl_FragColor = vec4(mix(greyScaleColor, textureColor.rgb, saturation), textureColor.w);
}

 

Let's go through this fragment shader line by line:

varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform lowp float saturation;

Once again, because this fragment shader communicates with the same basic vertex shader, we declare a varying for the input texture coordinate and a uniform for the input image texture, so we can receive the information we need to filter. New in this example is the saturation uniform. The saturation value is a parameter set from the user interface; we need to know how much saturation the user wants in order to display the right amount of color.

 

const mediump vec3 luminanceWeighting = vec3(0.2125, 0.7154, 0.0721);

Here we set up a three-element vector that holds the weighting of each color channel for our luminance calculation. The three values must add up to 1.0 so that the computed luminance falls between 0.0 and 1.0. Notice that the middle value, which weights green, accounts for about 72%, while blue gets only about 7%. Blue contributes little to our perception, so it makes sense to put more of the weight on green.

 

lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);

We sample the color of the specific pixel at our current coordinate in the image/texture. This time, rather than returning it unchanged as the pass-through filter did, we are going to modify it a little.

 

lowp float luminance = dot(textureColor.rgb, luminanceWeighting);

This line will look unfamiliar to anyone who has not studied linear algebra, or studied it long ago and rarely used it since. We are taking the dot product of two vectors in GLSL. If you remember using the dot operator on numbers in school, you have the right idea about what is going on. The dot product takes the vec4 holding our texture color information (the unneeded final component is dropped via .rgb), multiplies each channel by its corresponding luminance weight, and then adds the three results together to produce the overall luminance of the pixel.
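The dot-product luminance computation from the shader, reproduced in plain Python:

```python
# dot(textureColor.rgb, luminanceWeighting) in GLSL, spelled out:
def dot3(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

luminance_weighting = (0.2125, 0.7154, 0.0721)
rgb = (1.0, 1.0, 1.0)                       # pure white
luminance = dot3(rgb, luminance_weighting)  # weights sum to 1.0
```

Because the weights sum to 1.0, pure white has a luminance of exactly 1.0, and any color's luminance stays within [0.0, 1.0].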

 

lowp vec3 greyScaleColor = vec3(luminance);

Finally, we put all the pieces together. To determine the new value of each color channel, we apply our newly learned friend mix(). The mix function blends the grayscale value we just computed with the original texture color, weighted by the incoming saturation value.
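The whole shader body can be sketched per pixel in Python (the function name is illustrative):

```python
# Per-pixel sketch of the saturation shader: compute luminance, then
# mix the grayscale value back with the original color.
def adjust_saturation(rgb, saturation):
    weights = (0.2125, 0.7154, 0.0721)
    lum = sum(c * w for c, w in zip(rgb, weights))
    # mix(greyScaleColor, textureColor.rgb, saturation), per channel
    return tuple(lum * (1.0 - saturation) + c * saturation for c in rgb)

color = (0.8, 0.2, 0.4)
grey = adjust_saturation(color, 0.0)   # fully desaturated
same = adjust_saturation(color, 1.0)   # unchanged
```

At saturation 0.0 every channel collapses to the luminance value (a gray), and at 1.0 the original color passes through untouched, matching what the GLSL mix() does.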

And that's it: a great, easy-to-use shader that takes an image from color to grayscale, or from grayscale to color, in just four lines inside main(). Not bad, right?

 
